From YouTube: JupyterLab Team Meeting - 18 January 2023
Description
A meeting to share and discuss features, ideas, issues, and pull requests in JupyterLab and other Jupyter frontends. This meeting is open to anyone and everyone.
Join future calls via the Jupyter community calendar: https://docs.jupyter.org/en/latest/community/content-community.html#jupyter-community-meetings
Notes for upcoming meetings can be found on the agenda: https://hackmd.io/Y7fBMQPSQ1C08SDGI-fwtg
Past notes can be found on the JupyterLab team compass: https://github.com/jupyterlab/team-compass/issues/170
A
Hello and welcome to the January 18th weekly JupyterLab call. Let's see, today we have 22 people on the call. If you just joined, please find the agenda in the chat. We have some items on the agenda already, but it doesn't look like a full hour's worth, so if there's something you'd like to talk about, please add an agenda item and let's talk about it. For now, why don't we get started? The first person on the agenda today is Fred.
B
Hello, everybody. This is more of an update than a discussion point. Just a reminder: we're going to have the first 2023 performance meeting after this one, so in about an hour, if we finish this one on time. On the two next releases of JupyterLab: for 3.6, the release candidate has been out for about a week. This week there have been quite a lot of CI failures that needed to be fixed; I think I have addressed all of them for the final release.
B
There are only two PRs that are missing. If somebody is thinking of something else, please add bullet points to that list; otherwise I'll cut the final release as soon as those two are in. On the 4.0 plan, there is nothing very new. I made the release of Lumino 2 beta 0 before releasing the release candidate.
B
There is still Mike's feature introducing some shadowed capability in the widgets that has to be discussed. He has opened a PR, and there is the link. I don't know if he's here, or if he wants to say more about it; otherwise, if people are interested, they can just go to the issue and the PR to learn more about it. That's all for me for the updates.
C
A
D
A
Sorry about that, I was also a little bit... I think I can... oh, no, okay, cool. Any other questions, comments, or reactions to Fred?
A
Cool, okay. We have a group entry here, and I don't know which of the four of you would like to talk about this, but I will allow you to just jump in.
E
All right, yeah, I can take over for the presentation here. Our team just submitted a proposal to add a new project under the JupyterLab org: the project is Jupyter Generative AI. The proposal and the vote link are both in there, and I also want to give a brief five-minute screen-sharing live demo, if that's all right with everyone.
D
D
F
E
Awesome. Yeah, so I think a great question to start off with is: what the heck is Jupyter Generative AI? This is a project our team has been working on quite intensively for about the last month. As you might have expected, this work was spurred into action by the release of ChatGPT. We were all completely blown away by its functionality, and we wanted a way of using these new generative AI models within JupyterLab, essentially as a helper to a developer using JupyterLab, integrated as part of the development toolkit that we can use. So Jupyter GAI is essentially a framework for integrating generative AI models into JupyterLab.
E
Essentially, users will be able to install what we call GAI modules, which allow them to use generative AI models. For example, one of the modules that we ship with by default is GPT-3, a large language model provided by OpenAI. They actually have a convenient Python SDK and a corresponding HTTP API that can be used. The main usage flow right now looks like this.
E
A user can select a range of text. Right now this is just the entire contents of the cell, but you could imagine that in the future you could select a sub-range of a cell and generate output from that. Once the cell's contents are selected, the user can right-click and choose the context menu option "Generate output from selection with GAI," and is then presented with a list of tasks.
E
These tasks are an abstraction, and it's deliberately a very vague definition, because tasks can be very open-ended: they just perform tasks for you. In this example, we're going to be looking at the "explain code" task. Essentially, what the Jupyter GAI backend does is this: each task has a corresponding prompt template, and the prompt template fills your selected range into this "body" variable in curly braces here. So it's really quite simple.
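The template mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration based on the description in the meeting, not the project's actual code; the task name, the template text, and the `body` field name are all assumptions.

```python
# Hypothetical sketch of the prompt-template mechanism described above.
# The template text and the "body" placeholder are illustrative assumptions.
EXPLAIN_CODE_TEMPLATE = (
    "Explain the following code in plain English:\n\n{body}\n"
)

def render_prompt(template: str, selection: str) -> str:
    """Fill the user's selected text into the task's prompt template."""
    return template.format(body=selection)

# The selected cell contents are substituted for {body} before the prompt
# is sent to the language model.
prompt = render_prompt(
    EXPLAIN_CODE_TEMPLATE, "for i in range(1, 6):\n    print(i)"
)
print(prompt)
```

The rendered prompt would then be passed to whichever model backs the task.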
E
There are also other options that control how the output is inserted back into the notebook. Here, the task will insert the output immediately above the selected text. So if we run this task on a fairly long comment, it will explain exactly what this code does, which, as you might know, simply prints the integers from one to five. While that might not be particularly impressive, if we extend these results to much more complicated code, such as this one below, the value of Jupyter GAI becomes much more apparent. Here we are going to generate output in a new cell above the current one, just so it's much easier to read. And here GPT-3 correctly recognizes this block of code as a convolutional neural network defined with PyTorch, which is really, really useful. As you might imagine, in the future we could even add utilities to automatically annotate entire notebooks at a time, meaning that every single code cell gets a corresponding explanation.
E
This is pretty useful for people who actually use notebooks in production and need support debugging or fixing issues. And the applications of GPT-3 are not limited to code explanation: it can even write its own code. Here we have some prompts to play around with, and here we can tell GPT-3 to generate Python 3 code. When we run this task, we will see that GPT-3 does indeed correctly provide us an algorithm that calculates pi.
E
Actually, I don't know if this will even run; GPT-3 is not as good as ChatGPT, which unfortunately doesn't have an API as of yet. Oh.
D
E
Well, it actually worked. Yeah, not very accurate, I mean, it's 3.13 instead of 3.14, but you know, it's still very interesting. What's much more impressive is that its capability can actually be pushed pretty far. Here we have a much more complex problem.
E
I don't know how many of you all do LeetCode, I'm not sure, but this is a much more difficult question, and it actually seems to also work. Sometimes the output does get truncated, though, unfortunately, when there aren't enough tokens to provide the full output; that's kind of an implementation error on our end. So this is the GPT-3 GAI module that's provided by default.
E
That's what's providing all of this, but I also wanted to demonstrate how exactly Jupyter GAI is modular. Essentially, a GAI module is just a Python package. It's just like a server extension; well, it's not really a server extension, but the installation process is very similar: you just run pip install and it essentially just magically works. So here I have my terminal, and you'll see the terminal, by the way...
E
Yeah, oh, here we go. All right. So normally a user would just type "jupyter gai" and then the module name. The GAI modules don't really have to follow any format; technically they can have any name, but we would prefer to adopt a convention of "jupyter_gai_" followed by the module name. So, for example, if we wanted to use a GAI module named DALL-E, which is the image generation model provided by OpenAI, a user would just type this.
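The naming convention described above would let a backend discover installed GAI modules by their package-name prefix. The following is a hypothetical sketch of that idea for illustration only; the `jupyter_gai_` prefix comes from the convention mentioned in the talk, but prefix-based discovery and the function name are assumptions, not the project's actual mechanism.

```python
# Hypothetical sketch: discovering installed GAI modules by the
# "jupyter_gai_" naming convention described above. The prefix and the
# idea of prefix-based discovery are assumptions for illustration.
import importlib.metadata

def discover_gai_modules(prefix: str = "jupyter_gai_") -> list[str]:
    """Return names of installed distributions matching the prefix.

    Distribution names may use dashes instead of underscores, so we
    normalize before comparing.
    """
    names = []
    for dist in importlib.metadata.distributions():
        name = (dist.metadata["Name"] or "").replace("-", "_")
        if name.startswith(prefix):
            names.append(name)
    return sorted(names)

# Prints any installed distributions matching the prefix.
print(discover_gai_modules())
```

A real implementation might instead use entry points, but the opt-in effect is the same: installing a package is all it takes to make its tasks available.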
E
Obviously this isn't published yet to the PyPI registry, so right now we're just going to install our local version right here.
E
And here we go. Then we restart the server.
E
So when I restart the server, you will see that I have new tasks available to me now: "generate image below," "generate photorealistic image," and "generate cartoon image." These are all tasks provided by the DALL-E model engine. A model engine is basically just a very thin wrapper around an actual machine learning model that handles things like prompt synthesis and actually executing the model, which can be very, very different between models. As you might know, running a machine learning model locally versus calling it through a REST API are two very, very different things, and the model engine is essentially just a thin layer of abstraction that hides away those details. Basically, it's a way of executing a model, and a model engine can expose default tasks: when you install one, its tasks automatically get populated into the backend. These are the three default tasks provided by the DALL-E model engine.
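The "model engine" abstraction described above could look something like the sketch below. This is a hypothetical illustration of the concept, not the project's actual API: every class name, method name, and task shown here is an assumption made for demonstration.

```python
# Hypothetical sketch of the "model engine" described above: a thin
# wrapper that handles prompt synthesis and model execution, and exposes
# default tasks. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str                # shown to the user in the task list
    prompt_template: str     # "{body}" is replaced by the selection

class ModelEngine:
    """Base class: subclasses implement execute() for a specific model."""
    default_tasks: list = []

    def synthesize_prompt(self, task: Task, selection: str) -> str:
        return task.prompt_template.format(body=selection)

    def execute(self, prompt: str) -> str:
        # A subclass might run local inference here, or call a REST API;
        # the caller does not need to know which.
        raise NotImplementedError

class EchoEngine(ModelEngine):
    """Toy engine standing in for a real model, for demonstration only."""
    default_tasks = [Task("explain-code", "Explain this code:\n{body}")]

    def execute(self, prompt: str) -> str:
        return f"(model output for: {prompt!r})"

engine = EchoEngine()
task = engine.default_tasks[0]
print(engine.execute(engine.synthesize_prompt(task, "print('hi')")))
```

The point of the abstraction is that a local model and a hosted REST-backed model present the same interface to the rest of the system.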
E
Basically, as you can see, Jupyter GAI is indeed modular, and we designed it to be that way from the start. We believe that this has a lot of applications in JupyterLab, especially with regard to development tooling, and we would like everybody to go ahead and vote on it. Yeah.
E
That's it from me. I think any questions people may have, we should probably answer in the parking lot, since I've already taken up a lot of time. David?
E
Yeah, so I think the idea that JupyterLab is only for data scientists is a fairly narrow view of JupyterLab. A notebook supports many different multimedia formats, and I think it's entirely possible that developers working with these different media formats could need some way of, for example, generating images, like stock images or textures. Scientific figures and illustrations are also worthwhile.
E
I think one of the most interesting examples of image generation that I've seen involves game developers and their toolkits. There's apparently kind of a racket around stock textures: normally people would have to pay a subscription fee to access a lot of stock textures, but nowadays with generative AI they can generate these textures for free and really focus on diving deep into 3D rendering and all those processes, rather than paying a subscription fee to some company.
A
F
I admittedly have a lot of questions; I don't know if we have time for them. I will also note that I haven't gotten to read the entire proposal in full, so if I ask a question that you think is answered there, feel free...
D
F
E
Wait, is this an old draft? I don't see the section that we had proposed.
H
David, that's in the compass issue. Oh.
A
E
H
Yeah, I just needed a place to show the code. There are three kinds of modules there that people can actually download and play with; they can install them if they want, and I needed a place to put that. So that's the only reason; it's just a placeholder. We don't intend to merge that code.
A
Okay, cool. So, yeah, I don't know if you've answered Isabella's question. You said there's a section in the issue that talks about this; could you maybe just... yeah.
E
Let me link to it again in the Zoom chat, but it's the very first paragraph right there. Essentially, we believe that the direction development tooling is taking is that generative AI, and especially large language models such as ChatGPT and GPT-3, will become increasingly used by developers. We understand that right now some people, especially some pessimists in the room, might laugh at that, but we believe these capabilities will eventually evolve to a point where they can no longer be ignored. We believe that providing a default implementation that JupyterLab gives its full backing and support to will save a lot of wasted effort. Furthermore, there will be demand in the user base to use these models to help them develop code; I think that is an absolute guarantee. There will be demand.
E
If and when ChatGPT releases a REST API for that model, I can guarantee that there will be users asking, "How can I use ChatGPT in Jupyter?" And I think having an official project within JupyterLab to point people towards, and telling them "we are going to support this; this is a priority for us," is really important in building trust in the Jupyter community.
F
Yeah, so ideologically that makes sense. Something I'm wondering about, which is also coming up in the chat, is that people want this integration, but I'm not actually sure what this integration means. That could be on me; I'm not a developer, so I don't know. I'll read the description more, but yeah, that's what I was kind of wondering: what are the changes we're making?
D
E
Yeah, and something I do wish to point out is that there is no plan to install the Jupyter GAI modules by default. Right now they are provided by default, but there is absolutely no intention of forcing users to have an OpenAI account to use JupyterLab; that would be preposterous. So we do want to emphasize that we are essentially model-agnostic.
E
We don't demonstrate any particular preference for whatever flavor-of-the-month model is available. It's just that, to be frank, the most capable models for a live demo right now happen to be hosted by OpenAI. There's nothing stopping users from using other models, even ones that they are running locally, although that would be much more difficult.
F
Yeah, and I think we should go to Mike. Thank you for answering that; I can also add comments on the issue itself. So, thanks.
A
C
All right, so two questions I wanted to bring up from the chat. One is about the naming: will the "generative" part stick, or should we go for something more generic? What I think it's trying to do is support and assist with lots of tasks that the user might perform.
C
That's maybe something for a separate discussion; it's in the compass issue, I expect.
E
E
C
Okay, and the other question, which I think is a unique one, is that some of these tasks could actually be performed by a language server via the extensible LSP specification from Microsoft. Currently the LSP specification does not include tasks like this; specifically, there is no designated action for, say, generating a cell with output, but there are already commands which can be run and defined by the server. So that's a big question: if the language server community goes in that direction, it could potentially create duplication. The interface parts would still be useful whatever happens, but the backend and linking parts of this might be a little bit problematic. So have you looked into the plans and the landscape here?
E
Absolutely. So the user experience and the user stories here are absolutely not set in stone. In fact, we've actually been planning to migrate away from the dialog user experience that we just demoed, where a user selects a range of text, right-clicks, and it opens a dialog.
E
We've actually been thinking about migrating away from that user experience just because it's too many actions. As for language server integration, I think anything's on the table as long as it can be implemented in a modular way. What I mean by that is: if a user runs pip install for a GAI module and it just works, doing whatever magic it needs to on the language server side. Again, I don't know anything about this; I'm...
E
I'm just speaking in hypotheticals here. Then there's absolutely no reason it couldn't be used as an implementation, and no reason why it wouldn't be the default, but I think there's a lot of research that needs to be conducted before we can make that kind of commitment to add it to the interface.
I
I'm sorry, can you hear me? Yeah. This looks really exciting; I really appreciate and applaud the effort that went into bringing these capabilities into Jupyter. One of the big questions I have any time somebody proposes opening a new repo is: what's the plan for maintaining this repo? And it kind of goes to Isabella's question of why this should be a Jupyter package, with all the work that comes with having something in the Jupyter org. So do you have a plan for that? Do a couple of people plan on continuing to develop and maintain this, or what's the plan around that?
E
Yeah, so because we are employed by AWS, we also have certain internal processes that ensure we maintain the projects that we own. For example, we have an on-call process on our team, and in fact I am this week's on-call. We have an assigned engineer dedicating 20 hours a week to fixing any existing bugs and CI issues in all of the repos that we own, for example Jupyter Scheduler and Jupyter File ID.
E
These are some of the other projects that our team has been working on, and we continue to maintain them today. So I think that, in terms of maintenance, we are very qualified to maintain this project.
D
E
I
Joining in on this, yeah: can you add to your team compass issue what the plan is initially for maintenance? That would be great.
A
...it invokes an external service over the internet and does stuff. Cool, jupyterlab-github, that's a great example. I wanted to know if we have prior art for an official package that we publish that interfaces with a commercial product, so that's a really great example. Any...
D
A
H
So something I just want to point out, and I think David touched on this point: the model engines, the actual calls to these backends, are something that would be user-specific. We don't plan, and correct me if I'm wrong, David, we don't plan to have default model engines installed, unless the user wants to.
E
Right, so I think the key picture here is that GAI modules are entirely opt-in. A user has to explicitly type the package name and pip install it, and only then is that GAI module available. Right now we had GPT-3 included as a model engine by default purely for technical demo purposes, and we actually do plan on moving that out completely and putting it in a separate GAI module; that's always been the roadmap.
G
E
And I think there's a lot of rightful concern that JupyterLab, or more specifically us, I suppose, could be demonstrating preference towards certain companies. You know, right now generative AI is a very hot topic in enterprise applications, and a lot of startups are evolving around it. I think there's some concern that we might demonstrate favoritism, so to speak, toward one company or another, and I think this is another reason why it's really valuable to have it under JupyterLab.
E
I think the Jupyter governance model really does apply here: if it has the Jupyter name on it, the governance model ensures that, for each default implementation we provide (and here "default implementation" doesn't mean it's installed by default, it just means it's supported by default), no company or organization is given preference over another. Yeah.
A
If this was a standalone open source package... I don't know if you saw in the chat, Tony suggested putting it in jupyter-contrib or something like that. Do you think that would...?
E
We have considered that, and for the reasons we've already discussed, we do believe that it's more applicable to have this immediately under the JupyterLab org. We do wish people to vote on this, and obviously everybody should have their opinion voiced, but we do think that this is significant enough to warrant it belonging under the JupyterLab org.
C
Following the links a few steps, it includes a cookiecutter in the proposal, so that's getting to be a pretty big monorepo.
C
As to whether you want that to be a single repository or multiple repositories: maybe the approach we're thinking of with JupyterLab, so to speak, of having a separate organization, or maybe a hybrid model where the core package would sit in the JupyterLab organization, to promote it more widely so it's easily discoverable, but the rest of the implementation-specific packages would be in a separate organization.
E
Yeah, we had thought of that. The difficulty there is that in the multi-repo setup, as you might know, it's more difficult to coordinate releases that involve breaking changes to the actual API surface.
E
Of course, as you pointed out, a monorepo brings its own difficulties, and we are fully aware of that, but we do think that this is a good experiment for us to take on here.
E
We think there's a lot of ambiguity as to how that would carry over to a monorepo, but we do think the benefits strongly outweigh the downsides here. The way we see it, it's very possible for us to begin shipping more model engines, and as that number grows, it becomes increasingly painful to have each and every one be a separate Git repository. Not to mention that if we were to provide them official support and actually have them under a Jupyter GitHub org, we would have to call a vote every single time we add a new default model engine, when, in reality, implementing a new model engine is actually very simple, especially with that cookiecutter. It only took me about 15 minutes to implement the DALL-E model engine, so the cookiecutter really enables developers to create new model engines really, really fast, and that's part of the reason why we think these default ones belong in a monorepo.
A
J
I've got to start off and say I'm really excited about where this goes. Right now, the eagerness and hype around it makes me cautious, because when I think about the notebook, the goal is insight, right? Insight into your compute, and the tools that we use around it should facilitate that. So...
E
J
What does this come with? How do we protect new users? Because this could easily be a gateway drug into Jupyter: I can totally see a class educating somebody about how generative AI incorporates into their classroom, and eventually into their homework. While we're getting there, what kinds of things would you be concerned about, honestly, for a Luddite like me? Near term, how can we ease the blow for what is probably an order-of-magnitude larger population of people than developers?
E
Mm-hmm, right, there are a lot of social and ethical questions that come with integrating generative AI by default. I absolutely agree with that, and our perspective is that eventually generative AI will be as common as copy-and-paste and spell check.
E
I mean, if you really think about it, spell check is kind of an artificial intelligence, in a way, that people grew accustomed to having, and we see these new code-generation and explanation capabilities in large language models as essentially an extension of that. We see it as almost inevitable, and our perspective is that we shouldn't try to fight against it, but rather work with it.
E
I do agree that there are these concerns that you rightfully possess, and that's also one of the reasons why we believe this deserves to be an official Jupyter-sanctioned project governed by the existing Jupyter governance model.
E
As for the social concerns: I know I'm speaking a bit vaguely here, so I do want to give a specific response to your question. I think there is also social benefit to integrating these generative AI models. I know there are some concerns over vague harm, but I can list a specific benefit here, and that is that it really democratizes the process of software development.
E
If we have these tools that help explain code in a human-interpretable way, humans will be better at writing code. The barrier to entry is lowered, and more people can enter software development without feeling gatekept or pushed aside; we think there is a demonstrable social good being provided here. We would not add a new project just because it's very hyped up right now. In fact, some of the hype is already kind of dying down; you don't really see that...
E
...many people talking about ChatGPT anymore. Our goal was never the hype or the fame or anything. We genuinely believe that this is a tool for software development that makes people more productive. In short, it's like a force multiplier, like a lever. We think that it will make people better, lower the barriers to entry, and bring other social benefits as well.
A
Okay, I think there's a lot more we can talk about, so...
A
I propose we move on. There's a decision you need to make, though: do you want to call a vote right away, or would you prefer to wait and have a bit more conversation around it? You don't have to answer me right now, just think about it, because on the issue you can call a vote, or you can leave it for next week, or perhaps even discuss it once we stop recording.
A
There might be more conversation to be had before the performance call. For the time being, though, let's move on, because we do have two more agenda items. Thank you for both the demo and the Q&A; that was really helpful. Thanks.
C
A
Okay, Jason Grout?
I
Yeah, just briefly: the Jupyter Security Council has been approached by an organization that's interested in funding a bug bounty for Jupyter software. We're not sure if it's actually a good fit for Jupyter, so we're still working through some of the details, but the deadlines are also approaching, so I just wanted to give a heads-up that there might be this opportunity to do a bug bounty program that's funded by an outside entity, for the benefit of the open source program.
I
If the JupyterLab subproject were enrolled as part of this bug bounty, it would involve some work from people in JupyterLab to identify which specific packages in JupyterLab to include.
I
So we don't know for sure if this program can accommodate the Jupyter packages, but if it can, I just wanted a quick read: are there people interested in engaging with a six-to-eight-week security bug bounty program with security researchers for JupyterLab packages? Just a quick raise-your-hand in Zoom...
I
...if you think that's interesting and you would be interested in participating in some way, either helping identify packages that you want people to look at, or working with bug reports that have been vetted to some level already and seeing if those are okay.
I
I see one, two, three, four people. Okay, great, that's enough interest that we're happy to continue pursuing the conversation and seeing if we can do something about this. So if it ends up that we do a bug bounty program, I'll post to the JupyterLab Council for an official answer from the JupyterLab subproject, but I'll also mention it in these meetings.
A
Thanks, Jason. Okay, I have two questions. One is, and please feel free to just answer in the chat or in the minutes if you wish: does anyone know if there is a reasonable way to create a LabIcon instance when you have the URL of an SVG, and not the SVG as a base64-encoded string? In the API, all I'm finding is SVGs imported via TypeScript and webpack, but I want to take a kernel spec and use its icon to generate a LabIcon. Everywhere that we're using the kernel spec resources' logos, people are actually creating image tags or React image tags or something like that; they're not using LabIcon.
A
So if anyone knows, please just add a note in there, but we can move on from it. The other question: I opened a PR last year and we had a bunch of conversation around it. (Yeah, I could fetch the content; I just don't want to do that, that's inelegant, but yes.) The PR was to create a way to swap out the service manager, and there were a few things about it that warranted further discussion.
A
One was a security conversation that I still think is resolvable: do we open up new vectors? Is this a problem or not? But the other was just a general question: do we even want this? I'm happy to close my PR if we think that having a special type of Jupyter front-end plugin that specifically allows you to replace the service manager is not a good pattern and not something we should pursue. Because of this PR, I put it on the 4.0 milestone, and I just want to get rid of it if we don't want this feature. I wanted to know if anyone had any thoughts on it. I think Nick added something late last year about how maybe what we actually want is some way to swap out things in the existing service manager, instead of writing a plugin.
A
There are other approaches here, so I don't know. Any thoughts? Let me find the PR to link as well.
K
Looking at it, I can summarize quickly. We've overloaded the service manager like two or three times, in various settings, and it was only for behaviors of the sub-pieces, like where do I get kernel specs, who can turn on kernels, and the stuff that's already parameterized, like fetch.
K
So I think it's good to be able to rely on it being there, and I think it's bad that you can get multiple copies of the service manager from outside. That creates problems, because if some extension creates its own service manager that's not configured with the upstream one, any customization doesn't go all the way down. More classes of plugins are not going to help; yeah, I don't know. And then, specifically in the case of JupyterLite, it wouldn't help us; we'd still need a custom application.
A
Okay, so the premise of an alternative kind of plugin is that app.serviceManager will always be there for any plugin; no plugin code would have to be modified. But realistically, if even an application that's a literal one-to-one clone of JupyterLab won't benefit from this, I can't think of a better use case than that. So if this doesn't fix your problem, it probably doesn't fix anyone's problem, and I'm happy to close the PR.
A
K
Okay, yeah. But in the meantime, if we reopen something about making kernel spec discovery pluggable and making kernel instantiation pluggable, that would actually help JupyterLite a lot.
A
The alternative version of the kernel manager would just support those same endpoints, because it'd all be behind promises, and you can do whatever you want with it when it's client-side. But when it's server-side, presumably that's something we should support in the Jupyter Server REST API, to say, "Hey, I want to programmatically generate a kernel spec; here it is," something like that. But I don't know if that falls within JupyterLab's slice of the stack; I think it's a bit lower.
A
K
Yeah, I mean, from the front-end perspective it looks just like what we have for registering CodeMirror modes, which I haven't even looked at how that's done for now. But this is a place where you could get CodeMirror modes that's not the disk directory, a CodeMirror on disk with node_modules, and just some way to distinguish between them; the same exact pattern. Okay, and let's not conflate kernel spec providing and kernel provisioning too closely.
A
Okay, this sounds like a longer conversation than we have time for, so let's move to Mike's point.
C
I'm happy to discuss this further afterwards.
A
Okay, I mean, have we talked about a 3.7?
A
Can we... okay, anyway. You know what, I think this is probably a good spot to stop recording, because...