From YouTube: June 6th, 2020 Jupyter RTC Community Meeting
Description
Recording of the Jupyter Real Time Collaboration public meeting. Notes are available as well https://github.com/jupyterlab/rtc/issues/3#issuecomment-644282106
Meetings are held biweekly: https://hackmd.io/UbnBH58hS8itoWgfiWT77A
A
My background is: after physics I was working in an investment bank for a couple of years, I'm now in cryptocurrency, and my interest in Jupyter is that I'm trying to essentially take a lot of the high-performance software and HPC technology that's being used in science, merge the worlds of science and finance, and encourage technology transfer between particle physicists and quants at banks and hedge funds. So the thing I'm doing right now is that I've been spending the past months trying to go through the Hong Kong bureaucracy.
A
Yeah, I mean, I think one of the things I'd like to do, a lot of the stuff that I've been working on, would probably be best discussed offline, so I'm actually trying to get active on some of the chat groups and the offline forums so that we can actually start going through what can be done.
A
But basically, what I'm actually trying to do with the grant is to make it as open as possible. I've written the grant in such a way that, whatever the need from the community is, I can convince the funders that this is what should be done. That was part of why it took so long: I wanted to make it so that I wasn't locked into a particular set of technologies.
D
Hi everyone, I'm Zhehan, and I will be a PhD student at the University of Michigan. My research interest is in HCI, and my future advisor has been in a previous meeting. We have built some systems and UX designs for Jupyter Notebook, and now we're interested in authorization and access control in JupyterLab, so I'm here to see what the current situation is like, and whether there is anything we can help develop or design.
D
Yeah, we had a plugin built for Jupyter Notebook, which is sort of a real-time collaboration system, and we have a paper published on it (it's actually in last week's meeting notes), but it's more of a demo type of thing. That research topic was more about enhancing the communication part of real-time collaboration settings, and now we're interested more in the access-control part, and we're looking forward to doing it on JupyterLab.
D
I just started reading some of the notes in the issues, and I noticed that some of the use cases provided there were very interesting. It's basically what we're trying to look at or solve, stuff like that, but I haven't gotten to the implementation level of things.
H
And then the first rule of RTC club is you have to talk. Hi, I'm Nick. I work at Georgia Tech; I write code, and have been for however long, I don't know. I'm interested just because I've built two different real-time collaboration back ends and front ends for the Jupyter stack, and I'm interested to see if there are any insights I can share from having been down that painful road, but also to see what the goodies are.
B
Sweet, thanks, Nick. And I was just thinking, to go over the format of the meeting a little bit more: I'm sort of just cribbing this from how JupyterLab runs their meetings. If you have anything you'd like to discuss, it's open for anyone to suggest topics, so if you'd like to have a longer discussion during this meeting, feel free to put an item down on the agenda, and we'll kind of just go through those.
B
So yeah, I was trying to work on just a little proof of concept to demonstrate the idea of having a node on the server that connects to the Jupyter server and keeps the data model up to date. This is different from how Jupyter, or at least JupyterLab, works currently, because it connects directly to the server itself. So what you'll see here is a little debugger, and it shows all the tables that we have in our data store; they start out empty. And here, this is just a little UI.
B
It just has a few cells in it, so I can load this notebook, and then this is just a very rough UI to let us start to look at whether this is working or not. It's trying to show you the outline of the notebook, the different cells, etc. And if we look at how this works, for example: there are all these tables in the data store, and they're loaded, or filled in, by the server, which talks to the Jupyter server.
B
So here we can see there's one item in the contents table for this notebook file, and it points to a content ID that's in the notebooks table. So if we look in the notebooks table, we'll see, hey, there's a notebook with a certain ID, and it has a number of cells, which have IDs. So I can look in the cells table and I can see each of these cells.
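The table structure in this walkthrough can be sketched as a few normalized maps. This is only an illustrative sketch in plain TypeScript, not the real Lumino datastore API; the table, record, and field names here are all hypothetical.

```typescript
// Normalized tables: contents points into notebooks, which points into
// cells, mirroring the contents -> notebook ID -> cell IDs walkthrough.

interface ContentsRecord { path: string; contentId: string }
interface NotebookRecord { id: string; cellIds: string[] }
interface CellRecord { id: string; source: string; metadata: Record<string, unknown> }

const contents = new Map<string, ContentsRecord>();
const notebooks = new Map<string, NotebookRecord>();
const cells = new Map<string, CellRecord>();

// Populate the tables the way the server-side node might after reading a file.
cells.set("c1", { id: "c1", source: "print('hi')", metadata: {} });
cells.set("c2", { id: "c2", source: "1 + 1", metadata: {} });
notebooks.set("nb1", { id: "nb1", cellIds: ["c1", "c2"] });
contents.set("Untitled.ipynb", { path: "Untitled.ipynb", contentId: "nb1" });

// Resolve a file path to its ordered cell sources by following the IDs.
function cellSources(path: string): string[] {
  const entry = contents.get(path);
  if (!entry) return [];
  const nb = notebooks.get(entry.contentId);
  if (!nb) return [];
  return nb.cellIds.map((id) => cells.get(id)?.source ?? "");
}
```

Following IDs through tables like this, instead of nesting cells inside the notebook object, is what lets each record be updated and synchronized on its own.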
B
They have their metadata and their execution, and that's in an executions table. So yeah, this is still very preliminary; I don't have most of it, just a rough end-to-end kind of example here to illustrate the proposed design, I guess. So maybe I'll stop here. I imagine we're all at different levels of comfort with this kind of model, so maybe I'll just open it up for questions or feedback from others at this point.
E
One question theme for us to think about (and mostly let me describe this for the broader audience here): these tables and their schema are in the Lumino datastore in the JupyterLab repo, and we support the following field types in those tables. We support scalars, maps, lists, and text, and the scalars are just normal types, so numbers, strings, etc., that are not collaborative internally.
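The four field kinds can be sketched roughly as follows. This is plain TypeScript for illustration, not the actual `@lumino/datastore` `Fields` API, and the cell schema shown is hypothetical.

```typescript
// Scalar fields are plain last-writer-wins values; map, list, and text
// fields are the ones that get merged collaboratively.

type FieldSpec =
  | { kind: "scalar"; value: number | string | boolean }
  | { kind: "map"; value: Record<string, unknown> }
  | { kind: "list"; value: unknown[] }
  | { kind: "text"; value: string };

type Schema = Record<string, FieldSpec["kind"]>;

// A hypothetical cells-table schema, loosely mirroring the demo above.
const cellSchema: Schema = {
  source: "text",      // collaboratively edited, character by character
  metadata: "map",     // merged per key
  outputs: "list",     // merged per item
  cellType: "scalar",  // last writer wins; no fine-grained merging needed
};

// Only the non-scalar fields need CRDT-style merge machinery.
function collaborativeFields(schema: Schema): string[] {
  return Object.keys(schema).filter((k) => schema[k] !== "scalar");
}
```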
E
Are there performance issues that we need to think about here, outside of the traditional notebook text-editing type of use, for example with sessions, or with contents, or the other parts of the API that we've never thought about modeling with this data store before? I'm wondering how you see them.
B
I think there could definitely be performance issues in a lot of places. I guess the way I've been thinking about it so far is, for things that aren't collaborative, where two people wouldn't need to be editing at the same time... I don't know if I can find a good example, but like a kernel spec file, right? That doesn't really need to be collaborative; you're not both going to be editing that. Or, for example, other kinds of compound fields.
B
Where you don't really need individual edits on each, you can kind of group those as one field. But yeah, there are definitely some trade-offs. So for things that you wouldn't both want to edit character by character, I've just been using string instead of text. I don't know if that's what you're getting at or not.
A
Because if you do that, then what happens in a lot of cases is that you want to do as much processing on the backend as you can. So the idea would be: if you have a database with five million fields, you wouldn't actually do the processing in Jupyter; you tell the database, okay, I want to process this stuff.
E
That gets around the scaling problem as the number of records grows, and I see what you're saying, but each field in each record still maintains the full history of its changes, and all those changes are needed by any process that wants to look at the current state of that field. So yeah, absolutely, we would be storing these transactions in some sort of scalable database, whatever that means for the current situation.
B
I have some questions about the super node foundation; I haven't looked at it too closely. Just as a summary: you're currently using, I guess, the JupyterLab services package, but are you using any of the other packages to process the responses from the server, or is it just super trivial to react to the server messages?
H
So what I mean is, that's going to go away tomorrow. It's not guaranteed to be running in the same place as where your kernel is actually running, whereas an RTC kernel could be guaranteed to run in the place where you're expecting it to actually be. And also, maintaining kernel environments is a better-understood problem than notebook server environments.
B
I saw your work on the debugger stuff with the xeus kernel, so I think I can kind of see where you're coming from, if I'm understanding that right. I hadn't thought about that; I think about it at a different level. Just throwing this out there: in that future world where this existed, you wouldn't have support for comms; this would give you support for kernels. This would be more primitive than a Jupyter client app that connects to your kernels, right?
E
The other thing is, honestly, not just variable compute but significant compute: having a process adjacent to the Jupyter server that's running a large number of CPU- and RAM-intensive operations changes the profile of deploying Jupyter, even in a JupyterHub context. And from the network-protocol perspective in this case, I'm not sure that comms... yeah, I agree, it simplifies certain things, but it may complicate other things, so I wasn't thinking of this.
H
No, I'm good. I mean, just the experience from the aforementioned PR, which was not actually on the debugger (to anybody that's working on the debugger: I wasn't, and I'm not going to do any annoying stuff), it's on the language server protocol. A language server looks a lot like a kernel process, except it's different: it's got a different life cycle.
H
You just talk to it, and I think that's the level of robustness that we need. Extension authors (core authors, God bless you guys), but as extension authors, they're going to need something that's simple to interact with, and I'm not sure that having to deal with the guts of a custom WebSocket is going to really make them happy. I don't know; maybe the datastore takes care of all that and extension authors don't care at all. But yeah, any thoughts? So yes, please, Peter.
I
Sorry, I just wanted to ask if you had any thoughts about whether such a kernel should be per server, per collaborative object, per document, or whether this should be just for the super node, or exactly how you were envisioning this, or whether it was just kind of a network protocol.
H
Protocol... I mean, I'm working on, you know, a whole other topic, but in this case: I trust the browser as far as I can throw it, and so in an authenticated environment I would trust my kernel way the hell more than my browser.
H
You'd be one per user, whoever you are representing yourself as. So let's say we were on this call: I had a JupyterLab open, and I connect to this call as Nick, the unauthorized user of a Zoom web client, and I had to hack the URL to get here, and whatever, right? You don't trust me; I am nobody. Maybe you can trust me by the things that I say in the notebooks that I write. But then, in the same lab, I may want to be working a back channel with my people in my company.
B
I think one thing to note is, at least the way I'm seeing this, this wouldn't require JupyterLab. Requiring the JupyterLab comms-and-kernel mechanism defeats some of the purpose here, because the hope is that it would provide these things outside of the Jupyter environment as well, for other front ends.
E
Well, I mean, a comm is a Jupyter front-end abstraction, so that is not... I do understand it: if we were trying to build this for arbitrary web applications that have nothing to do with Jupyter, that's a different story, but here I think the scope is Jupyter, and comms are a part of that. I don't think that's the only concern, but yeah.
E
I'm going to continue on that. The other thing that I've started to work on: the performance characteristics of the different fields are highly dependent on how we generate the underlying IDs. These CRDT algorithms rely on totally ordered IDs, represented as metadata on the fields, to track the history of the objects across users.
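A minimal sketch of what "totally ordered IDs" means here, using fractional positions (arrays of integers compared lexicographically). Real CRDT ID schemes, Lumino's included, also encode site and clock information to break ties between concurrent edits; this only demonstrates the ordering property that ID generation has to preserve.

```typescript
// A position is a sequence of digits in [0, BASE); a strict prefix sorts first.
type Pos = number[];
const BASE = 1000;

// Compare two positions lexicographically.
function comparePos(a: Pos, b: Pos): number {
  const n = Math.min(a.length, b.length);
  for (let i = 0; i < n; i++) {
    if (a[i] !== b[i]) return a[i] - b[i];
  }
  return a.length - b.length;
}

// Allocate a position strictly between `lower` and `upper`
// (precondition: comparePos(lower, upper) < 0).
function between(lower: Pos, upper: Pos): Pos {
  const result: Pos = [];
  for (let i = 0; ; i++) {
    const lo = lower[i] ?? 0;
    const hi = upper[i] ?? BASE;
    if (hi - lo > 1) {
      // Enough room at this digit: split the gap and stop.
      result.push(lo + Math.ceil((hi - lo) / 2));
      return result;
    }
    // No room yet: keep the lower digit and descend one level deeper.
    result.push(lo);
  }
}
```

Because an ID can always be allocated between any two existing IDs, inserts never have to renumber their neighbors, which is what makes concurrent edits mergeable.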
E
So one of the things we're going to need to do to change the performance characteristics is change how we're doing ID generation. To do that in a robust way we need a test suite, so I'm adding tests to the ID-generation code paths, basically trying to get the test suite and the performance benchmarks there as a baseline.
E
And this is really relevant when you're loading the initial state of a document: you have to basically replay all the transactions. Yjs has done a lot of work on a binary packing format for transactions, and we just haven't looked at it yet, but I don't think that part is tricky from an algorithmic perspective; it's more about leveraging something other than verbose JSON data structures for transactions. I think that will get us most of the way there.
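A toy illustration of why a packed encoding beats verbose JSON for a transaction log. This is not Yjs's actual binary format, just a size comparison of the same operations encoded two ways, with hypothetical field names.

```typescript
// One hundred single-character inserts into the same record.
interface InsertOp { table: string; record: string; index: number; text: string }

const ops: InsertOp[] = Array.from({ length: 100 }, (_, i) => ({
  table: "cells",
  record: "cell-1",
  index: i,
  text: "x",
}));

// Verbose: every key name is repeated in every operation.
const verbose = JSON.stringify(ops);

// Packed: state the shared fields once, then only the varying data.
const packed = JSON.stringify({
  table: "cells",
  record: "cell-1",
  ops: ops.map((o) => [o.index, o.text]),
});
```

A real binary format goes further (varints, no quoting at all), but even this simple factoring removes most of the redundancy.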
I
If I could just add something there, in terms of loading a document: there are ways to add checkpointing logic so that you wouldn't need to apply the full stack of history if this is a document with a lot of history. There are some ways to do it. There is a synchronization step that's needed to ensure that any people who are having network issues as we create a checkpoint are not left out, but that's a solvable problem.
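The checkpointing idea can be sketched like this, with hypothetical transaction and checkpoint shapes: persist a snapshot at some version, then replay only the transactions after it.

```typescript
// A transaction carries a version number and a state-transforming function;
// a checkpoint is a materialized state at a known version.
interface Transaction { version: number; apply: (state: string) => string }
interface Checkpoint { version: number; state: string }

// Load the document state, replaying from the checkpoint when one exists
// instead of from the very beginning of the log.
function loadState(log: Transaction[], checkpoint?: Checkpoint): string {
  let state = checkpoint?.state ?? "";
  const from = checkpoint?.version ?? 0;
  for (const tx of log) {
    if (tx.version > from) state = tx.apply(state);
  }
  return state;
}
```

The synchronization caveat mentioned above shows up here as the need to pick a checkpoint version that every client has already acknowledged before discarding anything older.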
E
That's a really good point, Vidar. Some of where this will be interesting (and I think these challenges are independent of any particular CRDT algorithm implementation), where those checkpoints are also relevant, is when you double-click on a file: you pull the file from disk, and you have to make some attempt to validate that the state of the file on disk is actually represented in the current transaction history of that document in the datastore. So I think that type of checkpointing matters.
B
I'm curious: that other library is going through some pretty active changes, it seems, in their performance characteristics as well, and I haven't had a chance to look at that at all deeply, but I'm curious whether you had looked at it, or had any insights or thoughts from it.
E
So my fundamental conclusion here is that there is no magical CRDT that has all the right characteristics, flexibility, etc. It's a pretty active area of research, and there's also not a single requirement, in the sense that some CRDT algorithms are good at certain things and bad at others. For example, I think our scalar and map implementations are particularly good right now; our list types are decent, but not as memory-efficient as they could be. What trade-offs you make depends on your usage case.
E
And so if we discover there's a particular implementation of a text CRDT that looks really attractive for large text files, or for small notebook cells, great; I think we'll be able to pivot around that. So I think it's great to see a lot of people exploring these things, and I think that's still needed at this phase.
E
But more concretely: I've not run Lumino vs. Yjs vs. the improved Automerge yet, I've not done that, so I don't have a sense of where the improved Automerge comes out. The Yjs author compared it to Yjs, but you really need to run it all on one machine to get a consistent baseline on that.
E
Yeah, so in JupyterLab, the document manager and document registry have this idea of a model factory, and it allows the extension author of a particular document widget to basically manage the lifecycle of the model that sits underneath the widget, or the view. Right now we have a sort of multi-layered model system, and at one layer we have this object called ModelDB, which is a very
E
biased way of representing, of modeling, data in the front end. In Vidar's earlier work, he implemented an adapter between the Lumino datastore and ModelDB. I don't think that's going to be the ideal way of managing this; I think the models of the two data stores are pretty different, and you end up with so many different layers of model that it's a really confusing thing to follow.
E
The other thing is that we want to be able to build real-time capabilities into JupyterLab without breaking core APIs. In other words, we'd like to be able to do this in extension work that we eventually merge into master, rather than in massive, huge pull requests that break lots of the long-lived APIs. So what I've been starting to do:
E
As
their
data
front-end
data
model,
it
should
be
really
easy
for
the
model
factory
to
provide
models
according
to
that,
rather
than
using
model,
DB
I
think
it's
actually
a
teeny
amount
of
work
and
we'll
be
able
to
do
this
for
three
dono
without
I'm,
hoping
without
breaking
any
api's,
just
additively
I'm
not
proposing
that
we
for
three
don't
get
rid
of
model
DB
more
just
that
we
make
it
optional
for
extension,
authors.
So
these
are
they
trying
to
make
the
RTC
work
easier
to
do
in
a
more
loosely
coupled
way.
I.
I
But the other kind of use case you would have is: hey, I'm an admin, I'm deploying this for our system, and we want all the different extension systems to preferably use this kind of model, because that's what suits our plan. Are those two compatible? What would be the intermeshing of those two cases, or in spirit?
E
I don't think they're compatible, in the sense that there are extension authors that are already happily building extensions using their favorite data-modeling system, and I don't think there's any way for all extension authors... so, for example, maybe your document is backed by a GraphQL endpoint. I don't see any way we can reasonably expect extension authors, or admins for that matter, to say "I want this."
I
Right, but what I'm trying to ask is: say we have the RTC system, and somebody writes an extension that uses just a GraphQL thing as the backend instead. If they wanted RTC, they would have to build their own RTC system on top of that, yeah?
E
Absolutely. Some of what I'm observing is that different extensions have different needs, to support different user stories and use cases, and for some of them real-time collaboration is not relevant and they can use GraphQL. There may be other reasons that they want to use GraphQL. Here's where the big shift is, so, right now:
E
I
think
I'm
not
proposing
we
get
rid
of
that
code
pass,
but
that
we
allow
extensions
if
they
want
to
say
I,
don't
need
that
one
I
I
in
all
deployments.
My
extension
needs
to
use
this
type
of
data
store,
rather
than
one
that's
provided
by
Jupiter
lab
services
longer
term.
We
could
think
about.
Okay,
should
you
know,
should
mama
DB
itself
be
replaced
by
the
data
store?
It's
sort
of
a
default
available
thing,
but
I
think
that
would
be
a
longer
term
project.
I
If I understand you correctly, you're saying that if you have a separate data format, a separate data extension for some other thing (I don't know, a movie, or some data cube that you want to manipulate), and you make an extension for interacting with that, then if you have two different extensions that can both act as viewers or editors of it, they will not be able to cooperate on it; they would just use their different data stores to represent the same content side by side.
E
This is one of the questions that I think is important. The document manager caches these context objects that it holds on to, and the context object is templated over the underlying type of the model, so some of this is already actually in place. I'm hoping we could still have multiple independent front ends.
I
The question is how much data one session should encompass: should it be the full UI state, or should it be per document, or at some other kind of logical level? And the other question is what kind of lifetime we expect those sessions to have. The reason I want to ask this is that these questions intermesh closely with some of the other kinds of use cases you want to support, and with what kind of user experience you could create on top of the RTC.
I
If that's the gist of the default, then the question is: can I share a part of this session with somebody else? For example, can I share a specific notebook with somebody else without them gaining access to all the other stuff I have in my session? That's probably possible, but the question is just how much work that would be. And of course, if you say we should put the collaboration, the collaborative session, on a per-document basis, then you can consider:
I
If you have a state-backed facility for the entire lifetime of everything, then the performance matters for, you know, everything, I don't know. If you say that this is something I turn on for an hour, so I can share this document with a class or with a friend, then if at the end the state is starting to become a bit sluggish or large, that's not as much of an issue.
E
Yeah, and also I'm less worried about there being a large number of tables and a large number of records in those tables; I'm more worried about maintaining long histories for particular fields in particular records. That's where the performance is going to be really... yeah. I mean, I don't even know if there's any CRDT on the planet that we could reasonably expect to handle, for example: I install JupyterLab on my laptop, I start working, and three years from now all I've done is add atoms to that history.
E
That,
or
even
just
blasting
there's
synchronization
questions
here,
but
the
simplest
thing
to
do
is
after
some
period
of
time,
of
no
one
having
a
particular
document
open.
You
lost
the
history
of
that
document
from
the
data
store
tables
and
you
reload
that
state
next
time
someone
opens
a
document
yeah.
B
Yeah, I'm just saying it's probably more complicated to collapse the history for a particular model; like, what if it's a transaction spanning multiple documents? I'm just thinking the simplest thing would be: hey, take all history older than ten minutes or half an hour and collapse it to one transaction, or something like that.
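That collapse-old-history idea might look roughly like this, with hypothetical transaction shapes (a real CRDT log would squash per-field state rather than opaque edits):

```typescript
// A transaction carries a timestamp and a state-transforming function.
interface Tx { time: number; apply: (s: string) => string }

// Replace every transaction at or before `cutoff` with a single synthetic
// transaction whose effect is the replayed state at the cutoff.
function squash(log: Tx[], cutoff: number): Tx[] {
  const old = log.filter((t) => t.time <= cutoff);
  const recent = log.filter((t) => t.time > cutoff);
  if (old.length === 0) return recent;
  const baseline = old.reduce((s, t) => t.apply(s), "");
  return [{ time: cutoff, apply: () => baseline }, ...recent];
}
```

The resulting log replays to the same state while holding only one entry for everything older than the cutoff, at the cost of losing the ability to merge edits that branched off before it.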
I
Yes, and I'm saying that the smaller the amount of things that can be worked on, the easier it is to avoid debilitating problems. That doesn't mean we shouldn't try to tackle the big one; I would just argue for starting smaller and then scaling it up as needed. But that might be just me being overly cautious, I don't know.
E
Totally agree, and honestly I think the core algorithms of CRDTs recede pretty quickly into the background as we tackle these problems. I think these questions are going to be independent of which CRDT algorithm we've implemented, or whether we'd use some other library. All this stuff is what's going to be the actual hard part, and where we need a high degree of customizability, because we're making all these decisions and trade-offs and so on.
B
I think the permissioning thing is even more complicated. If we do end up with role-based permissioning, or very granular permissions, I'm not sure at all how that will work with this system, and how you verify: hey, I changed this document, am I allowed to change this document? Who verifies that? Does that have to be in the relay server?
I
Oh
I
said
other
way
to
to
organize
this,
which
you
could
be
have
one
session.
That
is
like
the
UI
accession
order
to
fully
UX
session,
that
only
tracks
stuff
down,
store,
dude
dark
which
documents
are
open
and
there
it
just
links
to
another
RTC
session.
That
is
a
specific
RTC
session
for
the
document.
B
So you would have one session per document, and they're different, right? You might say: hey, you can execute this cell and can't execute that one, or you can edit this cell but can't edit that cell. I guess my concern is that we might want the permissions to be so granular that it might not help to split the tables by who has permissions.
I
No, I wasn't thinking about the permission issues; I was thinking more about lifetime and syncing issues, but that's also a good point. What I was thinking is that you have one RTC session per document, because normally documents are the stuff that gets edited most heavily, right?
I
Then you have a higher number of points where you can collapse the state or reset the state as needed. Then you could have one RTC session that is tied to the global UI state, for example; that would then just have the lifetime of whatever browser tab is open, or, I guess, that user session, and that's going to be collapsed whenever the page is refreshed, or something like that.
E
But a bigger comment on permissions: Jupyter allows arbitrary code execution in the front end and back end, and so if someone can do that, finer-grained permissions are pretty meaningless. That's not to say we can't do some notion of permissions, but if the user can run any code, they can run all code, and they can do all things.
B
If
it
can
run
any
code,
but
sorry
well,
just
say
what,
if
the
curls
in
to
contain
you
have
a
remote
kernel
or
something
in
a
container
and
I
mean
you
might
not
be
able
to
access
just
because
you
can
run
something
in
the
kernel.
Doesn't
necessary,
mean
you
could
change
documents
right
or
mess
with
your
host
system?
Er.
I
Please answer afterwards, but there was a discussion previously about having the super node be responsible for handling the execute requests to the server, and at that point the permissioning kind of gets worked in. And also, if the super node is writing state to the notebook document model, then that basically means that everybody who has permission to write to the RTC session also gets permission to write to the notebook state; it's not possible to hide something.
B
Yeah, I think there's a bunch of different permission questions: how to hide things, how to stop edits, and also how to stop you from being allowed to do actions on the server, and they probably all have to be handled separately. I mean, one thought is: if you're doing an edit, maybe when you send a transaction you have a user ID, and the server could verify that somehow the transaction is linked to you, or your user ID, or something like that.
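A hedged sketch of that last idea: a relay server accepting a transaction only if its claimed user ID matches the authenticated connection and that user has write permission on the target document. All names here are hypothetical; no such Jupyter API exists today.

```typescript
// A transaction claims an author and targets a document.
interface Transaction { userId: string; documentId: string }

// permissions: documentId -> set of user IDs allowed to write to it.
function acceptTransaction(
  tx: Transaction,
  authenticatedUserId: string,
  permissions: Map<string, Set<string>>,
): boolean {
  // The claimed author must match the authenticated connection...
  if (tx.userId !== authenticatedUserId) return false;
  // ...and must hold write permission on the document being edited.
  return permissions.get(tx.documentId)?.has(tx.userId) ?? false;
}
```

Putting this check in the relay server, rather than in clients, is what keeps a modified client from spoofing another user's edits.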