Description
🙏 Thank you for watching! Hit 👍 and subscribe 🚩 to support this work
🌱Join the Community🌱
on Discord https://discord.gg/uM4ZWDjNfK
or say hello on Telegram https://t.me/tecommons
Join the conversation https://forum.tecommons.org/
Follow us on Twitter: https://twitter.com/tecmns
Learn more http://tecommons.org/
A
Welcome to the rewards work group weekly sync for December 15th. A lot has been happening these last few days, as always I guess, but the number of discussion points I have on the agenda for today is fewer than it has been the last few weeks.
A
Yes, great. And if you would like to bring up any other discussion points, just let me know during the status updates and we'll take it from there. So I'm beginning with Mitch, who is not here, for the first quant preparations and execution. Let's skip him for now and see if he appears; otherwise we can maybe make some sort of status update together, based on what we have heard.
C
Just really quickly, on what Zeptimus and I have been up to: we've been actively recruiting some quantifiers. We're shooting for a pool of 30, and we're roughly at, I think, 19 or 20 now. We brought it up on Communitas, and we've also got some of the onboarding docs and stuff together too.
C
Nope, nothing else; Zeptimus is working through a couple of points to get screenshots in for our documentation and stuff.
C
Yep. And Mitch, I believe, said the trial quant is going to be smaller, but the pool is to prepare for the first real one after it.
E
Sorry, my internet died here for a bit. First thing, prep quant: what are you talking about, exactly?
A
We're talking about the first prep quant, yeah. We were discussing the quant pool: whether you were planning to use these approximately 30 people for the trial quant, or if that is mostly preparation for the first real quant.
E
Like I said, I tagged everyone in general last week, and I said in the community call that we have the thread for calling out quantifiers. So just come in there and signal that you want to participate. In that thread I pinned the post outlining the reward system v2. So if people want to participate, they just come into the thread and signal.
A
Nice. Are there any more updates about this topic?
A
Maybe we can call it done, or, to be precise, it's most likely 90% done, and we will be able to do the remaining 10% when we actually get started for real, because then we will unearth or uncover some more tiny details that we would maybe like to take note of, etc.
E
Okay, cool. So yeah, just the last point, which MS may or may not have mentioned: we have 18 right now, 16 or 18, I can't remember, and we're shooting for 30. So I'll probably do another call-out in the community call, and again in the Discord channel, and then we'll try to get to that number.
A
Let's move on to the meeting tracker bot and dive in.
D
Well, right now the meeting tracker report is that there are some questions that need to be asked, and I was hoping that I could talk to someone from SourceCred, but at the time we scheduled the call they weren't there. So I'll probably have that sometime later this week. But I did get a plug-in, sort of, to work, so let me just quickly share.
D
All right, so... oh, wait a minute. Yeah, so this is a meeting-tracker external plugin mock-up that we made. It's kind of simple right now, but it can be extended further. It just follows the semantics of "member attended meeting", and I added a few tests.
D
I think I kept the three calls in the data, and Zeptimus was in all of them, and another member was in one or two of them, so we have the cred minted for these people. But there's a small issue, which is that if there is a smaller number of people in a call, you get more cred. That can be fixed; I can fix that.
D
So, do we want some form of weights? Another specification question is: do we treat all calls as equal, or do we say that certain calls earn you more credit? For example, coming to the community call gives you slightly more credit, something of the sort, or slightly less. Do we want that form of dynamics to exist?
A
If we would like, over time, to incentivize meeting attendance more, then we can adjust that upwards. But I also think what was being asked is that right now you don't get the same amount of credit if it's a small meeting, compared to a meeting with many attendants. Would we like to keep that dynamic somehow, and for what reason? My gut feeling says it's difficult to find a clear way of defining such a rule.
A
Is it worth more, or less, to join a meeting with few participants, and for whichever reason would that be so? I think you should get the same amount of cred whatever the number of participants in the meeting. That is my take.
D
So, okay, I agree in some ways that we shouldn't keep it dynamic, dependent on the number of members in the call. So I'm wondering what we want that constant value to be, just for now, because I'd have to tweak the values to make it work. What exactly do we want the amount of credit a person gets for attending a meeting to be? We'll change this over time, but for now.
D
Yeah, we can change it as a parameter, but I'm asking for now, just for testing purposes: is there a value we can decide on? Because I'd have to see how to make it work in a way that doesn't account for the number of participants and just distributes cred without thinking about that.
A
Do you then set the weight on the edge between the nodes, on the attending edge, instead of setting a weight on the meeting node? Is that how you would do it? So now it becomes a bit technical.
D
So right now it's on the edge, on the forward edge. If someone attended a meeting they get, by default, a weight of one. And is that with respect to how many people attended that meeting?
D
Yeah. So one thing we could do is make a custom weight and make it dependent on the number of people in the meeting, and then we just cancel out that factor and rig the values to come to something like 0.5 or 1, whatever we want to give out. So for now maybe I can keep it a constant, something like 0.5, and then it can be dynamic later; we can choose whatever we want it to be.
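The normalization described here can be sketched in a few lines. This is a minimal illustration, not the plugin's actual code: it assumes a hypothetical model where the cred flowing to a meeting is split evenly across its attendance edges, so scaling each edge weight by the participant count cancels the split and leaves a flat per-attendee amount (0.5 here, as discussed).

```python
# Hedged sketch: cancel out per-meeting dilution by scaling the
# attendance-edge weight with the participant count, so every attendee
# ends up with the same constant cred per meeting attended.

TARGET_CRED_PER_ATTENDEE = 0.5  # the constant discussed; adjustable later

def attendance_edge_weight(num_participants: int) -> float:
    """Weight to put on each member->meeting attendance edge.

    If the meeting's cred is split evenly across its incoming edges,
    multiplying each edge by the participant count cancels that split.
    """
    if num_participants < 1:
        raise ValueError("a meeting needs at least one participant")
    return TARGET_CRED_PER_ATTENDEE * num_participants

def cred_per_attendee(num_participants: int) -> float:
    """Cred each attendee receives after the even split."""
    return attendance_edge_weight(num_participants) / num_participants
```

With this shape, a 3-person call and a 25-person call both pay out the same per-attendee amount, which is the behavior the group converges on above.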
A
I think so, because it's only in the context of all the other parameters that it is possible to have an opinion on what the weight should be; it depends on all the other parameters for the other plugins.
D
Yeah, definitely, indeed.
D
Maybe we just fork an instance and try to see what happens. Yeah, so that's pretty much the catch-up for this week, and I might have something next week, after talking to them and seeing what the better ways to do this are.
D
So this is something I think we were talking about in the last call, maybe: it can be anything right now. Right now it just logs when you're joining and when you're leaving, and the difference between the timestamps is the amount of time you were in the call. That gets logged somewhere, and then SourceCred picks it up, and the plugin calculates a graph and inserts it into SourceCred. So yeah.
D
One thing we could do is use time as a weight, but I'm not sure if that's something we want to do, because it incentivizes people to just stay in calls and not do anything.
A
I think we started out with... we said that there needed to be some sort of threshold, and we said you should attend at least 60% of the meeting, more than half. Then that turned out to be technically complicated with the old method we planned, if I remember correctly, and we sort of said that maybe 10 minutes is okay, just as a base thing. But as I understand it now, we could do the percentage again if we like.
D
Yeah, totally. Right now the current bot just picks up when you're joining and leaving, so it can totally measure the amount of time you're in the meeting. Do we know the length of the meeting, though?
D
No, we don't. We can get that information based on when the event in Discord ends; there's an option for one of the administrators to end the event, so we could measure that. But we can keep it as a default of like one hour or something, because most of our meetings are one hour.
D
The reason I chose 10 minutes is because the Bankless DAO has a bot that does this for POAPs, and they chose 10 minutes, so we might as well go with 10 minutes.
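The two attendance rules under discussion, the flat 10-minute minimum and the percentage-of-meeting threshold, can both be computed from the join/leave timestamps the bot already logs. The function name, parameters, and the one-hour default below are illustrative assumptions, not the bot's actual code:

```python
from datetime import datetime, timedelta
from typing import Optional

# Assumption from the discussion: meetings default to one hour.
DEFAULT_MEETING_LENGTH = timedelta(hours=1)

def attended_long_enough(joined: datetime, left: datetime,
                         meeting_length: timedelta = DEFAULT_MEETING_LENGTH,
                         min_minutes: int = 10,
                         min_fraction: Optional[float] = None) -> bool:
    """True if the stay between join/leave timestamps counts as attendance.

    Defaults to the flat 10-minute rule (the Bankless DAO convention
    mentioned above); passing min_fraction (e.g. 0.6) switches to the
    percentage-of-meeting rule instead.
    """
    stay = left - joined
    if min_fraction is not None:
        return stay >= meeting_length * min_fraction
    return stay >= timedelta(minutes=min_minutes)
```

Keeping the rule as a single function like this makes it easy to swap between the two thresholds later without touching the logging side.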
D
Previously it was connected to the Google Calendar, but then there was a reliance on the events inside the Discord server. So now it's probably going to focus on those, because there are a few issues with using a calendar that are resolved by using the events.
D
No, right now it's only manual, but eventually that's something we can do. I'm not sure if we want to do that, but yeah, that's certainly something that could be done.
A
There were some technical considerations there. First we planned to track the Google Calendar, but then there's the issue of how we know which channel, for instance, people are meeting in. How do we convey that information? That would require all the meeting facilitators, before they start a meeting, to somehow announce to the bot that we are meeting in this room, and we sort of assume that will not happen.
A
Everyone will definitely not remember to do that every time, so it felt like tracking the Discord events is the foolproof solution: we will track those events 100 percent of the time, instead of having a more loosely coupled tracking where you can track more or less whatever you add yourself to the calendar. But that would require much more of you as a facilitator: to remember, maybe also to start the meeting, and maybe also to stop the meeting tracking, etc.
A
So
it
has
sort
of
a
plus
and
minus.
D
So right now I think it shouldn't be an issue. The current version just assumes that the call is going to last for an hour. It just queries the event list and sees if an event exists. It doesn't care about whether it's started or not; it just checks if it exists, and then it assumes it's going to last for an hour. So yeah, that's how it's working right now; we can keep it that way.
A
Yeah. So for the Pollenbot, Mateo is not here today, but you also have some insight into this. As far as I know, the only thing that needs to be done for the Pollenbot is some configuration to make sure the different parts of the bot interact: the bot actually consists of three different components that have separate GitHub repositories, and they all need to interact using GitHub Actions.
A
We
need
to
give
those
sort
of
access
to
each
other
etc,
and
that
is
not
100
online
yet
and
also
there's
the
the
rebranding
issue
of
not
calling
it
the
the
pollenbot
for
for
us
and
and
making
sure
it
has
a
tec
logo
instead
of
a
pollen
logo,
et
cetera
and
by
value.
I
think
you
are
you
are
on
that
one
aren't
you.
D
Yeah, I'll rebrand it; that's something I'll probably work on today or tomorrow. And I think what's left is basically integrating all these parts together and seeing if it works, which might happen sometime this week if we sync up; we talked about having a sync call.
A
Yeah, so no blockers other than our calendars. What's needed, I think, is an hour or two, a joint work session where we just do the last configurations, et cetera. So let's move on to Praise. Unfortunately, the backend development has been slow this last week as well, same as last week and the week before.
A
I
hope
to
be
able
to
announce
soon
some
more
traction
in
this
area.
So
we
we
we
catch
up
with
the
rest
of
the
rewards
upgrade.
That
is
all
I
would
like
to
announce
for
for
today.
So
I'll
pass
it
to
two
nebs
for
the
praise
front
end.
A
And you might say that the Praise frontend is alive and well, really. Most of the difficult stuff is done, so once the backend catches up, the rest will be done quite quickly.
A
Cool
we'll
spend
the
half
an
hour.
Let's
move
on
to
the
the
main
event,
the
rad,
the
rad,
the
demo
or
maybe
you
would
like
to
say
yeah
yeah.
No,
I
just
I'll
pass
it
to
you.
I'm
not
saying.
B
I just shared my screen. There were some changes from yesterday, because MS found a great solution for the download issue we had, so that's sorted out; big praise there. Give me a second... and here. So, can you see my screen?
B
I tried to describe every step in the notebook as well, so everybody can just read it and follow along with what is happening. Right now I'm running it locally, but it will be on a Binder online, which anybody can just open and then play around with. So first, let me choose the files. Okay, it has to start up... there it is. So you can just choose the files. I made some mock praise and mock SourceCred data, and also a reward board.
B
I'm
pretty
much
data
right
right
now.
Sorry,
I
try
to
print
more
than
usually
just
for
for
this
demo.
B
So
yeah
this
here
at
the
beginning,
you
can
steps
how
many
tokens
you
want
to
distribute
and
how
you
want
to
plan
it.
So
in
this
case
I
did
around
2000
tokens
for
the
price
for
small
script,
500
for
the
quantifiers
and
100
for
the
workboard,
and
you
set
the
how
you
want
the
file
to
be
called
the
output
file,
and
if
you
change
anything
here
and
just
run
it,
it
updates
for
the.
So,
if
you
change
anything,
you
can
run
it
again
and
it
updates
the
graph.
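The parameter cell described above might look roughly like this; the variable names and output filename are illustrative, not the notebook's actual ones:

```python
# Illustrative parameter cell: edit the values and re-run to refresh
# every downstream table and graph in the notebook.

PRAISE_TOKENS = 2000       # tokens distributed to praise receivers
QUANTIFIER_TOKENS = 500    # flat pool split among the quantifiers
REWARD_BOARD_TOKENS = 100  # flat pool split among reward-board members

OUTPUT_FILE = "token_distribution_round_1.csv"  # hypothetical name

TOTAL_TOKENS = PRAISE_TOKENS + QUANTIFIER_TOKENS + REWARD_BOARD_TOKENS
print(f"Distributing {TOTAL_TOKENS} tokens -> {OUTPUT_FILE}")
```

Keeping all the tunable amounts in one top cell is what makes the "change anything and re-run" workflow work.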
B
So
you
can
also
like
play
around
with
it
and
see
and
see
how
it
looks
so
yeah.
Well,
first,
it
just
takes
the
data
and
cleans
it
up
a
bit
and
then
it
combines
it,
and
it
also
prepares
the
the
praised
by
users
for
kind
of
puts
together
all
the
users
that
use
the
the
system
and
adds
up
each
separate
phrase
they
had
and
and
how
much,
how
much
tokens
they
will
receive.
B
It
also
takes
the
raw
price
data
and
sorts
it
by
quantifier.
So
you
have
it.
You
can
see
yeah
which
quantifier
had
which
place
and
how
he
valued
it.
So
you
have
it
all
together,
so
you
can
also
make
analysis
on
that
side
and
yeah,
and
then
you
just
combine
it
and
it
tells
you
how
much
tokens
the
each
user
gets
from
source
grid
and
the
praise.
B
It's
I
I
I
yeah,
I
always
explain
it
first,
so
we
always
explain
it
first,
so
people
can
just
read
it
and
know:
what's
what's
what
it's
all
about
yeah
and
then
we
get
to
the
to
the
break
to
the
brace
analysis.
First,
we
just
show
yeah
how
often
each
praise
each
praise
valuation
was
used.
So
since
we
had
this
choice
of
five
different
valuations
from
zero
to
144
yeah,
how
often
each
one
gets
gets
chosen,
so
you
can
so
you
can
kind
of
see.
B
For
example,
people
are
just
valuing
very,
very
high
or
very
very
low,
and
maybe
that's
kind
of
so
new.
We
should
change
that
or
change
back
to
the
seven
different
numbers.
Whatever
we
we
choose,
then
we
get
a
look
at
the
distribution,
so
we
can
visualize
here
each
which
percentage
of
the
total
price
each
user
got.
B
So,
in
this
case,
we
can
see
it
like
it's
a
pareto
distribution
which
well
it's
the
mock
data
we
created
for
this
is
using
this
since
the
old
phrase,
distribution
ended
up
being
like
that,
but
it
will
just
show
how
it
works
and
sorted
by
by
size,
and
you
can
always
check
out
how
each
which,
in
each
individual,
how
much
you
got.
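The per-user percentage view behind that chart could be computed along these lines. This is a sketch over mock data with assumed field names ("receiver", "score"), not the notebook's actual code:

```python
from collections import defaultdict

def praise_share_by_user(praise_rows):
    """Map each receiver to their percentage of the total praise score,
    sorted largest first: the view behind the Pareto-style chart."""
    totals = defaultdict(float)
    for row in praise_rows:
        totals[row["receiver"]] += row["score"]
    grand_total = sum(totals.values())
    shares = {user: 100 * score / grand_total for user, score in totals.items()}
    return dict(sorted(shares.items(), key=lambda kv: kv[1], reverse=True))

# Tiny mock praise data, echoing the notebook's mock inputs
mock = [
    {"receiver": "ivy", "score": 144},
    {"receiver": "nebs", "score": 21},
    {"receiver": "ivy", "score": 55},
    {"receiver": "mitch", "score": 13},
]
shares = praise_share_by_user(mock)
```

Sorting the shares descending is what makes the Pareto shape visible at a glance.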
B
Then
we
have
this
amazing
graph
that
invented
children
from
the
place
research
group,
which
shows
you
yeah,
where
the
phrase
is
going.
So
here
you
have
the
the
top
give
me
a
second
top
20
praise,
givers
top
25
praise
receivers,
and
you
can
just
yeah
see
each
single
each
single
stream
and
the
rest.
If
you
want
to
change
it,
you
can
just
change
the
number.
B
And
see
more
data,
so
yeah,
and
then
we
continue
with
source
grid.
We
have
the
same
for
source
grid
each
how
what
which
percentage
is
use
of
the
source
grid
got.
B
So
you
can
also
take
a
look
at
it
and
see.
If
there
are,
there
are
some
big
disparities
or
not,
then
here
we
have
the
quantifier
data.
For
now
it's
only
looking
which
percentage
of
the
total
price
each
quantifier
did,
because
we,
since
we
are
assigning,
always
one
quantifier
to
one
user,
it
could
end
up
being
that
some
quantifier
does
more
a
lot
more
praise
than
somebody
else.
So
this
would
kind
of
keep
in
check
that
we
see
if
this
happens
and
of
course
any
any
other,
any
other
yeah.
B
That
can
be
changed,
of
course,
and
then
we
just
prepare
the
final
data
table
which
tells
us
yeah.
It
is
each
user.
How
much
praise
they
got,
how
much
source
grade
they
got
if
they
get
rewarded
because
they
are
a
quantifier
if
they
got
reward
because
they
are
part
of
the
reward
board
and
finally,
which
is
the
total
number
of
tokens
which
then
at
the
end,
we
also
can
visualize,
so
we
can
see
again
each
user
and
how
much
they
got
from
each
side.
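Combining the streams into that final per-user table could be sketched as a plain merge: praise and SourceCred tokens already computed per user, plus flat quantifier and reward-board pools split evenly. All names and the pool sizes below are assumptions for illustration:

```python
def final_distribution(praise, sourcecred, quantifiers, reward_board,
                       quant_pool=500, board_pool=100):
    """Merge every reward stream into one total per user.

    praise / sourcecred: dicts of user -> tokens already computed;
    quantifiers / reward_board: lists of users splitting a flat pool.
    """
    users = set(praise) | set(sourcecred) | set(quantifiers) | set(reward_board)
    quant_share = quant_pool / len(quantifiers) if quantifiers else 0.0
    board_share = board_pool / len(reward_board) if reward_board else 0.0
    table = {}
    for user in users:
        row = {
            "praise": praise.get(user, 0.0),
            "sourcecred": sourcecred.get(user, 0.0),
            "quantifier": quant_share if user in quantifiers else 0.0,
            "reward_board": board_share if user in reward_board else 0.0,
        }
        row["total"] = sum(row.values())
        table[user] = row
    return table
```

Keeping each stream as its own column is also what makes the "toggle one stream off and see the effect" interaction possible in the chart.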
B
The
cool
thing
about
this
is
you
can
remove
something
so,
for
example,
if
we
want
to
get
away
this
quad
reward
just
click
here
and
it
disappears,
and
you
can
see
the
distribution
there
or
you
can
just
get
source
grid
and
put
the
quant
reward.
Oh
you
can
kind
of.
You
can
take
a
look
and
and
see
how
it
affects
everything
and
then,
finally,
you
can
generate
files
to
download
to
download
to
just
click
them
and
and
download
them.
B
So
this
is
just
copy
paste
into
the
dispersed
app
and
it
should
work
once
you
set
the
token
you
want
to
send,
and
this
one
is
the
the
complete
praise
data
that
we
have
that
we
generated
before
with
you
know
the
each
quantifier
switch
modifications
and
what
percentage
each
single
phrase
I
can.
I
can
just
show
it.
Maybe
it's
easier.
Let
me
open
it.
B
Yeah, I tried to put it onto a Binder and drop the link into the rewards channel, but you can download it and run it locally right now, so feel free to do that.
F
Yeah, amazing work, Nugget, looks great. I have one comment and one question: do we have anything that prevents a quantifier from quantifying their own praise?
A
Actually,
we
haven't
taken
note
of
that.
It
was
a
good
pointing
that
out,
but
obviously
yeah.
F
Yeah,
this
is
more
of
a
cultural,
something
that
I
think
we
can
discuss,
but
you
see
all
that
blue
is
ivy
and
and
she's
showing
up
like
that,
because
she's
been
dishing
out
the
praise
for
forum
contributions
for
twitter
contributions
and
all
of
the
praise
from
the
community
calls.
So
we
have
like
that
is
the
making
the
data
dirty
and,
and
now
we're
gonna
start
to
see
mount
manu
too.
With
that
like
large
chunk,
so
I
think
it's
time
to
move
away
from
that
like.
F
I
think
it
was
a
great
idea
to
have
iv
dishing's
praise
and
to
incentivize
people
to
come
to
the
community
call
and
give
like
voice
their
praise
and
know
that
that
was
taken
account
of,
but
I
think
everybody
started
to
write
their
praise
anyways
and
we're
like
grown-ups
to
dish
our
own
praise,
and
maybe
we
should
like
move
towards
that.
So
in
the
future,
we
don't
keep
having
this
discrepancy
of
of
data
of
like
who
did
what?
D
Okay,
go
ahead.
I
have
a
suggestion
for
that,
since
we're
making
the
what
you
might
as
well
like
add
an
extra
command
inside
the
bot,
that's
just
like
for
anyone
in
transparency
or
anyone
working
with
this
data
who's
like
dishing
place
or
calls
there's
a
separate
command
for
them,
and
they
can
mention
from
whom
this
place
is
going
to
who
and
they're
just
typing
it
out.
Now
again,
that's
like
incentivizes
them,
because
I
think
they
were
planning
on
giving
them
some
small
amount
of
cred
or
something
for
doing
that.
A
It
definitely
opens
up
a
door
for
making
the
the
data
really
noisy.
If
we
were
to
allow
many
people
to
praise
on
behalf
of
others,
we
would
also
need
to
track
who
who
was
the?
Who
was
the
the
praise
who
delivered
the
praise
for
for
another
person,
so
that
would
be
another
data
point
that
we
would
like
to
analyze
and
try.
So
actually
it's.
It
is
a
bit.
It
adds
another
layer
of
complexity
and
I
would
really
like
to
keep
the
core
trace
system
a
really
a
simple.
A
Maintaining
you
know
it's
a
simple
heritage.
A
But,
but
using
this
this,
this
dashboard
imagine
using
this
bi-weekly
over
over
a
long
period
of
time,
the
the
amount
of
insight
that
we
will
gain
by
doing
this
into
a
a
a
ritual,
almost
like
a
bi-weekly
ritual
where
we
go
through
this
and
have
the
insights.
A
The
discussion
hopefully
have
insights,
tweak
the
parameters,
the
tweak,
the
analysis
and
then
for
each
each
period
coming
to
greater
and
greater
conclusions
and
and
yeah
what
one
thing
that
struck
me
is
that
we
really
need
to
make
it
part
of
the
process
that
we
back
up
or
or
store
the
final
distribution
for
for
each
period,
because
this
will
be
a
the
dashboard
will
be
a
living
document.
A
So
if
you,
if
you
open
up
the
dashboard
three
months
later,
it
will
have
it
maybe
other
parameters
and
other
analysis
that
wasn't
there
three
months
before
so,
if
you
do
analyze
a
an
older
phrase,
data
set,
it's
not
guaranteed
that
you
will
get
the
same
result
again.
Do
you
did
that
make
sense.
A
Into
the
github
repository,
my
proposal
was
that
we
we
have
for
each
period
one
folder
in
the
repository
where
we
have
the
data
files
and
we
also
have
the
the
final
exported
files
and
you
might
as
well
print
the
the
notebook
as
a
pdf
and
put
it
in
there
as
well.
So
so
it's
really
easy
for
someone
to
just
go
and
look
at
data
afterwards.
Look
at
the
analysis
without
even
having
to
run
the
run,
the
notebook.
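The per-period archive proposed here, one folder per round holding the input data and the exported files, is easy to script. Paths, names, and the helper itself are placeholders for illustration, not an existing tool:

```python
from pathlib import Path
import shutil

def archive_period(period_name, data_files, export_files, root="distributions"):
    """Copy one round's inputs and outputs into its own folder, e.g.
    distributions/round-3/, so the analysis stays reproducible later."""
    folder = Path(root) / period_name
    folder.mkdir(parents=True, exist_ok=True)
    for f in list(data_files) + list(export_files):
        shutil.copy2(f, folder / Path(f).name)
    return folder
```

A PDF export of the notebook could be dropped into the same folder so the analysis can be read without running anything.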
A
Yes,
but
but
that
won't
be
a
historical,
valid
thing,
because
the
it's
one,
one
notebook
many
data
sets
and
the
notebook
changes,
the
the
the
formulas
can
change
the
parameters,
can
change,
etc.
So
the
notebook
in
its
current
form,
the
notebook
book,
is
a
tool
and
the
the
data
will
stay
the
same.
A
My
initial
suggestion
was
that
we
we
copied
that
the
notebook
to
each
and
made
a
new
copy
of
the
notebook
to
each
new
period
folder,
because
then
the
actual
the
notebook
would
also
be
a
historical
document
of
how
did
the
formulas
exactly
look
at
that
time,
but
of
course
that
that
is
still
the
option.
No!
No,
because
now
now
we
have,
we
are
doing
one
one
notebook
where
you
instead
open
up
the
old
choose
which
data
files
you
open.
A
But
it's
still
not
one
to
one.
If
I
would
like
to
view,
I
would
want
to
recalculate
reanalyze
a
period
13
periods
back.
That
wouldn't
necessarily
mean
that
we
have
13
exact
versions
of
the
notebook,
so
I
can
move
back
13
versions.
How
would
I
open
up
the
correct
version
of
the
notebook?
It
would
be
difficult
for
me
to
know
which
version.
C
I
almost
wonder
if
we
could
keep
the
different
versions,
though,
because
one
of
the
things
I
think
would
be
an
interesting
thought.
Experiment,
too,
is,
as
the
the
binder
continues
to
come
to
life
and
take
shape
the
notebook.
Sorry
it
just
for
me
because
I'm
a
nerd,
but
I
would
love
to
even
take
like
future
versions
that
we
get
to
with
algorithm
changes
and
look
at
past
analysis
and
see
how
it
would
have
played
out
with
those
future
versions
too.
A
No,
I
I
just
want
to
say
that
that
if
you
have
a
if
each
phrase
period,
if
we
export
a
pdf
as
well
based
on
this,
then
of
course
the
the
the
version
of
the
notebook
could
be
visible
at
the
top,
and
when
we
do
a
new
version
of
the
notebook
it
we
give
it
a
new.
A
We
use
the
versioning
standard
so
that,
then
you
can
easily
see
that
this
period
used
this
version
of
the
of
the
notebook
and
also
we
would
need
to
make
sure
to
use
github
version
tracking
and
mark
it
as
a
as
a
release.
Every
time
we
we
make
a
new
version,
etc.
So
you
can
actually
move
back
in
time
and
and
open
up
version
0.0.5
again,
if
you
would
like
to
just
do
that
analysis
again,
sorry
yeah
invite
please.
D
So
I
think
one
of
the
things
that
we
might
want
to
do
is
like
a
make,
make
an
updated
notebook
that
can,
you
know,
interact
with
all
of
the
older
data
and
the
new
data,
but
we
also
keep
our
older
notebooks
around.
So
if,
if
we
don't
want
them
to
be
like
accessible
normally
to
people,
we
can
like,
since
they're
using
getter
for
this
they'll
they'll,
probably
be
stored
somewhere
in
the
kit
history.
So
we
can
yeah.
D
As
christopher
mentioned,
we
can
make
releases
to
keep
track
of
those
or,
and
another
thing
is
that
all
of
this
code
in
the
notebook
is
eventually
going
to
it's
just
python,
so
we
can
probably
make
a
website
for
it,
but
that's
a
large
approach,
but
in
like
in
terms
of
future
applicability.
I
think
it
would
be
neat
to
have
a
dashboard
for
this.
A
This is a huge benefit of using the notebooks this way: you can always continue to evolve them really easily, and, as the demo just showed, you can adjust a few parameters in the formula super easily, et cetera. And one way of doing versioning of the notebooks, a really stupid-simple way, would be that when we do a new version of the notebook, we copy the notebook and name the copies 0.0.1 and so on, and we have them all in the same folder. Then you don't need any...
A
...like expert GitHub knowledge for opening up an older version. It will just be a list of notebooks with increasing numbers in the same folder, and anyone can really easily choose which version of the notebook they would like to bring up, to analyze whichever period's data they would like.
A
Yeah, agreed, totally agreed. My favorite today was the last graph, the one that showed you the different streams: how you get tokens as a quantifier, as a praise receiver, etc., and how it all combines.