From YouTube: W28 Rewards WG: Reward board and dev updates
Description
🌱Join the Community🌱
on Discord https://discord.gg/uM4ZWDjNfK
or say hello on Telegram https://t.me/tecommons
Join the conversation https://forum.tecommons.org/
Follow us on Twitter: https://twitter.com/tecmns
Learn more http://tecommons.org/
A: Sometime soon, perhaps, we'll have the next generation of the rad dashboard. We have some ideas on how we could do that, so nagan is going to talk a little bit about that, plus a bunch of other stuff. So let's just jump in with the status updates to begin with. Who would like to go first? Let's see who is here. Would you like to talk about the praise forward?
B: So the praise forward functionality has been talked about in the discord before. You might have seen it during meetings, especially during the community meeting: we like to do praise on call, because it's more humane to do it that way. It's a better kind of community-building process that we follow. But one of the issues with that is that someone then has to transcribe the praise from there, so usually that's done by reader set or sometimes marman.
B: You see that there's a lot of inconsistency in our praise data. If we're doing analysis over that data, we'd find that a lot of the praise comes from people involved in transcribing praise. Another issue is that there's no clear structural relationship between the giver and the receiver, which makes it more confusing for anyone quantifying the praise, so it makes us feel like there's a lot of duplicates.
B: So this solves the problem by adding a new command to the discord server, /forward, and only certain people can use this command: people that have the forwarder role inside our praise system. So we added a new role to the system. There are normal users, quantifiers, admins, and then there's forwarders.
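B's description of the forwarder role boils down to a simple permission gate plus keeping the real giver and receiver on the stored record. A minimal sketch of that logic in Python (the role names and the record shape are assumptions for illustration, not the actual bot's schema):

```python
# Illustrative role set; the real praise system's role names may differ.
ROLE_USER, ROLE_QUANTIFIER, ROLE_ADMIN, ROLE_FORWARDER = (
    "USER", "QUANTIFIER", "ADMIN", "FORWARDER",
)

def can_forward(user_roles):
    """Only holders of the forwarder role may dispense praise on behalf of others."""
    return ROLE_FORWARDER in user_roles

def forward_praise(forwarder_roles, giver, receiver, reason):
    if not can_forward(forwarder_roles):
        raise PermissionError("the /forward command requires the forwarder role")
    # The stored praise keeps giver and receiver distinct from the forwarder,
    # which preserves the giver/receiver relationship in the data.
    return {"giver": giver, "receiver": receiver, "reason": reason,
            "forwarded": True}

print(forward_praise([ROLE_USER, ROLE_FORWARDER], "alice", "bob", "great docs"))
```

The point of the check is exactly what B describes: a fourth role alongside users, quantifiers, and admins.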
B: It might be refined later on into something else. Maybe we want more transparency involvement inside the praise system, but for now it's forwarders. This command is usable, and if we deploy the bot sometime soon, we should be able to use it. I think some UI changes have also been done; they're probably not merged yet, but yeah.
B: It would also be visually different in our UI on the praise dashboard. And the second update is admin announce, which is the command that allows admins to send messages to multiple people inside the server based on their praise roles. Currently it just supports four categories: the first is normal praise-activated users, the second category is all the quantifiers, the third category is all the drafted quantifiers in a specific period, and the fourth category is all the pending quantifiers in a specific period.
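The four recipient categories B lists can be sketched as a simple filter over user records. This is an illustration only; the field names (`activated`, `drafted_periods`, `finished`) are assumptions, not the praise bot's actual data model:

```python
def recipients(users, category, period=None):
    """Select DM recipients for an admin-announce message by category."""
    if category == "activated":
        return [u for u in users if u.get("activated")]
    if category == "quantifiers":
        return [u for u in users if "QUANTIFIER" in u["roles"]]
    if category == "drafted":
        return [u for u in users if period in u.get("drafted_periods", [])]
    if category == "pending":
        # drafted for the period but not yet finished quantifying
        return [u for u in users if period in u.get("drafted_periods", [])
                and not u.get("finished", {}).get(period, False)]
    raise ValueError(f"unknown category: {category}")

users = [
    {"name": "ana", "roles": ["USER"], "activated": True},
    {"name": "bo", "roles": ["USER", "QUANTIFIER"], "activated": True,
     "drafted_periods": ["RR4"], "finished": {"RR4": False}},
]
print([u["name"] for u in recipients(users, "pending", period="RR4")])  # ['bo']
```

The "pending" case is what makes reminding slow quantifiers cheap: it is just the drafted set minus the finished set for a period.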
B: This would make it a lot easier for anyone who's chasing down quantifiers to remind them about their quantification, and for other purposes, like just giving them a notification about whether they're drafted or not. This is currently done via discord. Maybe in the future there might be some kind of buttons inside our praise dashboard or the praise admin dashboard, but for now it's a discord command.
A: There will be a small icon, a small forward symbol, and if you hover over that symbol a small tooltip will appear saying this was forwarded by, in this case, the same person, just because I have been testing it like that.
A: Yes, and the praise forwarder is not really interesting; it is only there for transparency purposes. You might want to look it up once in a while, so we won't be placing much focus on adding it in a, you know, cool way to the UI, beyond that little symbol. Then, if you go to a praise detail page, that information of course will also be there: the praise score, the praise id, and then forwarded by whoever forwarded the praise.
B: I think it's doable. Should we demo the admin announce feature as well? Yeah.
B: Maybe we can do that towards the end, and I've been getting...
A: Yeah, okay, then we'll move on to the next one. We'll stick with praise for now, and I'll pass it to matty, if you have something to say about overriding settings for periods.
C: Yeah, I can just say that it should be ready for review either today or tomorrow, and yeah, I think that's all I need to say for this call.
A: And the UI there will be quite simple: in this new sub-menu on the period detail page there will be another tab saying settings, and on the right-hand side you can change the settings for that specific period.
C: Yep, exactly, and the one other thing is we're going to keep the settings visible for non-admins, so it'll just look like a form, but they can't actually do anything on it; it'll be grayed out.
A: Okay, what next? We have the rad dashboard, the next iteration, or quite a major iteration, of the rad dashboard that we would like to do. At least, nagan, would you care to say something about that?
F: So the rad dashboard is growing a bit in scope, and we're having this idea of making it something more modular: not just this little tool for the TEC to help out a bit during the process, but really a standalone product, intended to help other DAOs with other distribution methods to just plug in their own reward systems and have a set of tools for distribution and analysis right at hand. And yeah, that's kind of the new scope.
F: We're going to do this through jupyter notebooks still, but in a more modular way. So, if you can zoom in there, we'd have several little individual notebooks which are linked to one another, so you can export them and swap them out to adapt them to your needs.
F: So you could just set your own params for your reward system and feed them into this process, and at the end you'd have a different set of analytics and the output distribution lists. And that's the plan.
F: I've been playing around this week with some little prototypes and I think it's doable, and it's going to make a lot of stuff a lot easier. So I'm actually pretty excited, and I think if we do it modular enough, we can also convince people from coordinape or whatever to integrate it easily, because at the end of the day a lot of people use jupyter notebooks and it's not really hard to make one.
A: Yeah, I wanted to make this illustration before, but I didn't have time. It shows that the rad dashboard is sort of where all the reward systems meet, and we make it super modular so that the new rad dashboard can import data from any reward system. And one thing I'd like to add to what you said is the importance of this: we will build up a library of small python-based modules, and there are importers, exporters, distribution modules, and what else did we say, like sources, distributions, exports, thereby allowing token engineers and other communities to just add whichever modules they like for their specific needs.
A: So, basically, you take the modules you need and combine them into your own rad dashboard for the needs you have. And we can use the export modules for doing token-based exports, like exporting the analysis and the distribution to be sent to disperse.app or to an aragon dao.
A: I think we need to give some consideration to how we will move forward. I saw the fancy proposals that have been made for the proposal inverter, securing funds that way. We might need to do something similar for the next-generation rad project, especially if we'd like other communities to be involved in the development of the actual modules.
F: Regarding quants with high spread: I actually implemented that in the dashboard last week. So if you run the dashboard, it can generate a table with the more controversial quants. For the next session tomorrow we can actually go ahead and use that to pinpoint the quants with the highest disagreement and save some time there.
A: And that is the thing with this new setup: when adding new modules, it's not one notebook that becomes bigger and bigger and more sluggish for each module you add, because that happens quite quickly with jupyter notebooks, in my experience at least. Instead you're adding a new module to a library that a community can use if they need it, and they can, you know, turn it on for a specific analysis and then turn it off again. So it's going to be great fun to explore this.
F: Yeah, also one thing, the beauty of this is: by having different notebooks that are specialized in different aspects of the same data set, you can, at the end of the process, export them without all the code, just the graphs and a bit of text, and automatically generate like four or five different reports focusing on different aspects, without having to, you know, run it yourself and cut out the stuff. So there's a lot of flexibility there.
D: So, ahead of tomorrow's call, how do I get this and run it, and then find these things that you're talking about, like the discrepancies?
A: You don't even need to run it locally; you can just go to the repo on github and push the launch binder icon, and in the best of worlds it launches. Sometimes it has sort of gotten stuck in this boot process.
A: But then, of course, you need the data. What is not in the repo yet is, of course, the data for round four, because it's not closed yet. Let me know when you have closed the round, and I will run the data through this spreadsheet so that it gets the extra columns, because right now, if it doesn't have the extra columns, the dashboard won't work.
A: Yeah, and that is a discussion point for later in this meeting, or right now if it's not a big one. If we have given up on the idea of actually tweaking and adjusting the quantifications during the quant session, then we could use the raw format exported from the praise dashboard; of course, then we don't need these extra columns that allow us to make the manual changes.
A: Then we can remove that whole manual step, and then we could, in theory, allow the next-generation rad dashboard to import data directly from the praise system through the API, without us, you know.
A: No, this is a sheet that I have shared with you before, but this is a totally separate sheet. So the process to make this work currently is that you take the exported file from the praise system, you create a new tab in this sheet, you import the praise data, and you copy these columns from this sheet, yeah.
C: Maybe limit the editing to completely removing what's essentially spam; you're basically describing someone spamming the system. So maybe just a yes/no, like an ability to completely remove a quant, all of the quantifiers' quantifications.
D: And so this maybe kind of goes into the next point. We kind of said, okay, for quantifiers that don't finish we'll just take a bigger average, like take four quantifiers per praise instead of three. But then on the sheet, and correct me if I'm wrong, it shows up as zero if they didn't quantify it, and so...
A: Yeah, but the idea, at least, is that if two persons give a praise score, let's take a simple number: you have two eights and then you have a missing quantification. Then the average for that praise should be eight plus eight divided by two, not by three, because the missing quant is not counted; otherwise a missing quant would push the average score down quite radically.
F: I think, yes, I'd have to check, but actually a dismissed praise gets just a zero, right? So it would just see that somebody put it as zero and somebody put it as 89.
A: We currently have a script that allows us to switch one quantifier for another. It's a script, not a button in the UI, and I thought that maybe that is a good thing for now, because it creates some kind of threshold; it produces some kind of friction to use it, it's not super simple. Maybe removing quantifiers completely could be on the same level.
A: Just that we create a script to remove quantifiers completely, and then, if needed, we can run that script for a praise period, but we won't build it out and make it a UI feature for now.
A: Cool, and then it sounds like we agreed to say that we don't need the manual correction anymore, because the amount of data is too big to be able to finish that anyway, though.
D: There we go, let's try this now. Yeah, there's a couple of delinquents, but whatever, I guess we'll leave it as is.
D: What was I gonna say... yeah, we're kind of just going down the path of, instead of making manual quantifications, just finding the ones that are out of whack and maybe just talking to a couple of quantifiers. The big thing is finding the trends and then finding where the gaps are in, like, our onboarding, so trying to figure out the best way to make sure people have all the information.
D: So with that, I updated pretty much all of the reward system documentation to have better information. I'll close the quant at the end of today, and I'll open up the new quant on friday at the same time. We'll set up the schedule for doing the review session, and I think the review session this week is scheduled for some time early tomorrow, for quant 4.
D: So I think business as usual. We just need to fix our onboarding and make it better, and maybe figure out some requirements, like making people come to calls or something. But it's just super hard because it's like 25-plus people, and having new people join at a time that's convenient for everybody is really difficult, yeah.
A: Yeah, that is tied to this; that is why we need the settings tied to periods. And there might be issues with the export, it might be hard-coded for three quantifiers, but...
D: Okay, anyway, so yeah, I guess we'll continue with what we've been doing. It seems to be mostly successful. I can't wait to have a bot command to DM everybody at once.
E: So yeah, most of the comments were about how it was time-consuming and confusing to mark the duplicates. So there were some thoughts on whether we could standardize the process somehow. Someone suggested we have a checkbox to mark multiple duplicates at once, or to have a keyword search.
A: That could be one thing: conducting actions on more than one item. So it could be marking a number of praise items, and marking them as dismissed as well, and the third thing would be finding a nice, smart way to do the marking of duplicates.
A: Yeah, inspired by those. That's what I said: let's work on those, try to add these other things I said, and make it into, like, a table toolbar that goes...
B: I have a question about duplication, and this is from a data point of view. Do we want praise that is given to the same person by different givers, for functionally the same thing, to be marked as duplicate, or do we want only praise that was, like, accidentally sent twice? Sorry, let me repeat that: do we want praise that has been sent by the same user to the same user to be marked as a duplicate?
B: Essentially, if it's multiple people praising the same person for the same thing, I wouldn't call that a duplicate. Maybe we can rename that, because it's not duplicative; it's coming from two different givers, even if it's for the same thing.
A: Apparently, that is the definition we have made, that that is a duplicate, so it would represent quite a big shift to try to call that something else.
B: Or maybe call it praise that can be grouped together, because there are like two functionally different groups: one is praise that is sent twice for the same thing by the same person, and then there's the duplicate praise that is marked because two different givers praised the person for the same thing.
E: So when you're saying that the same action being praised by multiple different people should not be considered duplicate, you're talking about the value of the gratitude, like what each person felt, as different from the action that happened. And when we're saying that praises are duplicates even if they are given by multiple different people, we're talking about the task itself, what was performed. So in one the value is in the praising, and in the other...
E: ...the value is in the person being praised, both the one who's praising and the one being praised. And I think with what we have now we're addressing both of them: when different people praise one person for the same thing, we are giving the value of the gratitude, but just a percentage of it, and we are putting most of the value on the action itself.
B: I don't know if that already exists, but as mentioned, there are nuances to the settings, because there's a likelihood that some people who are quantifying might not have english as their primary language, so some context can be lost between the options.
E: There was one suggestion, I think you gave it, that I also took a note of here: adjusting praise based on impact. So, for example, increase the praise reference value if 10 different people praise the same thing, versus one person praising multiple times, or just one or two people praising that one thing. So these could be given different values, if that should be integrated; I don't know how difficult this is.
A: We sort of touched on this a little bit before, about categorization and that kind of stuff as well, and it's of course super interesting to add additional data layers to this quite simple data stream we have. But it also comes with trade-offs, and it's a really fine balance there: making the quantification potentially more complex makes it more difficult to find quantifiers.
A: It takes a longer time, etc. But I do think definitely once we have launched version one of the current feature set... because now we are adding new features, but we are sort of adding features that support the current features, to make what we have smoother to use, whereas this would represent a totally new feature and addition to the data sets.
A: But I do think we should really do a good brainstorming session and really talk about it, you know, and integrate findings from research etc., to plan the sort of long-term roadmap and future features once we are done with the first version.
E: Yeah, sorry to take too long on this. I just wanted us to understand what the main problem is, and I don't know if that was clear. The main problem with the duplicate praise now is, when we were looking at the different values that people gave, which one to consider the baseline of the duplicates. There were some praises that had so many duplicates that people would choose different baselines for which one they would mark as the reference, and then that pollutes the data; that would come up in the analysis as...
A: That shouldn't affect the final score. Whichever baseline praise you choose, if there are four different quantifiers and a bunch of similar praise, it shouldn't affect the total average score in the end which baseline you choose.
A: But of course it makes the data more noisy that people can't see that this is the original praise and these were the duplicates, because oftentimes that is completely impossible to know. I noticed myself that sometimes I go through the list from the top down and other times I go from the bottom up, and then of course it's the first one you run into. And yeah, I think we need to move on, actually, to get to the end.
A: This is a super interesting conversation, but maybe bridging over to reward board stuff and finishing on the round five thing: it seems like we have agreement from the reward board that we will try to quantify the next round without pseudonyms and see how that feels, with real usernames and real icons.
A: And other reward board stuff: we need to make a plan for the first token payout, beginning from round one of course, whether that is next week or next month. Waiting for the sourcecred params was a blocker, waiting for how we could make the sourcecred distributions. We waited for that up until the point when we finally realized that we cannot make a grain distribution for each round in a similar fashion.
A: Like we do with the retroactive praise quantification; that is technically impossible to do with sourcecred. We would either have to do one mega sourcecred distribution for the whole period, from basically now back until july of last year, or we decide to ignore all that has happened on sourcecred up until a certain point and start with a smaller first grain distribution. So that might be something to discuss with the broader community.
F: Yeah, I think, specifically thinking of the chart you shared: if it's going to be that big of a difference, we should consult the community about what we do with sourcecred, like if the difference is between two thousand dollars and twenty thousand, something like that. So I think that's definitely something that should be asked.
A: So there are, at present, two alternatives, knowing that we will not have sourcecred data for all these periods when we close down the rewards rounds. So how do we handle this situation? In the first alternative:
A
We
we
allocate
a
number
of
die
per
period
according
to
the
the
amount
that
we
in
the
reward
board
have
discussed,
and
this
this
amount
we
will
also.
A: We allocate all the funds towards praise instead, so it's 100% to praise and 0% to sourcecred, and we do that up until the point where we have a sourcecred grain distribution, which will in this case be in may. Then we start doing the split that we have discussed, which is 75% towards praise and 25% towards sourcecred, and that means sourcecred then gets 25% of the allocated amount for each period from when we first integrate sourcecred.
A: The other option is to instead say no, we will do one big sourcecred distribution. The first one will be a big distribution where we include all contributions from may this year back all the way to july of last year, and during the process, from july last year to may, we will save a chunk of each rewards round and earmark that for this first mega grain distribution. Then that becomes 10 times bigger, because it's saved from 10 rounds.
A
Those
those
are
our
two
two
options.
I
could
imagine
that
there
could
be
other
options
as
well
and
there's
also
a
decision
to
be
made
if
we
we
are
going
to
start
with
a
small
grain
distribution
or
if
we
should
start
with
a
again
great
distribution.
That
looks
on
all
contributions
back
to
last
summer,
or
we
can
also
choose
to
look
at
all
contributions
made
from
the
beginning
of
time,
basically
from
when
we
first
set
up
the
the
github
repositories
and
from
when
we
first
started
the
discourse
server.
A: But I'm not starting the discussion now, because that is a big discussion. I just want to let everyone know that this is something we, the reward board, are discussing, and we will present a proposal to the community, like "this is the idea", and seek advice about that before we plan for the first payout.
D: Yeah, I'm just looking at the points below, because it seems like we want to get some rewards out to the people. So apart from this, have we resolved any of the other issues after that?
A: Yeah, it's quite complex, but it's not... sorry, yeah.
A: It's a headache, but it's not a decision we need to make; we just need to come up with a structure to keep track of it. And I think we can use github for that, because github is, you know, semi-secure in that way: you cannot make changes without those changes being registered, and you cannot easily hack the system without it being traceable. So I think we can place data files in the actual reward round folders.
A: In the folders in the rewards repository, saying how much we have ever paid out, and then we can make a script that sort of adds that stuff together to find out who has tokens left to be paid out, if they have activated their accounts. I think we can solve it that way; it's fairly simple.
A: Yeah, not for the first, but for the second, because that is when we need to start thinking about those that have activated since the last period.
A: They won't, no, but they could get them for the next round. So what I meant is, when we do the second payout, it's a more complicated equation. It's easy for the first round: we have the list of the people that should get tokens and the list of the people who got tokens, and we just need to compare those two lists to see who still has a claim, sort of. But for the second round there's an additional layer.
F: I have one question, about the new praise bot. Does the new praise bot, like the one going live tomorrow, store the eth address? Does the onboarding already have that? Because if people are going to onboard again, maybe that's a good chance to get a lot of addresses.
A: Yes, so the onboarding will be mostly like now: the new praise bot is launched, start using that, and then the first thing you should do is praise activate, and so on, until you connect your ethereum address.
A: So yeah, let's aim for having like 90% coverage or something in the first praise round of actually having the ethereum addresses for everyone. We could also do some manual work of, you know, pinging people, and I guess that could be a thing for this admin announce feature that vi is working on as well: send a message to all the users that have not activated their accounts.
B: Yeah, definitely. I'm actually thinking about making the options available to it more diverse, like it will be a complete menu. Oh, I think we only have three minutes, so I'll just quickly demo what it looks like. So the command is admin announce, and you can just type a message.
B: So right now it only has four options: sending a DM to all activated praise users (below this we can also have an option saying all), sending DMs to all unactivated praise users, to quantifiers, and to all drafted quantifiers. It takes a couple of seconds to send out the messages, but yeah. This is broken right now, because it's running on a test database where we don't have valid discord IDs, but yeah.
B: However, I feel like I've...