From YouTube: Agoric Monthly Community Call #10
Description
First Wednesday of the month we will share Agoric announcements, answer live questions, highlight community projects, and introduce new tools to help you build your application on chain. Set your reminder here not to miss our Community Call.
A: Hello everyone, welcome to Agoric's 10th community call. I'm Rowland Graus, product manager at Agoric, and I can't believe we've already done 10 of these. It feels like just yesterday that we started doing them, so I appreciate all of you who have been here with us through all of it. It's July, and we're going to go through a few things today, so let's bring up the agenda. To start with, we just finished the stress test phase of our incentivized testnet.

We'll go through a bunch of details there. I'm really excited to bring in Brian Warner, lead engineer here, to talk about the stress test from his perspective: what we were trying to do and what we learned. A lot of good details are coming up. Then we're also going to go through developer bounties, which we launched last month; we've already had a bunch of activity, so I'm excited to talk through those a little bit more.
A: I'll give you a sense of what's coming up, and then talk through a few additional programs that we have. All right, we'll get started. As always, this is a community call, so please do send questions. I'll see them if they come in through YouTube; if they come in on Discord, we'll try to have people funnel them in here as well. I'm happy to answer questions live, including while Brian is on.
A
Please
please
ask
him
too
all
right,
so
the
stress
test
we
had
a
bunch
of
really
good
twitter
submissions.
One
of
the
tasks
was
to
show
yourself
doing
the
stress
test
and
feeling
stressed,
and
we
had
it.
We
had
a
bunch
of
good
ones
come
in,
so
I
wanted
to
show
a
few
of
those,
but
at
a
high
level,
really
what
we
were
trying
to
do
was
stress
some
of
the
performance,
related
features
and
and
code
that
we
had
put
into
the
system
recently.
A
So
the
stress,
the
goal
of
the
stress
test
was
to
stretch
the
system
in
a
bunch
of
different
dimensions,
and
I
think
we
accomplished
that
more
coming
with
brian,
but
that
it
really
was
an
exciting
phase.
It
was
probably
the
most
intense
phase
that
we've
had
at
the
incentivized
test
net.
A
So
far,
we
and
we
really
appreciate
all
the
validators
that
worked
really
hard
with
us,
both
late
in
the
evenings
for
them
late
in
the
evenings
for
us
through
the
weekend
it
was,
it
was
a
long
phase,
so
we
really
do
appreciate
that
had
a
whole
bunch
of
tasks
submitted.
So
86
percent
of
people
have
participated
in
the
network
tasks,
which
is
really
great.
A
The
community
tasks,
which
were
things
like
the
the
twitter
post
that
you
saw
there
and
then
five
challenge
tasks
were
submitted
as
well,
and
the
challenge
task
in
this
phase
was
to
write
a
load
gen
script,
to
try
to
stress
the
network
itself.
So
we're
excited
to
take
a
look
at
those
and,
as
always,
we
really
appreciate
everybody
moving
forward
and
submitting
those
tasks
all
right.
A
So
with
that
I'm
gonna
bring
brian
on
and
brian
is
lead
engineer
here
at
agoric,
and
I
I
think
for
those
of
you
that
have
been
in
the
community
for
a
while
you've
seen
his
talks
before,
and
I
would
especially
point
anybody
new
to
the
re-entrance
talk.
That
brian
did,
I
think
for
our
chain-link
hackathon.
We
can
someone
here
can
probably
drop
a
link
in
youtube
for
that
one.
It's
really
great
talk,
but
anyway
welcome
brian
for
those
of
you
that
aren't
familiar
with
brian.
A
Do
you
mind
giving
a
quick
introduction
of
yourself.
B: Yeah, sure thing. I'm the engineering lead at Agoric. A lot of what I've been focused on here has been the lower-level platform aspects, so not so much the economic contract layer up above. Our contract execution engine is called SwingSet, and it's the part that lets us run each contract in its own little isolated unit.
A: So Brian, why don't you walk us through the stress test from your perspective?
B: Yeah, so this was our first stress test phase. We really wanted to put some pressure on the network and see how well it would react. We were running a bunch of load generators that were creating what I call economic load: things that exercise the smart contract layer. We had a couple that were running AMM trades, you know, automated market makers. We had a couple that were exercising pieces of our stablecoin machinery, called a vault, so they were opening up a vault, loaning some tokens, borrowing some tokens, and closing it back out again. We really wanted to work on performance of the kernel across a number of different dimensions. Just before the phase started, our engineering goal was to land a bunch of big features that would let us measure that. We implemented cross-vat and cross-machine garbage collection.
B
So
when
a
contract
that's
over
here,
it
has
a
reference
to
an
object
being
exported
by
the
contract
over
here.
When
this
one
goes
away,
the
it
sends
a
message
saying
yep:
I
don't
need
that
message
anymore.
It
goes
to
the
kernel.
It
goes
to
the
exporting
vat
that
object
gets
dropped
in
javascript
any
any
function.
You
have
might
close
over
some
other
variables.
So
when
the
function
can
be
dropped,
then
all
those
other
variables
can
be
dropped.
So
there's
this
kind
of
cascade
of
objects
being
freed.
That's
supposed
to
happen.
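The bookkeeping Brian describes can be sketched roughly like this. This is a minimal illustrative model, not SwingSet's actual kernel API; the `Kernel` class and its method names are invented for the sketch:

```javascript
// Minimal model of cross-vat garbage collection bookkeeping.
// The kernel tracks how many vats import each exported object;
// when the last importer drops it, a notification is queued for
// the exporting vat, which can then free the object and anything
// its closures were holding alive.
class Kernel {
  constructor() {
    this.refCounts = new Map(); // objectId -> number of importing vats
    this.dropped = [];          // drop notifications for exporting vats
  }
  addImport(objectId) {
    this.refCounts.set(objectId, (this.refCounts.get(objectId) || 0) + 1);
  }
  dropImport(objectId) {
    const n = this.refCounts.get(objectId) - 1;
    if (n === 0) {
      this.refCounts.delete(objectId);
      this.dropped.push(objectId); // "I don't need that object anymore"
    } else {
      this.refCounts.set(objectId, n);
    }
  }
}

const kernel = new Kernel();
kernel.addImport('o+1');  // vat A imports object o+1
kernel.addImport('o+1');  // vat B imports it too
kernel.dropImport('o+1'); // vat A drops it: vat B still holds it
kernel.dropImport('o+1'); // vat B drops it: the exporter is notified
```

Only after the second drop does the exporter hear about it, which is what triggers the cascade of freed objects.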
B: We implemented all that code just before we started this phase, because we really wanted to measure how well that part was working. Another thing we did just before the phase was upgrade the embedded JavaScript engine. We use one called XS, and it has a brand new WeakMap implementation, a very clever cross-pointing linked list that allows garbage collection to happen in constant time even if the WeakMap holds a large number of objects. Part of our system relies on putting lots of stuff in WeakMaps as a kind of rights-amplification pattern, to tell whether a payment object is the recognized one or not, so we wanted to test that out. We also fixed a non-determinism in the bank module that was related to the way the Go language iterates over maps in a random order.
B
Every
time
you
go
through
it,
so
that
you
don't
depend
upon
that
order
and
we
recognize
that
that
was
sending
messages
in
different
orders
up
toward
our
javascript
level.
So
we
fixed
that
out
and
so
part
of
the
phase
was
to
to
make
sure
see
how
well
that
worked
and
then
and
the
the
biggest
thing
that
we
landed
was
code.
B
That
would
the
when,
when
we
restart
the
validator
and
the
vat
needs
to
be
brought
back
up
to
the
previous
state
that
it
was
in,
you
know,
you
have
this
javascript
environment,
it's
been
doing
a
bunch
of
computation.
You
now
need
to
somehow
get
back
to
that
same
state
in
a
brand
new
process,
because
you
had
to
reboot
the
validator
machine.
B
You
had
to
do
some
kind
of
upgrade,
so
the
change
that
we
landed
was
to,
with
help
from
our
partners
at
moddable
on
the
javascript
engine,
to
write
a
snapshot
of
the
entire
javascript
heap
to
to
to
disk
every
200
deliveries.
B
And
then,
when
we
start
the
validator
back
up
again,
we
can
load
that
snapshot
in
and
replay
just
a
small
subset
of
the
deliveries
that
happened
since
the
last
snapshot,
rather
than
having
to
replay
everything
since
the
beginning
of
time-
and
this
was
huge-
this
sped
up
the
restart
of
our
chain
by
a
factor
of
100..
I
had
one
test
in
which
it
took
20
minutes
to
start
up
the
first
time
around.
After
this
it
came
up
in
six
seconds,
and
I
was
I
was
crying
with
joy
when
I
saw
that
number
it
was.
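The snapshot-and-replay idea can be sketched like this. It's a toy model: the "state" is a trivial counter standing in for arbitrary deterministic vat computation, and the interval of 200 matches the number mentioned in the talk, but the real snapshot format and mechanics are Moddable XS internals:

```javascript
// Sketch of why heap snapshots speed up restart: instead of replaying
// every delivery since genesis, reload the last snapshot and replay
// only the deliveries made after it was taken.
const SNAPSHOT_INTERVAL = 200;

function runVat(deliveries) {
  let state = 0;
  let snapshot = { state: 0, upTo: 0 };
  deliveries.forEach((d, i) => {
    state += d; // stand-in for arbitrary deterministic computation
    if ((i + 1) % SNAPSHOT_INTERVAL === 0) {
      snapshot = { state, upTo: i + 1 }; // written to disk in reality
    }
  });
  return { state, snapshot };
}

function restart(snapshot, deliveries) {
  let state = snapshot.state;
  let replayed = 0;
  for (let i = snapshot.upTo; i < deliveries.length; i++) {
    state += deliveries[i];
    replayed += 1;
  }
  return { state, replayed };
}

const deliveries = Array.from({ length: 450 }, (_, i) => i % 7);
const { state, snapshot } = runVat(deliveries);
const resumed = restart(snapshot, deliveries);
// resumed.state matches the original, but only 50 of the 450
// deliveries had to be replayed (the tail after the 400th).
```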
B
It
was
just
amazing.
It
did
expose
some
inconsistent
behavior
in
the
garbage
collection,
and
so
we
had
to
make
some
last
minute
changes
to
kind
of
work
around
that,
but
we
think
we
understand
what
what
needs
to
happen
there
now.
A: Yeah, and the way I interpret all this as a non-engineer myself: I think it's easy to fall into the trap of thinking that blockchain performance is just transactions per second. The reality is that there are a ton of dimensions to optimize across, and some of them have trade-offs. Really, what we were trying to do here was figure out the best builds we could go into the phase with; the upgrades had us changing those up and shipping new code. It was a huge effort, in particular on Brian's and his team's part. All right, so moving forward: one of the tasks that we added during the phase was the slog file task, and we had some validators talking about it. I'd love to get your perspective, Brian: why did we add that, and what are we doing with it?
B: Yeah, so SwingSet is the name of the platform, and the SwingSet log file is the "slog" file; sometimes, as you're going through this data, it also feels like you're slogging through all the information. The slog file records every message sent to every vat, and it records all the actions taken by that vat. So the kernel says, "hey, I'm sending a deposit message to a purse," and the vat then makes some state changes internally and sends out some other messages.
B: We asked all of our validators to please collect and submit slog files, and you did an amazing job. We got 172 slog files from 117 different validators, over half a terabyte of data, and we're now starting the analysis process. There are three main things we're trying to get out of that data. First, the slog file records every action taken by every vat, by every contract, and this is all supposed to be deterministic across the different validators.
B: So we're building a tool to correlate the slog files, which were recorded with different starting and ending times. We're lining those things up almost like DNA sequencing, where you've got a bunch of different reads and you're trying to figure out how they line up, to make sure that all the parts that do line up were doing the same thing. Here we're going to be looking for bugs like the problem we found in that Go code, where something is not quite deterministic.
B
We're
going
to
be
looking
for
messages
that
are
delivered
a
little
bit
too
late,
we're
going
to
be
looking
for
garbage
collection
activity
that
didn't
happen
exactly
the
same
way
on
each
of
these
different
validators,
we'll
be
looking
for
variations
in
the
order
of
parameters,
particularly
in
objects
or
records
that
go
across,
and
these
are
our
small
inconsistencies
that
might
not
be
detected
by
the
usual
cosmos
application
hash,
because
the
kernel
keeps
some
of
its
state
kind
of
private
and
doesn't
expose
all
of
the
internals
to
the
cosmos
level,
where
that
stuff
would
normally
be
hot
very
quickly.
B
Second,
since
the
so
the
javascript
engine
that
we're
using
this
one
called
xs
measures
how
many
low-level
operations
each
each
one
of
these
deliveries
takes.
So
you
know
in
in
the
ethereum
world,
you
have
the
gas
model
where
an
and
operation
cost
1.1
one
unit
and
different
things
like
that.
We
have
a
similar
metering
mechanism,
but
it
operates
at
a
higher
level
as
as
kind
of
as
suitable
for
a
higher
level
language
like
javascript
and
the
that
we're
measuring
how
much
computation
is
is
consumed.
B
How
much
cpu
time,
how
much
cpu
activity
is
provoked
with
the
made-up
unit
that
we
call
the
computron?
So
you
know
one
loop,
one
pass
through
a
for
loop,
might
cost
20
computrons
doing
a
property
lookup
on
an
object,
might
cost
10..
You
know
it's
arbitrary,
but
we're
hoping
that
we
have
something
that's
roughly
similar
to
to
how
much
the
cost
is
or
how
much
the
wall
clock
run.
Time
is
so
our
javascript
engine
has
been
implemented
to
collect
that
information
as
it
does
these
computations.
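A toy version of that metering looks something like the following. The per-operation prices echo the illustrative numbers from the talk (20 per loop pass, 10 per property lookup); the `Meter` class itself is invented for the sketch, since the real metering lives inside the XS engine:

```javascript
// Toy computron meter: charge a fixed, arbitrary-but-deterministic
// price per low-level operation, so every validator computes the
// identical total for the same delivery.
const PRICES = { loopPass: 20, propertyLookup: 10 };

class Meter {
  constructor() { this.used = 0; }
  charge(op, count = 1) { this.used += PRICES[op] * count; }
}

// Simulate metering a delivery that loops 5 times and does one
// property lookup per pass.
const meter = new Meter();
for (let i = 0; i < 5; i++) {
  meter.charge('loopPass');
  meter.charge('propertyLookup');
}
// meter.used is 5 * (20 + 10) = 150 computrons, on every machine.
```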
B
So
one
of
the
things
that
the
slog
file
contains
is
a
report
for
every
one
of
these
deliveries,
how
much
how
many
computrons
were
consumed
and
what
we're
looking
for
is
any
variations
in
that
if
the
if
the
engines
used
on
different
validators
disagree
in
any
way,
if
one
of
them
went
through
the
for
loop
one
more
time
than
the
other
one
did
and
that
doesn't
appear
in
the
in
the
messages
that
it
emitted.
B
That's
still
a
deviation
that
we
want
to
pay
attention
to
and
figure
out
how
to
fix,
and
then
the
third
thing
is
that
the
slog
file
also
contains
high
resolution.
Timestamps
for
the
beginning
and
end
of
every
one
of
these
deliveries-
and
this
is
a
part-
that's
not
going
to
be
consistent
across
different
machines,
because
some
of
these
machines
are
faster
than
others,
but
we
we
want
our
block
time
to
be.
You
know
predictable.
B
We
want
to
spend
a
known
amount
of
time,
doing
computation
before
we
hand,
control
back
over
to
tendermint
hand,
control
back
over
to
the
consensus
mechanism
and
and
do
the
voting
round
and
move
on
to
the
next
block,
and
unfortunately
we
can't
use
wall
clock
time
for
that
it
would
be
great
if
we
could
say:
okay
block
proposer,
run
for
five
seconds
of
computation
and
then
stop
and
let
let
voting
take
over,
but
that
wouldn't
be
the
same
across
all
these
different
validators.
B
So
instead
we're
looking
for
these
computrons,
we're
going
to
say,
keep
doing
cranks
until
you've
used
up
this
many
computrons,
and
we
don't
yet
know
what
that
number
ought
to
be.
There's
a
tuning
parameter
that
we're
going
to
need
to
get
this
metering
stuff
ready.
So
part
of
what
we're
doing
with
this
big
data
set
is
to
try
to
build
a
model
that
says
an
operation
that
takes
this.
Many
computrons
takes
this
many
milliseconds
or
or
takes
a
range
of
time.
Fastest
machine
will
do
it.
B
You
know
in
less
time
than
the
slower
machines,
but
we
want
to
be
able
to
build
a
model
so
that
we
can
say
if
we
use
this
one
particular
mapping
from
computron's
used
to
wall
clock
time,
then
we
can
roughly
constrain
our
blocks
to
fit
within
the
budget
that
we're
really
aiming
for.
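The "keep doing cranks until you've used up this many computrons" rule can be sketched as a simple loop. This is illustrative only; the budget of 800 is an arbitrary stand-in for the tuning parameter Brian mentions:

```javascript
// Sketch of the block-scheduling rule: run cranks until the per-block
// computron budget is exhausted, then hand control back to consensus.
// A crank's cost is only known after it runs, so we stop as soon as
// the running total reaches the budget, possibly overshooting a bit.
function runBlock(queue, budget) {
  let spent = 0;
  const ran = [];
  while (queue.length > 0 && spent < budget) {
    const crank = queue.shift();
    spent += crank.computrons;
    ran.push(crank.id);
  }
  return { ran, spent, deferred: queue.length };
}

const queue = [
  { id: 'c1', computrons: 400 },
  { id: 'c2', computrons: 500 },
  { id: 'c3', computrons: 300 }, // left for the next block
];
const block = runBlock(queue, 800);
// block.ran is ['c1', 'c2']: c2 pushes the total to 900, past the
// 800-computron budget, so c3 is deferred to the next block.
```

Because computron counts are deterministic, every validator defers exactly the same cranks, even though their wall-clock speeds differ.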
B
So
we're
especially
grateful
to
the
the
validator
community
for
giving
us
such
a
diverse
data
set
with
almost
120
different
machines,
we're
hoping
we
can
build
a
bit
a
good
model,
and
this
is
the
kind
of
data
we
just
could
not
get
by
ourselves,
because
you
know
all
of
the
the
test
nets
that
we
spin
up
are
a
relatively
homogeneous
type,
a
set
of
of
machines
so
having
the
real
validators
give
us.
This
data
is
just
immensely
valuable.
A
Yeah-
and
we
especially
appreciate
it
because
it
was
a
task
added
at
the
last
minute
and
a
lot
of
you
had
to
ask
for
help
getting
it
done,
and
we
we
really
do
appreciate
that
one
one
question
coming
came
in,
which
was
how
how
do
you
actually
analyze
this
log
file.
B: Yeah, so if you take a look at it, you'll find it's simply lines of JSON, and there's a type field on each record. There's a type to say "we're beginning to load the kernel," and a type to say "we're replaying a vat, and here are all the deliveries to it." There are begin-block and end-block markers, and for each delivery, for each crank, there's a "deliver" type and a "deliver-result" type that frame the overall delivery, and within that there are a bunch of syscalls. So you can take a look at that and see how it turns into the kernel sending a message into a vat.
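Reading a slog file is therefore just line-by-line JSON parsing. The sketch below follows the framing Brian describes, but the field names (`type`, `vatID`, `computrons`) are simplified for illustration and may not match the real slog schema exactly:

```javascript
// Sketch of reading a slog file: each line is a JSON object with a
// "type" field; a crank is framed by a deliver / deliver-result pair
// with syscall records in between.
function parseSlog(text) {
  const cranks = [];
  let current = null;
  for (const line of text.trim().split('\n')) {
    const entry = JSON.parse(line);
    if (entry.type === 'deliver') {
      current = { vatID: entry.vatID, syscalls: 0 };
    } else if (entry.type === 'syscall' && current) {
      current.syscalls += 1;
    } else if (entry.type === 'deliver-result' && current) {
      current.computrons = entry.computrons;
      cranks.push(current);
      current = null;
    }
  }
  return cranks;
}

const sample = [
  '{"type":"deliver","vatID":"v5"}',
  '{"type":"syscall"}',
  '{"type":"syscall"}',
  '{"type":"deliver-result","computrons":4000}',
].join('\n');
const cranks = parseSlog(sample);
// cranks: one delivery to v5 with 2 syscalls costing 4000 computrons
```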
B
The
vat
does
some
stuff
and
emit
some
sys
calls
eventually
the
vat
finishes,
and
so
the
the
the
tool
that
we're
building
for
this
purpose
is
basically
going
to
take
120
compressed
files
walk
through
them.
You
know
pointer
into
each
one,
that's
kind
of
stepping
forward
figure
out
for
this
particular
file
from
this
particular
validator.
B
This
is
now
working
on
block.
3000
is
working
on
delivery.
Number
four
within
that
and
find
that
same
point
in
all
of
the
other
files.
You
know
it's
going
to
be
a
120
pointer,
walk
and
when
they
line
up,
then
it'll
compare
the
data.
B
So
you
know
a
lot
if
you,
if
you're
doing
a
file
comparison
between
sorted
data-
and
you
have
multiple
pointers
that
are
iterating
at
different
rates.
It's
going
to
be
it's
going
to
be
something
like
that
and
we're
still
working
out.
You
know
the
best
way
to
to
approach
this
task.
I
just
finished
last
night
downloading
all
of
this
log
files,
so
yeah
everybody,
that's
uploaded.
One.
I've
got
that
data,
you
don't
need
to
keep
hosting
it
in
case,
that's
kind
of
they're
kind
of
big
that
can
be
kind
of
expensive.
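The cross-validator comparison can be sketched as follows. The real tool streams the files with one advancing pointer each rather than loading everything into memory, but the comparison itself is the same idea; the record shape (`block`, `delivery`, `digest`) is invented for illustration:

```javascript
// Sketch of the cross-validator comparison: each validator's slog
// covers a different window of the chain, so records are keyed by
// (block, delivery) and compared only where files overlap. Any key
// that shows more than one distinct digest is a divergence worth
// investigating (a non-determinism bug).
function compareSlogs(files) {
  // files: one array per validator of { block, delivery, digest },
  // each sorted by (block, delivery)
  const seen = new Map(); // "block:delivery" -> Set of digests
  for (const records of files) {
    for (const r of records) {
      const key = `${r.block}:${r.delivery}`;
      if (!seen.has(key)) seen.set(key, new Set());
      seen.get(key).add(r.digest);
    }
  }
  const divergent = [];
  for (const [key, digests] of seen) {
    if (digests.size > 1) divergent.push(key);
  }
  return divergent;
}

const files = [
  [{ block: 1, delivery: 1, digest: 'a' }, { block: 1, delivery: 2, digest: 'b' }],
  [{ block: 1, delivery: 1, digest: 'a' }, { block: 1, delivery: 2, digest: 'X' }],
];
const divergent = compareSlogs(files);
// divergent: ['1:2'] — the two validators disagree on delivery 2
```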
A
Great
okay,
all
right,
so
I
guess
just
to
finish
up
here:
we
we
had
a
10
day
long
phase.
We
did
a
whole
bunch
of
work.
How
would
you
frame
what
we
learned
from
the
phase.
B: Yeah, so we learned a bunch of things we need to do better. One thing we learned: the load generators are machines running a deploy script, if you've seen our contract deployment and development environment, that submits transactions like the AMM trade or the vault opening and closing. One of them creates a fungible faucet contract and then just taps it: "give me a thousand tokens, give me a thousand tokens." The protocol that we use from one of those solo machines, like a wallet, to the chain needs some work. We found that at low traffic rates it works just fine; at higher traffic rates, it has multiple messages that want to go out.
B
At
the
same
time,
we
don't
quite
have
the
right
kind
of
coordination
or
interlock
between
those
messages,
so
it
kind
of
it
almost
trips
over
its
own
feet
and
its
excitement
to
get
lots
of
messages
out
and,
as
we
were,
increasing
the
rate
of
traffic
generated
by
one
of
these
machines,
the
actual
rate
of
traffic
that
made
it
to
the
chain
was
dropping.
So
we
we
ended
up.
B
That
was
a
challenge
because
the
rpc
nodes
we
were
using
kept
crashing,
and
so
we
couldn't
bring
up
the
new
machine
because
they
couldn't
connect
to
the
chain
and
kind
of
get
synchronized
in
the
way
that
they
were
supposed
to.
So
we
have
a
bunch
of
work
to
do
on
infrastructure.
You
know
making
sure
we
have
the
enough
and
the
right
kind
of
rpc
machines
ready
to
go.
Make
sure
that
we
have
a
bunch
of
backups
in
place.
B
Oh
there's,
no
good
reason
why
this
thing
should
be
sticking
around,
but
it
is
and
and
find
out
why
the
next
big
development
goal
you
know.
Another
thing
we
learned
here
is
that
the
way
that
we're
doing
kind
of
flow
control-
you
know
the
load,
we
would
generate
a
bunch
of
load.
It
would
cause
one
block
to
be
really
big.
It'd
take
30
seconds
to
get
one
block
out
and
then
the
next
three
blocks
would
be
empty
and
so
there's
there's
a
task.
B
We've
been
we've
been
designing
for
quite
a
while,
but
it's
kind
of
fine.
It's
come
up
on
the
development
chart
now
to
use
metering
to
limit
the
amount
of
work
that
we
do
in
any
particular
block.
So,
instead
of
30
seconds
here
and
then
0
0
0,
it
should
be-
let's
do
10
seconds
here
and
then
10
seconds
here
and
then
10
seconds
here.
B
Part
of
that
is
dependent
upon
the
data
we're
collecting
from
this
log
files
to
figure
out.
What's
the
relationship
between
computrons
that
are
deterministic
and
that
we
can
measure
and
wall
clock
time,
which
is
what
we
need
to
you
know,
provide
to
the
the
network,
but
is
not
deterministic
and
is
not
something
we
can
measure
directly
part
of
it
is
you
know,
building
an
economic
model
for
the
way
that
computation
takes
place.
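The model-building step Brian describes amounts to fitting a mapping from computrons to wall-clock time over the (computrons, milliseconds) pairs pulled from the validators' slogs, then inverting it to pick a per-block computron budget for a target block time. A minimal sketch with plain least squares and made-up, perfectly linear data:

```javascript
// Fit wallClockMs ≈ slope * computrons + intercept by least squares,
// then invert the fit to choose a computron budget for a target
// block time. The data points here are fabricated for illustration.
function fitLine(points) {
  const n = points.length;
  const sx = points.reduce((a, p) => a + p.computrons, 0);
  const sy = points.reduce((a, p) => a + p.ms, 0);
  const sxx = points.reduce((a, p) => a + p.computrons * p.computrons, 0);
  const sxy = points.reduce((a, p) => a + p.computrons * p.ms, 0);
  const slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  const intercept = (sy - slope * sx) / n;
  return { slope, intercept };
}

function budgetFor(targetMs, { slope, intercept }) {
  return (targetMs - intercept) / slope; // computrons per block
}

// Made-up measurements: 1 computron ≈ 0.001 ms on this machine.
const points = [
  { computrons: 1000, ms: 1 },
  { computrons: 2000, ms: 2 },
  { computrons: 4000, ms: 4 },
];
const model = fitLine(points);
// budgetFor(5000, model) ≈ 5,000,000 computrons for a 5-second block
```

In practice the fit would be over millions of deliveries from nearly 120 machines, and the budget would be chosen conservatively so even slow validators finish in time.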
B
So
you
know
in
in
we
have
longer
term
plans
of
setting
up
a
fairly
sophisticated
scheduling
mechanism
so
that
you
can
pay
for
priority
the
first
steps
of
that
are
for
contracts
to
just
pay
for
the
execution
they're
taking.
So
a
crank
goes
into
a
vat
that
does
some
amount
of
computation
and
finishes
the
engine
says
that
used.
You
know
four
thousand
computrons
and
there
needs
to
be
an
account
that's
deducted
by
four
thousand
times
something
so
we're.
B
You
know
we're
working
out
the
details
for
that
part
of
that
is
to
incentivize,
efficient
use
of
the
platform.
Part
of
it
is,
is
a
denial
of
service
attack,
resistance
right
you,
you
can't
send
a
bunch
of
junk
messages
into
something
and
cause
it
to
burn
a
lot
of
cpu
time
without
paying
for
it
and,
if
you're
paying
for
it.
B
Well,
then
that
pays
for
people
to
you
know,
put
more
cpu
into
the
system,
so
I
have
kind
of
self-balancing
thing
and
then
that
same
mechanism,
it's
what
it
will
be
expanded
to
do
the
prioritization.
So
when
you
send
a
message
you,
you
indicate
the
bid
of
how
much
you're
willing
to
pay
for
the
message
and
then
that
influences
the
way
the
scheduling
algorithm
runs
with
the
thing
called
the
escalator
algorithm
that
we're
working
on.
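One way to picture the escalator idea: each queued message carries a bid that rises over time at a rate the sender chooses, and at each scheduling point the message with the highest current bid runs, so urgent work can pay to overtake. Brian says the design is still in progress, so everything below, the linear escalation and the field names, is illustrative only:

```javascript
// Sketch of an escalator-style scheduler: a message's effective bid
// grows linearly with how long it has been waiting, scaled by the
// escalation rate its sender chose.
function currentBid(msg, now) {
  return msg.startBid + msg.rate * (now - msg.enqueuedAt);
}

function pickNext(queue, now) {
  let best = null;
  for (const msg of queue) {
    if (best === null || currentBid(msg, now) > currentBid(best, now)) {
      best = msg;
    }
  }
  return best;
}

const queue = [
  { id: 'slow', startBid: 10, rate: 1, enqueuedAt: 0 },
  { id: 'urgent', startBid: 2, rate: 5, enqueuedAt: 0 },
];
// At t=1 the high initial bid still wins (11 vs 7); by t=3 the fast
// escalator has overtaken it (13 vs 17).
```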
B
So
you
know
those
are
the
next
couple
development
steps
that
we're
working
on
and
this
phase
you
know
made
it
clear
that
those
are
the
right
things
to
be
working
on
right
now.
Knowing
knowing
that
messages
are
kind
of
getting
backed
up,
indicates
that
hey,
if
we
could
control
the
priority
of
that
and
give
people
a
way
to
to
influence
that
that
would
really
help
smooth
out
the
behavior
of
the
chain.
A
Great
and
you're
teasing
a
little
bit
where
we're
going
with
the
next
phase,
which
I
will
be
talking
about
shortly.
So
I
guess
with
that
brian.
Thank
you.
So
much
for
coming
on,
as
everyone
can
tell,
you've
got
a
lot
on
your
plate,
so
I'm
gonna,
let
you
get
back
to
it
now,
but
I
appreciate
the
time
all
right
see
ya
all
right.
So
thank
thanks.
Everybody.
A
I
there
was
one
question
that
came
in,
which
was
I'm
just
gonna
sort
of
interpret
it
a
little
bit
more
broadly,
which
is
how
does
all
the
stuff
that
brian
just
talked
about
relate
to
our
mainnet
launch?
And
if
you
look
at
our
our
site
or
see
the
community
call
from
last
month
we
went
through
the
the
plan
schedule
for
mainnet
mainnet
phase.
Zero
does
not
depend
on
any
of
this
main
phase.
A
Zero
does
not
include
the
agora
virtual
machine,
which
brian
has
been
referring
to
as
swing
set,
and
so
that
can
launch
prior
to
this,
these
things
getting
tested
and
ironed
out
phase
one
cannot
so
phase.
A
One
is
our
agora
treasury,
and
that
includes
obviously
operating
in
javascript
and
using
our
virtual
machine,
so
we're
looking
to
to
build
and
test
a
whole
bunch
of
this
prior
to
the
phase
one
launch,
I
see
king
super
saying
too
much
jargon,
well
that
that's
too
bad
king
super
feel
free
to
ask
for
clarification
on
anything
that
that
you'd,
like
okay,
all
right,
I'm
gonna,
I'm
gonna,
keep
rolling
here
and
we'll
go
into
stress
test.
Part
two
so
got
a
lot
of
questions
in
discord.
A
This
is
an
additional
phase
that
we've
added
to
the
incentivized
test
net
and
and
just
for
clarification
this.
This
will
be
additional
points
as
well
right,
so
we
had
a
points,
balance
for
the
test
net
and
we've
upped
it
to
allow
for
this
new
phase.
The
start
date
is
planned
for
august
4th
and
the
focus
is
transaction
metering,
so
brian
talked
about
really
effectively
agorax
gas
model
right.
How
do
we?
How
do
we
limit
the
dos
of
the
the
network?
A
How
do
we
make
sure
that,
if
you
are
using
compute
of
the
validators
that
you're
paying
for
it
and
this
this
is
our
model?
For
that,
and
we
intend
to
have
some
end-to-end
flows
for
that
which
will
not
include
things
like
prioritization,
but
will
hopefully
include
the
ability
to
go
end-to-end
all
the
way
to
the
wallet
for
a
user
to
make
a
decision
about
how
much
to
pay
for
a
transaction.
A
So
we
also
brian
also
ran
through
a
whole
bunch
of
things
where
we
need
to
continue
working
based
on
what
we
learned
in
this
phase,
and
that
will
be
a
focus
for
this
next
phase
as
well
and
then
at
the
end,
we're
also
adding
in
some
beta
functionality.
So
the
javascript
level
governance.
We've
we've
had
a
few
governance
votes
in
the
incentivized
test
net
so
far,
but
they've
all
been
at
the
cosmos
level.
A
What
we're
really
excited
about
is
implementing
governance
at
the
javascript
level,
which
will
allow
it
to
reach
into
contracts
and
make
changes,
so
we're
looking
forward
to
at
least
one
task
related
to
that
in
this
next
phase.
As
always,
please
do
send
more
more
questions
into
discord
or
right
here,
right
now
and
and
I'm
happy
to
answer
them.
But
this
is
our
plan
for
the
next
phase
of
the
test.
Time
all
right
so
so,
moving
on
here
last
month
we
talked
about
developer
bounties.
A
We
talked
in
a
little
bit
of
depth
about
the
three
that
they
got
launched.
One
was
chain
link
related
infrastructure.
Another
is
a
pool
based
loan
protocol
and
the
third
is
a
soft,
peg
amm
curve,
which
I
think
is
called
a
like
asset
curve
in
the
actual
ticket.
It's
sort
of
a
curved
dot,
finance
style,
amm
curve
for
stable
coins
or
like
assets,
so
we've
gotten
a
bunch
of
applications
on
those,
and
we
really
appreciate
people
reaching
out.
I
wanted
to
take
a
moment
to
talk
a
little
bit
about
the
process
there.
A
As
we
we
launched
these
bounties
on
git
coin
as
single
developer
bounties,
which
means
that
from
our
side,
what
it
looks
like
is
a
bunch
of
applications,
and
then
we
select
one
person
to
to
work
on
it
out
of
those
applications.
If,
for
whatever
reason
that's
not
working
out
or
that
person
is
unresponsive,
then
we
do
have
the
option
to
to
go
back
and
pick
somebody
else,
but
we
can
only
pick
one
person
at
a
time,
so
that
does
affect
how
we
think
about
the
the
application
process.
A
So
the
way
we
are
handling
it
is,
if
you
do
want
to
submit
for
a
bounty.
We
ask,
please
put
some
thought
into
your
work
plan,
write
the
work
plan
in
english,
and
we
know
a
lot
of
our
community
are
non-native
english
speakers.
That's
totally
fine.
A
We
just
want
to
make
sure
that
we
can
actually
communicate
because
from
our
our
standpoint
I
think
everyone
at
a
gorick
is,
is
english
and
so
write
it
in
english,
put
some
thought
into
it
and
then
we'll
have
at
least
one
conversation
with
you
prior
to
moving
forward,
which
might
include
also
asking
for
some
other
other
code
that
you've
written
or
other
related
things
that
you've
done
and
then
we'll
move
forward.
So
three
boundaries
have
been
launched
already
and
then
we
have
a
whole
bunch
more
that
are
coming.
A
So
I
and
also
I
should
point
out.
We
have
a
bounties
channel
in
discord
if
you
have
questions
or
if
you
have
bounties
that
or
things
that
you
would
like
to
build,
that
you
were
hoping
could
become
a
bounty.
So
these
are
the
kinds
of
bounties
that
are
are
coming
up,
so
we
have
a
few
front
end
related
bounties,
so
those
of
you
that
have
played
with
our
beta,
you
probably
know
that
the
treasury
vaults
and
the
amm
they
need
a
better,
better
front
end
on
there.
A
So
we
have
bounties
for
that.
I
I'm
also
considering
an
info
page
for
the
amm,
so
for
those
of
you
that
have
used,
you
know
uni-swap
sushi
or
any
of
the
other
amms
there's
often
a
separate
page.
That
shows
details
on
pricing
and
things
like
that.
That
would
be
really
interesting
to
see.
A
We
need
an
explorer
for
swing,
set
again
the
agora
virtual
machine,
the
way
the
way
that
for
those
of
you
in
the
test
net,
when
you
put
a
transaction
in
to
make
a
trade
on
the
amm,
and
then
you
went
to
big
dipper
to
look
at
that
transaction.
Big
dipper
didn't
really
give
you
enough
information
to
understand
what
you
had
done
and
and
we'd
love
to
have
an
explorer
that
works
at
the
level
where
it
can
understand,
what's
happening
in
javascript.
A
So
that's
what
that
bounty
would
be,
and
then,
as
I
mentioned
there,
we've
got
a
bunch
of
additional
defy
apps
that
were
we're
looking
to
put
out
as
bounties
we've
had
several
people
reach
out
to
us
with
things
that
they'd
like
to
build
that
are
aligned
with
the
direction
we
want
to
go
and
if
that's
you
and
there's
something
that
you
really
want,
please
do
reach
out
and
we
can
see
if
we
can
make
it
a
bounty
for
you
and
sorry.
A
I
just
saw
king
super
since,
since
the
next
test
net
phase
wasn't
planned,
will
we
have
a
challenge
task?
And
the
answer
is
yes,
so
we're
looking
at
challenge
tasks
right
now.
It's
interesting
because
the
the
challenge
tasks
that
we're
considering
for
this
next
phase
kind
of
overlap
with
what
we
were
thinking
for
the
adversarial
test
net.
So
we
have
a
little
work
to
do
there.
A
But
yes,
we
are
planning
a
challenge
task
for
for
phase
4.5,
all
right
and
again
so
any
anything
with
with
bounties,
please
reach
out
in
the
boundaries
channel.
Also,
if
you,
if
for
some
reason,
you
don't
want
to
talk
publicly,
you
can
dm
me
in
discord.
A
So
we've
had
a
bunch
of
people
start
and
actually
we've
had
several
regional
communities
pop
up
internationally
very
recently,
and
so
we're
really
excited
for
this.
There
have
been
singapore,
turkey
and
germany
all
have
had
communities
start
recently,
and
these
are
members
of
the
community
like
you
that
have
decided
that
you
want
to
run
that,
and
so
we
just
wanted
to
say
thanks
for
those
of
you
that
have
gotten
started.
A
If
that's
something
that
you're
you're
interested
in
doing
and
again,
this
is
it's
probably
more
work
than
just
starting
a
telegram
group.
So
it's
the
sort
of
thing
where
we
want
to
talk
to
you
first
make
sure
we
understand
what
you'd
want
to
do
with
it
and
all
that
stuff.
But
this
has
been
really
great
to
see
the
agora
community
grow
internationally.
A
So
more
of
that
coming
and
really
excited
to
see
that
okay-
and-
and
so
I
I
mentioned
this
last
month
as
well-
but
agorik
is
hiring
so
we
have
a
number
of
positions
still
and
obviously
these
positions
have
been
live,
so
we
do
have
some
candidates
coming
through,
but
our
none
of
them
have
officially
been
filled.
As
far
as
I
know,
dean
may
correct
me
here,
but
there
are
four
engineering
positions,
two
marketing
positions,
two
operations
and
one
partnerships
and
programs
position.
A
All
of
these
positions
would
be
awesome,
so
if
any
of
these
fit
what
you're
looking
to
do
if
you're
excited
about
agoric,
please
do
reach
out,
because
any
one
of
these
would
be
a
great,
a
great
role
for
somebody.
That's
looking
to
either
move
into
the
blockchain
space
or
move
into
the
space
that
we
think
is
going
to
grow
solidly
over
the
next
forever.
A
So
so,
please
do
reach
out
to
us
and
we're
excited
to
bring
more
people
onto
the
team.
A
Okay
and
then
we
also
have
a
bunch
of
events
coming
up.
So
all
of
these
except
the
bottom
one
or
dean.
So
please
check
dean
out
on
on
various
podcasts.
You
know
that
he's
bringing
six
cups
of
coffee
with
him
wherever
he
goes,
so
all
those
podcasts
will
be
exciting
to
watch
and
the
nft
experience
down
at
the
bottom
that'll
be
a
panel
panel
that
I'll
be
on
and
speaking
and
as
we
get
closer
to
mainnet.
You
can
expect
the
event
schedule
to
get
even
more
populated.
A
All
right,
so
I
appreciate
all
the
questions
that
have
come
in
so
far.
I'm
gonna,
I'm
gonna,
give
a
few
more
seconds
for
any
more
questions.
If
there's
anything
that
you'd
like
me
to
cover
before
we
end
again,
as
always
always
happy
to
do
it
asynchronously
and
for
those
of
you
that
are
watching
this
as
a
recording
feel
free
to
find
us
on
discord.
But
I'll
wait
a
few
seconds
here
and
please
send
any
questions.
A: All right, with that, I guess we'll end our community call. Thanks to everybody for joining, thanks to everybody working through the testnet with us and working with us on Discord on the stuff that you're building. We really appreciate the community, as always. We'll be in touch, and see you at the next community call, if not before.