From YouTube: CasperLabs Community Call
Description
Rewards Distribution presentation & status update.
A: Okay, good day everyone, welcome to our community call. We're excited to be here. Thank you for dialing in, and thank you for listening in, those of you that are listening on YouTube. I'm going to provide a quick and, hopefully, thorough engineering update, and then we will turn it over for a presentation from our economics team. I know a lot of folks are curious about the economics of the Casper blockchain, and so in this series we are attempting to talk with transparency about our thinking around the economics of the chain, what it is we're building, and why we've chosen to do what we have.
A: So with that, I'll kick off the engineering status. We are working on our node 0.12 release. You can interact with that release plan to see what's planned in the program. Our goal is to deliver a contracts SDK with a built-in runtime environment for debugging and running your smart contracts from within a Rust workflow.
A: For example, you can set up your smart contract project and then run it using a test framework that will enable you to stub in tests and also pass your contract to the execution engine context. Very nice: you can actually pre-populate the execution engine with any pre-existing state before you run your contracts, enabling you to build your entire system.
A: Within this contracts SDK, the team entered sprint 29; I just left our sprint demos. We're doing some very exciting work on the engineering team, and so without further ado, we'll dig in some more. Our current focus is the production implementation of CasperLabs Highway. The team is very confident that they will have a first version of the protocol within the node 0.12 timeframe.
A: So we expect, in another two to three weeks, we'll have the preliminary version of the protocol, with probably a little bit of work rolling over into sprint 30 to get that completely done. But this will enable us to run the protocol and see how it is behaving. In tandem with that, we're figuring out test environments, because once the protocol is up and running as an alpha version, we'll need to tune and test it and see how it performs, so that's a focus there.
A: We're studying reward distribution, we're doing research on spam protection, and, of course, performance and stabilization in preparation for testnet. For consensus, what we'll be delivering is Highway with eras and fixed-length rounds. For those of you that are not aware, Highway is a protocol that actually adjusts to network conditions: it allows for shorter round lengths when network conditions are good, and longer round lengths when it's taking longer for validators to respond because of network congestion. Right now, for the first version, we've just got fixed-length rounds.
A: So the round lengths will not update, but we will have eras. I don't think we'll have validator set rotation, but we will have this thing called switch blocks, which will basically terminate one era and begin another. We're also building our simulator. We've always had this notion of wanting to see how the blockchain behaves over very long durations of time.
A: The simulator lets us simulate long durations of time, so the Sequoia simulator will aid in that. On the execution engine, we're testing that contracts SDK I talked about, and then we're also looking at bringing the system contracts into the execution engine fully. We believe this will speed up performance: the mint and proof-of-stake contracts will be implemented directly within the execution engine, and that's what this is talking about.
A: With the system contracts and the balance endpoint, we're going to integrate CLValue. We've discovered that we need to basically propagate the type system all the way out to the node, and so there's a little bit of work that needs to happen across the execution engine and the node; that's the work we're doing in this next sprint. Then we're actually updating the gRPC service to support new queries on the node, and we're adding more relationships to GraphQL. This enables greater insights.
A: Basically, we're enriching the GraphQL interface. We're actually doing a prefetch of blocks, and we also implemented sequential and parallel deploys. That's not on this slide, I need to add it here, but basically, for those of you that are not aware, the Casper blockchain is fully concurrent, and that means we process all deploys in parallel.
A: We assume that all deploys do not conflict and, of course, this is not true. You have the classic double-spend problem, right, which is a conflicting transaction, and you need to strongly order those deploys such that one of them will fail. So we've got a fairly naive mechanism: whichever deploy gets into the queue first is the one that gets processed first, and the other one fails. We will probably enrich that algorithm in future releases, but for the time being, we're addressing it.
A: We handle the conflicting-deploy problem by sequentially ordering the deploys that conflict, so the first deploy that commutes will make it into the current block. The second one that conflicts will be processed in a subsequent block, and it will likely fail as a result; for example, it will either fail or take the effects of the first deploy and apply them in the second block.
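A minimal sketch of that first-into-the-queue-wins ordering, assuming each deploy declares the keys it reads and writes; the `Deploy` dictionaries, the key sets, and the conflict test below are illustrative assumptions, not the node's actual API:

```python
# Illustrative sketch: order conflicting deploys so the first one in the
# queue lands in the current block and later conflicting ones are deferred
# to a subsequent block. Deploy shape and conflict test are assumptions.

def conflicts(a, b):
    """Two deploys conflict if either writes a key the other touches."""
    return bool(a["writes"] & (b["reads"] | b["writes"])
                or b["writes"] & (a["reads"] | a["writes"]))

def schedule_deploys(queue):
    """Greedy pass: keep non-conflicting deploys in the current block,
    push anything that conflicts with an accepted deploy to the next one."""
    current, deferred = [], []
    for d in queue:
        if any(conflicts(d, accepted) for accepted in current):
            deferred.append(d)   # processed in a later block; may fail there
        else:
            current.append(d)    # commutes with everything accepted so far
    return current, deferred

# Classic double spend: two deploys writing the same balance key.
q = [
    {"id": "pay-alice", "reads": {"acct"}, "writes": {"acct"}},
    {"id": "pay-bob",   "reads": {"acct"}, "writes": {"acct"}},
    {"id": "unrelated", "reads": {"x"},    "writes": {"x"}},
]
block, later = schedule_deploys(q)
```

With this toy scheduler, `pay-alice` and `unrelated` land in the current block, while `pay-bob` is deferred because it touches the same balance key as `pay-alice`.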
Block prefetch speeds up processing: if I don't have the parent block, I can still get the child blocks and basically optimize downloading of those blocks, particularly when I'm trying to catch up state. That's what this prefetch is for.
A: Testing. Sorry, we're doing a lot of work around testing. We're building a testbed environment that enables us to spin up entire networks using Terraform and Ansible, and then we're creating a rather rich testing framework that enables us to simulate, say, for example, an ERC-20 contract.
A: That contract might have, you know, a hundred thousand simultaneous accounts that are doing simultaneous transfers, and this is for the large-scale, production-type testing that we want to be able to do once we hit the alpha testnet timeframe, and even before that. This can also be used by our staking partners and validator partners, right; we'll partner with them to help them observe how the network scales out and how it behaves under very high throughput and more complicated network topologies. We're also optimizing integration testing and CI so it's faster.
A: This is work around parallelizing our CI tests so we can push more code through CI more efficiently. We also have to go through an inventory of our existing tests in continuous integration. We built a whole bunch of tests based on the naive Casper blockchain, and that's what's currently running in devnet.
A: There is a big difference between the naive Casper blockchain and Highway, primarily that in the naive Casper blockchain validators can decide when they're going to propose blocks. In the Highway protocol, that's not the case: when your turn comes, you have to propose, and you have to propose ballots or blocks to maintain liveness, right, to prove that you're present in the protocol.
A: So first we want to catalogue the tests and analyze whether they are relevant to Highway, and then we need to refactor them, either to make them relevant to Highway or to remove this auto-propose piece from the tests, so that we're ready to integrate with Highway when it's time. We've also got network simulations around testing for node restarts.
A: So what happens in a running network if a node has to go offline, for example, to install an upgrade or an update, a critical patch? It could even be a system-level patch. What does that look like? We want to make sure that we understand how the network will behave if nodes are taken offline to install patches or critical system upgrades. There's also lots of work on the ecosystem.
A: As part of our node 0.12 deliverable, we will be delivering a dApp developer guide that will be available in Read the Docs underneath our technical specification. We will later push this into a much more user-friendly getting-started guide that'll be aligned with our brand-new website. We've got a new website that we will be rebooting here in the next, oh, you know, probably three to five days; very excited about that, and eventually this smart contract user guide will go in there.
A: In addition to being more embedded with the dApp developer experience we're designing, let's see, what else: we're enhancing clarity and we're going to make the documentation more useful for node operators. I myself, when I've been doing a little bit of production engineering testing, noticed that it's hard to clearly tell when blocks have not been finalized or blocks have been orphaned. It's easy to see when blocks have been finalized, not so easy to see when blocks haven't been finalized.
A: We also want to add some things that node operators and validators can benefit from, and of course, as we see here, the new CasperLabs-style website is something we're also working on. You'll hear from the economics team shortly. They're doing a design of computation, storage, and bandwidth pricing. We promised predictable pricing and we continue to do research in that area; we also are starting to figure out what our pricing is going to look like, and we're doing some research on spam protection.
A: This is generally denial-of-service-resistance research: how can we protect node operators and validators from DoS attacks? And we are mocking up the reward distribution in Python, so we can see how the rewards distribution is actually going to work. We've gotten a lot of questions from prospective validators and stakers on what the rewards look like and what their net yields are going to be, and so we're going to do a mock-up in Python to help simulate this, I believe. And of course there's the Sequoia simulator, which I already talked about.
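That Python mock-up isn't shown in the call, but the basic shape of a stake-proportional reward split is easy to sketch. Everything below, the pool size, the stakes, and the flat delegation-commission model, is an invented illustration, not CasperLabs' actual reward design:

```python
# Illustrative reward split: an era's reward pool divided pro rata by
# total stake, with a flat commission taken by the validator on the
# delegated portion. All numbers are made up for this sketch.

def era_rewards(pool, validators):
    total = sum(v["own"] + v["delegated"] for v in validators)
    out = {}
    for v in validators:
        stake = v["own"] + v["delegated"]
        share = pool * stake / total                      # pro-rata share
        delegator_part = (share * (v["delegated"] / stake)
                          * (1 - v["commission"]))        # after commission
        out[v["name"]] = {
            "validator": share - delegator_part,
            "delegators": delegator_part,
        }
    return out

rewards = era_rewards(
    pool=1000.0,
    validators=[
        {"name": "v1", "own": 300.0, "delegated": 100.0, "commission": 0.10},
        {"name": "v2", "own": 600.0, "delegated": 0.0,   "commission": 0.10},
    ],
)
```

A net-yield estimate for a staker then falls out by dividing their slice of `delegators` by their delegated amount.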
A: Other company updates: we do have our weekly workshops. We have been doing them Thursdays at 8:00 a.m. Pacific, and we've moved them to Wednesdays at 4:00 p.m. Pacific, because Friday mornings don't work for some of our folks in Asia, so we're going to try Thursday mornings their time instead. So I'm going to update this to be Wednesday at 4:00 p.m.; our first weekly workshop will be Wednesday at 4:00 p.m. this week, and this week we will be talking about tic-tac-toe.
A: Tic-tac-toe is a nice little game that Michael Birch created. We're going to walk through the structuring of the contract; I'd like to demonstrate running it on an x86 machine to show how these contracts can run locally, and then, of course, we'll run it on the blockchain and play a game. That's the plan for this week's weekly workshop. So with that, I've done a lot of talking; I'll turn it over to Alex and Onur. Thanks.
B: Like Medha said, this week's presentation is about how to price opcodes. I have been doing some research on how Ethereum started doing that in the very first days, and if you look at it, it was there even back in the days of writing the first yellow paper.
You have this primitive, you know, categorization of operations, and you can see that they tried to use common sense to assign certain prices, and it has evolved since.
B: Now it looks like this. How did it come from here to here? Maybe this is a bit from the very first version of the file; maybe I should have also inserted the version from when mainnet launched, or the testnet launched, because it was on the testnet that they were making most modifications, as far as I understand.
B: Any operation that consumes resources takes time, and if the user doesn't pay for it, it is a potential DoS factor, because they can just spawn a transaction generating one trillion of those operations without paying anything. Supposing that they send that transaction to each miner, the whole network would halt for some time before the fallbacks kick in, before they actually detect that this is a case where a DoS is happening. So you can always do timeouts.
B
It's
not
like
the
gas
is
the
proper
way
to
go.
So
that's
how
they
reached
reached
that
point.
If
you
look
back
in
the
history,
VIP
150,
you
have
this
gas
cost
changes
for
IO
heavy
operations.
So
what
happened?
Is
they
realized
the
problem
there?
Just
that
I
just
described
some
IO
operations
were
taking
up
too
much
time
and
then
network
was
wasn't
moving,
so
they
increase
the
prices
appropriately,
and
this
is
something
also
we.
This
is
a
phase.
The
cash
flow
that
blockchain
will
also
go
through.
B: We will run into, you know, we will discover certain things, certain bottlenecks, that should be taken into account when pricing opcodes. So trial and error is fine in terms of assigning these values, but I was asking myself what could be a better, more mathematical way of doing this.
B: Assuming very simple operations, what would be the logic behind actually assigning some gas prices? I came up with some basic facts. The things that need to be considered are, number one, your round length is one of your limiting factors. The round length, or you can say block time (the terminology shifted a bit in proof of stake): you have, for example, in Ethereum a 14-second block time.
B: In that 14 seconds, many things happen: validators receive transactions, they create a block, they include transactions, they execute them, and they start hashing the block. Then they actually find the block and they propagate it. So the goal is that if somebody finds a block, it should be created and propagated to the rest of the network in under 14 seconds.
B: If that doesn't happen, then the blocks don't see each other and the network forks, and this is undesirable: the more forks you have, the lower the quality of consensus. So we want our uncle rate, or orphan rate, as small as possible. So you have this limiting factor, the round length.
B: If you know how much time a certain operation consumes and how it scales with the number of operations, then you can actually deduce prices and assign them in a way that ensures, when you set a gas limit, that as long as the gas limit is not exceeded, you can be fairly sure the block will be created and propagated within the round length.
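That deduction can be sketched directly: if benchmarks give you the per-operation time cost, you can pick gas prices so that any block under the gas limit is guaranteed to execute within the round length. The timings, the gas limit, and the two opcode names below are invented for illustration:

```python
# Sketch: choose gas prices proportional to measured per-op time so that
# "gas used <= gas limit" implies "execution time <= round length".
# All timings below are made-up benchmark numbers.
import math

ROUND_LENGTH_S = 14.0          # time budget per round/block
GAS_LIMIT = 10_000_000         # gas budget per block

# Benchmarked seconds per single operation (assumed measurements).
seconds_per_op = {"add": 2e-8, "sstore": 5e-6}

# Price each op so that spending the whole gas limit on it still fits
# in the round: price >= GAS_LIMIT * t_op / ROUND_LENGTH_S,
# rounded up so the time bound stays safe.
gas_price = {
    op: math.ceil(GAS_LIMIT * t / ROUND_LENGTH_S)
    for op, t in seconds_per_op.items()
}

def worst_case_time(op_counts):
    return sum(n * seconds_per_op[op] for op, n in op_counts.items())

def gas_used(op_counts):
    return sum(n * gas_price[op] for op, n in op_counts.items())

# Any op mix that stays under the gas limit also stays under 14 s.
mix = {"add": 5_000_000, "sstore": 1_000_000}
```

The rounding-up is what makes the guarantee one-directional: the gas bound is conservative, so staying under it always keeps execution inside the round.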
B: Consider a simplified case with two operations; I'm taking a subset. Let's say Gbase, which I think is what you pay for an addition: I'm considering that I'm only paying for addition, and I'm also only paying for transaction data. Transaction data is what you pay for the space that the transaction takes in a block. So of the two factors we are looking at, with Gbase we are looking at computation.
B: With txdata we are looking at bandwidth, and if we just consider these two, it becomes a simplified model of optimizing how the block time is spent. You have 14 seconds, so how much of that 14 seconds should be allocated for computation, for example addition, and how much should be allocated for block propagation? Should it be 4 to 10? Should it be 10 to 4?
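As a toy version of that 4-versus-10 question: given per-unit costs for computation and bandwidth, each split of the 14-second budget implies a different throughput mix. The per-addition and per-kilobyte costs below are invented numbers, not measurements:

```python
# Toy budget split: how much of a 14 s round goes to executing ops
# versus propagating bytes. The unit costs are invented for illustration.

ROUND_S = 14.0
SEC_PER_ADD = 2e-8   # assumed compute cost of one addition
SEC_PER_KB = 1e-3    # assumed propagation cost per kilobyte of block

def capacity(compute_s):
    """Seconds reserved for computation -> (max additions, max block KB)."""
    bandwidth_s = ROUND_S - compute_s
    return round(compute_s / SEC_PER_ADD), round(bandwidth_s / SEC_PER_KB)

adds_a, kb_a = capacity(4.0)    # 4 s compute, 10 s propagation
adds_b, kb_b = capacity(10.0)   # 10 s compute, 4 s propagation
```

Under these made-up costs, the 4/10 split trades away computation (200M vs 500M additions) to buy two and a half times the block size, which is exactly the tension the pricing has to resolve.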
B: You know that the more data you have in the transaction, or in the block, the bigger the size of the block and the more time it takes to propagate. You also know that the more operations a transaction has, the longer it takes; for example, if I call a certain computation one trillion times, it takes a lot of time. So these variables that I just defined are directly proportional to the number of operations, and this is what I try to capture here.
B: This document, I'm just pasting the link; actually, I cannot paste it, because I cannot use chat while sharing. So if you go to this link, you can find this document, if you watch this later on YouTube. Assuming a linear relationship between operation count and time consumption, I tried to get a basic principle for opcode prices.
B: Additionally, if you want to look at txdata, for example, you would try to propagate a 1-kilobyte block, then incrementally increase the size of the block and see how much the time required for block propagation changes. You would like to see how many 1-kilobyte blocks you can propagate in 30 seconds, and how many 2-kilobyte blocks you can propagate in 30 seconds.
B
So
you
would
factor
these
data
in
and
those
data
become
the
alpha
coefficients.
You
see
here
so
this
this
presentation
is
a
bit
more
technical
than
the
previous
ones.
Then
then
you
would
basically
you
can
ensure
that
the
prices
you
calculate
from
these
alphas
make
sure
that
the
time
the
round,
the
block
time
around
length
is
enough
for
for
creating
and
propagating
that
block
for
a
certain
gas
limit,
and
this
is
this
is
like
a
first
step-
assume
linearity.
Of
course,
like
networks
block
propagation,
you
have
to
have
non-linearity
in
there.
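The alpha coefficients can be recovered from exactly the benchmark described above: propagate blocks of increasing size, record the times, and fit a line. The "measurements" below are synthetic, generated from an assumed fixed-overhead-plus-per-kilobyte latency model:

```python
# Sketch: fit time = alpha0 + alpha1 * size_kb from block-propagation
# benchmarks via ordinary least squares. Measurements are synthetic.

def fit_line(xs, ys):
    """Least-squares fit for y = a0 + a1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - a1 * mx, a1

# Synthetic benchmark: 0.2 s fixed overhead + 1 ms per kilobyte.
sizes_kb = [1, 2, 4, 8, 16, 32]
times_s = [0.2 + 0.001 * s for s in sizes_kb]

alpha0, alpha1 = fit_line(sizes_kb, times_s)

# With fitted alphas, bound how many 1 KB blocks fit in a 30 s window.
blocks_per_30s = int(30.0 / (alpha0 + alpha1 * 1))
```

The intercept captures the size-independent overhead he alludes to, which is also why the linearity assumption breaks down first at small block sizes and under congestion.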
C: That's a difficult question, a very difficult question, but I think something like that is necessary, because, I mean, it picked some prices to satisfy, you know, the inequalities, but that doesn't quite solve the problem; it basically smooths it this way. If we had, you know, just a market for opcodes, right, I mean, the most expensive opcodes would be the ones that are most heavily used, right?
B: On top of that, you could maybe, you know, put surpluses on prices to make sure that some sort of welfare is optimized. So maybe we could derive prices from benchmarks and then also look at their distribution. This doesn't answer what the most optimal distribution is of, for example, transaction data versus computation, so I'm intending to do more work on that: how much of that, say, 30 seconds should be spent on computation versus block propagation?
B: So can we make it better than how Ethereum already works? Can we achieve higher throughput? It's Wasm, and yeah, there will be differences, but our computations should be faster already. So with Wasm, don't quote me on this, but I think it executes faster than the EVM, so, I mean, certain optimizations are possible.
A: Yeah, definitely. I think our execution engine will be faster than the EVM for sure; I fully expect that, because it is Wasm. Also, for our blocks, we're finding that they're very small when the contracts are structured correctly. When the contract authors do an installation, they do a define, and then subsequent instantiations or subsequent transactions just call what is already stored in the global state.
A: The block size has become very, very small, which is amazing: I'm seeing block sizes as little as three thousand bytes holding ten to fifteen transactions. Looking at ten to fifteen transactions in three thousand bytes, that means each transaction, or each deploy, is really tiny in terms of what's being put into the block, and this will enable us to put almost a thousand transactions in a block, if you think about it, easily.
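The arithmetic behind that thousand-transactions claim can be checked with the figures quoted in the call (about 3,000 bytes for 10 to 15 deploys); the 256 KB block budget below is a hypothetical figure for illustration, not a protocol constant:

```python
# Back-of-envelope from the figures quoted above: ~3,000 bytes holds
# 10-15 deploys when contracts are stored once and then called by hash.
# The 256 KB block budget is an assumed figure, not a protocol constant.

observed_bytes = 3_000
observed_deploys = 10            # conservative end of the 10-15 range
bytes_per_deploy = observed_bytes / observed_deploys   # ~300 bytes each

BLOCK_BUDGET_BYTES = 256 * 1024  # hypothetical per-block byte budget
deploys_per_block = int(BLOCK_BUDGET_BYTES / bytes_per_deploy)
```

At roughly 300 bytes per deploy, even a modest byte budget puts the per-block deploy count in the high hundreds, consistent with the "almost a thousand" estimate.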
B: It should be that way. This is equivalent to how you call the ABI in Ethereum: the only data you should need should be the hash of the contract, the inputs, the function, and so on. A blockchain which works without this is, to me, unimaginable, correct?
A: But the interesting thing is, because of the orphaning that happens in proof-of-work blockchains, I believe that Ethereum has to impose a hard gas limit on the block, so it keeps it at about 225 to 250 transactions per block; otherwise they start seeing a higher orphan rate. I think that's the push-pull with proof-of-work blockchains, right: if the block size goes up, you start seeing more orphaning on the system, and so, to prevent orphaning, you keep the block size really small.
A: In our system we don't have such a problem with orphaning, right? On our system, transactions get orphaned because your state, when you propose them, is too old, right? So if I'm a block producer and I have old state when I propose my block, then my block is going to get orphaned; but otherwise, in the Casper blockchain, you're fine, provided your block producer or validator has current state.
A: No, that's what Mateos built today, right: a sequential-parallel execution that will address those kinds of non-commutative transactions, the conflicting transactions like a double spend. It'll address that. But it becomes clear to me that for our protocol, a big deal is the size of your deploy, right, in terms of just bytes: if you're doing a Wasm installation, that's going to be significantly more expensive than just invoking the Wasm, because you're doing storage, right.
A: Yeah, there are also compiler optimizations we're going to document that make the WebAssembly the compiler produces smaller, and we've also found that the AssemblyScript Wasm is significantly more efficient than the Rust Wasm. It seems like there's a bunch of boilerplate code in the Rust Wasm that is not in the AssemblyScript Wasm, which is good news. I don't know actually that the AssemblyScript is orders of magnitude smaller, yeah.
A: Awesome, terrific. So we're going to continue this research on transaction fees, and we should be putting something out that's available for the community to review too, right? So maybe you'll share the link; do you want to put it in the current status? Maybe we'll find a slot for you to put your links there, if you want.
A: You can just edit it, it's just in GitHub. Awesome. Folks, thank you so much for dialing in, and for those of you watching online, join us next week, Tuesday, 9:00 a.m. Pacific. Of course we record live on YouTube, and we have our Discord channel where we share all this information. Thank you so much for tuning in; cheers, have a fabulous day, see you next week.