From YouTube: NEAR Lunch and Learn Ep. 06: Pricing Model
Description
In this episode, Maksym Zavershynskyi provides an overview of how NEAR assigns costs to different operations in our system, including what hidden and explicit parameters determine these costs, and an explanation of gas vs tokens.
~~~ABOUT Lunch and Learns~~~
This is a new series of videos exploring a concept on the NEAR protocol blockchain discussed in the NEAR office at lunch. Grab a sandwich and settle in!
Follow the latest from NEAR Protocol on:
Website: https://nearprotocol.com/
Discord: https://near.ai/discord
Medium: https://near.ai/medium
Twitter: https://near.ai/twitter
GitHub: https://near.ai/github
So a typical operation could be, say, creating an account, calling a smart contract, or deploying a smart contract. They all take a nonzero amount of time to execute, and they all have some intensity: some operations might require a lot of RAM to run, some operations might require a lot of CPU. So you could sort of picture it as a graph where, let's say, CPU usage is on the y-axis and time is on the x-axis. If we ask a blockchain validator to do something like create an account, the load that we cause on the validator will look something like a curve: it takes a certain amount of time before the operation completes, and it causes a certain CPU load on the validator. The same goes for other resources like RAM usage, disk, etc.
So when we try to measure how many resources we used on the validator, we need to take into account how long it took for the operation to execute and how much of the resources we actually used for it. So it's the length of this graph and the area under the curve, because an operation can execute really quickly while at the same time using a lot of CPU, so we also want to account for that.
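The idea of scoring an operation by both the length of the graph and the area under the curve can be sketched as follows. This is a hypothetical illustration, not NEAR's actual formula: the `resource_cost` function, its weights, and the sample numbers are all invented.

```python
# Hypothetical sketch: score an operation by how long it ran ("length
# of the graph") plus the area under its sampled CPU-usage curve.
# Weights and sample values are made up for illustration.

def resource_cost(samples, dt, time_weight=1.0, cpu_weight=1.0):
    """samples: CPU usage (0..1) measured every `dt` seconds."""
    duration = dt * (len(samples) - 1)          # wall-clock length
    # Trapezoidal rule for the area under the CPU-usage curve.
    area = sum((a + b) / 2 * dt for a, b in zip(samples, samples[1:]))
    return time_weight * duration + cpu_weight * area

# A fast but CPU-heavy operation vs. a slow but light one:
heavy = resource_cost([0.9, 1.0, 0.9], dt=0.001)   # ~2 ms near full CPU
light = resource_cost([0.1] * 11, dt=0.001)        # ~10 ms at 10% CPU
```

Note that the slow-but-light operation can still come out more expensive overall, because its duration term dominates; this is exactly why both dimensions are measured.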
The interesting thing is that even if you have a node running in some cloud and we perform the same operation multiple times a minute, we can get a different cost for the operation, i.e., a different execution time each time. For instance, account creation can take a different amount of time depending on what other stuff is happening on the same machine.
The distribution of how much time it takes is actually not just a single spike at, say, five microseconds; it's a spread-out distribution, and the problem with that is: how do we price this thing? If sometimes our operation takes two microseconds and sometimes it takes ten microseconds, how do we assign a certain financial value to it that says "this is how expensive it is to create an account in our system"? That's where the concept of the average pricing model versus the pessimistic pricing model comes into play.
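The two pricing choices can be made concrete with a toy latency sample. The numbers below are invented for illustration; only the two-versus-ten-microsecond spread comes from the discussion above.

```python
# Hypothetical illustration: the same operation measured over many
# runs yields a distribution, and the two models pick different
# summary statistics of it as the price. Latencies are made up.
latencies_us = [2, 3, 5, 5, 6, 5, 4, 10, 5, 5]  # microseconds per run

# Average pricing model: charge the mean observed cost.
average_price = sum(latencies_us) / len(latencies_us)

# Pessimistic pricing model: charge the worst observed cost.
pessimistic_price = max(latencies_us)
```

With this sample the average model prices the operation at 5 units while the pessimistic model prices it at 10, which is the gap the rest of the discussion is about.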
We can build a pricing model purely around the average. What happens then? Say, for instance, that creating an account takes five microseconds, which, let's say, is going to be 5 gas, and we say that you can have only 500 gas per block, because we need to limit how much gas can be used in a single block. That means that on average you're going to have a hundred account creations in a block. But then someone can abuse it.
Someone can come in and, for instance, start issuing account creations that have the property that they take ten microseconds, and this is what we call grinding. They can slow down our system; in this case they will slow it down by a factor of two. Suddenly, no one is doing anything that can be slashed in our system, but at the same time our block production has slowed down 2x, and this is grinding. It's 2x grinding in this case.
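The 2x slowdown follows directly from the numbers above; here is that arithmetic spelled out. The constants are the ones from the example (5 µs average, 10 µs worst case, 500 gas per block); nothing here is NEAR's real parameters.

```python
# Hypothetical sketch of the grinding attack under average pricing:
# the block still fits the same 500 gas, but wall-clock time doubles.
GAS_PER_BLOCK = 500
PRICED_COST_GAS = 5        # charged, based on the 5 µs average
HONEST_TIME_US = 5         # what a typical account creation takes
ATTACK_TIME_US = 10        # worst-case inputs the attacker grinds for

ops_per_block = GAS_PER_BLOCK // PRICED_COST_GAS   # 100 ops either way
honest_block_us = ops_per_block * HONEST_TIME_US   # honest block time
attack_block_us = ops_per_block * ATTACK_TIME_US   # attacked block time
slowdown = attack_block_us / honest_block_us       # the grinding factor
```

The attacker pays exactly the same gas as an honest user, which is why nothing here is slashable, yet block production time doubles.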
2x is probably not the worst of it. There can be operations that have a larger discrepancy in our system, say a 10x discrepancy, and then someone can do 10x grinding, i.e., slow down our system by a factor of ten. What happens if we instead create a pricing model that is pessimistic, where we say that creating an account takes 10 microseconds and therefore costs 10 gas? In this case we're going to have only 50 account creations per block at most, and our blocks are going to be only half full.
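The cost of the pessimistic model can be put in the same numbers. Again, these are the illustrative figures from the example, not real NEAR parameters.

```python
# The trade-off from the text, in numbers (illustrative only).
GAS_PER_BLOCK = 500

# Average pricing: 5 gas per account creation.
avg_ops = GAS_PER_BLOCK // 5      # creations per block, average model
# Pessimistic pricing: 10 gas per account creation.
pess_ops = GAS_PER_BLOCK // 10    # creations per block, pessimistic model

# If typical creations really take ~5 µs each, a pessimistically
# priced block does only half the useful work it physically could:
utilization = pess_ops / avg_ops
```

So the average model is vulnerable to grinding, while the pessimistic model wastes capacity in the common case; the gap between the two is the thing to shrink.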
There are natural issues with pricing operations that just take different amounts of time to execute. If, for instance, you unplug the power from your laptop, everything running on it is going to be slower, and you will actually notice it if you're running a blockchain node. Or you might have a different cloud instance running, or just something happening in the background, and you're going to get this distribution of the time it takes to perform a certain operation.
So we need to address these three things: the natural distribution of computation time, explicit parameters, and hidden parameters. If we want to address this gap between the average scenario and the pessimistic scenario, we need to do something with it, and there are two things we can do with the gap. We can either eliminate the hidden parameters, in which case the gap is going to close because there is less unknown variance in the price of certain operations in our blockchain, or we can try accounting for these hidden parameters.
So with the second approach we price the operation such that the price depends on, say, how many accounts are already in the system. This second approach is the more universal one, because if you know your system well, you can probably figure out all the hidden parameters, but it makes the economic model more complex. The first one, where you completely eliminate hidden parameters, is just harder engineering work: you need to design a system that has few hidden economic parameters, and this is really hard.
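A minimal sketch of what "accounting for a hidden parameter" could look like, assuming (purely for illustration) that account creation internally does an O(log n) lookup over the n existing accounts. The function name and constants are invented, not NEAR's actual pricing.

```python
# Purely hypothetical sketch: instead of one flat gas price, charge
# create-account as a function of a previously hidden variable (here,
# how many accounts already exist). Constants are invented.
import math

def create_account_gas(num_existing_accounts, base=5, per_lookup=1):
    # Assumes the operation's real cost grows with an O(log n) index
    # lookup; surfacing that makes the price track the true cost.
    return base + per_lookup * math.ceil(
        math.log2(num_existing_accounts + 2))

small = create_account_gas(10)          # cheap on a small state
large = create_account_gas(10_000_000)  # pricier as the state grows
```

The price now rises with the system's state size, so the gap between the average and the worst case shrinks, at the cost of a more complex economic model, exactly the trade-off described above.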