Description
Panel: Scaling Blockchains: Building the Decentralized Economy
Speakers:
Austin Federa, Head of Communications and Strategy, Solana Foundation
Adeniyi Abiodun, Co-Founder and CPO, Mysten Labs
Staci Warden, CEO, Algorand Foundation
Yves La Rose, CEO, EOS Network Foundation
Eli Ben-Sasson, Co-Founder and President, StarkWare
Stage: Stage 1
TOKEN2049 Singapore 2022
#token2049 #solana #eos #algorand #starkware #sui #scaling #blockchain
A: All right, welcome everyone, thanks for coming. Thank you for coming out late in the day to talk about this. So one of the things you'll notice about this title is that it encompasses pretty much everything you could possibly want to talk about with blockchains, ever. So we're not going to talk about everything, because it's easy to spend a lot of time talking about various scaling solutions, various strategies for building a decentralized economy, arguing over the word "decentralized," arguing over the word "economy," arguing over the word "blockchain."

A: Yeah, we could always talk about uptime — that's one of my personal favorite subjects. If you can't joke about yourself, what are you doing here? So that's actually a great place to start. I want to talk a lot about design decisions. Trade-offs.

A: Everyone's approach to scale, in any piece of software, always has trade-offs. There's no such thing as a piece of software with no trade-offs. Someone at some point, or a group of people at some point, made a decision to say: we're going to build this thing this way, and we're not going to build it another way.

A: So, for example, a lot of the design decisions made on Solana are to maximize throughput and sometimes to move really quickly with software development, and sometimes that's not the right decision. There have been periods of congestion on the network. There have been several outages on the network. That's a design decision — and I'm starting off as the one being vulnerable here, to invite the fellow panelists to be equally vulnerable — but that's a design decision that was made in the process, for the community to say that going fast is worth something, and having that failure state be one where the ledger is secured, even if it's publicly very unfortunate — let's call it that — is a design decision that's been made by the folks building the Solana network.

A: Every single network has different design choices and decisions, so I want to talk about some of those that maybe didn't go so well, or some of the trade-offs that are built into how you're thinking about building and scaling software. Because today, as it exists, there's no piece of decentralized software that can support a hundred percent of the volume of all the world's transactions, as much as someone might want to tell you otherwise. At best we're looking at theoretical numbers of a million transactions per second, and that is not a number that is big enough for all of the world's information.

A: So no matter what we're talking about in scale, there are phases of scale, and it might take ten years to reach true global scale for any possible solution — on stage, not on stage, or one that comes in the future. That's where I want to start things off today. So let's start at the beginning: how are you thinking about scale?
C: Hi everybody. We're the team that was working at Facebook for the last three, four years building the Libra blockchain. Obviously Libra isn't here with us today — rest in peace, Libra — but we learned a lot of great things through that experience. One thing we actually did once we left Facebook was tear up the work we'd done for those three, four years and do a new design entirely, because we had learned a lot through the process of building Libra about how to actually build a system that would cater to the billions of people of the world. We took a very complex approach to solving scalability: namely, unlike other protocols that have a single consensus algorithm,

C: we actually have two. We have one for single-owner objects — when I transfer something to you and you transfer something else to someone else, those are two different transactions. We can parallelize those: add more CPUs, you go faster. And then we have a different consensus algorithm for shared state. So whenever you're trading on a DEX, there's a different consensus algorithm for doing that. That's actually very complicated — we're building two blockchains at the same time and welding them together, which is not the most optimal thing you want to do, but let's build.
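A toy illustration of the idea Adeniyi describes — this is a simplified sketch for intuition, not Sui's actual implementation: transactions that touch only single-owner objects commute with each other (assuming they touch disjoint objects), so they can be executed in parallel, while transactions touching a shared object are routed through a sequential, consensus-ordered path.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model: each transaction names the balances it touches and
# whether any of them are shared. Owned-object transactions on
# disjoint objects commute, so they can run on separate threads;
# shared-object transactions must be ordered (here: run serially).

def execute(tx, state):
    # Apply a simple transfer between two balances.
    state[tx["src"]] -= tx["amount"]
    state[tx["dst"]] += tx["amount"]

def run_batch(txs, state):
    owned = [t for t in txs if not t["shared"]]
    shared = [t for t in txs if t["shared"]]
    # Owned-object transactions: parallel (disjoint objects assumed).
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda t: execute(t, state), owned))
    # Shared-object transactions: sequential, in consensus order.
    for t in shared:
        execute(t, state)
    return state

state = {"a": 10, "b": 0, "c": 5, "d": 0, "dex": 100}
txs = [
    {"src": "a", "dst": "b", "amount": 3, "shared": False},
    {"src": "c", "dst": "d", "amount": 2, "shared": False},
    {"src": "dex", "dst": "b", "amount": 1, "shared": True},
]
print(run_batch(txs, state))
# {'a': 7, 'b': 4, 'c': 3, 'd': 2, 'dex': 99}
```

The point of the split is that the parallel path needs no global ordering at all — only the shared-object path pays the cost of full consensus.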
A: Right, so there are two trade-offs you just brought up there. The first is to say: this code base that we worked on for three or four years — we're going to throw it away, take the learnings, and build something new. The second one is: we're going to build a complicated system, correct? And the more complicated a system gets, the more likely —
C: We're in devnet, testnet in a couple of weeks. So the way we're thinking about it is really slow and steady, but we're willing to take some risks along the way and add core functionality as we progress. You can't build it and expect it to be 100 percent perfect on day one, but what you can do is make sure you have the vast majority of functionality available for builders. The biggest thing we did is really make an object-oriented language for smart contracts.

C: So if you are a programmer in C++, in C#, in Java, you can come to Sui and build in Move — an object-oriented Move — very, very quickly. So we took on redesigning a lot of core elements of Move to give developers a much-improved developer experience, at the cost of not launching way sooner. I mean, we could have gone to market with Libra and been live by now, but we took a very different approach in that respect.
A: And now you look back — that's why the US unemployment system couldn't meet the demand it faced during COVID: it was built in a language that had zero scale potential and had fallen out of favor. When you're looking at a language choice, walk us through a little bit of that process too.
C: Look, ultimately, when you're talking about smart contracts, you're talking about programming scarcity at the heart of it, and programming languages were not designed to deal with scarcity natively. So you have people building a lot of boilerplate code to deal with verification, which is where you get a lot of the hacks and the bugs that we have today. So Move was designed — I mean, when we wanted to go to regulators, we couldn't say "trust us, it works." We had to build something that had an element of formal verifiability in the model.

C: So we now have a language that's formally verifiable, that actually has the element of verification built in directly. If I transfer an asset to you, you don't have to do all the checks to make sure it's left the account and the balances are correct — it's done by the runtime. That was a design decision, because when you're programming scarcity, you don't want assets to be destroyed or duplicated in any way.
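A rough sketch of the "resource" idea behind Move, as a hypothetical Python model (this is not Move, just an illustration of the semantics described above): an asset is a linear value that can be spent exactly once, so the runtime — not the contract author — rules out duplication and double spends.

```python
class Asset:
    """A linear resource: may be spent at most once."""
    def __init__(self, amount):
        self.amount = amount
        self.consumed = False

class Ledger:
    def __init__(self):
        self.accounts = {}  # owner -> list of live assets

    def mint(self, owner, amount):
        a = Asset(amount)
        self.accounts.setdefault(owner, []).append(a)
        return a

    def transfer(self, src, dst, asset):
        # The "runtime" enforces conservation: the same asset object
        # cannot be spent twice, and it moves rather than copies.
        if asset.consumed or asset not in self.accounts.get(src, []):
            raise ValueError("asset not owned or already spent")
        asset.consumed = True
        self.accounts[src].remove(asset)
        self.accounts.setdefault(dst, []).append(Asset(asset.amount))

    def balance(self, owner):
        return sum(a.amount for a in self.accounts.get(owner, []))

ledger = Ledger()
coin = ledger.mint("alice", 5)
ledger.transfer("alice", "bob", coin)
print(ledger.balance("alice"), ledger.balance("bob"))  # 0 5
try:
    ledger.transfer("alice", "bob", coin)  # double spend rejected
except ValueError as e:
    print("rejected:", e)
```

In Move the analogous checks come from the type system rather than runtime bookkeeping, but the invariant is the same: transferring is a move, never a copy.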
A: An interesting decision. So, Staci, we were talking before about some of the verification systems that Algorand uses and its ability to scale because of that — but the transaction finality time is maybe a little slower than you might see from some of the other systems. Talk a little bit more about some of those decisions and how you've approached scale.
B: Well, the Algorand blockchain was built by a Turing Award winner in computer science and a ten-year professor at MIT. But one thing I will say about Libra first: I think the fact that Libra could scale is actually what brought about its demise.

B: From a political and a regulatory standpoint, the idea that it could end up being a regulation maker instead of a regulation taker — it was powerful enough and scary enough to maybe sow the seeds of its own undoing. I don't know, but I think that's a reasonable theory. So Silvio Micali is the founder of Algorand, and he had invented or co-invented a number of things that are very important for the crypto ecosystem
B: more broadly — those are zero-knowledge proofs and verifiable random functions. He was working on all this, but he never attached the idea of money to these things; that was not how he was thinking about it. He was thinking about it as a cryptographer. And Vitalik Buterin famously posited the trilemma: that you can't have scale and security and decentralization at the same time. And I think Silvio said to himself:
B: you know, I think we probably could. And as you all know, all the juice in a blockchain is in its consensus mechanism. So he developed a pure proof-of-stake mechanism, and at a very high level it works like this — and settlement finality comes without forking. It's impossible to fork. It's not just that it hasn't happened; it's impossible. That now takes place in 3.9 seconds, Austin. We have just released a new version that goes from 4.5 seconds down to 3.9, without the possibility of forking.
B: I think that's pretty fast, particularly for high-value transactions. Consensus takes place in three rounds over the course of those seconds, and the number of Algos that you have improves your chances of appending the next block, but does not guarantee it. That's where the randomness comes in, built on these verifiable random functions — so it's random and also deterministic. The reason it's secure is that the vector of attack is not known beforehand. You don't even know yourself

B: if you're going to get picked. The more Algos you have, the greater the likelihood that you'll be able to propose the next block. And this happens over three rounds. In the first round you propose the block. The second round is a soft vote: the lowest hash will end up getting picked.

B: It takes some time, though, and there's kind of a natural, maybe hard, wall there, because you've got to reach all the relay nodes, and they've got to be able to compare all of the different proposals and select the lowest hash — that will take some time, because you don't want a lower one to come in after you've finished. And then the third round is where you check for things like: is there double spending?
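A toy sketch of the sortition idea Staci describes — the mechanics here are assumed for illustration, and Algorand's real VRF-based sortition is considerably more involved: each participant privately derives a pseudorandom draw from a round seed and their own key, stake weights the odds, and the lowest stake-weighted value wins the proposal slot, so nobody can predict the winner in advance.

```python
import hashlib

# Toy sortition: hash(seed, key) gives each participant a private
# pseudorandom draw in [0, 1); dividing by stake makes larger
# stakes more likely (but never guaranteed) to win the round.

def draw(seed: str, key: str) -> float:
    h = hashlib.sha256(f"{seed}:{key}".encode()).hexdigest()
    return int(h, 16) / 2**256

def proposer(seed, participants):
    # participants: {key: stake}. Lowest stake-weighted draw wins.
    return min(participants, key=lambda k: draw(seed, k) / participants[k])

participants = {"alice": 50, "bob": 30, "carol": 20}
wins = {k: 0 for k in participants}
for round_no in range(1000):
    wins[proposer(f"round-{round_no}", participants)] += 1
print(wins)  # alice tends to win most often, roughly in stake proportion
```

The deterministic-yet-unpredictable property comes from the hash: anyone can verify a claimed draw after the fact, but before the seed is fixed no one — including the eventual winner — knows who will be selected.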
A: It's interesting, because there are models on several chains where you have sort of optimistic confirmations and then you move forward, and on Algorand there seem to be three layers of that, which is like moving confidence intervals up at each stage — if I understand that correctly?
B: The third round looks at the transactions and makes sure they're okay, and the first two kind of propagate and then verify the block, yeah. But if I could — you did ask something about language trade-offs, and of course I don't speak for the engineers and I wasn't there exactly, but the machine language of Algorand is called TEAL, and I

B: think that Silvio and the engineers, in the early days, decided that — you know, the real fork in the road, honestly, if you want to build something quickly and deploy quickly, is: are you going to use Solidity or not? And I think they decided that they were going to use a language that was harder to learn and less tolerant of mistakes. It's the kind of language that you fly airplanes in, not the kind of language that you would

B: program Facebook in, right? And as a result of that, it can scale. I'm not throwing shade at Ethereum, but sometimes your success can a little bit be your demise. So this is a little bit harder to learn, especially in the beginning.
A: Yeah. So, Eli, I want to talk about one of the most interesting trade-off decisions that someone can make, which is relying on someone else's base layer.

D: Yes, yes, yeah.

A: Talk to us a little bit about that process of deciding. I mean, the idea of rollups and layer-2 systems is one that's been floating around for a while; a lot of systems have been built. But there's that old adage: you don't want to build your business on someone else's business. That's a very Web 1, Web 2 way of thinking about it — I don't think it applies to Web3 — but it does mean that certain things, like your timelines, are not necessarily your own.
D: Yeah. So we founded StarkWare four and a half years ago, right in the craze of the ICO season. Everyone was saying to us — the investors, everyone — hey guys, you have to... they didn't call it layer one back in the day... you have to do an ICO, you can raise a gazillion dollars, it's the obvious thing. Those were the happy

D: days: EOS was launching, you know, Filecoin. And we said: there's this cryptographic technology that we know very well, and it makes a lot of sense, and the fact that we are bringing this technology to fruition doesn't mean that you should have a new coin for it. By the way, those who follow us know that by now we actually are launching a token, and I can explain why it makes sense now — but back in the day it didn't make sense. We said: we have this technology

D: that is very much white-label, in the sense that it can go anywhere and scale any blockchain exponentially — by the way, it can also go outside of blockchain and help scale things exponentially. This technology just delivers integrity in a novel and much better way. And as Staci said, indeed, it is based on the foundational work of Professor Silvio Micali, the founder of Algorand. What we did was greatly improve the efficiency of these things, which were initially considered impractical.
A: Let's dig into that a little bit, though, because that's a big claim — no doubt — and with every big claim come decisions that say: we're going to be really good at this piece, and we're going to be less good at that piece. What does that look like for StarkWare?
D: For StarkWare, we're really good at future-proof scaling through STARKs, which is also the future-proof way to do validity proofs. This is something that is post-quantum secure, has the fastest proving time, and can scale to any scale without any need for trusted setups or things like that — so it's going to stay around for a while.

D: One thing that was very easy for us to say we don't need at this point in time is doing things like consensus in the base layer. Others have done it very well. We basically said — and we're still saying — this is a technology that any layer-one blockchain, and later on the conventional world as well,

D: that needs scale beyond its perimeter of trust is going to use. And okay, Bitcoin was the obvious place you would want to deploy, but on Bitcoin you can't, because it's not Turing complete. So the second obvious place to go was Ethereum, which is why we went there. It needed the scale. By the way, when we started our

D: path, we said everyone's scale is going to be a problem, and people said no, it's not a problem — and we were still working on scaling technologies back then, in 2018. Lo and behold, it did become a problem; now everyone's talking about it. Luckily we started back then. So this technology will go on any infrastructure that needs scale and wants to have integrity at scale beyond its perimeter of trust.
A: But in that model, there's a whole bunch of different systems involved in any sort of proof. There's off-chain technology involved in actually computing it and bringing it back on chain. There are points of censorship around all of these types of technologies — famously, if you're running a single sequencer. Not that people necessarily aren't, but in a ZK rollup system, if you're running a single sequencer, that is a choke point, both of throughput and, potentially, of centralization.

A: There are a lot of these kinds of things — most solutions that revolve around STARKs break composability when you move between layers. The world that that scaling solution is built in has a thesis behind it: that a bunch of transactions are going to be of this type, or maybe not of this type.
D: So for us it was first and foremost scale, with a future-proof and very scalable technology. You're right that decentralization comes later. It is something that, within the next year, will be solved in a very, very satisfactory way: both the sequencers and the provers will be decentralized and open, and it's not going to be a single point. And you're right that these are things that come second in our prioritization — sure, that's what we're here to talk about. So first we want functionality and scale.

D: I think the developers on StarkNet are very happy with these things. They accept this, and they understand the reason we went there.
A: Yeah. So I want to talk about the EOS network as well, because back when I was working at Bison Trails, one of the networks that we were working on archival support for was EOS, because there was some demand for that from networks there. And I believe at the time — this was mid-2020 —

A: the volume of data generated by the network meant that ledger and trace data was over 60 terabytes that had to be indexed and maintained. What's crazy about that is the volume of data moving through it: there's obviously a scale decision made when you're talking about a system that both can pump that much data through it and at the same time requires that much data to run. I would love to dive in on some of that decision making.
E: Yeah. At that particular time, the EOS network was roughly doing 125 million transactions per day. To put that in perspective: every single day, EOS was doing more than all other blockchains combined, and it was doing more in one single day than Bitcoin and Ethereum did in the entire year.
E: So it's an immense amount of data to be able to process, and that's one thing that EOS does extremely well — that scale — but it comes at a cost. During that period of time — to keep it on point — the ones that were actually providing that service saw their costs increasing significantly. And obviously, we're in blockchain: we want that immutability;

E: we want to keep that record. So you want to have history nodes. In our case, those history nodes were bearing the brunt of that cost. The amount of data that was being captured and needed to be stored daily during that period — which lasted roughly three months, at 120 to 125 million transactions daily, non-stop — was enormous, and there was no incentivization model for those history nodes to provide that service.

E: With what was being stored, and how it was being stored, we were really pushing the limits at that point — not even of what blockchain could handle, but of what typical machines and enterprise data centers in the blockchain space could handle — so it was quite tricky. It was one of those double-edged swords: do you allow all of those transactions? And the community at that time was starting to have very real conversations about what you store.
E
Do
you
essentially
end
up
storing
everything
because
there's
a
cost,
you
end
up
not
storing
everything,
but
if
you
go
the
route
of
not
storing
everything,
then
do
you
lose
some
of
that
immutability
do
you
lose
and
ultimately,
what
that
ended
up
being
for
us
anyways
it
was,
is
it's
called
CPU
mining,
essentially
yeah
that
ended
up
being
kind
of
the
base
layer
foundation
for
what
we
see
now
is
Gamefly
a
lot
of
the
games
now
that
are
that
are
trending
on
dap
radar
in
the
top
10
are
CPU
Mining
and
dpos
chains,
and
so
we
still
have
a
few
of
of
the
the
chains
that
run
on
the
software
that
we
manage
and
that
we
maintain
that
we
develop
are
still
processing,
25
30
million
transactions
per
day,
but
we,
two
years
ago
re-architectured
everything
in
order
to
be
able
to
handle
that.
A: It's interesting — one of the design decisions that, for example, Solana makes very differently: it's a weak-subjectivity chain. The accounts database is updated on a per-block — well, an as-needed basis per block — so the nodes don't have to carry the full history of the ledger with them. That's another great example of a design trade-off made on the Solana network: to support it, you're not having to bolt a 30, 60, 90 terabyte SSD onto the thing.
E: Well, another one of those design decisions is RAM. What we're facing in EOS is actually RAM limitations that are not even theoretical: they're at the point where Intel and AMD don't make architectures that can handle the amount of RAM we need in the hardware we run the chains on, in order to support that large amount of account creation. So RAM limitations

E: right now are one of the challenges we're facing. Again, going back to GameFi: Antelope — EOSIO, the underlying software, was rebranded to Antelope about a month and a half ago — that underlying software is run by multiple chains that are running a lot of GameFi. And what we saw in the last cycle is a tremendous number of users coming in.

E: So we're talking about tens of millions of accounts being created in a very short period of time, all of that being stored in RAM in order to be accessible in a very rapid way, and we're hitting the bounds of what current computational architecture can provide us. And then we have a trade-off: do we keep using latest-generation hardware, more at the desktop level of computing, for the BP nodes, or do we go enterprise grade?

E: If you go enterprise grade, you've got a lower core clock speed. Sure, you've got multi-threading, but effectively everything is pretty much single-threaded anyway, so you're not really using those cores. That's the trade-off there — but you can have more RAM. Or do you go with the fast core clock speeds? You're still able to meet that high throughput, but now you're maybe pushing the limits of how many accounts you can actually create. And so that's what we're trying to solve for now: horizontal scaling, in that sense.
B: You know, I think this has been cast so far on this panel in terms of technical decisions and technical trade-offs, but there are also business trade-offs that you make all the time, and they drive the technical trade-offs. So, for example, we just released an upgrade, and now we do six thousand transactions per second. But the head of product at the Algorand technology company says they have a road map by which they theoretically know how to get to 10,000 transactions per second, and then to 40,000.

B: I think you have a similar kind of situation — you do with Aptos and, I hesitate to say this, but our two chains maybe do as well — which is that you guys, and correct me if I'm wrong, put things out there that might not be fully baked, and the community loves it, and you're super transparent about it: hey, we're trying this, it might not work.

B: You know, it's in GitHub — what do you guys think? Whereas we kind of make sure it's right before we release it. And then it's kind of like you versus Aptos as well, right? You're a little bit like that too, whereas they're kind of out there and you're making sure it's right, maybe — and, you know, who knows.
A: There are often a lot of meetings involved in someone saying — take a hypothetical Web 2 company saying, "we want to build a rewards NFT system for what we're building." There are going to be a lot of meetings with lots of different potential chains and providers, and the trade-offs that make a network attractive for developers will often make it unattractive for those Web 2 companies, because they look at something and say: well, there have been periods of downtime for this, or an outage, and for our organization

A: the PR hit is not worth it, even if we think the technology platform might be better — because we exist in a Web 2 world with Web 2 users; there's a PR trade-off there. So you're exactly right that the process that goes into deciding how scale works is often as much human as it is technical.
C: We've had a great experience with Diem, and for us in general, we designed a system that is multi-threaded from day one. Our system is multi-threaded; it takes advantage of scale — you add more machines, it's faster. The idea that you can have a cap on throughput, and today your fee is a dollar and next week it's five — you can't build a sustainable business model there. You can't tell me I can play a game for a cent, but only as long as the fees stay really low.
E: One thing that I think is really important to remember is that we're pioneering a lot of things that nobody else has done before. So whether it's being comfortable with actually having downtime while hitting that theoretical limit of what you can do — as you say, it's that trade-off of being comfortable with that, or taking a different, maybe slightly slower, approach of making sure that everything is stable. Again, it's that trade-off. But anything that we're doing —

E: this is such a nascent field anyway that it doesn't ultimately matter which direction we're going; it's going to benefit the entire space. Looking back five years from now, a lot of what we learned as individual projects will be rolled up into perhaps new projects, and the space will be stronger. Where we are today is nowhere near where we're going to be — that's what's really interesting about it.
A: I would say the voice I wish we had on stage with us is Cosmos, because Cosmos's approach to scaling is fascinating, and it's been very ahead of a lot of things. The Cosmos approach to scaling — which I think is particularly interesting — is that you have many fractured states in application-specific side chains and networks of networks, and you have this theoretical — no, well, practical — interoperability using IBC and these connectors. But what we've seen is that very few people make use of those connectors.

A: We've seen this too with the L2 scaling solutions for Ethereum, with Optimism and Arbitrum: they don't have many users, even though they're able to offer things like much lower fees. And so, as we get into the end of this and talk a little more about the decentralized-economy portion of it: are users actually ready for — I don't want to call it decentralization — fracturization?
E: I think they are. So on our front — EOS, and I guess the underlying Antelope ecosystem — it was very much decentralized: you had multiple blockchains running the same software stack, but they very much were competitors and were not talking to each other. In the last eight months or so, when we created the foundation, one of the big things we wanted to do was to create partnerships with those other ecosystems. And so, right now

E: we have four partners in there — WAX, Telos, UX, and EOS — that have come together and started deploying funding towards IBC, which we're currently running between chains. So we all had our own separate communities, our own applications, our own ecosystems, and we're taking the opposite approach: now we're creating that encrypted connectivity, in order to scale.
E: Right now, that was one of the issues: they weren't able to scale further than that, and that is the method we're using to actually scale. Instead of creating it at the onset, without a user base, and potentially growing too fast, we're having to do it this way — not necessarily by choice — in order to scale. It's a very different approach.
D: Yeah, so the way we went about it with StarkEx is to offer this — going back to what we're good at. We're not about hype; we're about delivering scale to the ones who need it. So, luckily — sorry, not luckily: we're very fortunate and very proud — we have partnered with amazing B2C teams. For instance, Sorare: they have a user base of, I think, upwards of two million users who are basically using NFTs. These users

D: don't even necessarily know that these are NFTs, or that they're on Ethereum, on a layer two, on StarkEx — but they are. Again, it's about delivering scale, not necessarily about the hype around it. Immutable X, with the multiple amazing teams building on it, has active users in the hundreds of thousands. dYdX:

D: volumes there are already nearing one trillion dollars traded overall. And on TPS: there have been weeks where the TPS on StarkNet — sorry, on StarkEx — has surpassed that of Ethereum, and on most weeks we surpass Bitcoin, and probably, I'm guessing, 10x all the other L2s combined. I think Crypto Twitter doesn't necessarily know these facts, because, as I said, we're...
C: ...a product — it's a building block, or a tool, right. I think, unfortunately, we've led with a lot of that, and it's great — people have made money, many people have lost money — but I think the reality is that in the next cycle we need real applications, with real users, with use cases that keep them engaged and enticed for a very long time. I think there aren't enough users in Web3, which is why we can maybe say we don't need these scaling solutions right now.
C
But
when
you
start
talking
about
gaming,
when
you
have
100
million
people
playing
video
games,
you
don't
need
to
transfer
items
in
real
time
when
you're
moving
assets
from
AWS
to
now
a
public
Ledger
you're
going
to
need
real
scaling,
Solutions
and
especially
if
it's
not
focused
around,
like
literally
milking
the
most
amount
of
money
out
of
users,
but
giving
them
a
new
gaming
experience
that
they
never
had
before.
That's
our
goal
and
that's
our
mission
I
think
it's
going
beyond
the
hype
that
we've
had
so
far
and
bringing
real
users.
C
When
we
talk
about
10,
12
million
people,
it's
not
a
lot
of
people,
yeah
we're
nowhere
near
okay,
I.
B: I would like to stick up for us collectively, though, if I may. First of all, every single one of us believes in a multi-chain world. I mean, you're the head of comms at Solana — how many times have you said, "We are not an Ethereum killer, we're not trying to be Ethereum"? It's the same thing with us. We all believe in a multi-chain world. That's the first thing.

B: Then, you know, NASDAQ can do so many trades per second that we can't even imagine doing that at Algorand. But we don't close from Friday at 4 pm until Monday morning; we don't sit around waiting for the opening bell. So you've got to think about it: we really are, to your point, pushing the envelope on these things, and this is why I think we all believe in the Web3 proposition more broadly.
E: I don't think it was a jab at you, but if we're talking about hundreds of millions of users, we're nowhere near being able to do that — none of us, at all, at any given time right now. This idea of multi-chain: inevitably, if you're in a decentralized system, you will not be as efficient as a centralized system. You just cannot be; that is one of the premises. But you somewhat can be, if you end up being multi-chain — if chains start interconnecting with one another, you might not hit the same —

D: I mean, I just want to say — no, no, there you go.
D: I've got to say this: look, already three years ago we demonstrated in production that we can process a TPS of 9,000 for trades, 18,000 for payments. We regularly mint millions of NFTs per block for Immutable X. The reason we don't have billions of users using StarkEx systems is not because it doesn't scale — it's because the users aren't coming. The technology, yes —
A: Okay, agreed. Well, our time is up, but thank you all for sticking around for this late-afternoon panel, and thank you.