A
B
So today we have, like, a lot of really fun stuff to talk about. We've talked somewhat about [unclear], and we've talked somewhat about [unclear], but we can talk further about novel constructions. We can talk about a lot of things around execution, various protocol improvements — including, like, how things like STARKs could get integrated, and how the availability proofs could get integrated — various cryptographic upgrades, and a bunch more Casper and sharding topics that we didn't have a chance to cover yesterday.
C
A
B
So I guess one important thing to think about is: what is the minimum number of seconds that actually is safe, and why? You can go to kind of different extremes here, right. So, for example, [unclear] is managing in its testnet totally fine with, like, 900-millisecond block times, but the reason they're doing that is because all of their nodes are basically in the same cloud hosting provider.
B
But if you're just connected to the Internet with a regular internet connection and just doing things by yourself, then basically, whatever the incentive structure is, it should be that whatever you can earn doing it that way should be clearly close to what you can earn with the kind of theoretically optimal setup. So that's not assuming that everyone actually will definitely be doing it that way.
B
There are various efficiency issues that this particular design is not going to have, because the state of the beacon chain is — and we've already talked about it — going to be less than 400 megabytes, meaning that you can just keep the whole thing in RAM, so you don't have, like, I/O issues; and the beacon chain itself does not run any VM.
B
Uncle rates in Ethereum are already significant, and so, if it weren't for the uncle inclusion mechanism, the whole thing probably would have centralized much more significantly by this point. With proof of stake, on the other hand, you have these, like, fixed slots, and we know that as long as a block gets propagated within the amount of time before the next slot arrives, then you're relatively fine, I mean.
C
Another thing that proof of stake allows you to do more easily is have a slashing condition: if the proposer assigned to this slot makes two different proposals — so basically equivocates at a given height. So that should mean that you should get confirmation faster. I mean, that's also in addition to, like, this spectrum of confirmation that I was talking about, where you start getting attestations and all these, which is much, much faster — well within the five-second slot time. So.
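The equivocation condition just described is simple to state in code. This is an illustrative sketch only — the field names and toy hash scheme here are made up, and a real client would of course check actual signatures:

```python
import hashlib

def block_hash(slot: int, body: bytes) -> str:
    # Toy block identifier: hash of the slot number plus the block body.
    return hashlib.sha256(slot.to_bytes(8, "big") + body).hexdigest()

def is_slashable(proposal_a: dict, proposal_b: dict) -> bool:
    # Equivocation: the same proposer signed two *different* blocks
    # at the same height, so the pair of proposals is slashable.
    return (
        proposal_a["proposer"] == proposal_b["proposer"]
        and proposal_a["slot"] == proposal_b["slot"]
        and proposal_a["hash"] != proposal_b["hash"]
    )

a = {"proposer": 42, "slot": 7, "hash": block_hash(7, b"payload-1")}
b = {"proposer": 42, "slot": 7, "hash": block_hash(7, b"payload-2")}
print(is_slashable(a, b))  # True: two different proposals at one height
```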
B
Yeah, so in terms of kind of what actually needs to happen within each block height: basically, you have a block that gets published, then you have all these attesters that need to submit their attestations. The attestations make their way across the network, they get aggregated, and the next block producer basically aggregates the attestations and publishes the aggregate.
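The per-slot flow just described — publish, attest, aggregate — can be sketched as a toy model. The committee size, field names, and bitfield encoding here are invented for illustration, and real clients additionally aggregate the BLS signatures, which is omitted:

```python
# Toy model of one slot: a block is published, committee members attest
# to it, and the next proposer merges the attestations into one aggregate
# carrying a participation bitfield.
COMMITTEE_SIZE = 8

def make_attestation(validator_index: int, block_root: str) -> dict:
    bits = [False] * COMMITTEE_SIZE
    bits[validator_index] = True
    return {"block_root": block_root, "bits": bits}

def aggregate(attestations: list) -> dict:
    # OR together the participation bitfields of attestations to the
    # same block root (signature aggregation omitted in this sketch).
    root = attestations[0]["block_root"]
    assert all(a["block_root"] == root for a in attestations)
    bits = [any(a["bits"][i] for a in attestations)
            for i in range(COMMITTEE_SIZE)]
    return {"block_root": root, "bits": bits}

atts = [make_attestation(i, "0xabc") for i in (0, 2, 5)]
agg = aggregate(atts)
print(sum(agg["bits"]))  # 3 participants recorded in a single aggregate
```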
E
There was a paper at some point by Rafael Pass and a few others who looked at, like, sort of the Bitcoin model and concluded that basically, in order to reach eventual consistency, your block interval had to be, like, several times your network delay. So I wonder, as a related question: what would happen if you had, let's say, a temporary partition?
B
Yeah — but basically, if the network latency is kind of in between the block time and the Casper epoch time, then I expect that, like, obviously, the blockchain will kind of proceed forward more erratically, but it'll still make progress, and at the end of it, it'll still do what people expect. The main cost is that, under those conditions, there will be incentives to have much better connections than everyone else, so if that becomes the persistent state of affairs, then that creates centralization risks.
F
B
This also kind of reduces the monopoly power held by individual miners, because, basically, an individual miner will only be able to delay any particular transaction by five seconds instead of ten. The risk, in a specific way, of the shorter time goal is: first of all, it depends on the algorithm. If your algorithm is dependent on synchrony for safety, then obviously, if the block time becomes too fast, it becomes unsafe — but, like, Casper is safe under asynchrony. So, like...
A
B
F
B
There are definitely a few [levels]. So there is kind of the Holy Grail latency, which is basically, like: you click and it happens before you even notice. That's the web-server level; that's not really achievable in a public blockchain, except in state channels. The second level is if you get to the one-to-two-second level; then it becomes, like, really comfortable for point-of-sale systems. And if you get to the five-to-ten-second level, then — if you look at existing credit card payments in a restaurant, that's roughly their latency already.
B
So it is definitely at a point where, if Ethereum needs to be usable for a bunch of things, you do kind of keep gaining with every second that you can knock down. If you get into longer periods of time — so basically anything longer than Ethereum today — then there comes an inflection point where you have to kind of go AFK after you send a transaction, and when you hit that inflection point, then, let's say, in some ways [all longer latencies approach] equality.
G
A
B
C
[C] equals 1 is the time when you, as a merchant, receive a transaction and you have common knowledge that most of the rest of the network received that transaction before any double spend — because you haven't seen the double spend. So if we have some application where — let's look at the case where miners are nice and refuse to include double spends, so basically the Bitcoin Cash model — you get kind of Bitcoin-Cash-style security here now, with the [unclear] by epoch.
B
If the block time is 5 seconds, then this is the point where your transaction gets included into one block. When your transaction gets included into one block, then basically the default path is for the transaction to get accepted. The only things that could happen are: number one, the block producer themselves may have double-spent; or, number two, possibly some other [proposer] may have equivocated and [included it] in two different blocks.
B
Or there is some other block producer — say the next block producer comes early, or otherwise creates a block, and for some reason that block gets accepted by the network. So both of these have, like, a fairly low probability, but it's still a significant one. So that's kind of level two of confirmation. Now, level three is when you start getting attestations on top of this, right — so you start getting at... actually, I could point out, by the way, that this is all happening not inside the beacon chain.
B
This is happening inside of a shard. So, level three: in the shards, you can also have an attestation structure, and so over here, basically, at C between — I don't know, something like, we'll say, two and eight — is when you start getting attestations on this block. Once you start getting attestations here, then it's harder for these competing blocks to win, and once these attestations exceed...
B
Once these attestations exceed fifty percent, then basically, for a competing block at the same height to reach the same attestation level, somebody has to get slashed — so that's a higher level of security. Now, technically, of course, you know, you could get some block that gets created over here and then this ends up getting attestations, but that's considerably harder than over here.
B
Your next point is something like C — I don't know, like, possibly 6 to 11 — and that's when you have two blocks, and then this continues. Then your next milestone is probably when you have a crosslink, and the first milestone is when the crosslink starts getting formed, right. So this could happen in — a crosslink happens on average somewhere between 1 to 10 minutes, so let's say at 60 — or let's say 60 to 600 [seconds] — the crosslink starts getting formed.
B
Next
milestone,
the
across
link
ends
up
getting.
It
ends
up
getting
included
into
the
chain,
and
this
could
possibly
happen
over
time.
So
let's
say
this
could
be
like
65
to
7,
constantly
gets
included
into
the
chain.
Then,
wherever
the
cross
link
is
that
point
gets
justified,
so
basically
that
block
by
itself
ends
up
getting
more
and
more
confirmations
and
then
eventually
that
point
is
justified,
and
that
might
happen
in
and
I
came
general
likes
me
maybe
70
to
like
1500,
and
then
you
don't
get
finalized
and
that
might
be.
B
You
know
like
370
to
2000
or
something
like
that,
and
then
at
this
point
like
basically,
if
this
block
ends
up
justifying
whatever
this
is.
This
ends
up
finalizing
whatever
this
is,
and
at
this
point
you
have
like
some
degree
of
finality
and
then
after
here,
basically
you
get
kind
of
more
and
more
indirect
confirmation
that
nothing
invalid
was
finalized,
and
so
the
chain
really
is
final.
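Pulling the rough numbers from this part of the discussion together — these ranges are the speaker's explicitly hand-wavy estimates in seconds, not protocol constants (the attestation range is derived from "C between two and eight" slots of roughly five seconds each):

```python
# Rough confirmation milestones quoted in the discussion above, in seconds.
milestones = [
    ("transaction included in a shard block",   5,    5),
    ("attestations accumulate on the block",   10,   40),
    ("crosslink starts getting formed",        60,  600),
    ("crosslink included into the chain",      65,  700),
    ("containing checkpoint justified",        70, 1500),
    ("containing checkpoint finalized",       370, 2000),
]
for name, lo, hi in milestones:
    print(f"{name:40s} ~{lo}-{hi}s")
```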
G
D
B
Ordering from a restaurant — well, if he double-spends within [that window], then we can cancel. Number three: booking a flight — they can cancel. So, four: anything in person — well, like, if it happened to be reversed, then you're right in front of the guy, the guy knows you're a thief, and you can, like, call the police or whatever, right.
G
B
No — because, like, the security gradations that come before finality do depend a lot on, like, network latency assumptions; and, yeah, they also depend on, like, what kind of counterparty you're dealing with — so, like I say, the payment example versus... oh.
B
...is that you can do a trick where you can reduce the time to finality from two epochs to one. Basically, the trick is that you can have attestations in every shard simultaneously be messages pre-committing to only finalize specific things, right — and, if you do that, then you can kind of get off-chain-detectable de facto finality after, like, one epoch — plus or minus somewhat — instead of two. And...
B
And you don't halt, basically, because people will only do that if the chain actually is, like, at the top of the fork choice what it should be, and it was justified, and everything was okay, right. So this is a kind of category of optimizations that's worth looking into, right: basically, even though the main set of messages is on the main chain, you can do off-chain [commitments] — validators can sort of voluntarily constrain themselves inside shards, and this could make finality happen faster. Are you listening?
D
B
D
A
C
I guess one thing to mention is: in the failure mode where finality takes a very long time, these partial confirmations start to be great. So, for example, you know, the crosslinks — they don't have to be finalized, necessarily; I mean, the main role of crosslinks, I guess, is to ensure availability, not [validity]. So you can have the process of creating crosslinks continue even if finality doesn't happen, and so you have all these unconfirmed crosslinks, and some of them might not make it.
B
You
have
to
delay
finality
in
those
cases
where
more
than
a
third
of
or
more
than
like,
whatever.
Well,
you
definitely
have
to
sell.
You
have
to
be
pedantic
as
vlados
here
you
definitely
have
to
delay
finality
if
one
half
or
more
of
nodes
are
offline
and
like
you
definitely
have
to
delay
the
strongest
possible
degree
of
finality-
and
you
know
it's
rough
line
and
like
between
that
there's
a
spectrum
and
like
a
can't,
the
algorithm
dependent.
D
B
Number one, maintain the [randomness]; number two, maintain its own fork choice and maintain its own finality; number three, maintain the validator set and process validator incentives — process slashing, process validators coming in, so validators could, like, come in and deposit from either a shard or from the proof-of-work chain, and, you know, process withdrawals; number four is to include crosslinks. But that's basically it, in terms of what the beacon chain does.
B
So, like, one example is that we can change the fork choice rule from being, like, block counting to being GHOST — or, potentially, you could even have some clients running block counting and some clients running GHOST, because they're fine — unless you have, like, fifty percent... close to 50 percent malicious — they'll just always agree with each other.
H
B
[The thing] is, like: delayed execution is, on the one hand, nice, but, on the other hand — like, I remember, especially when we talked about it in Taipei, there was a lot of kind of negative feedback about it, basically because, like, it breaks the nice, clean Ethereum model, and it basically doubles the complexity, because you need separate fork choice rules and separate finality rules for figuring out, like, which state got executed. So, like, if we can figure out a reasonably simple model — sure.
B
G
B
No — it's like, if you're using one of these alternative execution contraptions, then you could potentially have kind of instant cross-shard transfers, where the finality of the ability to kind of get everything you think you'll get comes because the transfer is conditional on the crosslink, and on the whole thing getting finalized. So it's, like, [unclear] — but in an optimistic sense, in that, like, the sort of mechanism could allow you to move money, or really do whatever you want, in a few seconds. Oh.
C
The system players, in terms of proposers and attesters — and maybe executors might be a new class of, or a new role for, the execution, that's kind of randomized on a shuffled, randomized basis. As for, you know, the users — I thought they, I guess, will be in terms of applications that get built on specific shards.
C
A
C
The main — I mean, there will be discussion about STARKs tomorrow, but I guess one of the challenges of STARKs is around performance: you know, like, the size of the proofs and also the prover. And one thing where we can have a significant contribution at the protocol layer is by using primitives which are friendly to STARKs, and one of these is a STARK-friendly hash. So one of the things we're considering doing is basically deprecating SHA3...
I
C
...[in favor of] something more STARK-friendly. The technicality here is that, I think, when you look at the hash function, you want to look at the complexity of the circuit once you kind of lay it out — and by that I mean, like, the number of multiplication gates you need to implement it — and it turns out that SHA-256, and most of the stock hash functions out there, are not super friendly. But one kind that we're looking at is called — it's called MiMC. It's very...
F
B
A
B
So this scheme is probably the most secure if [the round constants] literally are the output of something random; and even if it's not random, as long as its kind of mathematical structure doesn't align with the mathematical structure of the finite field you're working with, that ought to be safe as well. So MiMC is interesting because it can actually be used to solve both of our problems.
B
So
it's
approved
claims
about
the
etherium
block,
but
also
because,
if
you
wants
to
have
a
start
based
aggregate
signature,
then
it
turns
out
one
of
the
cheapest
ways
of
doing
it
is
that
if
you
use
some
way
important
signature,
which
is
basically
just
a
bunch
of
hashes,
then
you
can
do
an
aggregate.
You
can
do
a
stark.
B
...or something like that, something like that, yeah. And then possibly — yeah, but actually, no: (p + 1)/6, that's what you need, because, like, x to the (p + 1) is x squared — because, like, exponents here roll over mod p − 1 — and then 2/6 gives you the [1/3], yeah. Then you have another XOR gadget, then you have that, and...
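The exact exponent is hard to make out in the recording; as a hedged aside, the commonly used inverse for the cubing map in a prime field with p ≡ 2 (mod 3) is d = (2p − 1)/3, since exponents work modulo p − 1 and 3d = 2p − 1 ≡ 1 (mod p − 1). A few lines of Python can check this exhaustively for a small prime:

```python
# For a prime p with p % 3 == 2, x -> x^3 permutes the field and its
# inverse is x -> x^d with d = (2*p - 1) // 3, because
# 3*d = 2*p - 1, which is congruent to 1 mod (p - 1).
p = 10007                      # a small prime with p % 3 == 2
assert p % 3 == 2
d = (2 * p - 1) // 3
assert (3 * d) % (p - 1) == 1
for x in range(p):             # exhaustive check over the whole field
    assert pow(pow(x, 3, p), d, p) == x
print("cube map inverted by exponent d =", d)
```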
B
Basically, the reason why I'm not quite [comfortable] is because we want to have large safety margins in case someone has a [better attack]. We want to make verification be friendly to, like, light clients and all those other things. But here's what you do, right: so, first of all, in order to run the VDF — basically, the way it would work is, you would run the permutation in the hard direction. Then...
A
B
Basically, the idea is that if you can get the overhead of running a STARK in the easy direction — if you can get the overhead of a STARK to be less than the [ratio] of running the function in the hard direction versus the easy direction — then: imagine the hard direction takes, like, 10 seconds to run; the easy direction by itself might then take 0.05 seconds. But then, if the overhead of a STARK is, say, only 100x, then this goes up to 5 seconds, and so it doesn't actually affect the runtime of the VDF...
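The arithmetic of that argument, with the numbers quoted above (all of them illustrative figures from the conversation, not benchmarks):

```python
# If proving overhead (as a multiplier on the easy direction) stays below
# the hard/easy asymmetry, the STARK proof can be produced in less time
# than the VDF evaluation itself takes, so it adds no effective latency.
hard_time = 10.0       # seconds to run the permutation chain forward (hard)
easy_time = 0.05       # seconds to check it in the easy direction
stark_overhead = 100   # assumed multiplicative STARK proving cost

proving_time = easy_time * stark_overhead
print(proving_time)                # 5.0 seconds
print(proving_time < hard_time)    # True: proving fits inside evaluation
```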
B
...that much. So one piece of good news is that I spent the last couple of days trying to implement STARKs from what I know of the protocol, and I used this as an example — and it's obviously kind of a toy, like, very heuristic, but the numbers I have are basically that the STARK overhead can be brought down to about a factor of 600, yeah.
B
The
reason
now
do
you,
if
you
read
the
literature,
you'll
hear
about
overheads
of
like
50,000,
but
the
reason
why
that's
the
case
is
because
once
you
get
numbers
like
50,000,
because
you're
doing
starts
over
general-purpose
computation
when
you're
doing
them
over
general-purpose
computation
every
single
bit
operation
every
logic
it
turns
into
a
finite
field
operation.
But
here
we're
doing
a
circuit.
That's
like
naturally,
very
earth
emits
the
arrows,
meditation
friendly
and
it's
already
over
finite
field
operations.
B
So
you
just
have
a
smaller
amount
of
stark
overhead,
and
then
you
like
me
like
600,
is
your
area
like
you
can
obviously
optimize
both
the
MEMC
itself
and
the
stark
more,
but
then,
if
he
wants
to
go
below
100.
Basically,
what
you
do
is
you
just
like
paralyze
the
thing
in
hands?
We
run
it
on
an
8
core,
CPU
and
then
600
over
8
is.
B
OK — I've had some chats with, you know, like, the Zcash team, we've had chats with other teams, so there's plenty of academics that are, like, actively working on this. Heuristically speaking, I will say that — so, first of all, the construction was conceived inside of a finite field where the finite field itself is small enough that the discrete log problem over the field is definitely solvable, right; and, in fact, the constructions they advocate are not even over [prime fields]...
D
B
They talk about binary extension fields, and over binary extension fields the discrete log problem is even more solvable. So, like, basically, it seems to me that, yeah, discrete logs being trivial — even that, by itself, basically doesn't help you do anything funny to MiMC, and the intuition is basically that, like, MiMC does not require — is not using the polynomials as any kind of trapdoor. MiMC is basically just using polynomials as a way to kind of shuffle bits around and create mathematically complex relationships between bits.
B
Now,
I
like
this
is,
as
I've
said,
this
is
very
serious
NIC,
and
this
is
like
the
like
as
a
hash
function.
Right,
like
generally,
you
want
your
hash
bunches
to
be
kind
of
jumbled
up
and
confusing,
so
their
resistance
to
analysis,
but
like
C,
R.
Well,
okay,
so
like
this
is
like
one
of
the
mathematically,
cleanest
things.
That's
ever
been
proposed
as
a
hash
function,
so
that
play
itself
kind
of
makes
it
scary.
B
The degree is 3 to the power of [the number of rounds] — a hundred to a thousand — so you can't do things like Lagrange interpolation and the standard stuff that you can use to break things that are close to being linear. But, like, this is worth more analysis, and maybe mathematicians have more insights, and so on, among all the various kinds of academics [unclear].
B
Yes — so, okay, MiMC is a kind of core primitive, and it can be used as a building block for a hash function. So there is a kind of standard way — like the sponge construction — where you can take any kind of, like, reasonably random-looking permutation, put it into the construction, and it turns into a hash function; or you can just use the thing, as-is, as a VDF. Oh.
B
No,
so
the
alternative
to
be
a
less
signature
is,
is
basically
an
average
aggregated
Lamport
signatures
right.
So,
like
all
the
upward
signature
is
basically
just
like
a
pile
of
hashes
where
you,
you
reveal
a
subset
of
hashes
based
on
what
the
value
is.
Each
Lamport
signature
verifying
to
import
signature
is
on
average,
like
the
128
hash
calculations,
and
so,
if
you're
going
to
aggregate
together
a
thousand
Lamport
signatures
and
that's
basically
you're
doing
a
stark
over
128,000
hash
calculations
and
which
is
potentially
totally
within
the
realm
of
possibility.
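A minimal Lamport one-time signature in Python makes the "pile of hashes" point concrete. The parameters here — a 128-bit message digest and SHA-256 preimages — are chosen for illustration only, not taken from any spec:

```python
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()
BITS = 128  # digest length; verifying then costs BITS = 128 hashes

def keygen():
    # Secret key: one random preimage pair per digest bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
          for _ in range(BITS)]
    # Public key: the hashes of all those preimages.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = int.from_bytes(H(msg)[:BITS // 8], "big")
    # Reveal one preimage per bit of the message digest.
    return [sk[i][(digest >> i) & 1] for i in range(BITS)]

def verify(pk, msg: bytes, sig) -> bool:
    digest = int.from_bytes(H(msg)[:BITS // 8], "big")
    return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(BITS))

sk, pk = keygen()
sig = sign(sk, b"attestation data")
print(verify(pk, b"attestation data", sig))  # True, at a cost of 128 hashes
```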
B
H
B
So the roadmap is: in the short term, use BLS aggregation for aggregate signatures; and, for the verifiable delay function, it could be a STARK, or it could be one of the other constructions that came out of, like, Stanford academic land recently, which is basically kind of like a fancy probabilistically checkable proof for x^(2^n), for very large n. So there are all these constructions that are coming out, but, like, they're not going to be [quantum-]secure: they rely on factoring hardness, and that's a mathematical structure. But, like, in the future...
B
I
B
Quantum
computers
can
solve
very
specific
problems;
they
can
solve
order.
Finding
which
makes
them
break
have
means
they
can
do
factoring
very
easily
and
they
can
solve
the
discrete
log
problem
right
so
and
that
basically
means
that
they
can
break
anything.
That's
like
a
look
to
curve
based
or
based
on
any
other
kind
of
finite
cyclic
groups.
B
So
an
information
theory
basically
just
means
math
like
it's
mathematically,
you
know
impossible
and
now
technically
you
know
at
Bob
of
Lafayette,
Shamir
and
most
properties
of
cache
functions.
Well,
I
guess
long
as
hash
functions
or
elles
are
continued,
being
security
or
fine,
so
like
like
the
idea
here
is
that
the
etherium
watching
is
going
to
be
very
dependent
on
hash
functions,
no
matter
what
we
do,
because
we
need
them
for
a
miracle
trees.
We
need
them
for
stateless
clients
like
scalability.
We
need
the
first,
so
many
things
anyway.
B
I
C
One cool thing of using MiMC for the [state tree], which we might talk about later, is that we can have an optimization on the stateless clients. So, normally, with stateless clients, you come in with your data, along with a bunch of very large witnesses proving that the data matches the state; and with STARKs, potentially, we can take all these witnesses and compress them, making stateless [validation] more viable as a strategy.
C
B
There are statistical arguments in the paper — let's say that it's very good against traditional forms of cryptanalysis. Like, basically, the idea is that x → x³ is as far [from] linear as possible, in a specific way; but basically, because it's a simple — a relatively simple — arithmetic representation, that itself could [open it up to] other kinds of mathematical attacks. [It was] analyzed a couple of years ago, I think.
C
B
What happened is that there was an earlier — a crappier — version that was broken, and I forget for what reason, but I think it's because they either didn't have round keys, or they used the construction in such a way as to, like — I know, I think what happened is that they used the construction in a different way: like, basically, they might have done the cubings in parallel instead of in series, and that made the entire arithmetic representation...
D
B
...the entire thing have a low degree. So, when you have a very low multiplicative degree — let's say, for example, the multiplicative degree is only 100 — then, as soon as you get a hundred and one hashes, you can basically use Lagrange interpolation, figure out what the degree-100 polynomial is, and, once you have that, you can figure out everything else you want about it and basically fully analyze and break the function.
G
B
Like, the basic construction is fairly simple, right: you have a root, then that root by itself has two subnodes, and then those nodes [each have two subnodes], and so on. Now, unlike other kinds of trees, the idea is that here, at the bottom level, we actually have 2^160 [leaves], referencing the entire set of 2^160 accounts, right. So it's like a tree that has 2^160 nodes at the bottom, and then over here you have 160 levels. Now, this...
B
Basically, it's like an ideal binary tree, which means that, you know, it's very simple to [specify] how to insert, how to look up, and how to do everything. There's a few questions here, right. So: this thing has 2^160 size — how the hell can you even construct it in less time than it takes to find a hash collision? And the answer is that, well, in an empty tree, these [leaf] nodes are all zeros, and so the level-1 nodes are just all going to be the same, and the level-2 nodes are going to be the same, and then the level-3 nodes are going to be the same, and so on and so forth. So it only takes 160 hashes to construct a full empty tree. Then, from there...
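The 160-hash construction of the empty tree can be sketched directly; SHA-256 stands in here for whatever hash the tree would actually use:

```python
import hashlib

DEPTH = 160  # the tree spans all 2**160 possible account keys

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# zero_hashes[d] is the root of an *empty* subtree of depth d.  All empty
# subtrees at the same level are identical, so the entire empty tree is
# summarized by just DEPTH hashing steps.
zero_hashes = [b"\x00" * 32]
for _ in range(DEPTH):
    zero_hashes.append(h(zero_hashes[-1] + zero_hashes[-1]))

empty_root = zero_hashes[DEPTH]
print(len(zero_hashes) - 1)  # 160 hashes cover the full empty tree
```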
B
You don't actually need to, you know, explore every single copy of the tree explicitly; and so, basically, you just start off with a database that contains these values, and then you just, like, run the insert for whatever you need to, and then you can run a lookup for whatever you need to, and so on. So that's the answer to why this can work at all. Now, there are two inefficiencies.
B
The [first inefficiency] is that, instead of the Merkle branch being basically 32 bytes × 32 long, the Merkle branch becomes 32 × 160 long, and so the Merkle proofs are five times longer. But there is a solution, and it goes as follows, right. So let's say, for example, we have a situation where we have this tree that has eight nodes, and let's say only two of the nodes are nonzero, right. And [if I'm accessing] these nodes, then the only nodes — basically, if we look at these, [the proof] will highlight these nodes, right.
B
But
let's
say
you
actually
are
looking
for.
You
know
like
some
note
in
the
middle
right,
so
one
thing
that
you
might
so,
let's
say,
for
example,
that
you
want
a
full
proof
of
this
right.
So
here's
one
thing
that
you
can
figure
out.
So,
first
of
all,
you
clearly
have
this
note.
So
you
need
this
note,
but
then,
in
order
to
calculate
this
note
you
need
this
note,
but
guess
what
this
know.
B
It
actually
is
a
level
0mc
subject,
and
we
already
know
what
level
zero
empty
subtree
is,
and
so
we
can
use
one
bit
to
represent
the
fact
that
there's
an
empty
subtree
and
then
the
recipient
knows
what
the
hash
would
after
South
three
also
is,
and
so
all
of
his
hope
she
also
is
it
so
the
recipients
can
extend
the
one
bit
back
into
hache.
Then
we
go
up
to
over
here
well
over
here
the
scanner
Merkle
tree.
You
need
this
value.
B
Well, this value is the root of a level-1 empty subtree, and so that's also something that we know. Then, finally, over here, you need this value, and this value is not empty, and so for this value I actually provide a hash. So, altogether, you can create a very simple representation that basically says: here's a bitfield that says at which heights you have empty hashes, and here are your only actual hashes — and so you basically get very close to 32 × log(n) [bytes per branch].
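The bitfield compression just described can be sketched as follows, building on the same precomputed empty-subtree roots; the depth and layout here are toy choices for illustration:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

DEPTH = 8  # toy depth for illustration; the state tree would use 160

# Empty-subtree roots, one per level, as in the previous sketch.
zero = [b"\x00" * 32]
for _ in range(DEPTH):
    zero.append(h(zero[-1] + zero[-1]))

def compress(siblings):
    # One bit per level says "this sibling is an empty subtree"; only the
    # non-empty siblings ship their actual 32-byte hashes.
    bits, hashes = [], []
    for level, sib in enumerate(siblings):
        empty = sib == zero[level]
        bits.append(empty)
        if not empty:
            hashes.append(sib)
    return bits, hashes

def decompress(bits, hashes):
    it = iter(hashes)
    return [zero[level] if empty else next(it)
            for level, empty in enumerate(bits)]

# A branch in a mostly-empty tree: 7 of its 8 siblings are empty subtrees.
siblings = [zero[i] for i in range(DEPTH)]
siblings[2] = h(b"occupied")
bits, hashes = compress(siblings)
print(len(hashes))  # 1: a single real hash plus a DEPTH-bit bitfield
```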
B
So
that's
and
I
actually
didn't
I
implemented
this,
and
it
turns
out
that
the
overhead
on
even
the
simple
construction,
where
you
have
a
32
byte
bit
field,
is
smaller
than
the
overhead
on
like
pretty
much
every
tree
that
we've
had
so
far,
I
could
say
maybe
20
percent
size
improvements
on
Park
offers.
The
second
thing
now.
The
second
challenge
is
well.
If
you're
gonna
actually
poke
through
the
tree,
then
possibly
you're
gonna
need,
if
you
do
it
naively
right.
B
If
you
just
like
store
the
tree
as
lovely
to
be
hashable,
then
you're
going
to
need
like
a
hundred
and
sixty
awoke
obsessed
over
the
level
gb
hash
map
instead
of
the
current
value,
which
is
like
basic.
Well,
it's
not
even
32.
It's
yeah,
it's
gonna,
be
oh
I'm,
eight,
because
it's
32
divided
by
log
of
16
right
because
they
lose
X
3
so
like
that
might
know
that.
B
Might
that
has
a
solution
as
well,
when
the
solution,
basically,
is
that
instead
of
storing
between
I,
usually
basically
client
side
of
any
client
can
use
whatever
tree?
They
was
right
so
client,
side.
I
know
one
can
choose
to
basically
represent
like
this.
Entire
branch
has
a
single
object.
Client
side
two
clients
can
choose,
represent
all
of
this
in
some
kind
of
red
and
blotchy
client
side
appliance.
We
choose
to
represent
us,
and
you
know
it's
some
kind
of
Patricia's
tree
that
has
to
go
structure
and
if
it's.
A
B
...client-side, the clients could choose to represent this in just, like, a regular binary tree, and then you, as the client, need to randomize the tree to prevent the attacker from creating a worst-case situation. So, like, basically, you can design the database logic — well, you can actually design the client-side database logic — to be optimal in whatever way you want, and the kind of logical database structure of pointers does not have to directly correspond to the structure of hash pointers.
B
So
you
could
potentially
get
even
better
results.
So,
like
then,
I'm,
not
the
expert
but
like
one
very
simple,
optimization
right
is
that
a
a
database
to
read
basically
takes
the
same
amount
of
effort
if
it
goes
anywhere
up
to
4096
bytes,
and
so,
instead
of
basically
you
instead
of
having
HV
where
it's
where
it
has
a
16
elements
on
every
side,
so
you
have
like
16
times
32,
you
could
have
a
tree
that
has
some
120,
where
every
single
hop
down
the
tree
gets
takes
you
seven
bits.
B
That
would
basically
mean
that,
instead
of
doing
you
know
like
seven
Hawks,
you'd
be
able
or
like
eight
harps,
and
people
would
go
down
to
something
like
five
hops.
So
like
there's
the
basic
way
like
once,
we
stop
assuming
that
the
database,
the
database
level
pointer
structure,
has
to
follow
a
hash
table
pointer
structure
instead
of
you
know
it
being
some
being
a
sub
graph.
Then
all
of
these
optimization
opportunities
become
available
and
like
you
can
basically
get
whatever
they
want.
This
cost
optimal.
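The hop arithmetic implied here, under an assumed population of 2^30 accounts (an illustrative number, not one from the talk): with N populated keys, a radix-r tree needs roughly log_r(N) database hops, and 128 children × 32 bytes = 4096 bytes is exactly one cheap read.

```python
import math

# A database read costs about the same for anything up to 4096 bytes, so
# a node of 128 children * 32 bytes = 4096 bytes (7 bits per hop) costs
# no more per hop than a 16 * 32 = 512-byte hex node does.
N = 2 ** 30  # assumed number of populated accounts (illustrative)
for radix in (16, 128):
    hops = math.ceil(math.log(N, radix))
    print(f"radix {radix:3d}: about {hops} hops")
```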
B
Yeah,
yes,
the
other
really
nice
thing
about
this
by
the
way.
Yes,
it
does,
and
the
other
nice
thing
about
this
by
the
way.
Is
that,
because
you
function
for
verifying
merkel,
branches
is
like
absurdly
simple
number
one.
This
becomes
a
way
more
stark
readily
because
you
don't
have
to
do
like
ROP
decoding
inside
of
Starks
number
two.
It
becomes
way
more
aetherium
contract.
B
Frankly,
it
just
becomes
easier
for
pretty
much
anyone
table
event
number
three
year
it
becomes
possible
to
do
constructions
like
proof
of
custody
or
teresa
for
that
data
availability
proofs
over
these
over
the
state
tree,
which
you
can
to
now,
because
the
teacher
is
way
too
in
structure.
So
you
know
basically
the
kind
of
like
the
much
greater
level
of
structure
in
this
particular
client,
and
this
kind
of
tree
makes
says,
take
opens
up
a
lot
of
these
possibilities.
B
E
B
Adversarial — yes. So, basically, the idea is, right — like, the Merkle branches are... okay. So, first of all, the worst thing that an adversary can do is: if you have a node over here, the adversary can kind of, like, [grind addresses that] land right beside it, in order to kind of fill it up, right — but that's something that you can [already] do with a Patricia tree today.
[crosstalk]
B
The problem is, like, basically: because there is no external layer of [de]pendency, what happens if suddenly this gets [reverted] — this works, and this does not work — then the effect happened, but the cause did not happen, right. So, in order for this to kind of not be an issue, you basically have to allow for the possibility for state roots inside of here to be recalculated, with [some] delta T there.
B
It's also recalculated, and the way you do that is: you have this kind of somewhat decoupled state execution game, where the blocks aren't kind of tightly coupled to state roots, but, instead, you basically have some separate state-[root-]providing game, and over here this [root-]providing game is just saying...