From YouTube: Eth2.0 Implementers Call #2 [8/30/2018]
C: So far, that means it's able to propose blocks and process them. Everything is stored on disk, so the state is preserved between restarts, but for now it has placeholders for the state transition and for the fork-choice rule. The next steps we're going to work on are the state transition and attestations.
C: What I'm personally doing now is trying to understand the state transition and the things that happen around it, because I'm not fond of the state design suggested in the PoC, and maybe next week I'll try to outline something so we can discuss it. For example, I think it's pretty straightforward to use a Merkle tree to store the validator set and so forth, but making this outline takes some time, to understand everything that is happening around the state transition. So that's where we are. Okay, thank you.
F: We're starting to implement the state-transition functions and trying to understand everything around that, and also trying to get a gossip implementation started. I also have several questions regarding the random beacon implementation and committee selection that we'll go into later, but in terms of updates, that's it.
G: ...serialization test vectors to compare to, so hopefully we can agree on some. I can share those notes on the implementation so that everyone agrees on how to serialize. Right, yeah; as we have more implementations of this curve, I think we're definitely going to come around to a standard. So that's good.
B: ...talk about later. We've been eyeing the state-transition stuff pretty hard, just trying to figure out what the motivations behind it are and whether it's efficient. Next up, I think the next thing for us is to start pushing blocks around the network; we're just going to pick a serialization format and run with it for now. That's it.
H: I would like, in the next two months, to spend some time on some kind of simulation of an execution engine on sharding, where we take the lower layers as a black box, so the block ordering and anything coming in is just a given, and we try to do some kind of execution-engine simulation on top of that, which may be just the EVM for the time being. But that's the next step.
K: ...where we invited a bunch of VDF experts. Generally, I'd say this event went very well. Some new results were found during the day, just by having different people in the room and sharing open problems. In particular, we have a new way of aggregating the proofs of the VDF construction that we're looking into, found by Benjamin Wesolowski. So you can take two VDF proofs with different inputs and aggregate them into a single proof. That's very nice.
K: One of the rabbit holes that we've gone down pretty far is an approach to build RSA moduli trustlessly. The idea, basically, is that if you pick a 4,000-bit number at random and make sure that it has no small prime factors (otherwise you trash it and pick another random 4,000-bit number), then with some reasonably high probability, something like 60 or 70%, this random number will be safe to use as an RSA modulus. What I mean by safe to use is that there's a 2,000-bit component of this random number which is unfactorizable: it is the product of at least two large enough primes, so it cannot be factored. And so if we pick enough of these random numbers, then with very high probability none of them will be completely factorizable, and so long as they're not completely factorizable, the scheme is secure. So that gives us a nice way of having a trustless setup for the VDF.

In that case, do you have to run multiple VDFs for one?

K: Yes, exactly. Basically, your high-level VDF is made out of these mini VDFs, which are run in parallel, and what you do is concatenate the outputs and concatenate the proofs.
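The sieve-and-retry step described above can be sketched as follows. This is a toy illustration, not the real construction: the bit size and the small-prime list are shrunk so it runs instantly, whereas the call talks about 4,000-bit candidates and a much deeper sieve.

```python
import random

# Toy parameters; the real scheme would use ~4,000-bit candidates and
# trial-divide by many more small primes before accepting one.
BITS = 32
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def has_small_factor(n: int) -> bool:
    """Trial-divide by a fixed list of small primes."""
    return any(n % p == 0 for p in SMALL_PRIMES if p < n)

def sample_candidate_modulus(rng: random.Random) -> int:
    """Draw random odd fixed-size numbers, trashing any with a small
    prime factor, as described in the call. A survivor is then (with some
    probability; roughly 60-70% is quoted for 4,000-bit numbers) 'safe',
    i.e. it contains a large unfactorizable semiprime component."""
    while True:
        n = rng.getrandbits(BITS) | (1 << (BITS - 1)) | 1  # full size, odd
        if not has_small_factor(n):
            return n

rng = random.Random(42)
candidate = sample_candidate_modulus(rng)
assert not has_small_factor(candidate)
```

Running several independent candidates in parallel, as the answer above says, then amounts to calling `sample_candidate_modulus` ten or twenty times and using each result for its own mini VDF.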
K: The main downside here is that your proof sizes and your outputs are not optimal in terms of size; the output and the proof per VDF output would be on the order of 50 kilobits, so around eight kilobytes. That is totally workable, but it's slightly suboptimal. The other worry is the following.
K: We need quite a few of these moduli, on the order of ten or twenty, and so if we were to build an ASIC, we might actually have to have several ASICs on the same board, and that might start consuming lots of power, which might not scale very well. But this is still part of the feasibility study that I'm doing right now.

A: Right, so it doesn't scale poorly in the sense that the whole world is going to start running them and using tons and tons of power, like...
K: Graphics cards? So, it's definitely small amounts of power, but my hope initially was that we could have this VDF ASIC run on a USB stick that you can just plug into your computer. It's looking more like it would be a graphics card that you'd plug into your desktop computer.
K: The other piece of research that I've been doing is basically looking into the modular multiplication algorithms, which are the basis for the VDF that we're looking into, and it turns out that this is a pretty long and rich line of research. It might seem innocuous, but it's actually non-trivial to find the optimal parallel-time algorithm, and over the years there have been various approaches.
K: You know: Montgomery, Barrett, and so on in terms of residues. But there's this one specific team in Australia, essentially a research lab that's part of the engineering department at one of the universities, that is looking into the so-called residue number system, and that's kind of nice, because the basic idea is that you can take your 4,000-bit modulus and, in a way, break it down into many smaller moduli. It means that when you're doing the multiplications and the additions, you don't have these long carry chains, which make everything complicated.
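The residue-number-system idea above can be sketched in a few lines. The moduli here are tiny toy values (a real design would use many machine-word-sized coprime moduli covering a 4,000-bit range); the point is that multiplication happens independently per channel, with no carries crossing between moduli, and the Chinese Remainder Theorem recovers the ordinary representation.

```python
from math import prod

# Toy pairwise-coprime moduli; their product is the dynamic range.
RNS_MODULI = [13, 17, 19, 23]
M = prod(RNS_MODULI)

def to_rns(x: int) -> list[int]:
    return [x % m for m in RNS_MODULI]

def rns_mul(a: list[int], b: list[int]) -> list[int]:
    # Each channel multiplies independently: no long carry chains
    # propagate between the small moduli, which is the hardware appeal.
    return [(ai * bi) % m for ai, bi, m in zip(a, b, RNS_MODULI)]

def from_rns(r: list[int]) -> int:
    # Chinese Remainder Theorem reconstruction.
    x = 0
    for ri, m in zip(r, RNS_MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return x % M

x, y = 1234, 5678
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == (x * y) % M
```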
K: The biggest point of uncertainty is basically the VDF ASIC: what kind of budget do we need, and is it actually possible to build an ASIC which is close to optimal? A big part of that question was: can we use exotic processes, different from CMOS silicon, to instantiate these ASICs? One that's quite famous is called gallium arsenide, which is a type of semiconductor with which you can build these high-frequency transistors.
K: Then another one, which is more available commercially, is called silicon germanium. For both of these, it looks like even if you had unlimited amounts of money, it would not work. Part of the reason is that the complexity of the ASIC, in terms of number of gates, would be so high that your power budget would go through the roof, and there's no way you can feed enough power to the ASIC.
K: Can we actually instantiate it on a CMOS silicon chip? It looks like the answer is yes, but with carefully looking into reliability considerations, die-area considerations, power considerations, and stuff like that. We're also looking into the various process technologies, like 28 nanometer, 20 nanometer, 16, etc.
K: But yeah, I think overall it's too early to tell, but I'd say there's a high chance that the project can be done successfully; maybe slightly more than a 50% chance. In terms of the amount of money that will have to be spent, it's still unclear, but I think 10 million, or 20 million, or maybe 30 million, is kind of the ballpark area. And this is why the collaborations are so important, in addition to IPFS and Chia looking to do the same thing as us.
K: There's also a new project that I learned about called Solana, and they have this proof of history; the whole thing is built around the VDF. I mean, they don't have the kind of budget that IPFS and Chia are going to have, but at least they have the will to see it happen, and they seem to have a very good team of engineers behind it, so there's a potential collaboration there.
B: In terms of looking at the on-wire size of an attestation record and a block between all of these formats, I think SimpleSerialize pretty much wins hands down, always, except for pickle, which is crazy small. But no one really wants to use pickle, which is Python-dependent, I think. So, SimpleSerialize is always the smallest, I guess. That's really not surprising, because it gets to make all these assumptions about the schema.
B: So we've been thinking about what benefits you might actually get from using something like Cap'n Proto. One of the things that we keep coming back to is that you're probably going to want to hash: if you get a message off the network, you're going to want to hash it pretty quickly; at least, whatever you do, you're going to hash it. So we were talking before about having two different serialization formats, one for the consensus-style stuff.
B: When you hash it, it always needs to be the same order of bytes, but when you're passing it around the network, it doesn't have to be. So we were kind of thinking that if you have this different byte ordering on the network, and then when you get something you have to move it into the hashing format and then hash it, you've probably lost any benefits you got from having that other p2p type.
B: And if you give it to someone, they're going to have to know these two different formats, which, we're just thinking, is kind of difficult. So, from our perspective, we'd say: stick with something like SimpleSerialize. We're going to play around with it for the next couple of weeks and see if we can make it faster; otherwise, I think we'd probably go with that. I'd be keen to hear anyone's thoughts.
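The "assumptions about the schema" point above can be made concrete with a toy fixed-schema encoder. This is not the actual SimpleSerialize spec; the record fields and layout are made up purely to show why a schema'd binary encoding beats a self-describing format like JSON on wire size.

```python
import json
import struct

# A made-up attestation-like record; field names are illustrative only.
record = {"slot": 4096, "shard": 17, "justified_slot": 4032}

def encode(rec: dict) -> bytes:
    # Both sides agree out-of-band that the record is three unsigned
    # 64-bit little-endian integers in this order, so no field names
    # or delimiters ever go on the wire.
    return struct.pack("<3Q", rec["slot"], rec["shard"], rec["justified_slot"])

def decode(buf: bytes) -> dict:
    slot, shard, justified_slot = struct.unpack("<3Q", buf)
    return {"slot": slot, "shard": shard, "justified_slot": justified_slot}

binary = encode(record)
assert decode(binary) == record
# The schema'd encoding is a fraction of the self-describing JSON size.
assert len(binary) < len(json.dumps(record).encode())
```

The flip side, as noted in the discussion, is that the schema must be known identically by every client, which is exactly why agreed test vectors matter.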
A: I think, when I last looked before we got on this call, Vitalik was of the mind that the easily writable, though maybe more difficult to read, format is beneficial to him, and maybe to some others that may be writing these tests; but the compromise might be just to have a parser that turns that into JSON or some other format. I guess I lean in that direction. The JSON tests are easily readable and thus more easily auditable by the people who didn't write them. I know Preston had some thoughts on this. Preston?
A: You know, I sort of agree with what you just said. This language proposal at first looked quite bizarre; you really need to think about it to parse it in your mind, like, what is actually going on here. When I first saw this, I thought: is this what we're going to be ingesting? You know, you can use [inaudible] to create...
L: If we go with, like, a JSON sort of standard, where we don't need to build a language-specific parser for this data, then it kind of makes things a lot easier. A lot of the back and forth in our discussion, I think, was maybe a misunderstanding of what we're actually testing.
L: From the initial example, we had some scenario of what the blockchain looks like and what the different forks look like, and then we wanted to decide what the final head would be, given all this data. So I was a little bit confused about why we needed references to other slots and things like that; I just didn't understand why some of that functionality is there.
L: So maybe somebody can give a high level of what the goals of this test are and what we're hoping to achieve?

Yes, I mean, the goals are: what is the head, what is the most recent justified block, and what is the most recent finalized block. At its core, this is a fork-choice and finality test.
L: That brings in the particular construction of the beacon chain, along with this epochless Casper. So, important to note, and why this notion of slots is important: when I make an attestation for a block at some slot, I'm casting an FFG vote on that block and any of its ancestor blocks that are in the previous cycle-length of slots, and so you can have these...
A: I don't think it does. It's more a matter of: we divide validators into certain slots, and so when I say validators zero through nine, that's attesting to a slot. In this test, I'm not saying validators zero through nine from the global set; I'm saying the subset of validators that can attest to that slot. So the test writer doesn't have to think about whether that's the validators with index 10 through 19,
A: or the validators with index 20 through 29. They just think about starting the indexing at that subset of validators which is the committee for that slot. So that's kind of the general motivation; we can talk a little bit more about that offline. I'm definitely in the direction of: we have this easily writable format, we can make a Python script that outputs it in JSON, and we as a community can come around to a JSON standard.
A: ...selected, and how do we keep that consistent across clients? Right. Because in this testing language, at least currently, there is no notion of weights; these are all equally weighted validators, so it doesn't really matter. You can have your shuffling be non-shuffled: you just slice your validator set and apply the slices to your slots accordingly.
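The slot-relative indexing and the "non-shuffled shuffling" described above can be sketched like this. The constants are illustrative only, not spec values: the point is that a test writer's "validators 0 through 9" means indices within the slot's committee, which a harness maps back to global indices.

```python
CYCLE_LENGTH = 10          # slots per cycle (illustrative)
VALIDATOR_COUNT = 100      # equally weighted validators 0..99

def committee_for_slot(slot: int) -> list[int]:
    """Non-shuffled shuffling: slice the global validator set into one
    committee per slot, in order, as described for the test format."""
    size = VALIDATOR_COUNT // CYCLE_LENGTH
    start = (slot % CYCLE_LENGTH) * size
    return list(range(start, start + size))

def to_global_index(slot: int, committee_index: int) -> int:
    """Map a test's slot-relative validator index to a global index."""
    return committee_for_slot(slot)[committee_index]

assert committee_for_slot(0) == list(range(0, 10))
assert to_global_index(3, 0) == 30   # first member of slot 3's committee
```

Because all validators are equally weighted in the testing language, any client may substitute a real shuffle for the identity slicing without changing test outcomes, as long as it keeps the assignments fixed.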
A: Do you think it would be a problem if the scenarios were different, either on each test run or for each client, if they're not selecting the same global indexes?

It will not matter as long as those validators have the same weight; what matters is that you keep them where you put them.
C: Let me see, what else is there... cool. There's been a lot of questions on b21 around some of the stuff around attestations and some stuff around shuffling, and we've talked a lot on the Gitter. So...
A: Yeah, I had a question when I was thinking about this the other day. Say you have a scenario where you have your cycle length at 10, so at block 40 you're redoing the crystallized state, and you look at your shuffling. When you do your shuffling, does that apply to blocks 40 to 49, or does it apply to blocks 50 through 59? I guess another way to ask it is: is there a look-ahead here?
B: Some multiple of the cycle length; that's a dynasty change. You do dynasty changes as kind of a subset of extra stuff on a state update, and dynasty changes are when you bring some new validators in and out; it's also when you recompute the shuffling and some other stuff. So, some multiple of the cycle...
A: You're ready to move, you're ready to finalize, and you've brought in new people. You now have kind of this notion of a new validator set, although you still have some overlap, still a majority overlap, but you have this notion of: here are our new validators, this is their shuffling, let's begin finalizing stuff.
A: Okay, thanks. This is Terence from Prysmatic. I have a question regarding the following: let's say you receive an incoming block and there are a few bad attestations within the block. What do you do with the block? Do you just ignore the block? A bad attestation could be, say, malformed.
M: If one of the votes has a bad signature, then it's going to pollute the whole aggregated signature. With a thousand people voting, even just one bad signature and the whole aggregate signature is going to fail. So it doesn't really make sense to have a single aggregate signature that a single bad attestation could ruin.
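The pollution effect above can be illustrated with a toy linear-signature model. This is emphatically NOT BLS: it only mimics the additive structure in which the verifier checks the sum of signatures, so a single wrong term breaks the combined check even when 999 of 1000 votes are honest. All constants are invented for the sketch.

```python
# Toy additive scheme: sig_i = sk_i * MESSAGE * G (mod P). Real BLS lives
# on a pairing-friendly curve, but shares this linear aggregation shape.
P = 2**61 - 1          # a prime modulus (toy group order)
G = 7                  # toy generator-like constant
MESSAGE = 123456789    # all validators sign the same message

def sign(secret_key: int) -> int:
    return (secret_key * MESSAGE * G) % P

def aggregate(sigs: list[int]) -> int:
    return sum(sigs) % P

def verify_aggregate(agg_sig: int, secret_keys: list[int]) -> bool:
    # Toy verifier: recompute the expected aggregate. (A real verifier
    # would of course only see public keys, not secrets.)
    expected = (sum(secret_keys) * MESSAGE * G) % P
    return agg_sig == expected

keys = list(range(1, 1001))             # 1000 voters
sigs = [sign(k) for k in keys]
assert verify_aggregate(aggregate(sigs), keys)

sigs[500] += 1                          # one malformed signature...
assert not verify_aggregate(aggregate(sigs), keys)  # ...fails the whole check
```

This is why the discussion above concludes that individual attestation signatures should be checked before they are folded into an aggregate.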
K: Basically, a proof of work is not so easy to check if you don't have a precalculated data set, I mean.
C: Yeah, maybe 100 milliseconds is like the worst case. So, just as an aside: we talked a little bit last week about the DFINITY-style zero-knowledge groups as privileged actors in the network as, you know, one way to mitigate; kind of a DDoS protection. I know you were the one that originally told me about that. Do you have any thoughts on that? I was meaning to dig in a little bit last week, but I did not have enough time to do that.
K: I learned about it by going to the DFINITY meetup, and from what I understand, they're actually going to write a white paper specifically on the peer-to-peer networking. It seems that DFINITY has done a lot of innovation on p2p networking: not only have they written their p2p library from scratch, not using devp2p or IPFS in any way, they also have, as I understand it, their own proofs that...
A: ...you're a privileged actor, but I think they're also innovating on network relay policies. At this point I'm mostly speculating, but as I understand it, they want to release seven or eight white papers and have only released one right now, on the consensus; and, as I understand it, the next one will be on the networking.
K: Yes. So, just going back to what Terence was asking before: say you get halfway through the attestations of a block and you find an invalid one. It's probably worth hanging on to the other attestations that you did find in that block, though; would you agree, Danny? If you haven't seen them yet and they're valid, they're probably worth putting in your database.
A: I think this ties into that a little. So, given that you kind of know a little bit what that looks like: how would the nodes actually come to agreement on the random number? I know that they're using what Justin has proposed in a different talk, using RANDAO and then entering the results of the RANDAO into the VDF. So I'm just wondering how that would look in practice from an implementation point of view. And the question on committee selection is because the spec mentions committees but has no details on how that would work.
F: ...randomly sample nodes for committee selection; those are my initial thoughts for now. Does anybody have any input on the second part?

I'll just answer the first part, then the second part. For the second part: we have this field in the crystallized state called indices-for-slots. It's an array of arrays, of length cycle_length (or it might be cycle_length times two), so you have an array for each slot, and that array is an array of shard-and-committee objects.
F: A shard-and-committee object has the committee inside of it, and this is all an output of get_new_shuffling, which currently, I think, is only used at the beginning to seed this entire thing, but would be used on dynasty changes, when you have a new RNG seed. So get_new_shuffling shuffles validators and also puts them into committees.
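The array-of-arrays shape described above can be sketched as follows. This is not the spec's get_new_shuffling; every constant (cycle length, shard count, committees per slot) is a toy value chosen so the example runs, and only the output shape (one list of shard-and-committee objects per slot) mirrors the description.

```python
import random
from dataclasses import dataclass

CYCLE_LENGTH = 4     # slots covered by one shuffling (illustrative)
SHARD_COUNT = 8      # toy shard count

@dataclass
class ShardAndCommittee:
    shard: int
    committee: list[int]   # global validator indices assigned to this shard

def get_new_shuffling(seed: int, validators: list[int]) -> list[list[ShardAndCommittee]]:
    """Sketch: permute the validator set with the new randomness, then cut
    it into per-slot, per-shard committees, yielding the array-of-arrays
    stored per slot in the crystallized state."""
    rng = random.Random(seed)
    shuffled = validators[:]
    rng.shuffle(shuffled)
    committees_per_slot = 2
    size = len(shuffled) // (CYCLE_LENGTH * committees_per_slot)
    table = []
    for slot in range(CYCLE_LENGTH):
        row = []
        for j in range(committees_per_slot):
            k = slot * committees_per_slot + j
            row.append(ShardAndCommittee(shard=k % SHARD_COUNT,
                                         committee=shuffled[k * size:(k + 1) * size]))
        table.append(row)
    return table

table = get_new_shuffling(seed=1, validators=list(range(64)))
assert len(table) == CYCLE_LENGTH                  # one array per slot
assert all(len(sc.committee) == 8 for row in table for sc in row)
```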
A: Right. So, I actually linked to some slides I prepared yesterday, and there's a timeline which might clarify what's going on. Basically, you have the RANDAO reveal period, and that's pretty standard in the sense that at every slot you have a block producer, and if it creates a block which gets included on chain, then that block will also have a RANDAO reveal. Then, once you get to the end of a certain number of slots, it's like a weak source of entropy, in the sense that it could be biased by some of the last revealers.
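The reveal period and the last-revealer bias above can be sketched in a few lines. This is a minimal illustration, not the spec: the reveal values are made up, and the folding function (XOR of hashed reveals) is just the standard RANDAO-style accumulator.

```python
import hashlib

def fold_reveals(reveals: list[bytes]) -> bytes:
    """Fold each slot's reveal into one 32-byte accumulator by XOR.
    The folded value is what would later seed the VDF."""
    acc = bytes(32)
    for r in reveals:
        digest = hashlib.sha256(r).digest()
        acc = bytes(a ^ b for a, b in zip(acc, digest))
    return acc

honest = [b"reveal-%d" % i for i in range(9)]
# The weak-entropy caveat: the final revealer sees everyone else's
# contribution first, so it can grind its own reveal to choose among
# several reachable seeds (or withhold its block entirely).
seed_a = fold_reveals(honest + [b"last-choice-a"])
seed_b = fold_reveals(honest + [b"last-choice-b"])
assert seed_a != seed_b   # the last revealer picks between distinct seeds
```

Feeding the folded value through a slow VDF, as discussed next, is what removes this grinding advantage: the last revealer cannot evaluate the VDF fast enough to compare its options before committing.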
K: ...you know how long the VDF evaluation could take. So even though we're targeting ten minutes, it's possible that in the worst-case scenario it could take 20 minutes or 30 minutes, and that's something to take into account. Another thing to take into account is that it is revealed off chain, broadcast and gossiped to the whole network.
K: It still needs to be included on chain, and so we have the notion of an inclusion buffer, on slide 13. Those are most of the practical considerations from implementation; it is a simple, simple protocol. I guess some of the subtleties that you want to understand are from a protocol designer's point of view; one of the questions is: how long should the evaluation period be?
K: ...that happened during this research event, where we found a way to do so-called watermarking: in the proof of the primary VDF, you construct a special proof which is linked to a specific public key. So, from an implementer's perspective, there are going to be some details out of the spec.
K: There will be some rules about assessing whether the VDF output is valid, like whether it's from the correct cycle or epoch, and whether it's satisfying a solution to the expected input, which would be some XOR of the RANDAO reveals. And then, plus, when an output is included in a block, it can serve as a seed of randomness, and there will be rules around which seed of randomness, and when, to use to reshuffle things.
K: And then there's a couple of things on recalculating difficulty, but that's at the end of it: recalculating difficulty for future VDFs. So, not terribly complicated; kind of like handling attestations, it's really just handling this extra input to a block and the validity around that.
K: One I think we have a good answer to, which is: the first concern, usually, when people propose a synchronous cross-shard transaction protocol, has been that the state execution gadget won't be able to keep up if it requires too many rounds...
O: Well, I think, yes, it's possible that blocks will be full of junk, but I know the applications that are wanted at first might be very limited.

Okay, see, but isn't that kind of the point: that it can just be filled with junk and that there's no structure?
P: To include useful data... isn't the move, when you do formalize the execution layer, to move to more structured blocks? This is a little bit out of the scope of my understanding of those phases. My understanding was that you have data blobs at the beginning and then move to more structured blocks as you formalize the execution layer, in phase 2 or 3 or whatever.

Yeah, I guess from my perspective the problem is... I mean, it's easy to...
O: It gets harder if you allow... you know, if it's possible for validators to stuff the shard blocks with spam, useless data blobs, and it's not clear how much of a problem that is, or what ways there are to prevent that from happening. Right, I remember seeing something somewhere about the notion of maybe flagging blobs when we add execution: flagging blobs with a bit to say whether they're just data or whether they're part of the transaction
A: ...execution layer. Again, I've read some stuff about this, but it's not at the core of my mind, and I'm not sure. From the point of view of starting phase one: no, I think we want every block to be the same size, and so in that sense every single block will be 100% filled with junk.
O: By junk, I mean that it will not be interpreted by the EVM. Everything that goes into the blocks is ignored until we launch the EVM, and then, when we do launch the EVM, only the new blocks will be interpreted. So in that sense, everything will be 100% junk. In terms of what disincentivizes, you know, proposers from just filling the blocks with junk: well, it's the exact same answer as with Bitcoin and Ethereum; it's the opportunity cost of lost revenue from the execution fees.
K: Okay, then. So it sounds like everything's just up in the air still, which is, yeah, what I thought.

But, you know, when you add an execution layer, when you add the EVM, you do move to the notion of block validity in terms of the VM. So even then, you can carve out some of the block for data rather than transactions.
A: Once the EVM is implemented, then some of these blobs will become transactions and run in the default execution engine. But of course, one of the things that was mentioned in Vitalik's blog post is the idea of alternative execution engines, or layer-2 execution engines, and in that respect you don't need the...
K: Cool. So there is a clean separation between the phase-one data layer and the phase-two execution layer. Phase one has no gas mechanics; it only has block rewards, but no transaction fees or gas mechanics. So the only incentive for proposing blocks is to earn a block reward, but not necessarily gas fees, yeah.
K: So if you talk about, strictly speaking, layer one in terms of infrastructure, that does have to be correct, and that's one part of the design which hasn't changed for months, like over six months, maybe even close to a year. It's been like that: a very clean separation between layer one and layer two. But even though there's no layer-one incentivization scheme, you can definitely have layer-two schemes.
K: And with layer-two incentivization schemes, it would be against rational proposers' interest to, you know, have blocks which are all zeros, for example.

Yeah, I suppose that's the same level of freedom as for validators or block proposers in the current Ethereum, because, you know, there's nothing that stops validators from proposing empty blocks, or from including a bunch of transactions.
K: ...cross-shard communication. We're going to table the rest of that potential conversation for now; you know, ask questions on the Gitter and we can bring it up next time. It's a little bit... we have some people, like Casey, who are thinking about this a lot, but in terms of practical beacon-chain implementation it's a little bit
K: ...in the future. The next potential topic was talking about the proof-of-custody implementation and whether that part of the spec has been finalized. I think I noticed in some of the channels there was maybe a little bit of confusion about the proof of custody and what it's in relation to, in the building of the shard chain.
O: Just from the research perspective, I've seen a few research threads floating around, but spec-wise I haven't seen that much finalized spec in terms of proof of custody. So I'm wondering: is it because we haven't really started messing around with the shard chain?

Yeah, that's why there hasn't been much spec on it.
A: One area where there is some ambiguity as to how it could be implemented is in the challenge game. My philosophy there is that it doesn't matter if the challenge game is suboptimal, in the sense that the messages are slightly longer or there are slightly more rounds. The reason is the mere existence of this game: you stand to win almost nothing by cheating; you save a little...
K: And other than the details of the challenge game, there's a question around which hash function we want to use for the Merkleization, and it's still up in the air what we want to use here. One of the main questions is: do we want to use a STARK-friendly hash function, and if so, which one? I am personally a proponent of having a STARK-friendly function, and we have two candidates.
K: One is called MiMC, and the other one is based on AES. STARKWare, to whom we've given a very large grant, will be producing a report fairly soon and making it public, and, you know, encouraging feedback on that report; they will make suggestions as to what would be STARK-friendly hash functions, and at that point I think we'll be able to make a decision.
K: There are also some parametrizations, like how long the challenge periods should last and how long the secrecy period should last (how long you should keep your secret secret), that kind of thing. But these are mainly parametrizations; they don't really affect the design that much.
K: Yeah, I mean, if we do have a custody bit, it would make sense to just set it to zero all the time until we actually have shard data.

Right, and in passing you have some set hash, the hash for the shard block hash in the crosslinks, right? All right, cool. The last thing: I don't think I got a response from everyone, but in some of my discussions it started to seem like the 26th...
A: Off the top of my head, I mean, there was one issue around the hash chain, the hash onion, where it's difficult to do in the pool context. I think the solution there is just to forget about the hash onion and have separate commitments and reveals, and then, in terms of building the proof of custody with all the hashing...
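The "separate commitments and reveals" alternative to a hash onion mentioned above can be sketched as the plain commit/reveal pattern. Nothing here is the actual spec; it just shows the shape: publish H(secret) up front, reveal the preimage later, and anyone can check the pair.

```python
import hashlib
import secrets

def commit(secret: bytes) -> bytes:
    """Publish this at registration time for the period."""
    return hashlib.sha256(secret).digest()

def check_reveal(commitment: bytes, revealed: bytes) -> bool:
    """Anyone can verify a later reveal against the stored commitment."""
    return hashlib.sha256(revealed).digest() == commitment

secret = secrets.token_bytes(32)
c = commit(secret)                    # committed up front
assert check_reveal(c, secret)        # the later reveal verifies
assert not check_reveal(c, b"wrong")  # a different preimage is rejected
```

Unlike a hash onion, each period's commitment is independent, which is what makes the scheme workable in the pool context discussed above: members need not share one chained secret.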
Q: ...internal to my pool, I assign, basically, every pool member, and then they have a different secret for a point in time; and if your secret is exposed before that point in time, I can punish that individual within my pool, provided that individual has a share at stake. So then the punishment that will be received, I assume, is some exponent of the number of people being punished at a given time; but if, on average, the order of that is relatively small, then you could base a size limit on the amount of each person in the pool.
K: Thank you. So, the core dev calls got a little bit off: they ended up moving the meeting last week and they're doing a meeting this week, so they're doing a meeting tomorrow. I don't think we have a ton of overlap in the members, the two people that attended both meetings or whatever, and we'll try to plan around that; but early next week I'll have the call scheduled. Yeah.