So I guess I'll start off. First of all, the things that we figured out over the last couple of — over the last month or two — are: number one, the newer roadmap is starting to take shape around aggregate signatures. We will only use aggregate signatures, and aggregate signatures basically allow you to have a signature that combines together the signatures from some set of validators, and the cost of verifying that one combined signature is far lower than verifying each validator's signature on its own.
There are several different ways to do this. So, one is BLS signatures, and there are research posts about that. The other is that, if you want something purely hash-based, then you can do STARKs on top of Lamport signatures, and that's likely a more long-term sort of roadmap. Aside from those two, there are theoretically other options as well, right.
So what this gives us is: it basically reduces the marginal cost of having more validators, maybe by a factor of something in the 100 range. So what that means is that we can have more validators, the whole thing can be more efficient, and we can bring the minimum validator size down from 1500 ETH to 32 ETH, along with a bunch of other good stuff.
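A minimal sketch of why aggregation cuts verification cost so sharply. This uses a toy additive scheme over the integers rather than real BLS (real BLS works in an elliptic-curve pairing group, and all numbers here are illustrative) — but it shows the homomorphism that makes one check cover many signers:

```python
import hashlib

q = 2**255 - 19  # toy modulus, standing in for the group order

def h(msg: bytes) -> int:
    """Hash a message to a group element (toy version)."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

# Toy "signature": sig_i = sk_i * H(m) mod q. Real BLS does the
# analogous operation on curve points; the additive structure is
# what matters for aggregation.
def sign(sk: int, msg: bytes) -> int:
    return sk * h(msg) % q

def aggregate(sigs) -> int:
    """Combine any number of signatures by simple addition."""
    return sum(sigs) % q

sks = [7, 11, 13]
msg = b"attestation"
agg = aggregate(sign(sk, msg) for sk in sks)

# One verification covers all three signers at once:
assert agg == sum(sks) * h(msg) % q
```

With real BLS the right-hand side uses the aggregated public key, so verification cost stays roughly constant no matter how many validators signed.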
The second thing is that a lot of the kind of overhead and math involved in dealing with these signatures and processing large numbers of validators basically involves shuffling big bit fields around — like, doing a bunch of ORs, XORs, whatever, on big bit fields — and it involves a lot of integer manipulation all over the place. And this is the sort of thing that you can do really, really fast.
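The bitfield manipulation described above can be sketched like this (toy 8-validator committees; the widths and values are made up for illustration):

```python
# Two participation bitfields for the same 8-validator committee;
# a set bit at position i means "validator i signed".
a = 0b10110010
b = 0b01110001

# Merging two aggregates is a single OR over the whole field,
# and counting participants is a popcount — both extremely cheap.
combined = a | b
participants = bin(combined).count("1")

assert combined == 0b11110011
assert participants == 6
```

In practice the fields are thousands of bits wide, but the operations stay the same word-level ORs/XORs/popcounts, which is why this bookkeeping is so fast.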
The design that we have could totally function as a kind of non-scalable, pure proof-of-stake system, great. But it's fairly easy to, like, basically dual-purpose a lot of the messages as also being attestations for blocks inside of shards, and so we get a bunch of shards for free. And how many shards do we get? Well, because we can now use the aggregate signatures, we get even more shards.
I guess one of the challenges here, right, is that at first I think we kind of thought that the number of shards could go up or down, but that's actually a pretty bad idea, because what that means is that if I have a contract on shard number 573 and then a bunch of people drop out, then my shard, like, pauses for four or five years or whatever. So the approach — and this is one of those not fully decided things —
But the approach I'm favoring is where we set the number of shards high, so at 4096. But then the idea is that, as the number of validators participating in proof of stake goes down, the gas limit of each shard basically drops a lot lower than what the maximum value would be, right. So if, for example, the gas limit of a shard when literally everyone was validating would be 20 million, then when 10% of people are validating the gas limit of a shard goes down to two million. And because we have one-tenth as many validators, each validator will on average be responsible for 10 times as many shards, but each shard has one-tenth the capacity, so it balances out, right. So it's four thousand shards, but in expectation maybe three or four hundred shards' worth of capacity. But —
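A sketch of this participation-scaled gas limit. The 4096 shard count and the two-million-at-10% figure come from the discussion; the 20-million full-participation limit and the validator cap are illustrative assumptions implied by those numbers:

```python
NUM_SHARDS = 4096                 # fixed shard count from the discussion
MAX_SHARD_GAS_LIMIT = 20_000_000  # hypothetical full-participation limit
MAX_VALIDATORS = 4_000_000        # hypothetical maximum validator count

def shard_gas_limit(active_validators: int) -> int:
    """Each shard's gas limit scales linearly with participation."""
    return MAX_SHARD_GAS_LIMIT * active_validators // MAX_VALIDATORS

# At 10% participation each shard gets one-tenth the gas limit...
assert shard_gas_limit(400_000) == 2_000_000
# ...so each validator covers 10x the shards at 1/10 the capacity,
# and per-validator load stays flat.
```

The point of the construction is that total system capacity tracks total stake, while the shard count — and hence every contract's address — never changes.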
You can either make a zero vote or a one vote, and this one bit of flexibility is going to be your custody bit. And the custody bit, as a validator, is a bit that you can only compute if, one, you have custody of the data and, two, you have custody of secrets which only you know, and that provides some of the cryptoeconomic incentives to make the whole scheme work.
For example, initially you make your RANDAO commitment — and there are some technicalities there, e.g. around BLS signatures — and then, once you've made a deposit, you're a validator, and your validator does everything: your validator is a Casper validator, a beacon chain proposer and attester, a shard proposer and a shard attester, and also a notary, and so you'll be called upon on a randomized basis. So you pool all these tasks, and, like, one of the designs we're considering is basically similar at the beacon chain level and at the shard level.
We have predominantly two types of participants. You have the proposers, who will extend the chains, and then you have the attesters, which you can think of as kind of co-proposers which, together with the proposer, will attest to the fact that the proposer is building on the right — sorry, on the tip. So that creates a check on the proposer; traditionally a proposer is a monopolist — it can just propose whatever it wants.
So the attesters would not be kind of policing to make sure that the proposers followed, like, a fee or gas policy or anything like that. The attesters would just be attesting to a few things, namely: one, that the proposers actually built their block on the head; two, that the block is valid; and three, that the block is available.
So, in terms of stuff that I've written down this year — this is basically the kind of approximate, simple proof-of-custody algorithm. The general way proof of custody works is that, when you are attesting to a block, you're attesting to a block header, and you can think of a block header, in very simplified form, as just being a Merkle root, and the Merkle root has a bunch of underlying data, and your custody bit is basically computed by taking the underlying data and XORing it together with your secret.
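A hedged sketch of that custody-bit idea — this is not the actual spec construction; the mixing function, chunking, and bit-folding are invented for illustration. The property it demonstrates is the one described above: the bit is a function of the full underlying data and a validator-only secret:

```python
import hashlib

def custody_bit(secret: bytes, chunks: list[bytes]) -> int:
    """Fold a validator-only secret into every data chunk, then
    collapse the result to a single bit. You cannot compute this
    without holding both the secret and all of the data."""
    acc = 0
    for chunk in chunks:
        digest = hashlib.sha256(secret + chunk).digest()
        acc ^= digest[0] & 1   # one bit of each mixed chunk, XORed in
    return acc

bit = custody_bit(b"my-secret", [b"chunk0", b"chunk1", b"chunk2"])
assert bit in (0, 1)
```

Because the secret enters every chunk, a validator who outsources storage of the data cannot outsource the bit computation without also leaking the secret — which is the incentive the scheme relies on.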
Another thing to mention is that we could have made the custody bit longer — it could be, like, a full hash — but the reason why one bit is kind of optimal is from the point of view of attestation aggregation. So, when the validators in a committee — an attestation committee, or a notarization committee — vote, they will be voting on the same message, so they'll be signing the same message, and it turns out that BLS aggregation is most efficient when everyone signs the same message.
Another thing is that there is also an information-theoretic argument, which is that if every distinct validator provided their own distinct proof of custody, then the amount of information you'd have to throw onto the chain is basically 32 bytes times the number of validators, whereas here the information we're throwing on is basically two bits times the number of validators — and yes, you know, technically it can be 1.53 bits or whatever the number is, but, like, it's extremely small, right.
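That data-on-chain comparison works out roughly like this (the validator count is illustrative, not from the discussion):

```python
VALIDATORS = 300_000               # illustrative validator count

# Every validator posting a distinct 32-byte custody response:
distinct_bits = VALIDATORS * 32 * 8
# Versus roughly two bits per validator (vote bit + custody bit):
shared_bits = VALIDATORS * 2

# Two orders of magnitude less data on chain:
assert distinct_bits // shared_bits == 128
```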
Proof of custody is for verifying data availability of the shard blocks that are being attested to in cross-links. And, like, this is a bit of a simplified diagram, because when you're attesting to a cross-link you're attesting not just to one block but to an entire chain of blocks going back to the previous cross-link — but there are still different ways to deal with that, right; like, you can have some kind of weirder Merkle structures.
I guess one thing which might be worth pointing out, or listing, is all the services that the beacon chain provides. So you can think of it as kind of the system chain, or the manager chain: it does all of the nitty-gritty stuff that manages all the shards. The shards themselves are extremely clean — they're mostly just data.
So, just a word on that in terms of roadmap: we have phase one of Ethereum 2.0, which is Casper and sharding, but the sharding part is only really solving the data availability challenge — so, coming to consensus as to what data is being pumped into the shards — and then phase two plus is basically the EVM, you know, either with a notional state or notional transactions. And so, in terms of the various services that the beacon chain provides: number one is the heartbeat and random number generator.
This new random number which is output is kind of constructed — and we can talk later about how — and for every random number you have one heartbeat. Another service that the beacon chain provides is all the accounting for the rewards and penalties and deposits of the validators. So —
So, like, the design that we're considering really nicely couples sharding and Casper. In terms of the infrastructure, every shard is where the individual votes are made, and so we're reusing messages, we will be reusing gossip channels, and we will also be reusing the aggregation that each individual shard has, and then we have these very small and efficient-to-verify messages. So you can think of every shard kind of as a staking pool in and of itself, each having its own set of validators.
So, yeah — as for how you can run, like, real dapps on top of it: you basically have two options, right. One of them is that you use the chain as a, you know, high-cost but very high-availability data store, and the other one is that you basically come up with a custom execution engine, and you, like —
Basically, say we have some application where you sort of have an off-chain claim that some particular function calculated over the entire chain is something that you care about — and that could make sense for a lot of non-financial applications. And then, if you want to actually calculate that, you could end up, you know, using some cryptoeconomic execution, or a zero-knowledge proof, or whatever, so —
I think the simplest example of an alternative execution engine is actually Truebit. So, with Truebit, you can agree on a program and an input and then somehow, with some magic, very cheaply also agree on the output. The problem is that agreeing on just the input and on the program to run is very expensive, because that can be, you know, megabytes or more, and so traditionally there's been this problem.
So, as far as I understand it, the validators will be asked — might be asked — to validate different shards, right. So what I'm thinking about is: if I'm a validator and I validated shard number three, and now I'm going to start to validate shard number five, I have to somehow find my future peers ahead of time, and those other peers have to find me. So the network has to completely re-form between the epochs. Did you think about how this is going to be done?
There are going to be a lot of static peers that just stay on the same shard, right — like, users probably will — and there are going to be nodes that just run every shard. Also, yeah, our current spec suggests shards having long-term proposers that stay on that shard for a period of something like a month or more, so, like, the network... You can know which shards you'll be allocated to some amount of time — like, plus a few minutes — in advance, so basically you'll have substantially more notice as to which shards you're on than the time it normally takes to discover peers and connect to the network today.
So there is — and this is kind of partially question-mark territory — but generally, with every slot, so with every new period of five seconds, it becomes possible for a set of validators to vote on a cross-link on some particular shard, and the specific set of validators and the specific shard they'll be voting on will be picked a few minutes in advance — probably, like, one epoch.
My question is: is that roadmap intended as research milestones, or is it actually production deployment? You know, like, does it literally go live first — phase one goes live without any execution — and, in general, is it possible that the execution engine would be developed, you know, in parallel as an architectural layer, so that phase one and phase two go live at the same time?
That's possible too, though — I mean, realistically, phase two will happen after phase one, because, like, phase two is something that we need to... well, I mean, okay, fine, it is theoretically possible; it could be that so much progress will have been made on phase two that the two ship at the same time. And also there's, like, another dimension, which is —
Does phase one — data and no execution — going live as, quote, "the release" mean that people are staking real ether on it? Yeah, I mean, like, my personal opinion is that if all of the different sharding clients — like, you know, Prysmatic's, ChainSafe's, PegaSys's and so forth — were more stable and had, like, good, well-tested and audited versions of phase one, then —
— of the dependency graph. So the thing in the middle is the beacon chain, and, like, imagine it has all these shard chains surrounding it: you can think of the beacon chain as being kind of like the axis of a hollow cylinder, imagine the shard chains as being on the edge of the hollow cylinder, and imagine, like, a hundred or a thousand of them surrounding the beacon chain. So the beacon chain is clearly a chain, and the shard chains are clearly chains, and you —
There are two kinds of links going between them, right. So, one kind of link is where shard blocks depend on beacon blocks, and this happens kind of implicitly, because it's the beacon chain that controls the randomness, and it's the randomness that determines which validators are allowed to create an attestable block for one particular shard chain at a particular time, right. The second kind of link over here is the link where the beacon chain becomes dependent on a block in the shard chain.
For example, suppose this proposer here is somewhat faulty, or, like, the nodes here have no network connectivity; once we've seen, like, a quorum of the cross-link messages for this shard block, plus a bunch of other cross-link messages, then over here this beacon block is viewed as having a cross-link, and so it's viewed as having a dependency on this shard block.
So I basically think of it as, like, sort of a kind of structured DAG, where you have chains, and you have these interconnections between chains — except the interconnections that make the center dependent on an edge are more sensitive: they require this confirmation structure to become binding.
So one of the things we're looking into right now is: how can we maximally decouple the beacon chain from the shards? So if the beacon chain is kind of locally faulty — forky — for a little bit, we still want the shards to power on. So, I guess, at a minimum you want the shards to respect the finalized checkpoints, because finalized is finalized, and so we're considering having only this one dependency, in this one direction.
So I have a question about scaling, right. I think in Taipei it was still just a hundred shards, and now it's going to be, like, a whole bunch more, and there's this maxim that the amount of traffic will grow up to the capacity, right. So we can expect that each and every shard will have about as much traffic as, say, the main chain today, right? And I'm still thinking here in practical terms, of the processing power of the various players in the system with so many shards around.
— to the shards, but, like, technically they don't even have to do that, because the beacon chain maintains light-client access to the shards, and they only need to maintain full validation on the tiny subset of shards that they're assigned to — which, we can say, is some small number; if it's going to be 20, then it will be 20 shards they're operating.
So, basically, in terms of the load, right: the goal is that running the beacon chain should take, at maximum — define C as whatever the acceptable computation load is — so, like, C over 2, and then this over here would take another C over 2, and that's basically what the load would be. And the load over here is proportional to the number of nodes and to the number of shards, and, well —
The capacity of each shard is proportional to 1 over the number of shards times the total capacity of the shards — so it's kind of the same thing. So, in total, you do get this kind of, like, O of basically something like C squared over 2 capacity, but with, like, every node in consensus doing at most C work. My personal goal is actually for the overhead to be closer to the overhead of a Bitcoin node than to the overhead of a current, say, Ethereum node, and I think we can get closer and closer to that over time.
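The C-squared-over-2 back-of-envelope above can be written out like this. The work units and the assumption that the shard count scales with the per-node budget C are illustrative, not spec values:

```python
C = 1000                 # acceptable per-node compute budget (arbitrary units)

beacon_load = C // 2     # half the budget goes to following the beacon chain
shard_load = C // 2      # half goes to the node's assigned shards
num_shards = C           # sketch: shard count scales with the budget

# Total system throughput grows quadratically in C, while no single
# node ever does more than C work:
total_capacity = num_shards * shard_load   # ~ C**2 / 2

assert beacon_load + shard_load == C
assert total_capacity == C**2 // 2
```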
Okay, because I was going to suggest that, instead of basically everything happening at a certain point for everybody at a certain time, you could actually sort of do it gradually — you just drop one person, or a committee, into one shard at a time, rather gradually, so there are no, like, huge network traffic spikes.
You know — basically, for all of the parameters we're using: purely hash-based STARKs are an alternative, if and when it's needed, slash, we're ready to upgrade to it, but, like, there are also kind of more performant options given the technical constraints that we have now.
Basically, the main approach that we have for that is an asynchronous call. So, basically, the idea is that, step one, you would have, say, a function call happening over here, and this would create a receipt; then, step two, you'd wait for this to get cross-linked; then, step three, you consume the receipt over here.
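Those three steps can be sketched as follows — all class and function names here are invented for illustration, not spec names:

```python
from dataclasses import dataclass

@dataclass
class Receipt:
    src_shard: int
    payload: bytes
    consumed: bool = False

crosslinked: set = set()   # shards whose recent blocks are cross-linked

def call_remote(src_shard: int, payload: bytes) -> Receipt:
    # Step 1: the call on the source shard only emits a receipt.
    return Receipt(src_shard, payload)

def consume(receipt: Receipt) -> bytes:
    # Step 3: only valid after step 2, the source shard's cross-link.
    if receipt.src_shard not in crosslinked:
        raise ValueError("source shard not yet cross-linked")
    if receipt.consumed:
        raise ValueError("receipt already spent")
    receipt.consumed = True
    return receipt.payload

r = call_remote(3, b"do_thing()")
crosslinked.add(3)          # step 2: the cross-link lands on the beacon chain
assert consume(r) == b"do_thing()"
```

The consumed flag is what makes the receipt a one-shot message; the cross-link check is what pins the call's latency to the cross-linking cadence.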
Now, this is if you want to kind of rely on the beacon chain, right. There is also this cryptoeconomic alternative, where, basically, a shard over here just accepts that some particular state over here is the state of that other shard, because someone basically publishes a bond that says "I agree that this is the state," with some amount of money at stake that's —
— but between, like, two or three blocks, depending on how much risk you're willing to take. Now, the third alternative, by the way, is that you can do an alternative execution engine which basically kind of optimistically processes cross-shard contract calls with only one block of latency, and the idea is that, in case there are any mistakes, there is a kind of slower gadget that runs a bit behind the main execution and mops up the mistakes.
Right, this optimistic approach would be quite nice. I mean, one of the reasons is that we have this spectrum of confirmations. So, I guess, it starts with the transaction being included in a block by a proposer, and then you have attestations piling on, and then you have more proposals which pile on, and then you have the beginnings of the formation of a cross-link.
So, by default, think of shards as just being, like, separate universes, or being like cities, and when you build something you choose what city you build it in, right? So now, there is such a thing as cross-shard yanking. This is basically a generalization of kind of, like, atomic commitment — actually a yanking technique that I came up with that works for kind of solving train-and-hotel problems. So the idea with the yanking is —
Basically, imagine you have one contract over here which is for trains and one contract over here which is for hotels, and you want to book the train and the hotel atomically, because, like, there's no point in booking the train if the hotels are almost sold out, and vice versa. Now, like, sure, you can do it in a purely asynchronous setup, but it's fairly inefficient. So what you do instead is, basically: step one, you send a transaction, and what this transaction does is —
— book one particular train ticket. Then what you do is you send a transaction that basically says, "I want to yank this contract to this shard." So what you do is the same dance that you did over here: step one, you basically kind of consume this contract, and it emits a receipt; then you can take the receipt and you can include it in the other universe, and then you pop the train contract into existence there. Then over here you also have the hotel contract, which is already on that shard.
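A toy sketch of that yanking dance, with plain dictionaries standing in for contracts and receipts (all names invented for illustration):

```python
def yank(contract: dict, dst_shard: int) -> dict:
    """Delete the contract from its home shard, emitting a portable receipt."""
    receipt = {"state": contract["state"], "to": dst_shard}
    contract["state"] = None      # gone from the source shard
    return receipt

def instantiate(receipt: dict, shard: int) -> dict:
    """Pop the yanked contract into existence on the destination shard."""
    assert receipt["to"] == shard  # the receipt is bound to one shard
    return {"state": receipt["state"], "shard": shard}

train = {"state": {"seats": 10}, "shard": 1}
hotel = {"state": {"rooms": 5}, "shard": 2}

r = yank(train, dst_shard=2)       # step 1: yank the train contract
moved = instantiate(r, shard=2)    # step 2: recreate it next to the hotel

# Both bookings now happen in one ordinary intra-shard transaction:
moved["state"]["seats"] -= 1
hotel["state"]["rooms"] -= 1
assert (moved["state"]["seats"], hotel["state"]["rooms"]) == (9, 4)
```

Once both contracts live on the same shard, the two bookings are atomic by construction; the contract can then be yanked back if needed.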
Basically, what happens then is that this basically gets attested to in the cross-link. In the first version, right, the cross-link would get included, and basically, as soon as this gets discovered by the rest of the chain — whatever portion of the beacon chain it is that is including it — after that, the cross-link would have to be reverted. In the later versions we do have things like fraud proofs and data availability proofs, which basically allow you to, like, do a bunch of proxy checks for the validity of the shard chain. We are —
— and in phase one... so, in the short-term future, that would probably be the erasure-code data availability stuff and fraud proofs; in the longer term you would basically have, like, STARKs, and, like, you don't even need any kind of interactive magic. In the, like, you know, long-term utopia, you would just have a STARK over here that basically says: "here is the data root —"
A
This
is
valid
and
the
entire
history
is
valid,
and
this
is
a
pointer
to
all
of
the
data
and
this
chain
and
all
of
its
recent
history
and
the
data
stories.
You're
gonna
be
you
can
ran
away
and
you
can
ran
away
sample
it
to
verify
that
it's
available,
and
so
any
clients
can
do
basically
download,
like
a
couple
of
megabytes
of
data
and
verified
that
an
entire
shark
has
the
Philippian
availability
properties
that
you
want,
but
that's
like
if
he
reads
to
be
point
open
and
we're
like
more
than
a
decade
away
from
it.
I mean, this is kind of imperfect, because realistically you would have to overbid a bit to make sure you get in, whatever the gas price is; but, like, outside of that — and there are ideas on how to improve it in the future — it's basically the same thing, so.
It's probably also worth noting that realistically there will be, like, a bunch of lightning networks floating around on top of this as well. So if you need, like, ether on some particular shard at some particular point, then you can basically just, like, use some decentralized shard-to-shard exchange and just, like, swap it.
This is a question mark. One idea I have is that, like, basically, the main chain will continue existing for a while, and then eventually it can either become a side chain of the sharding system, or — I mean, basically, you can have a smart contract: the main chain becomes a smart contract on the sharding system — but basically one of those few possibilities, right; or the main chain could become some fancy, like, state channel, so.
It should be possible to set it up so that the main chain — like, the main chain, with its logic — kind of continues existing forever, but it doesn't add consensus complexity to the new system, which is, I think, something that we do want to go for. We don't want to sort of grandfather the EVM into the consensus of Ethereum 2.0 so that we have to deal with two virtual machines forever; but fortunately WASM is fast enough that you could just run an EVM inside of WASM.
This design is homogeneous sharding, and I personally am in favor of homogeneous sharding, and not this vision of individual shards being experimentation zones — basically because, once you start going in that direction, that starts massively loading on consensus complexity, and I think, like, something like a plasma chain or a side chain is probably the more appropriate kind of experimentation ground.
Yeah, so, like, one thing that you can do, right, is you can come up with your own separate meta-protocol that basically says: any transaction, to be part of my meta-protocol, has to tag itself, and what tagging just means is that the first five bytes equal some particular value; and then you can have a Truebit gadget or whatever that basically says, oh, you know, like, here we have some gadget that can eventually, interactively, somehow calculate the result of processing all of the transactions in the blockchain that have that particular tag. At one extreme, you could even have, like, a single very smart contract be the execution engine, but, I mean, not directly in consensus.
Rewards are still one unsettled issue, and, like, I have my views — it's not fully settled yet — but, like, I definitely think, I'm very confident, that we can make do with less than 1 million ether in rewards; it's very possible that we can even make do with zero.
One of the things that we'll have is that all the penalties will be aggregated in the validator's record. The reason is that, if your balance becomes too low as a validator, you're ejected — because, you know, we want to force the ejection so that you can't cheat the system. And in the shards there will probably be rewards — so there'll be rewards from the transaction fees, but there will also be rewards for validating.
So, I think there's something that I didn't fully get — well, actually, there are a lot of things that I don't fully get, which is why I'm telling you this — but this custody thing. So, as far as I understand, you give this proof that you have some kind of data, and if it is later revealed that you don't, then you lose some sort of deposit. So what about the cases where you lose this data honestly — because, I don't know, your database is ruined or whatever — and it's just an accident? It could happen, so, yeah.
So this is the sort of stuff that would tend to get covered by the partial slashing mechanism. So the idea, basically, is that if you're the only one in the subcommittee who screwed up, then your penalty would be lower; but if you screw up at the same time as many other people screw up, then that's more likely to be an attack, and so you get penalized much more, I mean.
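A hedged sketch of such a correlation-scaled penalty — the base and scale parameters here are invented for illustration, not spec values:

```python
def penalty(stake: float, offenders: int, committee_size: int,
            base: float = 0.01, scale: float = 3.0) -> float:
    """Penalty grows with the fraction of the committee that
    misbehaved at the same time, capped at the full stake."""
    correlated = offenders / committee_size
    return stake * min(1.0, base + scale * correlated)

lone = penalty(32, offenders=1, committee_size=128)   # honest-looking accident
mass = penalty(32, offenders=64, committee_size=128)  # looks like an attack
assert mass > lone
```

An isolated disk failure costs a small fraction of the stake, while a coordinated mass failure approaches full slashing — which is exactly the incentive gradient described above.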
One thing to mention is that it's a proof of custody at the time of signing; it is not a proof of storage — that you will have it in the future. So if you have the data at the point of signing and do the proof of custody right, then it's quite likely that no one will challenge you, because it would be a waste of their money to challenge you, so.
Sure. So, first of all, like, the fact that the signatures are BLS-aggregated does not matter from a network perspective — from the perspective of, oh, like, the pattern in which things move through the network, it's exactly the same as it would be if the blocks just contained a bundle of all the signatures. Like, the BLS aggregation in this case is not a multi-step procedure; it's just taking signatures and literally adding them together.
So, like, basically, what it looks like is this kind of graph where, between two blocks, basically one block gets published, and within that time a bunch of people will need to send signatures, and the next block can include those signatures. So if you want the chain to run smoothly, you basically want this time to be bigger than twice the, yeah, you know, network delay.
Right — like, probably a few hundred. You definitely can calculate the maximum, right: the maximum is basically the maximum theoretically possible number of validators divided by the cycle length. So, if you want to calculate it, that would basically be four million divided by a hundred, which gives 40,000 — 40,000 signatures.
— where one node basically passes along a signature that covers, like, an entire committee — like, all of the messages, you know, go to this subset. So, basically, if it is possible to create second-layer network protocols that kind of sort the aggregation process to be topology-aware, then you can knock down each node's bandwidth requirements to, like, a square root, at the expense of an extra round of people talking to each other.
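A sketch of that two-round aggregation idea, using toy additive "signatures" (plain integers) so the bandwidth structure is visible; the grouping scheme is illustrative, not a spec protocol:

```python
import math

def two_layer_aggregate(sigs: list, q: int) -> int:
    """Split n signers into ~sqrt(n) groups, aggregate within each
    group, then aggregate the group outputs. Each node relays
    ~sqrt(n) items per round instead of n."""
    g = max(1, math.isqrt(len(sigs)))
    groups = [sigs[i:i + g] for i in range(0, len(sigs), g)]
    partials = [sum(group) % q for group in groups]  # round one
    return sum(partials) % q                          # round two

q = 2**61 - 1
sigs = list(range(1, 101))   # 100 toy signatures

# The final aggregate is identical to a single flat aggregation:
assert two_layer_aggregate(sigs, q) == sum(sigs) % q
```

Because group aggregation is associative, the intermediate layer changes only who relays what, not the result — which is why the extra round buys the square-root bandwidth reduction for free, cryptographically speaking.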
Because he knows what he's about to reveal, he basically has one bit of attack surface at this particular point in time. But it turns out that, because of the RANDAO construction — where the randomness right now will influence events in the future — an attacker can basically strategically abort in order to have more participation in the chain in the future, which gives them an edge versus the optimal fifty percent threshold, a bit like —
The thing with RANDAO which I don't buy too much is that we would have to have the shards too closely listen to what is being revealed at the beacon chain layer. So, for example, if one of the proposers on the beacon chain decides to abort, then the whole sharding system freezes for five seconds, which is obviously not ideal. The new construction, on the other hand, is basically where you have two steps, and kind of the metaphor that I use is the sponge metaphor.
The absorbing happens in one of the epochs, and then you use all this entropy as a seed for your verifiable delay function, and the VDF will basically squeeze out the entropy over time, in such a way that you can't predict what the random numbers generated on the other side will be. And one of the properties of this construction is that you have a very stable randomness beacon — there's no possibility for aborts, which is very nice.
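A toy absorb-then-squeeze beacon along the lines of the sponge metaphor. A hash chain stands in for a real VDF here — it is sequential only by iteration count, not provably so — and the whole construction is illustrative:

```python
import hashlib

def absorb(reveals: list) -> bytes:
    """Absorb phase: XOR every participant's reveal into a 32-byte mix."""
    mix = bytes(32)
    for r in reveals:
        digest = hashlib.sha256(r).digest()
        mix = bytes(a ^ b for a, b in zip(mix, digest))
    return mix

def squeeze(seed: bytes, delay: int = 10_000) -> bytes:
    """Squeeze phase: sequential work standing in for a VDF, so the
    output cannot be known at the moment the last reveal is chosen."""
    out = seed
    for _ in range(delay):
        out = hashlib.sha256(out).digest()
    return out

seed = absorb([b"alice-reveal", b"bob-reveal", b"carol-reveal"])
rand = squeeze(seed)
assert len(rand) == 32
```

The key property being modeled: by the time anyone can compute the squeezed output, the absorb phase is already closed, so there is no profitable moment to abort.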
Another thing that it kind of allows us to do is, as I said, to more cleanly decouple the shards from the beacon chain a bit more. So if the beacon chain, for example, forks, we don't want to have to recompute all the committees — we basically want the shards to be as stable as possible.
This is my, you know, short summary of the three major mechanisms for getting entropy into a proof-of-stake chain, with their weaknesses. So, RANDAO basically kind of, like, brings in randomness by having participants provide the hash preimages of data they already committed to. VDFs basically have people submit commitments, and then, later on, anyone can calculate the VDF output — calculating the VDF takes some amount of time — and then there are, like, various kinds of, like, collaborative threshold schemes —
— where the community uses some fancy threshold procedure, and at the end of the procedure you have a scenario where, let's say, any 50 of a group of 100 people can reveal the data, and whichever 50 it ends up being, they end up revealing the same data. So, in terms of the manipulation properties: on the RANDAO side, basically, you can bias it by one bit by choosing to forgo your reward; on the VDF side —
— if you have an attacker that has an ASIC which is fast enough to basically calculate the VDF so quickly that he knows the answer to the VDF before he has to make the decision about whether or not to submit his reveal, then you can break it. Though I will say that our particular way of using the VDF — like, basically, to use a VDF —
Well, you can design the system so that, even if this happens, it degrades to just being a RANDAO. And the other direction — what a super-ASIC attacker could do is basically get the system used to a high level of difficulty and then suddenly disappear, and if the super-ASIC attacker had a very high advantage, then the rest of the network will just, like, really slow down for some time until it adjusts. And then, for the threshold schemes —
Oh — then you can't really manipulate them unless you have half, but they have other issues. One of them is that they can't survive more than 50% going offline, and the other is that they require, like, complex distributed-key-generation setups, and they depend on public-key cryptography. So, I guess — like, personally, I mean, at this point I am liking this kind of VDF, with a backstop where, if the VDF gets broken by an ASIC, it degrades to being a RANDAO.