A: It has been very exciting to bring Proof-of-Replication from a theoretical construction to a practical one, and we've been going through this full pipeline, which we call the research lab pipeline. This has been a constant interaction between researchers and developers, in the past quarter in particular. We have achieved very important milestones, one of which was making sure that the replication algorithm would run in practical times. A lot of work went into that, and it's impossible to cover it all here.
B: That was definitely not the case when we started, and even now the proof size is quite big. The initial proof for a single challenge, and this is the important factor that the size depends on, is a Merkle inclusion proof. So you end up with log n, where n is the number of nodes in the Merkle tree of the original data. If you're trying to replicate a gigabyte, you end up with log n of that in terms of nodes, depending on your node size.
B: You now have this path, and that's for a single challenge. If you expand that to a lot of challenges, this grows very quickly. One of the first things we did to improve the proof size is to use a SNARK to prove that these inclusion proofs were done correctly, which brings us down to the current best SNARK proof size, which is 192 bytes. So these went from the kilobyte range to the byte range, which is really nice.
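The arithmetic behind this can be sketched quickly. The 32-byte node size and the challenge count below are illustrative assumptions for the sketch, not the project's actual parameters:

```python
import math

def merkle_path_nodes(data_bytes: int, node_bytes: int) -> int:
    """Sibling nodes on one Merkle inclusion path: log2 of the leaf count."""
    leaves = data_bytes // node_bytes
    return math.ceil(math.log2(leaves))

sector = 1 << 30                          # replicating 1 GiB of original data
node = 32                                 # assumed node size in bytes
path = merkle_path_nodes(sector, node)    # 25 levels for 2^25 leaves
one_challenge = path * node               # raw proof bytes for one challenge
challenges = 200                          # illustrative challenge count
raw = challenges * one_challenge          # grows linearly with challenges
snark = 192                               # one SNARK proof covering them all

print(path, one_challenge, raw, snark)    # 25 800 160000 192
```

The point is the last two numbers: raw inclusion proofs scale linearly with the number of challenges, while a single SNARK proof stays at 192 bytes.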
C: One of the key things we want from proofs of replication is that they should be efficient and relatively cheap to compute. The replication itself needs to be slow enough in time to actually work correctly as intended, but we want the proofs to be fast enough that it doesn't take a very long time for a miner to produce lots of them over all of the sectors they have. So we have to find a sweet spot, and there are a lot of parameters in the space that we're exploring, things like the sector size.
B: So run time is definitely a challenge, because you not only want to allow for reasonably fast replication, you also want to make sure you're not wasting a lot of energy just because you're replicating data. It can't just be "okay, we throw more hardware at it", because that just increases the cost for the miners. So we started with a reasonably naive implementation, in the sense that we tried to make it work at all, and then we started looking through all the ways it could be optimized and stepped through them.
B: A good chunk of the optimizations are about parallelizing certain steps. While the construction is built such that it cannot be parallelized in certain areas, for security reasons, there is still a large range of places where parallelization can be applied to speed things up.
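As a minimal illustration of the parallelizable parts, here is a sketch that hashes independent data chunks across a thread pool. The chunk size is an assumption, and threads suffice for the sketch because Python's hashlib releases the GIL on large buffers:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_chunk(chunk: bytes) -> bytes:
    # Each chunk's digest is independent of the others, so this step
    # parallelizes freely, unlike the deliberately sequential encoding.
    return hashlib.blake2s(chunk).digest()

def hash_chunks(data: bytes, chunk_size: int = 1 << 20) -> list[bytes]:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(hash_chunk, chunks))
```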
One of the other things we have been working through is the trade-offs between different hash functions. We have two main hash functions that we're working with: one is the Pedersen hash and one is Blake2s.
B
At
the
beginning,
everything
was
using
peterson
hash
for
this
simple
reason
that
Peterson
hashes
are
optimized
for
running
inside
a
stock,
and
so
they
require
a
lot
of
less
constraints
than,
for
example,
a
blake.
Yes.
But
the
unfortunate
side
effect
of
Peterson
is
that
it
is
in
comparison
to
blake
2's
by
a
factor
of
a
thousand
slower.
So
if
you
do
a
lot
of
hashing,
which
you
happen
to
do
when
building
a
Merkle
tree
of
a
lot
of
data,
this
is
not
particularly
great.
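To make the trade-off concrete, here is a minimal Blake2s Merkle-tree builder using Python's hashlib. Pedersen has no standard-library implementation, so the comment only marks where it would differ; this is a sketch, not the project's actual tree layout:

```python
import hashlib

def node(left: bytes, right: bytes) -> bytes:
    # Blake2s is cheap on a CPU; a Pedersen hash here would be roughly
    # a thousand times slower, but far cheaper in SNARK constraints.
    return hashlib.blake2s(left + right, digest_size=32).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    assert leaves and len(leaves) & (len(leaves) - 1) == 0, "power-of-two leaf count"
    layer = leaves
    while len(layer) > 1:
        layer = [node(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]
```

Building the tree over gigabytes of data means millions of calls to `node`, which is why the choice of hash there dominates replication time.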
B
So
what
one
of
the
things
that
we
managed
to
do
is
figure
out
the
places
where
we
could
trade
off
Blake
grass
versus
Peterson
hashes
and
use
that
those
speed
ups
actually
improve.
The
construction
queue
for
last
year
was
exciting
and
October.
We
were
looking
at
180
hours
and
then
November
if
T
and
then
in
December,
started
to
hitting
the
one
hour
mark
and
been
decreasing
ever
since.