From YouTube: Ethereum Core Devs Meeting #70 [2019-09-06]
E: So now we're waiting for the data to be available, so he can confirm whether everything works in the other clients. But yes, we have it implemented. We looked at the discussion around it; generally, we have it, and we started optimizing it as well, and I think we already have it at a decent speed, which will be much improved.
A: All right, excellent. Okay, so now that we've gone over where everyone's at as far as the client updates, I know that Jason and Thomas and Paul and a few other people in the Gitter chat wanted to address some of the possible issues that we haven't hammered out with the Istanbul EIPs, especially the fact that some of them haven't been merged in the repository itself. So with that, let's start with the BLAKE2 F precompile. I should say: did we kind of come to a [conclusion]?
F: But it turned out that the EIP right now only focuses on BLAKE2b, which is a specific configuration of BLAKE2, and BLAKE2b specifies the rounds to be 12, and that's it; it's fixed. BLAKE2b, the configuration itself, means that it's 12 rounds; it has a specific initialization vector, it has a specific set of round constants, and a specific buffer size. So that's what BLAKE2b is, and the EIP right now, even though it specifies the F function, uses it as the F function specific to BLAKE2b.
B: And so there's the question of bringing down the limit, since there are kind of wasted bytes, so we could use less gas. But then there's also this question of, you know, whether there's actually usefulness in having the F function be flexible on rounds, rather than just pinning it to twelve rounds. I don't have enough familiarity with the way it's going to tie into things like Zcash and others, other than to say that the people who do have more familiarity are saying that they want this flexibility to change the number of rounds.
F: I think that the big confusion is that if you have a fully flexible F function, then you would need the number of rounds, you would need an initialization vector, and you would need a bunch of other parameters which the current EIP doesn't have. But if you want to only support BLAKE2b, then the EIP is fine. If you change the rounds, though, then you won't have BLAKE2b any more.
F: There is BLAKE2b, there is BLAKE2s, there are two parallelized versions of them, and there's BLAKE2X, which is a version that supports variable length. I think the parallelized versions are compatible, so the hash you would get from BLAKE2b and BLAKE2bp would be identical, but obviously BLAKE2b versus BLAKE2X wouldn't be identical.
A: So I'm looking at the EIP, and as of 17 days ago, the people who were advocating it seemed to have reviewed it enough to say this is good to go. So it sounds like what they want is just BLAKE2b. Is that not accurate? Have they expressed somewhere else, that I'm not seeing, that they want the extra flexibility?
G: [It would be] way too expensive compared to Keccak or another function, but basically, if we could at some point use BLAKE2X, that would be useful for us, because we don't need an output of up to 256 bits; we need an output around 180, and being able to make variations in the hash size would be useful for us in the long run.
D: So, as far as I see it, modifying the input format a bit, or dropping the rounds, would not be a big problem testing-wise. We basically just need to modify the existing test vectors a bit and then regenerate the tests. However, if we add anything, such as making it more generic or making some other change where we introduce one more error condition, then we would need to actually invent some new test cases from scratch, right.
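For readers following along: regenerating those test vectors is mostly a matter of re-packing the precompile call input. A minimal sketch of the input layout as given in the EIP-152 draft (the helper name is ours, and the exact layout is of course subject to the rounds discussion above):

```python
import struct

def encode_blake2f_input(rounds, h, m, t, final):
    """Pack a call to the proposed BLAKE2 F precompile.

    Layout assumed here, from the EIP-152 draft: a 4-byte big-endian
    rounds count, then h (8 x u64), m (16 x u64) and the 128-bit offset
    counter t (2 x u64) in little-endian, then one finalization-flag
    byte -- 213 bytes in total.
    """
    data = struct.pack('>I', rounds)                      # rounds, big-endian
    data += b''.join(struct.pack('<Q', x) for x in h)     # state vector h
    data += b''.join(struct.pack('<Q', x) for x in m)     # message block m
    data += struct.pack('<QQ', t & (2**64 - 1), t >> 64)  # offset counter t
    data += bytes([1 if final else 0])                    # finalization flag f
    return data

blob = encode_blake2f_input(12, [0] * 8, [0] * 16, 0, True)
assert len(blob) == 4 + 64 + 128 + 16 + 1 == 213
```

Dropping the rounds field, as discussed above, would simply remove the first four bytes and fix the count at 12 inside the precompile.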
E: I would say maybe let's just drop the rounds, set it at 12, and call it BLAKE2b, because we don't support anything else anyway. When we were looking there, I was reading Alexey's detailed report on this one, and we also noticed, when implementing, that the specification transitioned into the BLAKE2 F function but didn't provide any additional parameters to allow for different configurations, and then we couldn't work with the existing libraries.
E: We had to extract it, and it very strangely assumed the vectors for BLAKE2b but at the same time left the flexibility of the rounds, so it did indeed feel unnatural and wrong, and Alex's detailed analysis just confirms it. And here, as Martin says, I also see that adding this flexibility, unless we have some very, very detailed statement from the EIP proposer on exactly how this flexibility should look, would be more difficult than just dropping the rounds.
E: And assuming it's BLAKE2b, later we can improve it with an additional precompile; rather than modifying the existing one, probably more like adding a new one.
B: And you know, he listed this; he was specifically addressing this question of: can we get by with just the BLAKE2b function? And, you know, it's worth noting, I guess, that he did say it would suffice for a BTC-relay-style Zcash integration to have just BLAKE2b, but these other things are, you know, more capabilities that we have exposed in that function, which means they'll want it eventually. I don't know enough to say any more, really, beyond the relay, with what he said.
A: I think that we should just take note that that might eventually be what people want. We can't guarantee that we would ever make another precompile to add that flexibility. It's definitely something that we would need to see once, I mean, you know, after Istanbul is done, and once people request it for their use case.
F: The current EIP, even if we fix it for BLAKE2b, is still the F compression function, so it still offers a certain amount of flexibility. We didn't take BLAKE2b itself, because BLAKE2b can have keying, and can have personalization, and can have salting, and you could implement all of those with, well...
F: A regular BLAKE2b hashing function probably exposes all those features, but with this compression function you can implement them outside, and actually the personalization and salting are needed for the Zcash proof-of-work. And then another feature you can make use of is that potentially you could have a starting state of a hash: for example, a starting state where you've hashed, like, 1 megabyte of data already, and you could just hash the next 128 bytes on top of that. So that's some kind of flexibility you get.
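The features being discussed here, keying, salting and personalization, are all part of the full BLAKE2b parameter block, and each one changes the digest; with only the bare F precompile they would have to be rebuilt on top of it. A small illustration using Python's standard library (hashlib's full BLAKE2b API, not the precompile):

```python
import hashlib

data = b'block header bytes'

# The same input under the plain hash and under each optional BLAKE2b
# feature -- key, salt, personalization -- yields a distinct digest.
plain    = hashlib.blake2b(data).digest()
keyed    = hashlib.blake2b(data, key=b'secret-key').digest()
salted   = hashlib.blake2b(data, salt=b'salt-0001').digest()      # max 16 bytes
personal = hashlib.blake2b(data, person=b'ZcashPoW').digest()     # max 16 bytes

assert len({plain, keyed, salted, personal}) == 4  # all four differ
```

With the F-only precompile, a contract would reproduce these by constructing the appropriate parameter block and initial state itself before calling F, which is exactly the "implement them outside" point made above.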
A: Yeah, I agree with you. I think James Hancock ran a group of people that included input from, like, Zooko, some people from StarkWare obviously, and others, and it's unfortunate that we, you know... Oh, actually, James, you're on here; I thought that you weren't on here. James, do you want to address this? Yeah.
I: We didn't have a ton of resources for implementing, so the thought was: it's good to get something in that has the hash function, and then, if there are any updates that need to happen, those can be done as people implement. We can figure out what those are rather than pre-optimizing for what those are, because all the stuff that has been found out has been found after we had something that someone could work with. So for me, it gives a good road map for what to do with this. Okay.
A: I think, for the sake of time today, since we've spent a bit of time on this, let's just go with the reality that what we have is a BLAKE2b EIP. It needs to be reflected as that, so the EIP does need to be better reflected as a BLAKE2b EIP. The rounds need to be specified, so that they can be properly implemented in a way that everyone can understand, and so everyone can make sure they're doing it correctly.
A: Great. And in the chat you can see that Matt Luongo, one of the champions of this, had some comments that can be addressed afterwards on Gitter, just like we talked about. But James and Luiz and others from the Zcash team and StarkWare: great job on bringing this forward, and yeah, I think this has been a success of pushing an EIP through that has a good use case.
F: Matt, I have the RFC open right now at section 3.2, whose title is "Compression Function F", and the function itself only has four parameters: h, which is the state; m, which is the message; t, which is the offset counter; and f, which is a finalization flag. It doesn't have the rounds as an input.
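For reference, that RFC 7693 compression function can be written down with the rounds count lifted out as an explicit parameter, which is exactly the one input the EIP adds on top of the RFC's h, m, t and f. A minimal Python sketch, checked against the standard library at the fixed 12 rounds:

```python
import hashlib

# BLAKE2b initialization vector and message schedule from RFC 7693.
IV = [
    0x6a09e667f3bcc908, 0xbb67ae8584caa73b, 0x3c6ef372fe94f82b,
    0xa54ff53a5f1d36f1, 0x510e527fade682d1, 0x9b05688c2b3e6c1f,
    0x1f83d9abfb41bd6b, 0x5be0cd19137e2179,
]
SIGMA = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
    [14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3],
    [11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4],
    [7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8],
    [9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13],
    [2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9],
    [12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11],
    [13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10],
    [6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5],
    [10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13, 0],
]
MASK = (1 << 64) - 1

def blake2_f(h, m, t, final, rounds=12):
    """RFC 7693 compression function F for BLAKE2b, with the rounds
    count as an explicit parameter (the EIP's one addition)."""
    def rotr(x, n):
        return ((x >> n) | (x << (64 - n))) & MASK
    v = h[:] + IV[:]
    v[12] ^= t & MASK            # low word of the offset counter
    v[13] ^= (t >> 64) & MASK    # high word of the offset counter
    if final:
        v[14] ^= MASK            # finalization flag
    def g(a, b, c, d, x, y):     # the quarter-round mixing function
        v[a] = (v[a] + v[b] + x) & MASK
        v[d] = rotr(v[d] ^ v[a], 32)
        v[c] = (v[c] + v[d]) & MASK
        v[b] = rotr(v[b] ^ v[c], 24)
        v[a] = (v[a] + v[b] + y) & MASK
        v[d] = rotr(v[d] ^ v[a], 16)
        v[c] = (v[c] + v[d]) & MASK
        v[b] = rotr(v[b] ^ v[c], 63)
    for r in range(rounds):
        s = SIGMA[r % 10]
        g(0, 4, 8, 12, m[s[0]], m[s[1]])
        g(1, 5, 9, 13, m[s[2]], m[s[3]])
        g(2, 6, 10, 14, m[s[4]], m[s[5]])
        g(3, 7, 11, 15, m[s[6]], m[s[7]])
        g(0, 5, 10, 15, m[s[8]], m[s[9]])
        g(1, 6, 11, 12, m[s[10]], m[s[11]])
        g(2, 7, 8, 13, m[s[12]], m[s[13]])
        g(3, 4, 9, 14, m[s[14]], m[s[15]])
    return [h[i] ^ v[i] ^ v[i + 8] for i in range(8)]

# Sanity check: hashing the empty message is a single call to F with the
# BLAKE2b parameter block, and must match the standard library's output.
h0 = IV[:]
h0[0] ^= 0x01010040  # parameter block word 0: 64-byte digest, no key
out = blake2_f(h0, [0] * 16, t=0, final=True)
assert b''.join(x.to_bytes(8, 'little') for x in out) == hashlib.blake2b(b'').digest()
```

With the rounds pinned at 12 this is exactly BLAKE2b's compression step; any other rounds value produces something that, as noted above, is no longer BLAKE2b.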
K: Fair enough. One sec, I'm going through the PR as well, to make sure that I don't miss my own rationale and misreport. But because F is not an exported function, I'm less concerned about matching the F function signature, and I'm more concerned about matching what F actually accomplishes.
A: Let's see. I think the only other two things were Jason, you know, suggesting that we might need to increase the cost of EXTCODECOPY, and how to deal with contracts that break with the increasing SLOAD cost, like Aragon and others. Jason, do you want to comment on those? Are those things that would be discussion topics for fixing in future hard forks, or something that we would need to talk about now?
D: By analogy, we're fixing a hole in the roof; yeah, there's a smaller hole on one side, but just because we've fixed a big one, we don't need to fix the second one in a panic. It's been there, it's going to be there; let's fix it in due time, if we have to. That's my comment about EXTCODECOPY: there's nothing we need to fix for Istanbul, in my opinion.
C: Just, first of all, I think our position is always that we shouldn't do any dramatic change for Istanbul at this point any more, because our hard fork is already delayed, and we have people expressing concerns, especially asking about why we are delaying the hard fork. So I think it might be better for Istanbul if we just go forward with what we have now, and just don't change anything, so we can stay on the timeline and not cause any delays.
C: But what I really want to argue is that we should be really careful about backward-incompatible changes in the future. There are two things, I think. The first is related to even the usual things, like increasing the SLOAD cost and causing contracts to break: not only might those contracts be frozen, but for things like some contracts with time-lock schemes or similar, we also need to be really careful.
C: [That] is something I think we should be careful about. And the second thing I want to argue about is the procedure. Because the Ethereum community is growing bigger and bigger, and we have more parties submitting changes to the EIPs for Ethereum hard forks, the issue is that any backwards-incompatible change can be used as an attack vector: someone can try to deploy some contracts and then convince even some auditors, and some users who use them...
C: ...that it works perfectly fine, and then, after a year or later, they propose, yeah, as we're doing, a backward-incompatible change, and if our core developers get convinced by them, the malicious entity might be able to cause disruption to the network, or cause those contracts that they previously deployed to be attacked, or to otherwise profit from it. So I think this...
E: [I] think that, last time during the call, Wei raised the very important issue of [EIP-]1884, and while I totally agree that this change is very, very important to introduce, for the reasons Martin stated, I think Wei has been experiencing some pressure after the community started commenting on the hard-fork delay. But as we see already, they're catching up very quickly, and it would be bad to just step back already from those statements about security concerns. I think they're reasonable, and I think 1884 can bring a lot of disruption.
E: Yes, yes, I think I'm with Wei on it, for the reason that Wei stated: thinking that we can just break contracts for a while, look at how it works, and then fix it in the next fork, like some kind of emergency fix, would be risky, because there can be some contracts that are time-locked, which means that some users could lose access to their funds and then never be able to withdraw them.
E: And yes, I've seen the list that was prepared based on the review that Martin organized with some team; sorry, I don't remember the name of the team. But the list of contracts was very long, and I doubt we have the time it would take to analyze every single one of them and how it exactly behaves; at least that's my feeling. Also, I don't think every single owner of these contracts is actually tracking how the change may affect them. It is speculation, but I think it's a very reasonable intuition that it'd be difficult and dangerous.
E: Today I just wanted to discuss this proposed change to 1884 that was suggesting an additional counter that treats the first 2,300 gas, the stipend, differently. I think it was slightly in line with what Alexey mentioned, with his idea of separating the limit counter and the gas counter.
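As far as the idea can be read from the discussion, the extra counter would let the fixed 2300-gas call stipend keep its pre-repricing meaning. A purely illustrative toy sketch: the SLOAD prices are the pre-EIP-1884 value (200) and the proposed EIP-1884 value (800), but the metering rule itself is our guess at the proposal's intent, not any finalized specification:

```python
# Toy model only: one possible reading of "an additional counter that
# treats the first 2300 gas (the call stipend) differently". Opcodes
# drawn from the stipend counter are charged pre-repricing costs.
OLD_PRICES = {'SLOAD': 200, 'OTHER': 10}   # pre-EIP-1884 (hypothetical table)
NEW_PRICES = {'SLOAD': 800, 'OTHER': 10}   # proposed EIP-1884 SLOAD cost
STIPEND = 2300

class Meter:
    """Gas meter with a separate stipend counter charged at old prices."""
    def __init__(self, gas):
        self.stipend = min(STIPEND, gas)   # stipend window, old prices
        self.gas = gas - self.stipend      # remainder, new prices
    def charge(self, op):
        if self.stipend > 0:
            cost = OLD_PRICES[op]
            if cost <= self.stipend:
                self.stipend -= cost
                return True
        cost = NEW_PRICES[op]
        if cost <= self.gas:
            self.gas -= cost
            return True
        return False

# A recipient funded only by the 2300 stipend can still do a handful of
# SLOADs under the split meter (10 * 200 = 2000 <= 2300), whereas under
# a flat repricing even three would not fit (3 * 800 > 2300).
m = Meter(2300)
assert all(m.charge('SLOAD') for _ in range(10))
```

The point of the sketch is only the shape of the idea: legacy `transfer()`-style call patterns that budget exactly for the stipend would keep working even after opcode prices rise.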
E: I also analyzed the required change for the gas calculation and for the actual client implementation. It seems to be non-invasive and simple, both in the, like, performance sense: how it affects the memory requirements and the computation requirements of the clients. So I would really like to discuss it more in detail, because I think not enough people have read it and analyzed the suggestion. I believe it solves the problem.
E: And while I understand, as was mentioned, that maybe we shouldn't change anything for Istanbul, at the same time last week we agreed that this is a reasonable concern about the Istanbul 1884 change, and I agree with that, and I wouldn't like anyone to just withdraw it now because of feeling pressure that we are delaying things. I think that with backwards compatibility, well, sometimes breaking it is unavoidable.
E: But in the cases where we really see that we are, like, breaking the attackers' contracts, as you were mentioning about Tangerine Whistle, for this particular case I think we have a solution. We understand there exist risks. The big contract creators, like Aragon, I think, raised concerns that this is actually very, very problematic for them, that the results would be very bad, and that it would cause a lot of PR trouble, but also undermine the trust in Ethereum as a platform, and in our ability to deliver changes in a, you know, reasonable way.
D: Yeah, so I think we, I mean, we're obviously kind of stuck between a rock and a hard place, and in my view the proposal is kind of convoluted and complex and not properly analyzed. So I think that it's very optimistic to try to squeeze it in for Istanbul, and I also believe that, if we postpone fixing it, we'll make a better job of it, and we can cover perhaps more cases, I mean, things can break for other reasons than the defaults.
A: [I] think that, so, Thomas, a thought came to mind. If we make a commitment now to fix the contracts, including the time-lock contracts, that would, you know, make the ether inaccessible, then this is a commitment that we're making before anything breaks, with the intention of fixing the things that break, which I believe is different than, say, what, you know, Wei was talking about before when it comes to...
A
You
know
having
the
perception
that
we're
you
know,
intervening
in
contracts,
necessarily
if
we
have
this
intention
beforehand,
then
I
think
it's
the
similar
thing
to
when
we
cleaned
up
the
dust
accounts.
We
intervened
in
that
case
for
the
sake
of
State,
for
states
like
I,
guess,
growth,
and
that
wasn't
really
complained
about
very
much
so
I
think
with
the
right
amount
of
PR
about
this
I
think
it
would
be
ok.
D: Yeah, I definitely think that if we break things we should fix them afterwards, yes. But I do think that in all the cases where it's just "oh, it's hard, we need to upgrade to a new contract, it's a pain in the ass", I don't think that needs fixing. I just think they need to take that thing and go through with it, because they need to change the behavior of the contract, because the contracts exhibit some characteristics that we don't want.
I: I think consistent action will combat the PR problem over time, because if we look back to how people thought about upgrade forks a year and a half ago, how everyone panicked all the time that we were going to have two Ethereums suddenly, every time we were doing any of these, but now, today, no one's saying that, and I would say the reason is that consistent action has happened. I think in this case, stuff like this is going to be happening more with Eth1x, and a commitment that, you know, if something really breaks, like upgrading to behavior that isn't intended, is of course part of the strategy. We know we're doing something, and we know we can do something to help fix it in the cases where it goes really wrong. Then, as this happens more, and we have consistency in action and in plan and in narrative, the PR will form around it.
C: What I was saying is not about the PR aspect of this, but actually the technical aspect. So what I was saying is just that, even if we make the commitment, we may not be able to fix some contracts unless we do something similar to [EIP-]999, which can be problematic for the community. So that is just what I want to keep signaling to the community: that we just may not be able to fulfill this commitment.
A: Okay, the Cat Herders can write up something about this and pass it on to you before we publish it, in order to make sure that we are correctly addressing both your concerns and your warning in a way that is palatable for people to read.
A: I think, Thomas, I want to give you the last word on this, since you do have the option to make more of a formal suggestion with it, you know, within the EIP discussion or, you know, an Ethereum Magicians thread, if you still feel really strongly about this. But it's seeming more and more like people do want to go ahead with Istanbul, despite the fact that it could break contracts, with the idea that we would fix them later. So I'd love to hear your last words on this.
E: You know, yeah, sure. So, to some extent, on the formal specification of what I suggested: I agree with Martin that it's not fully analyzed, and that's why I was inviting more people to analyze it, and I'd love to see that analysis at a more detailed level. It is available on Ethereum Magicians, on the EIP discussion channel, and on Gitter as well; I posted it in the same form on all three channels. In the end, obviously, I trust Martin, who got the whole team to actually analyze it after I raised the concern.
E: If everyone thinks that yes, this is the way to go, then I'm obviously happy to agree with that. I would still like, in any discussion, for those concerns to be made clear, so people can react to and discuss them within the community. And I also agree with Wei that, technically, the solution for those potentially broken contracts, I agree that it's speculation; it might be a non-issue, or it might be even worse. It would be one thing if it was just one or two contracts with something like half an ETH in them...
E: ...but fixing them might be technically challenging, and it might require a lot of resources later. But in general, I totally trust Martin's analysis on the security side, and the general idea that the SLOAD change is super important; I agree with that as well. Because of that change being so important, we want to push Istanbul, I guess, as fast as we can, but yeah, I would prioritize security and backwards compatibility, though I understand that in this case security is what requires the urgency.
A: Okay, very good last statements; I think that's very reasonable. Thank you for that. I want to quickly ask if anyone has a really quick thing that they need to address that is very Istanbul-related; otherwise I want to get to picking the block number, so we can move on through the call. So, are there any other comments that are necessary for the Istanbul discussion before we talk about testnets and stuff?
A: Okay, so we're talking today about picking the block number for the Istanbul testnet, which would then inform a number for the mainnet, because it would be about a month out from the testnet date. I wanted to start with E.G., who's from the Infura team, who has dealt a lot over the years with a lot of these testnet deployments and mainnet deployments, and I know he had some perspectives and things he wanted to address here real quick, so that we keep that in mind as we're doing this. So, E.G.
M: One of the lessons learned in the Byzantium fork was that we shouldn't try to set both the testnet forks and the mainnet forks at the same time. Let's start with setting the testnet fork and see how that goes, and go through that to find that period of stability before revisiting when to set the mainnet fork, so we don't end up having to push it out. That's the main thing I wanted to highlight today.
A: Great. In that case, let's just do a testnet block, and we won't be picking a mainnet block. I know I have the word "mainnet" in the agenda, but we can leave that out, since there are many clients who have merged everything and there are just some things to iron out with BLAKE2b. When should the testnet block happen? Should it be one week, two weeks, three weeks? What are we thinking?
K: We reached an agreement today with Peter, not in this back channel; I think you've summed it up pretty well. Most of us just want to make sure that the rounds are completely fixed, but as for most of the other stuff discussed here, it's really about the clients: how much more fiddling do they want to do if we do change the EIP? So yeah, we're happy to play ball.
A: We can have the testnet run for more than a month if we want, so it's not literally smack dab in the middle of Devcon; that's not an issue. It's more about when we want to start the testnet, because I think that's the more pressing question.
A: Okay, so how does the 2nd sound for everybody? That brings us past Devcon for launching mainnet. If we keep it up for a month, we can still have decisions made, especially if we want to do it even a little bit longer, if, like, we can't make a decision on the mainnet number because Devcon is going on, or other things.
E: There's one comment from me: does it make sense to actually do a staggered change on the testnets? Starting with Rinkeby, which is practically only produced by Geth, which means that we won't have consensus issues, at least at the beginning; then very quickly following up with Ropsten, which has two types of miners, practically, Parity and Geth; and then continuing with Görli, which has four different clients actually creating blocks. So we decrease the chances of consensus issues and testnet disruption.
A
Sorry,
yep
yep,
rinkeby
and
Gourley,
so
that
would
need
to
be
a
later
discussion,
but
yeah
we
could
do
those.
We
could
definitely
do
those
before
this
three
week
period,
if
as
long
as
they
agree
to
that,
that
would
be
something
we'd
have
to
talk
about
later
or
have
them
on
the
next
call,
or
things
like
that,
though,
but
that's
a
good
idea,
any
other
comments
about
it
being
on
the
second
for
Rob
stone.
A: That would also be nice. And then we'll take any commentary that anybody has based on the document: any questions they have, or concerns, or ways that they can improve the document. And then, in addition, if you're not comfortable giving feedback here, there is an email address that I posted in the Gitter chat, and I can repost it there for everyone to send their commentary to. So we'll do intros, a brief summary, and then questions and comments. Oh.
Q: And we also, I think Mirko, who is also on the project, I think he's watching along but wasn't able to actually join, so yeah, if he has any comments he'll just send us a message on Slack or something. So yeah, to summarize the findings: we have a nice summary in the report as well that might be useful to just spread around, and this is the initial report, based on the feedback that we get.
Q: We've got some clarifications from you, Hudson, and from Charles from the Cat Herders, and any other feedback that we get we'll take into account for the final report. So yeah, like Hudson said, feel free to email us any kind of feedback for this, and we can incorporate it. So, the summary here: we found that generally... oh yeah, I guess I see on the chat that somebody wants [to ask something].
Q: Yeah, so we found, on a high level, that it reaches its design goals and that it's reasonable towards its intended economic effect, with no major issues there. But, that said, we did a lot of analysis, and we did find one particular attack, I should say potential attack, that we outline in the report, and I think that probably some others on our team can give more details...
Q: ...if we want to get into more details on that now. And then we also had some recommendations about things that could be done to have better, I guess, assurances of ProgPoW working as intended in the future: things to look into and to assess. And one of the pieces of feedback that we did get was to clarify a little bit more...
Q: ...on a high level, that there's a lot of speculation, and I mean we did a lot of research to try to narrow down the speculation of what can happen in terms of hardware: how ProgPoW could be overcome by hardware advancements in the future. So I guess that's the high level. It's much better articulated in our report, and I think I'll...
T: ...or not. Okay, so yeah, I can just go through these suggestions from the document and say a word or two about each. Is that interesting to you?
T: Cool. So the first suggestion was about ProgPoW's modified Keccak function: it's not Keccak itself, it's a modified variant, similar to the BLAKE2 situation where you have parameters, and, oh, actually, it's round-reduced as well. So it's a slight modification, and it's not really clear whether or not this is still a proper hash function; however, it's used like one.
T: That is one part, but I understand that the padding should not be necessary, because the input is always the same size. But it is a modification, and it would be nice to have people who analyze hash functions day by day take a look at it.
T: You have the stages where you first have the seed, then you compute the cache, then you compute the DAG, and on the DAG you just do lookups during the mining. And the cache nowadays is small enough, or it's getting small enough, it's kind of unclear, to fit on an ASIC; or, like, the ASICs grow, or you can put more and more SRAM on an ASIC, and it's either now or soon or something like this.
T: It should be feasible to put enough SRAM on an ASIC to hold the entire cache, and then you can, with a little bit more computational effort, do the mining directly from the cache. But it means you don't have the memory bottleneck anymore, which is where the security of Ethash and ProgPoW is basically anchored. So this is something where it's not entirely clear what the right [answer] is; I'm not, like, a hardware expert.
A: One quick clarification for those listening: Bob is Bob Rao. He is doing a very extensive hardware audit, in addition to the Least Authority audit, which focused on both hardware and software, but with a heavier lean towards software. And just to go over what you just said: did you say that, although it met the goal, what I've read from that is that, with increases in hardware efficiency over time, you might be able to get advancements through an ASIC that would overcome ProgPoW?
D: My second question, just to see if I understand it correctly: the thing that would be needed is extremely fast memory, directly [on the chip], yeah, some kind of very fast bus, and not like DRAM, but some extremely fast thing that's going to have faster access to the cache, or so.
T: So, the thing is that if you have the memory on-die, on the same chip, you don't need, like, a GDDR bus or HBM or something; you just have it on your chip. That's why you get better access timings; access is much quicker, and because you need much, much less memory, you can put it on-die, now or soon, probably. Does that answer your question? So you don't have this bus that is currently slow all over the industry.
A: So, I've been looking over Bob's early audit; it should be coming out very, very soon, it's just in its final stages, and from my understanding a lot of it is about future hardware when it comes to that type of memory. And I mean I might be reading the audit wrong, but I think Bob's report is going to answer a lot more of those questions and speculations.
Q: So, and then, yeah, I think that as far as whether or not this kind of tech is available now, it sounds like it might be, but again, I think maybe, yeah, Bob will help with that particular answer. But we did put a recommendation in for that suggestion, about changing the DATASET_PARENTS constant to a higher value, like 512. So yeah, there's that suggestion for now.
Q: And this is something that, if that would be a good thing to do, yeah, if we wanted to do that, now is the good time to do it, between the initial report and the final report, because we have these recommendations there, and we can make minor edits to the report before the final one, but usually just clarifications; nothing too fundamental in terms of changing the report.
T: It affects not only [generating] the entire DAG, faster or slower, but also the mining, and I think increasing it should be sufficient for the following many years, so I think it's easy to find some solution. I can't really comment on how long, like whether it's 10 or 20 years, or 5 or 20 years, yeah, I mean, whether or not this is the solution, or how long it really holds. But there's leverage to tackle this, definitely, yeah.
Q: I don't think it matters, but yeah. So, suggestion three was to create additional documentation. There are some key details that we thought were missing, and improving the documentation would also just generally improve how people can help keep an eye on these potential issues in the future and stuff like that. So...
Q: ...we called out a couple of areas where there could be more documentation. Another suggestion is to explore a formal model of ASIC resistance, and this is just something that, yeah, particularly if mining continues to be a growing industry, which it so far has been, and if it continues on that path, then looking at this a little bit more formally would be helpful too.
T: Basically, it's not necessarily related to formal logic; it's more like, yeah, a sound mathematical model, or something that you can actually reason about, because currently, when it comes to ASIC resistance, it's mostly just heuristics, basically, and it would be nice to have something that's actually reliable, yeah.
Q: Okay, then, if not, the last one is just to monitor hardware-industry advances, and, wow, this sounds kind of obvious in some ways, but we just wanted to make sure this got documented, with call-outs of specific areas of hardware-industry advances that we thought could be particularly a threat, I guess you could say, to ProgPoW in the future. So we outline that in a little bit more detail, and we also wanted to talk about, like, the potential incentives of other parties and their hardware.
Q
Like other hardware incentives; I mean other incentives for hardware to advance, so outside of just proof-of-work mining and stuff like that. So that would be something that I think is also just good for the community and everybody to maybe have a way to keep an eye on moving forward, and of course, specialized hardware and the future of hardware in general is difficult for all of us to predict.
Q
But at the same time, you know, there are certain, like I said, certain signs and certain incentives that we're seeing that are worth keeping an eye out for, and so we just detailed this a little bit more in our last suggestion.
A
Ok, excellent. Thank you all so much to the team from Least Authority for coming on, fielding these questions, and providing an overview of their initial audit. If there aren't any other questions or comments on that, you all are free to depart, or if you want to stick around and hear the riveting conversation on BLAKE2b, that would also be ok.
A
So thanks again, and yeah, let's go ahead and go back to the BLAKE2b conversation. We have about four minutes left of the meeting, so if people need to drop off now, or once the official meeting time is over, that's perfectly fine. Otherwise we're going to continue with BLAKE2b, and that should be the last thing on the agenda. Is there anything that I missed on the agenda that people want to address before we start on BLAKE2b?
A
K
Quick: we've been back-channeling, well, it's not really back-channeling, you know, we've been on GitHub and whatnot discussing this throughout the call, and I think where we are is we're not going to worry about the proposed change to t_0 or t_1, but Alex makes a great point: this is not really a BLAKE2b-specific EIP anymore.
K
Unfortunately, you know, the RFC doesn't quite have a target that specifies exactly what we're trying to do in the EIP, so I think the discussion that we need to have here, to sort of close the loop, and perhaps even keep talking about this EIP, is: do we want to revise the EIP to make it clear that this is for all BLAKE2 64-bit variants, and also whether we're fine with limiting the rounds, which has been another thing we've been discussing back and forth.
K
I just want it to be more flexible than fixing it to 10 or 12, because there's some sort of exotic stuff that our team and a few other teams that I've talked to have considered, which that would limit. So anyway, my stance is typically: I don't always know what an application developer is going to do with something I'm building, and if it's not an attack on the EVM, and if it's not dangerous, and it's already well tested, we give it to them.
K
Yeah, so actually, for anything that I've been looking at, I would be fine with just a single byte's worth of rounds. I think that, as Alex rightly pointed out, the 32-bit rounds field was totally plucked out of the air, so I don't think that's the right size. I know that it is more than 12, and I'd say less than 10,000, but if it were limited to a single byte, we'd be okay.
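For context on the field being debated, the draft EIP's precompile input packs the rounds count as a 4-byte big-endian integer ahead of the BLAKE2b state. A minimal Python sketch of that 213-byte layout, with field offsets as described in the EIP draft (an illustrative parser, not a normative one):

```python
import struct

def parse_eip152_input(data: bytes):
    """Split a 213-byte F-precompile input into its fields.

    Layout per the EIP draft: 4-byte big-endian rounds count, then
    h (64 bytes), m (128 bytes), t (16 bytes), and a 1-byte final flag.
    """
    if len(data) != 213:
        raise ValueError("EIP-152 input must be exactly 213 bytes")
    rounds = int.from_bytes(data[0:4], "big")   # the contested 32-bit field
    h = struct.unpack("<8Q", data[4:68])        # state vector, little-endian words
    m = struct.unpack("<16Q", data[68:196])     # message block
    t = struct.unpack("<2Q", data[196:212])     # offset counters t_0, t_1
    f = data[212] != 0                          # final-block indicator
    return rounds, h, m, t, f
```

Shrinking the rounds field to a single byte, as suggested here, would only change the first slice and the total input length.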
K
D
F
There's no difference for that; it's just hard-coded in my case. But the implication is on the implementers' side, because if this is clearly the BLAKE2b compression function, then they likely can use off-the-shelf software and libraries, yeah. But if this has the rounds parameter, then they cannot; they have to implement it themselves.
F
Duplicate some code, make the changes. And I think that's where people were really confused, because when they reached out to libraries, well, of course, they realised they couldn't use libraries in the first place. So they reached out to them and asked for this compression function, and whoever had implemented Blake was really confused that you want the rounds to be configurable but you don't specify the rest of the configuration.
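The library-reuse point can be seen from Python's standard library: `hashlib.blake2b` exposes the full hash with the round count baked in at 12 by the specification, so a variable-rounds F function cannot be layered on top of it. A small illustration using only the standard `hashlib` API:

```python
import hashlib

# Off-the-shelf libraries implement BLAKE2b as the complete hash, with
# the rounds fixed at 12 by the spec; there is no rounds knob to turn,
# and the raw compression function F is not exposed at all.
digest = hashlib.blake2b(b"message", digest_size=32).hexdigest()

# A precompile with a configurable rounds parameter therefore cannot
# reuse such a library; implementers must write the compression
# function themselves, which is the confusion described above.
```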
D
K
D
F
So I think the only reason it could make sense to set a lower bound on the rounds is to avoid any risk with the gas usage, where maybe, somehow, the one gas per round, which was determined somehow, could turn out not to be accurate with large round counts, and that could be one motivation to set a lower limit. I.
D
F
Mean, since it has been implemented, and if there's a real reason that there are going to be other configurations of 64-bit BLAKE2 with different rounds, then maybe it makes sense. But as I said, we wouldn't have had this discussion if this had been clear from the beginning, and from the user's perspective, the rounds don't really matter; they can be hard-coded. This is really only a question for testing and client implementation.
I
We had, when we first talked about BLAKE2b, asked Zooko and others: is BLAKE2b the only one that matters, or is it BLAKE2 that's important? And then we went into: well, if that's the only thing that matters, then it really is the compression function, which we can get from somewhere else and just plug in, rather than us doing a Solidity implementation now, which would be much more resource intensive.
I
We can get the compression function done, and then we can release best practices or libraries that interface with the F function. So I think this is a little bit of: the writing in the EIP hasn't followed the evolution of the discussion that has happened with the community.
K
Yes, absolutely. So, if you're talking about any additional hash configurations that are put together for, for example, Internet of Things applications, and we need to verify them, then this will be forward compatible. If you're talking about frequent fork tweaks, like what we just saw with Sia (to be clear, they didn't change the rounds, but a similar idea of tweaking Blake's basics) that we need to verify, this would be portable. So there are definitely, I mean, there are definitely reasons.
K
K
V
For what it's worth, one comment: if you want to have a 32-bit version, it probably means you need a separate precompile. Having both in one would probably be horrible in terms of developing against this precompile, because, yeah, we'd have like two or three additional parameters that contain some bytes, so it's not really developer-friendly to have it in one precompile.
K
U
I think it might be the only chance to get it in. So even if you think it's not ideal, I think that would be the outcome I would bet on. If it's wrong, it would be wrong, but getting another chance to introduce a fixed one, I think, is less likely.
K
K
K
F
F
F
So I would be okay with keeping the rounds parameter, and I don't really have a good answer for what size it should be, because it's really up to people who understand hashing functions more deeply to know what rounds would be appropriate. One thing I have found is a comparison that four rounds of BLAKE2 would be equivalent to a round of ChaCha in complexity.
F
U
I
That was the recommendation from Zooko, if my memory is right, and also from James. And I think it's a bit like if I was to try to figure out what's going on with the EVM right now and what you guys are working on, versus what's publicly available about it.
I
So if the people who are working on it right now say, hey, we see reasons for doing 10 or 12, and the hard part of getting the F function out has already been done by the implementers, because that's where the real work is, then for these cases the work has already been done.
I
That's been added on. And we've also verified that there isn't a gas problem with having higher rounds, because that was already benchmarked when people were looking at it. I don't think that limiting it on the basis of "we couldn't find it on Google, so there are no people working on this" is the right way to go, when we've heard from people who are working on it that 10 or 12 is what they've been looking at.
F
Actually, there's one more thing I forgot to mention: the sigma table is only defined for 10 and 12 rounds, and the RFC suggests taking the rounds mod 10 when looking into the sigma table. But some implementations don't do that. So I wonder, for a higher number of rounds, would the configuration need a different sigma table?
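For reference, the message schedule in question can be sketched as follows; the `SIGMA` permutations are the ten rows from RFC 7693, and the wraparound indexing is the `round % 10` rule being discussed:

```python
# The BLAKE2 message schedule defines exactly 10 permutations; RFC 7693
# indexes them as SIGMA[round % 10], so rounds beyond 10 simply wrap.
SIGMA = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
    [14, 10, 4, 8, 9, 15, 13, 6, 1, 12, 0, 2, 11, 7, 5, 3],
    [11, 8, 12, 0, 5, 2, 15, 13, 10, 14, 3, 6, 7, 1, 9, 4],
    [7, 9, 3, 1, 13, 12, 11, 14, 2, 6, 5, 10, 4, 0, 15, 8],
    [9, 0, 5, 7, 2, 4, 10, 15, 14, 1, 11, 12, 6, 8, 3, 13],
    [2, 12, 6, 10, 0, 11, 8, 3, 4, 13, 7, 5, 15, 14, 1, 9],
    [12, 5, 1, 15, 14, 13, 4, 10, 0, 7, 6, 3, 9, 2, 8, 11],
    [13, 11, 7, 14, 12, 1, 3, 9, 5, 0, 15, 4, 8, 6, 2, 10],
    [6, 15, 14, 9, 11, 3, 0, 8, 12, 2, 13, 7, 1, 4, 10, 5],
    [10, 2, 8, 4, 7, 6, 1, 5, 15, 11, 9, 14, 3, 12, 13, 0],
]

def schedule_for_round(r: int):
    # BLAKE2b's 12 rounds are 0..11: round 10 reuses SIGMA[0] and
    # round 11 reuses SIGMA[1]. Implementations that index SIGMA[r]
    # directly instead of SIGMA[r % 10] break for r >= 10, which is
    # the divergence mentioned above.
    return SIGMA[r % 10]
```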
D
V
F
F
Yeah, a closing note on this rounds stuff: I think the reason it's really confusing is that the EIP just refers to the RFC, and the RFC doesn't make the rounds configurable, and therefore I think we're not implementing the RFC here; we're implementing a very specific version of BLAKE2. And maybe it would make sense to make it clear that we are not implementing BLAKE2b or BLAKE2; we are implementing a really specific variant of Blake, maybe.
I
I
A
F
K
K
F
K
I'm planning on following up on Gitter with kind of a long write-up of our experience with the EIP process, if that would be helpful for you guys. But I think the flip side is, if there weren't a fixed hard fork, maybe this just wouldn't have been considered important enough to include in a hard fork. So I think EIP-specific hard forks wouldn't be awesome for teams like ours that aren't solely focused on pushing an EIP in. But we.
A
K
D
F
O
E
E
Need more than two bytes, but it's up to you. We will go with whatever we decide here, whether it's one byte or two bytes. The tests are maybe a bit more cumbersome, because we have to think about the edge cases, like how it behaves with different rounds, like the sigma modulo that Alex mentions here. We have to pick a good range of test cases with different numbers of rounds, so the bigger the number, the more we have to test, but in the end we can test it either way.
F
B
So I guess I'll give some more background on why I'm also in favor of dropping the number of rounds down: we've got, at least currently, an unoptimized implementation, and if we drop the field down to two bytes, we could probably leave it as is, unoptimized, and not be worried about it, because the cost of calling the precompile makes up for the fact that one gas isn't enough to account for the time it takes to do one round in Python.
A
I
F
Yeah, I'm really not sure. It seems that technically, the upper bound we have is just 8 million rounds, because that's the block gas limit, or maybe 10 million rounds, which you couldn't address in 16 bits. But yeah, we have yet to see any kind of configuration which goes above 12, because one of the main benefits the BLAKE2 specification cites compared to Blake is the lower number of rounds. So I would assume that people would only go below, not above, this standard.
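The bound being estimated here is simple arithmetic: at one gas per round, the block gas limit caps the largest round count any single call could pay for. A quick sketch (the 8,000,000 figure is the approximate mainnet block gas limit at the time):

```python
# At 1 gas per round, the pricing discussed for the precompile, the
# block gas limit bounds how many rounds one call could ever pay for.
BLOCK_GAS_LIMIT = 8_000_000          # approximate mainnet limit at the time
GAS_PER_ROUND = 1

max_rounds = BLOCK_GAS_LIMIT // GAS_PER_ROUND

# A 16-bit field tops out at 65,535 rounds, well under that bound,
# while the 4-byte field in the draft comfortably covers it.
assert 2**16 - 1 < max_rounds < 2**32
print(max_rounds.bit_length())  # prints 23: bits needed to address 8M rounds
```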
K
D
A
Some people are sticking around; it's, I almost said, late at local time, but we just had someone make a suggestion on Gitter, if that's the last thing, I mean.
V
V
Yeah, this is the third change, but it's one change, I know. I don't really have a strong opinion here. From the perspective of the implementation, it's easy to change this; I can speak for the Geth client, but I can't speak for other clients. I guess it's harder, for sure it's harder, than changing the size of the rounds field.
F
So the motivation is: I've implemented what I think is a more efficient way of using the precompile on the EVM, and the key optimization is to keep the context in memory once and keep reusing it. In that case, for the last chunk, there is a need to zero out the memory in the context, and this seems to be, I mean, it seems to be an easy way to avoid that. And in the case of the Zcash proof of work, there are 512 rounds. I.
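The chunking pattern described, reusing one working buffer and zero-padding only the final short block, can be sketched like this (a simplified Python illustration of the idea, not the speaker's actual implementation):

```python
BLOCK_SIZE = 128  # BLAKE2b message block size in bytes

def message_blocks(data: bytes):
    """Yield fixed-size message blocks, zero-padding only the final one.

    Mirrors the optimization described above: the buffer layout stays
    the same across chunks, and only the last (possibly short) chunk
    needs its tail zeroed before the final compression call.
    """
    for off in range(0, max(len(data), 1), BLOCK_SIZE):
        chunk = data[off:off + BLOCK_SIZE]
        is_final = off + BLOCK_SIZE >= len(data)
        if is_final and len(chunk) < BLOCK_SIZE:
            # Zero out only the remainder of the final block.
            chunk = chunk + bytes(BLOCK_SIZE - len(chunk))
        yield chunk, is_final
```

Each yielded block would be fed to the F function together with the running state, with the `is_final` flag set on the last call.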
V
V
F
K
I'm happy either way, though I think we should stick with what we already have, because that's typically pragmatic, and because I don't think that people are going to be calling this precompile so often that we're going to be doing this message handling frequently. So I'm happy to take that up on this call if we need to, but yeah.
K
K
F
K
E
F
The bigger problem isn't actually the zeroing out of that memory; that isn't expensive. But if you have a loop where you have an input memory region and you are splitting it up, and you have to support the current behavior, then you have to have a lot of different conditions on how big the input is. Or you could just zero out everything beforehand and then overwrite it, or you could copy the right amount and then zero out the remainder. So it becomes way more complex compared to just being able to set the length.