From YouTube: Ethereum Core Devs Meeting #53 [2019-01-18]
Hudson: I can jump in on this one quickly. So, just as a recap — I think we spoke about this back in November — I did some analysis of this using a script that Vitalik wrote for the previous one, and I had the help of three or four other awesome people. The short version of the story is that the difficulty bomb has already started to tick, I believe. If you look at the block times, we've had one visible tick up so far.
B: Above the current difficulty level, we were originally predicting block times to reach 30 seconds in May. So I guess there are two important points to make. The first is that, since that point, the hash power has declined by around 20 percent, which means I think we would hit 30-second block times closer to the end of April. And the second point to make — we just need to keep reiterating this.
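The ice-age mechanics being described here can be sketched numerically. This is a rough illustration only — the 100,000-block period, the 3,000,000-block Byzantium delay, and the doubling term follow the public specs, but the simplified block-time model and the concrete numbers are our assumptions, not consensus code:

```python
# Rough sketch (not the consensus formula) of how the difficulty bomb
# pushes block times up. Assumption: the chain has settled so that
# base_difficulty / hashrate gives the ~14.5 s target, and the bomb adds
# 2**(period - 2) to each block's difficulty, where period counts
# 100,000-block windows after the 3,000,000-block Byzantium delay.

def bomb_term(block_number, delay=3_000_000):
    """Exponential component the ice age adds to difficulty."""
    fake_block = max(block_number - delay, 0)
    period = fake_block // 100_000
    return 2 ** (period - 2) if period >= 2 else 0

def expected_block_time(block_number, base_difficulty, hashrate):
    """Seconds per block once the bomb term is included (toy model)."""
    return (base_difficulty + bomb_term(block_number)) / hashrate

# A 20% drop in hashrate raises the base term immediately, which is why
# the 30-second estimate moved from May toward the end of April.
```

The bomb term doubles every 100,000 blocks, so the date estimate is quite sensitive to the hashrate assumption, which is the point being made on the call.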
A: Okay, so the next part: I think everyone by now knows the story about why the fork was delayed, so we've been talking about how to mitigate, or whether or not to even include the EIP in the next, renewed hard fork. So let's go ahead and talk about that a little bit. We had a report from Trail of Bits and ChainSecurity on some potential mitigation efforts. There's also a talk page on Eth Magicians.
A: I don't think we'll go over each and every one. But if you want to go over the one that you all landed on as the best potential one — and then if anyone else in the room wants to go over their potential recommendations as well, because I know we have some people in the room who came up with some. So, if you want to go over the most recommended one? Yes.
F: ...should it be like that for all the data storage? That's why, in the report, we kind of created a slight variation of this proposal. Instead of changing the gas cost to five thousand, we want to just use the gas costs that were pre-EIP — you keep the same cost computation, but you change the refund mechanism of the storage in the case of data storage. Does that make sense?
A: Okay, who has some comments or questions or anything about what was just said? I have a—
H: So, first of all, I do agree that the simplest way to proceed right now is to just exclude it. And, just very briefly — I know that Martin suggested not to go into details, but about proposal number seven: what I wanted to clarify is that this will somewhat reduce the likely desired effect of the EIP, because of the cap which we know exists for the refunds. So, just to make it known that it's not an exact replication of the semantics.
J: Yes, so I was wondering — regarding the comments made on that one proposal — I don't really see how it breaks anything that we desire to have, when it preserves the original intention of the EIP, including its benefits. As far as I can tell, it doesn't break any scenario that's actually likely to be real-world. So could you elaborate on that, please?
J: Yes, because, I mean, even if the gas left is below that number, then before the EIP it would cause a revert anyway, because it would run out of gas anyway — so it doesn't change that behavior. And when you do have enough gas, you do get the benefit, and this way you don't have to deal with complexities like the refund limit. So it's essentially a one-liner fix, so I'm trying to understand: why do we need anything more complex than that?
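A sketch of the "one-liner" guard being described — refusing the cheap SSTORE path when the remaining gas is at or below the 2300 call stipend. The function and names here are illustrative, not any client's actual implementation:

```python
# Hedged illustration of the proposed mitigation: pre-EIP-1283, an SSTORE
# always cost at least 5000 gas, so a call running on the 2300-gas stipend
# forwarded by CALL/transfer reverted anyway. The guard preserves exactly
# that behaviour while keeping the cheaper net metering when more gas is
# available. Names are ours, not a client's.

SSTORE_STIPEND = 2300

class OutOfGas(Exception):
    pass

def sstore_with_mitigation(gas_left, write_storage):
    """Perform an SSTORE under the proposed <= 2300 gas guard."""
    if gas_left <= SSTORE_STIPEND:
        # Matches pre-EIP behaviour: the stipend was never enough to write.
        raise OutOfGas("SSTORE not allowed with <= 2300 gas remaining")
    write_storage()
    return gas_left  # detailed gas accounting elided in this sketch
```

This is why it reads as a one-liner: the only new state-transition rule is the single comparison at the top.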
K: It's just that I think people are thinking about what kinds of options exist. I mean, it's not a totally straightforward decision, because adding this particular condition is, you know, kind of a weird condition, to be honest. I mean, it is kind of an elegant proposal, I agree, but there are a lot of other things that could be done, and I think what everyone wants is to reach a conclusion where the proposal that's accepted isn't, you know...
E: When we investigated this, we also found that several smart contracts out there have higher gas limits they pass on to other calls. It also depends on which version of the Solidity compiler was used at a certain time, so those existing contracts might still be attackable. And in addition, it would obviously enshrine into the formal semantics of Ethereum certain workarounds which just happened because of the way people used, for some time — let's say in five years — the code, or Solidity, at that time, which would of course be technical debt in the future.
J: Yeah, I mean, keeping a log of the reasons behind such decisions is essential anyway, because, you know — if we had that, maybe the whole problem could have been prevented, if we remembered exactly why we had the limits in the first place. So we should keep this, regardless of whether we take this solution or that solution.
J: But if we change the gas limit, we need to also think of the other gas limit, the refund limit. We need to think carefully about the reasons why we chose to limit it in the first place, and — I mean, my sense is that this is more complicated: understanding how it affects miners, everything.
H: I would say that, in order to answer your question — if we make a thought experiment and move ourselves to the situation prior to this: let's say that we were currently deciding whether to include this EIP or not, back, let's say, a couple of months ago, and somebody comes along, and we knew about this vulnerability, and one of the proposals was to add this check about the 2300 gas. Would we actually have accepted it back then, with this particular workaround? Probably not.
H: No — so that, I think, is enough reason to not do it right now, because we're kind of in a similar situation now: we don't have to rescue the main chain from this. So basically, by not doing Constantinople, we brought ourselves back into the situation we want to be in, where we can have this freedom of making the change or not making the change. So that's my opinion in that regard.
B: I second what Danny said before. I think the right question to ask right now — like, I think we could debate forever how to fix the issue, but I think the real question is how to move forward with Constantinople, and should we, you know, move forward with it without the EIP in question. I'm curious to ask anyone who is strongly opposed to moving forward without that EIP. That might be a good use of time. Yeah.
K: I can go first. So, basically, I think that it is kind of a useful EIP, and I think that, you know, if we keep the discussion time-boxed, it might actually be beneficial to think about: what can we do? How can we change this? How can we change the EIP so it can still be included? So I'm kind of strongly opposed to moving forward without it, because I feel like taking this EIP away is going to make Constantinople even less relevant.
G: I agree with that totally. I mean, if we do a modified implementation, basically, then we're looking at months of time for testing before shipping it. We could move forward quickly with it as it is, yeah — we could do that quickly as well, if the analysis had shown that we're ninety-nine percent certain that no contract is affected.
L: Would a scenario be realistic where the EIP is not included, the hard fork is made within a few weeks, and then, roughly three months after that — so early summer, well before summer — a properly designed version of the EIP is launched? Because the only option mentioned so far was that the potential next hard fork could be in October, I think.
G: I couldn't agree more. So what we are doing now is going to, like, a fixed hard-fork schedule. This means we have at least two hard forks scheduled here — I mean, that's two more hard forks than we had last year. So what's wrong with waiting another half a year just to get this proposal in? I don't see why we should have hard forks every three months. That's a huge overhead for the client developers, and yeah...
G: What I was proposing — let me just quickly organize us: before we discussed ProgPoW, and before we had this contention over the incident, I was proposing a timeline for subsequent hard forks. So what I was saying is: after a hard fork like Constantinople, I want to schedule the subsequent hard fork roughly nine months after it. This means we have another three to four or five months to discuss which proposals should be included.
G: No, no — what I'm suggesting are, like, deadlines. So we're saying we cannot have the testnet hard fork less than two months before the mainnet one, because we need at least two months: two weeks for a stable testnet, and then another six weeks for releases to be pushed to the users. Those are the deadlines I've been proposing. I'm not saying that nothing can happen in parallel.
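These deadlines reduce to simple date arithmetic. The two-weeks-of-stable-testnet and six-weeks-of-release-rollout durations are the ones mentioned on the call; the function itself is just our illustration:

```python
# Illustrative version of the proposed deadlines: a testnet fork must land
# roughly two months (two weeks of stable testnet + six weeks of release
# rollout) before the mainnet fork date. Durations are from the call;
# nothing here is an official schedule.

from datetime import date, timedelta

STABLE_TESTNET = timedelta(weeks=2)
RELEASE_ROLLOUT = timedelta(weeks=6)

def latest_testnet_fork(mainnet_fork: date) -> date:
    """Latest date the testnet can fork and still meet both deadlines."""
    return mainnet_fork - (STABLE_TESTNET + RELEASE_ROLLOUT)
```

For example, a hypothetical March 1 mainnet fork would require the testnet to have forked by January 4.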
K: Yes, well, I mean — I think I'm just way less professional than all of you guys when it comes to making changes. So I feel like, yeah, certainly — if you guys feel that it's best to be very conservative and wait for a very long time, then it's fine. It's just going to be safer this way. Okay.
D: Okay, so if we're removing this, maybe all we need to do is just regenerate the tests to make sure that no clients perform the operation. The code exists but isn't removed — the clients are just configured to not include this change — then regenerate the tests and see that all the other clients are also behaving as if it's disabled, like on this client.
M: I would like to propose an alternative to this whole rollout. So, as far as I understood, this would be kind of redefining what Constantinople is — meaning that we just remove the EIP from Constantinople, regenerate every test, everything as if it didn't exist, and go ahead and remove it from the clients. The only problem with this — in my opinion, a huge problem —
M: — is that Ropsten forked over and enabled this feature five months ago; Rinkeby enabled it, give or take, one week ago; and I'd also worry about whether other private networks or other private proof-of-authority chains enabled it. The problem here is that if we decide that Constantinople doesn't contain this and we just wipe it out of the clients, it means that we are going to break every network out there
M: — that already had this feature enabled. Basically, we're telling developers that, hey, we're going to roll back five months' worth of blocks on Ropsten — so you might as well kill Ropsten, because it's useless at that point. And the same goes for Rinkeby: it would be rolling back weeks' worth of transactions.
M: Just to undo this seems like a really bad idea to me if we can fix it in a more elegant way. My proposal was that, instead of removing this from Constantinople and wiping it, we can define, so to say, a second hard fork which only removes this code. So essentially, every client — at least the clients that have testnet capabilities — would have both.
M: They would leave Constantinople as it is now — just fork it and implement the net SSTORE changes — and then have a second, I don't know, "Constantinople 2" fork which disables this code path. And essentially, on mainnet, we would have Constantinople 1 and Constantinople 2 trigger at the same block. The effect of this would be that we could upgrade every private network — every test network could cleanly upgrade and cleanly disable this EIP without having to roll back any blocks. And also, testing-wise, we don't need to regenerate any tests.
M: So my suggestion is that we define two hard forks: one of them is Constantinople as it is currently, and the second one is a Constantinople fix-up that just disables this feature. On mainnet, these two forks would trigger on the same block; on all the test networks, these could be scheduled at whatever later point we actually decide on.
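The two-forks-same-block idea can be sketched as a chain-config check: the EIP is live only between the two activation blocks, which on mainnet is an empty window. Field names here are illustrative; 7,280,000 is the mainnet block number discussed later in the call, and a testnet that already forked would simply schedule the fix-up at some later block instead of rolling anything back:

```python
# Hedged sketch of the proposal: Constantinople stays defined as-is, and a
# second "fixup" fork disables the problematic EIP. Activating both at the
# same mainnet block means the EIP is never live there, while testnets that
# already forked keep their history and just schedule the fixup later.

def eip1283_active(block_number, constantinople_block, fixup_block):
    """The EIP is live only in [constantinople_block, fixup_block)."""
    return constantinople_block <= block_number < fixup_block

# Mainnet: both forks at the same block -> the window is empty.
MAINNET = dict(constantinople_block=7_280_000, fixup_block=7_280_000)
# A testnet that forked months ago would set fixup_block to some future
# block of its choosing (numbers deliberately omitted here).
```

Because no client has to un-apply blocks, this avoids the Ropsten/Rinkeby rollback problem entirely.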
M: I mean that there's absolutely no difference. The problem is that if we just remove it from Constantinople, then we instantly kill every single testnet — and every single private network — that upgraded; that just isn't doable. Whereas by having two forks, those who already upgraded can do a second fork to actually downgrade, which serves the same purpose, yeah.
M: The problem is that Constantinople was defined with these features. And if we create a new fork that just disables this, then, if we retain the name "Constantinople" — for mainnet, at least, we have named forks — it means that for Rinkeby, all of a sudden, the Constantinople name would mean something different. So again you have these clashes that we could hack around somehow, but it gets messy.
B: Just some random words: what we call it, and what goes into the code, doesn't necessarily need to line up exactly with what the public refers to it as. So, in the case of Spurious Dragon and Tangerine Whistle, they were just called, like, the DoS-mitigation hard forks. So, just making that point — we could come up with a creative name, I think, and still call it Constantinople, yeah.
A: Yeah, maybe. So, are we good there? Does anyone else have any other comments on that, or any major objections or anything? Because I think that's a good idea — it would happen on the same block, and we can still do it in a timeframe of six weeks or whatever. I think we also decided on six weeks, right? Was there anyone who thinks we should have more time?
P: I know we have a difficulty bomb, and I'm just figuring out what that would actually look like in practice. So, let's see — six weeks is 42 days, so that seems to be very close to what we would get if we put the block number at 7.3 million, and I feel like going later than 7.3 million would definitely make the block times begin to increase.
B: Okay, I'll rerun the simulation code and check when that block should arrive with the difficulty bomb factored in. Okay.
P: The calculation was just to check what the block number would be at roughly six weeks from now, because right now block times are already increased by about nine percent from the ice age. So saying six weeks would mean that we're willing to take two more steps of the ice age, which would possibly push block times up to maybe 21 seconds.
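The arithmetic behind "two more steps of the ice age" can be sketched as follows, under our own simplifying assumption (for illustration only) that the bomb's contribution to block time roughly doubles each 100,000-block period:

```python
# Back-of-the-envelope version of the remark above: if the bomb overhead
# is ~9% of the target block time today and doubles per ice-age step,
# two more steps take it to ~36%, pushing block times toward ~20 s.
# TARGET and the doubling model are illustrative assumptions, not spec.

TARGET = 14.5  # seconds; approximate pre-bomb average block time

def block_time_after_steps(current_overhead, steps):
    """Block time after `steps` more periods, doubling the overhead each time."""
    return TARGET * (1 + current_overhead * 2 ** steps)
```

With a 9% overhead and two more steps this gives roughly 19.7 seconds, in the same ballpark as the "maybe 21 seconds" estimate on the call.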
A: Okay, so: post-mortem. How did this happen? How do we prevent it from happening again? I'd like to bring in Charles St. Louis to talk a little bit about the post-mortem that's being created. Charles is with the Ethereum Cat Herders, and he was around the entire time the incident was happening this week, so he took some good notes, and the Cat Herders are working on a post-mortem along with a few other people. Hudson?
A: I don't want it to be a surprise or a secret — no, yeah, we should explain. I kind of mentioned this like three meetings ago, but we have a group set up now of about maybe twelve people total that go around and help take notes, take surveys for the core devs, and eventually do some project-management tasks around the ecosystem. How else would you describe that? You kind of penned the blog post, so you probably have better wording, yeah.
B: I mean, it's an initiative, basically, to improve project management across the ecosystem and improve, like, inter-team communication. And the only thing I would add to what you said, Hudson, is that these are all people with many years of real-world project-management experience, which I think is something that we as a community have been somewhat lacking — and they're volunteering. So yeah, I think we'll see exciting things come out of the initiative.
R: We felt confident that we could get every stakeholder notified in time, and then we made the decision — it was mostly over the call, if I recall. Then, lastly, it was: how do we communicate the decisions to everyone? So that being comms — blogs, social, that kind of emergency comms. And then we're at the process where we're deciding to remove the EIP on this call, and then the last one would be rescheduling the hard fork. I'm going to paste the pad in there, if you guys wanted to look at it or comment on it, yeah.
A: Because it's open-edit, don't paste it to YouTube or anywhere else, everybody, because otherwise it'll get vandalized. Okay, so yeah — and if you're interested in contributing to that, Charles is going to paste the link right now. If you're a core dev or someone who was involved in this who wants to help with the post-mortem, feel free to reach out to Charles. Charles, can you put your email in the chat as well? — Yeah, sure. — Thank you so much, great. So that's what that is.
M: I don't have a comment, although I wanted to bring up something that we discussed after Byzantium. Probably a lot of you remember that we had a similar mess-up during Byzantium, where we had to do many, many last-minute hotfix releases — basically, we had the fuzzer, which found a lot of bugs, and the problem there was that we apparently released two or three releases, and we also did one release days or hours before the hard fork. So it was really crazy, and back then we talked about this.
M: Maybe it would be nice to have an on-chain oracle to pull the plug on a hard fork. And I guess this is not really something that people like to talk about, because it's obviously a pain point with regard to centralization. That's why I myself never really brought it up — it doesn't really feel right — but it came up again on Reddit, so I thought it's worthwhile to mention. It's not really relevant for the upcoming hard fork, but maybe for the one in October.
M: So it might be worthwhile to either say that yes, we want it, and this is how we can achieve it without people freaking out; or to say that no, we don't want it, and these are the reasons. So that at least next time something goes bad and people ask us why it's not there, we can point them to some rationale, I think.
M: That would be even messier, because then you have it per-client. In practice, whether clients maintain their own oracles or separate ones — that's not really a difference from a centralization perspective, because it's still one team deciding. So the original idea was that you have a contract where every client has, say, a vote — if we have three to five mainnet clients, everyone can vote, and the majority vote can actually postpone the fork.
N: It's important, with a mechanism like this, to state the intent behind it — which would be to make the decision whether or not to go along with a hard fork a conscious decision, made in the knowledge of, you know, a security flaw or exploit like the one that just happened. So if it was more of an automated switch, some people might find it more acceptable.
M: So the problem is that, for example, Geth also had upgrade oracles, where we bumped some numbers in a smart contract and then Geth started logging messages to the console that there's a new version or something. The issue is that, realistically, not many people look at the logs — it's just a boring set of logs; you don't get up every morning and check whether your node logged something interesting. And I guess most people run these Ethereum nodes in headless mode on some server.
M: No, I agree that that would be the best scenario — for example, if the client would signal to the user: hey, there's this decision, do you want to opt out? That would be a really nice thing to do. I just don't really see it as viable — for example, if Infura has 45 nodes, then how do those 45 nodes signal that, hey, I would like to opt out? So from that perspective it's a bit messy. But yeah, anyway, I don't want to hijack this discussion.
B: Vitalik suggested blocks 7.25 or 7.3 million. 7.25 million looks like it would happen on February 19, and 7.3 million looks like March 2nd. So if we want six weeks from now — today is January 18, so we want the end of February — that puts us around 7.3 million, which is what Vitalik said. So that's—
A: Sounds good. The next item is ProgPoW, and we actually have — I think we have IfDefElse in here, so that's really awesome. Before we go into whether or not we are doing a ProgPoW hard fork — we tentatively decided that... was it last meeting? Last meeting feels like forever ago. I think we decided at the last meeting, but since then there's been some community feedback about it.
A: So I wanted to make sure that IfDefElse were able to respond to that community feedback, if they had any updates or commentary, and then also to just re-discuss ProgPoW within the group. So welcome, IfDefElse, and feel free to start out with any commentary or updates you have.
S: One interesting development that came up: we've discovered what appears to be an AMD compiler bug. So, every once in a while, as we generate these random programs, the AMD compiler just completely mis-compiles one, and so AMD hardware will give bogus answers for an entire period; the next period, when you get a new random program, it compiles correctly. We're actively engaged — both our team and also some of the ethminer developers — with some AMD engineers, trying to root-cause this and see how we can mitigate the issue.
S: Some other feedback that we've gotten is that there are a number of parameters that tune how much compute and how much memory it uses, and the tunings that we've set turned out to be a little bit too harsh — a little bit too compute-heavy — for some AMD hardware. So we have a recommendation to tune it down a little bit: we remove about 10% of the compute workload, which helps on some AMD hardware and has no real effect on the others.
A: I think the biggest community feedback that I've been seeing lately is that this decision was potentially rushed a little bit — not enough time given for discussion — but the even bigger argument than that was that AMD's video cards are affected. So I'm glad that you all are looking into that and fixing it with AMD — yeah, good work there. But does anyone have any proposals to change the decision?
I: But then there is the other discussion, because this claimed feature — ASIC resistance — relies on the assumption that we don't want ASICs on the network, and that a level of ASIC resistance is something worth striving for. I personally kind of assumed naturally that that's what we should aim for, because it was one of the early goals with Ethash, but it appears that this is not, well...
O: The status quo with GPU mining, versus those who are making or have made ASICs — how that motivates the conversation to take non-productive angles, if you will — I think that's just the exact type of negativity that we're trying to avoid, eventually. I think the problem that we would like to try to help with is to minimize the economic incentive alignment of particular hardware holders, or of any particular hardware ecosystem, to the development of the network as a whole.
O: I think it's super important to preserve the independence and the ability of Ethereum core developers to continue making progress in technology and enhancements to the network, and it's very unfortunate if the network hash rate and the network security model become aligned to a particular type of system or a particular specific set of owners or miners, or something like that. Now, that said, there are even problems with the status quo, right? I think it's been correctly remarked that AMD and NVIDIA — there are not enough independent brands, or—
O: Taking that same line of reasoning, I think the existing status quo is also already the least evil, if you will, because it is the most distributed hardware compared to any other ASIC that's available, or even FPGAs, or what have you — in terms of the economies of scale that people have pointed out in the ASIC economy, and in terms of distribution: the economy of scale being aligned to a wide and massive distribution base and very good availability, in terms of fairness and availability.
O: Fairness of distribution, fairness of market price — the status quo already has the least evil of all the evils, and so that was kind of the starting point for our development and our rationale in working on this. Beyond that, I don't think the goal is to necessarily try to make one particular hardware or another win, because I don't think any particular hardware maker or owner should necessarily win in the end.
H: The point is that I'm not aligned with either the pro-ASIC or the anti-ASIC side, as some people might have suggested because of my activity in discussing it. My only point of view is that I have taken on the task of developing and designing one of the parts of Ethereum.
H: Otherwise, I'm not really interested in arguing with miners or with ASIC manufacturers, or whoever that may be. And I also wanted to point something out: although some people say, you know, "if you don't like the hard fork, you just fork off and then you do your own fork", I would say that this is not the correct advice, because it actually depends on the entities that fund the further development. Let's say that whoever does the fork, you know—
H: The Ethereum Foundation, for example, possesses ether on both sides of the fork, and whatever they choose to do with that actually determines the chances of either of the forks winning. So that's why I think people keep appealing to this forum for a particular decision: because everybody knows — it's not formalized, but people informally know — this is where we're actually making these decisions. So that's it for me.
O: Just an observation: I think more forks is not just hard for developers — it causes discussion and it causes churn anyway. So, to the extent that there's a much earlier thread saying that if proof of stake could just be implemented in the short term, we should go directly to that — I wholly agree. I mean, if, basically, progressive proof of work is only going to be in effect for a month or two before everything else is ready—
O: —then there's kind of no point, and an extra fork is not a good thing. That said, if the development team — the folks meeting here — believe that there is a longer, more arbitrary amount of time needed to get proof of stake working correctly, then I think one of the mechanisms that is built into progressive proof of work is, in effect, basically a bunch of microscopic forks that are not hard forks, right?
O: That is what people do to do ASIC-resistant forks — I mean, that is the mechanism that underlies the core of progressive proof of work. So it eliminates the need for at least algorithmic tweaking of the mining algorithm: it maintains a base, and then you're just doing a bunch of continuous forks all the time, in a predictable and well-understood manner.
A: And actually, with regard to that: the next thing, if we have time, was probably going to be the PoS finality gadget on the PoW chain. Danny or Vitalik, do you have any words on how fast things are coming along — not necessarily like "oh, it's going to happen on this date", but anything like that?
P: I think what's relevant here is that we're designing the protocols so that the beacon chain will be usable as a finality gadget for the proof-of-work chain, if people want that — so that would basically give the same security properties as the original, kind of pre-beacon-chain hybrid Casper.
P: ...if it was going to go in. And that's something that'll be out — not focusing on timelines — before sharding and well before state execution. So that would basically mean that, if that happens, and if everyone is listening to the proof-of-stake system for finality, and if enough people are participating that it's actually secure, then a 51% attack on the proof of work will basically be able to censor, but it will not be able to revert anything finalized.
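What "can censor but cannot revert anything finalized" means for the fork choice can be sketched as follows — a hedged illustration, not the actual beacon-chain or client fork-choice code: any candidate head that does not descend from the latest finalized block is simply ignored, no matter how much work is behind it.

```python
# Toy fork choice honoring a finality gadget. The data model (a dict
# mapping block -> parent, and a caller-supplied weight function standing
# in for total difficulty) is illustrative only.

def is_ancestor(chain, ancestor, block):
    """Walk parent links back from `block`, looking for `ancestor`."""
    while block is not None:
        if block == ancestor:
            return True
        block = chain.get(block)
    return False

def choose_head(chain, candidates, finalized, weight):
    """Heaviest candidate that still builds on the finalized checkpoint."""
    valid = [b for b in candidates if is_ancestor(chain, finalized, b)]
    return max(valid, key=weight, default=None)
```

In this model, an attacker's heavier chain that branches off before the finalized block is never selected — which is exactly the "cannot revert" half of the property, while censorship within valid descendants is untouched.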
P: Basically, the only thing that we can really do is — well, if we know the attacker has ASICs, we can change the proof-of-work algorithm to cancel the ASICs. But if it's a 51% attack that's run on basically GPU hardware, then we would really have no choice but to scramble as quickly as possible to migrate to some kind of primitive proof of stake.
H: There's another avenue, actually. So I've been thinking about and researching ways to avoid this kind of situation regardless of what kind of miners we have — GPU or ASIC. And I thought about — there was a proposal on the BTC subreddit a while ago, from a guy who runs a mining farm in the US; he essentially proposes to modify the fork-choice—
H: —rule — in that case for Bitcoin Cash, but in our case it could be in Ethereum 1.0 — so that in the long term it converges to the same fork-choice rule, but, you know, in the short term it could favor the blocks which are not censoring, or the blocks which are not trying to revert. So essentially, even if the attacker has ASICs, they would still be beatable.
P: At least, the only version of that sort of thing that I would find secure is probably something based on combining it with the 99% fault-tolerant consensus approach, which basically requires all the nodes in the network to be online — with proof of work, in some way, which could be possible — but properly researching and inspecting it would take a huge amount of time.
O: Ethereum currently is resistant by virtue of it being the biggest coin — the biggest work consumer — on the GPU "type of ASIC", right? In any case, with any sort of hardware-specific algorithm — or, more correctly said, an algorithm that can be made optimized for a specific type of hardware — only the biggest work consumer, the one that consumes the most of that hardware, is ever protected. So by changing the algorithm to some other algorithm—
O: —whatever hardware affinity it has, that algorithm selection would only be protected from a 51% attack if the algorithm selected was the biggest work consumer on that new type of hardware. And if I forked to a new algorithm that requires new hardware to be made — well, first:
O: FPGAs — so it would still be attackable: if it was originally attackable on programmable hardware, it would still be attackable. And if it was forking to some sort of optimized-ASIC algorithm, then you immediately run into a distribution problem, where the first to produce gains economies of scale and gains leverage, and you also run into limited ability to scale. Scaling hardware is always hard, right? It's not like software, which you can send all around the world, where any piece of hardware that can run that software immediately gets it.
M: If you remember, we would switch over to some ASIC-friendly target, and then, for example, an elegant solution would be to announce that one year from now, or two years from now, we're going to use this ASIC-friendly algorithm — and that kind of gives everybody ample time to iron all these problems out. Yeah.
O: So, first of all, there is economy of scale, and then the same person referred to manipulation in terms of partnerships in hardware manufacturing in China — and both of these are very, very real. Economies of scale, the obvious one first: the first to market has the advantage and can leverage additional profits from the initial sale; whoever is most efficient basically has a pricing advantage and a manufacturing advantage from whatever initial partnerships they've started — a first-mover advantage.
O: I mean, everyone should be familiar with that. And in terms of production in the Asian ecosystem: once you've established a money-making path, these relationships become very well cemented. So, with equal technology — even if competitors have an equally efficient ASIC — the natural evolution of the production economy is that you have one dominant producer, a second competitor, and everyone else being terribly far behind, in basically any sort of stable production ecosystem. I mean, look at beer; look at paper.
O: So that's why I'm pushing more for the status quo: you have an ecosystem that you already know is distributed — it's the enemy you know — and, given that it's already stabilized, it has kind of traded off all of the benefits, or, sorry, dealt with all the damages and difficulties of a hardware ecosystem and ameliorated them, because those incentives aren't aligned to trying to manipulate the Ethereum chain — they're already aligned to winning a different market.
A: Cool, thanks for your comment there. So, something that I just want to bring up before we sign off: Alexey mentioned earlier that this isn't the right forum — or that this is a bad forum — for a decision on ProgPoW, and I actually agree with that. I don't know of a better forum, but that is something that the Cat Herders said they want to try to look at and evaluate, to see if there's a better forum or decision-making process for that.
A: If anyone else has ideas, feel free to shoot them into the AllCoreDevs chat, because that'll be very helpful. And something else that was brought up was that the people who aren't commenting on the call about the ProgPoW decision — or tentative decision — it's because they haven't formed an opinion yet, or it's kind of out of their wheelhouse as far as this hardware stuff goes. So I think that's kind of an important thing to keep in mind. I would say that we will need to continue this discussion in the future.
A: I don't know if it'll fit into the next core devs call or not — we'll try — but I do want to thank IfDefElse for coming on, providing their perspective, and answering questions and things like that. Do you have any final comments? Just to wrap things up — we're a little bit over time, but if anyone had anything they really wanted to say.
B
Yeah
I'm
proposing
seven
to
eight
million
the
simulation
changes
slightly
depending
on
like
two
variables,
which
are
the
network
hash
power
right
which
could
change
between
now
and
then,
as
well
as
the
current
block
time.
But
we
did
some
sensitivity,
analysis
and
it
looks
like
7.28
million
gets
us
pretty
close
to
the
target
and
and
as
someone
else
proposed,
I
think
we
can
maybe
go
with
that.
If
people
are
okay
with
it
for
now
as
a
tentative
number
and
then,
once
all
the
clients
are
updated,
reevaluate
in
like
two
weeks
or
something
but.