From YouTube: FIXED - Eth2.0 Implementers Call #5 [10/11/2018]
Description
Note, we had audio troubles with the stream. This is a local recording that started about 10 minutes into the call.
The failed call is here: https://www.youtube.com/watch?v=WbXa9MFttaw
agenda: https://github.com/ethereum/eth2.0-pm/issues/11
C
Yes, I mean benchmarks. For me, like, that in Java takes 0.2 milliseconds, while in Python it takes ten milliseconds. Yeah, that sounds about right.
B
Sure
all
right
cool
yeah
look
whose
neck
harmony.
G
So we're building our Shasper implementation using the Substrate framework. So far we implemented the state transition, and it conforms to the Python reference implementation as it was at least two weeks ago. We also have some basic skeleton for the networking, the collection pool, etc. So far it looks good for us. I mean, Substrate is meant to be a general blockchain framework, and we indeed get some benefits using it: we don't need to rewrite our block import logic or the backend storage.
G
So
that's
why
it's
slightly
faster
and
for
next
I
think
we
might
be
focusing
on
implementing
some
of
the
features
in
subjects
that
might
be
potential
blocker
for
for
Jasper.
So
there
are
basically
two
things
we
found.
The
first
is,
of
course,
blackboard
consensus.
We
haven't
browsed.
The
flog
twice
group
are
implemented
according
to
the
shots
per
spec,
and
second
is
support
of
beautiful
start
routes.
So
we
don't.
We
don't
need
that
yet
no,
but
we
will
need
that
once
we
get
the
validator,
the
validator
try
or
something
like
that.
H
We've also drafted the simple serialize spec, and built some YAML tests for the shuffling and splitting, though we haven't actually released those yet. We've also been looking at doing the networking side of our client, so that's looking at gossipsub in Rust, and we've kind of started going down a path of implementing gossipsub in Rust; that's going to be our next kind of little venture. Yeah, that's about it from us. Cool.
I
Yes,
we
did
our
little
hope
of
concept
ammo
release
and
which
rebus
will
they
allow
a
single
validator
and
like
a
network
simulator
to
advance
a
beacon
chain
based
on
C
transitions
and
we
relaxed
a
few
of
the
constraints
and
some
parameters
to
allow
us
to
happen.
So
that
was
that
was
a
big
milestone
for
us.
I
Another
thing
that
we're
working
on
right
now
is
we're
working
on
getting
DLS
integrated,
so
be
doing
all
the
signature
aggregation
stuff
we're
dealing
with
some
issues
with
finding
a
good
go
library
that
has
a
permission
license
and
also
you
know,
has
everything
we
need
for
1203
81.
Aside
from
that,
we're
also
implementing
the
fork
choice.
I
Rule
at
the
moment,
where
we're
using
basically
the
last
finalized
lot
left
justified
slot
and
current
block
slot,
as
as
a
way
as
weighting
factors
and
a
scoring
rule
in
the
meantime,
what
we
wait
for
settling
on
the
immediate
message,
ghosts
or
like
LMD,
so
just
wondering
from
the
research
team.
Are
you
guys
leaving
worn
out
away
from
immediate
message,
codes.
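A minimal sketch of the kind of interim scoring rule described here: candidate heads are compared by last finalized slot, then last justified slot, then the block's own slot. The field names and the lexicographic ordering are illustrative assumptions, not taken from any client or spec.

```python
from dataclasses import dataclass

@dataclass
class Block:
    slot: int
    last_finalized_slot: int
    last_justified_slot: int

def score(block: Block) -> tuple:
    # Prefer deeper finalization, then deeper justification,
    # then the highest slot as a tie-breaker.
    return (block.last_finalized_slot,
            block.last_justified_slot,
            block.slot)

def choose_head(candidates: list[Block]) -> Block:
    return max(candidates, key=score)
```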
A
Still thinking; need to think about this more, but definitely looking at latest message. You know, I also made an implementation of latest-message GHOST for the beacon node in the clock-disparity folder of the research repo, if you all just want to look at that. Again, it's surprisingly simpler than immediate-message GHOST in certain ways.
J
Yeah, so in terms of updates, we were more focused on low-level business than high-level achievements in the past two weeks. Still, we have implemented the sparse Merkle trees, and benchmarks should be coming, I hope, in the next two weeks. It was done by someone outside the team through bounties, so the timeline is not decided yet.
J
Also, we hardened simple serialize, or SSZ. We realized a problem with regard to alignment and made the spec make a proposal about padding, and hopefully, if we get some kind of test format decided today, we can start working on a test generator so that we can test all implementations.
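A toy illustration of what an alignment-padding rule means for a serializer; the 4-byte boundary and zero-fill below are assumptions for illustration only, not the actual SSZ proposal being discussed.

```python
def serialize_with_padding(fields: list[bytes], align: int = 4) -> bytes:
    """Concatenate fields, zero-padding so each next field starts
    on an `align`-byte boundary."""
    out = b""
    for field in fields:
        out += field
        if len(out) % align:
            out += b"\x00" * (align - len(out) % align)
    return out

# A 1-byte field and a 2-byte field each get padded out to 4 bytes.
assert serialize_with_padding([b"\x01", b"\x02\x03"]) == \
    b"\x01\x00\x00\x00\x02\x03\x00\x00"
```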
A
Right, regarding the sparse Merkle trees: I don't know if people saw, but I made a sample implementation of how to optimize them, basically by layering Patricia trees on top of them, so you get the same level of database efficiency. In case people don't know, that might be something that's useful to other people.

[unidentified]
Vitalik, where can I see it? Oh, I see it.
D
Hey, sorry, my audio was messed up for a second. What was the question?
A
On the spec side, one of the kind of bigger items that's probably worth dedicating more time to specifically is the possibility of replacing the main hash algorithm with some kind of Merkle hashing instead. I made an issue about this, I think it's issue number fifty-four, so it would be good to try to get more feedback or comments about it.
A
Basically,
the
idea
is
that
instant,
that
UUM
and
a
kind
of
Merkel
route,
but
basically,
instead
of
hashing
everything
you
kind
of
hash
to
the
objects
in
a
morgue
Altria
and
the
Merkle
tree
hashing-
is
sort
of
done
along
the
lines
of
these
objects.
Syntax
tree
itself,
because
that
makes
things
simpler
in
a
bunch
of
ways.
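A minimal sketch of hashing "along the syntax tree" of an object: leaves are hashed individually, and containers merkleize the roots of their children, so the hash tree mirrors the object structure. This illustrates the idea only; the concrete scheme is the one proposed in issue 54, not this code.

```python
from hashlib import sha256

def hash_(x: bytes) -> bytes:
    return sha256(x).digest()

def tree_root(value) -> bytes:
    if isinstance(value, bytes):
        return hash_(value)
    if isinstance(value, int):
        return hash_(value.to_bytes(32, "little"))
    if isinstance(value, (list, tuple)):
        # Hash children first, then merkleize the resulting roots pairwise.
        roots = [tree_root(v) for v in value]
        if not roots:
            return hash_(b"")
        while len(roots) > 1:
            if len(roots) % 2:
                roots.append(b"\x00" * 32)
            roots = [hash_(roots[i] + roots[i + 1])
                     for i in range(0, len(roots), 2)]
        return roots[0]
    if isinstance(value, dict):  # container: hash fields in a fixed order
        return tree_root([value[k] for k in sorted(value)])
    raise TypeError(f"unsupported type: {type(value)}")
```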
L
On the beacon chain there was a nice little improvement that was found. Basically, it's about handling RANDAO orphaned reveals. An orphaned reveal is when someone reveals their RANDAO commitment, but for some reason or another it doesn't go on chain. You know, it could be latency or it could be active censorship, and the problem with orphaned reveals is that when the revealer is invited to reveal the next time, everyone already knows what they're going to reveal.
L
So
it's
as
if
they
they
had
already
it's
as
if
they
had
skipped
their
last
lot.
So
the
simple
solution
is
just
to
count
the
number
of
times
that
a
revealer
was
invited
to
reveal,
but
the
no
block
proposal
made
it
into
the
canonical
chain
and
then,
when
the
revealer
does
eventually
reveal,
then
they
reveal
n
plus
one
layers
deep.
Where
n
is
the
number
of
times
that
they
didn't
reveal
on
chain.
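A small sketch of the hash-onion mechanics being described, with hypothetical helper names: the commitment is the top of a hash chain, and a reveal that follows n orphaned invitations must peel n + 1 layers at once, so the already-public orphaned layers cannot be reused.

```python
from hashlib import sha256

def onion(seed: bytes, layers: int) -> list[bytes]:
    # Build the hash chain; chain[-1] is the public commitment.
    chain = [seed]
    for _ in range(layers):
        chain.append(sha256(chain[-1]).digest())
    return chain

def verify_reveal(commitment: bytes, reveal: bytes, depth: int) -> bool:
    # depth = n + 1, where n is the count of invitations that produced
    # no canonical block from this revealer.
    for _ in range(depth):
        reveal = sha256(reveal).digest()
    return reveal == commitment

chain = onion(b"secret", 100)
commitment = chain[-1]
# Two orphaned invitations (n = 2): the next reveal is 3 layers deep.
assert verify_reveal(commitment, chain[-4], depth=3)
```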
L
Another nice piece of progress for the VDF is that Filecoin has confirmed that they are collaborating with us on a 50/50 basis, you know, on the financial side for the various studies. So we have three studies that we're looking to do, and we're starting with an analog performance study: basically seeing how much performance can be squeezed out by designing custom cells at the transistor level.
L
You know, there would be a large bounty attached to a competition, but one of the complications is which cell library we use for benchmarking all the various circuits. Generally, the cell libraries have a lot of IP protections, and the vendors are not very keen to work with open-source projects.
L
Is
that
it's
a
FinFET
library
and
we
want
to
to
design
a
a
FinFET
ASIC
to
have
high
performance
in
terms
of
the
spec
I
guess
I
will
gradually
be
spending
more
time
on
this
back
and
just
as
a
heads
up,
I
think
the
pace
of
change
to
this
to
the
spec
will
will
continue.
It
will
maybe
even
accelerate
in
the
near
future.
Maybe
all
the
way
through
the
end
of
2018.
L
There are also some not-so-insignificant changes. For example, what was talked about, you know, where we merge the crystallized state and the active state: that's a relatively large change. One of the things that I've started doing is adding a to-do list in the spec itself. I mean, the to-do list is incomplete, but it gives you an idea of the things to do. I mean, you know, outside of the content of the spec, there's just the presentation and the readability.
M
Sorry about that, I was just wondering: you said there was some kind of increase in terms of the time it takes to do some squaring operation. So does that mean that you're working... was this starting with a specific [design] in mind, and was there any kind of, how to say, theoretical advance, or is it just an optimization that happened?
L
Yeah, so I don't have lots of visibility into how the Chinese team improved the latency; I imagine it's all optimizations. So, you know, one of the things that these hardware multipliers have is these large reduction trees, and you can try and be clever in the way you organize the cells into what I call compressor modules, and then how you arrange your compressor modules to reduce the critical path of the circuit. I imagine they did some optimizations for that, but I don't have that much visibility.
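A toy software model of the 3:2 compressor idea mentioned here: three addends are reduced to two (a sum word and a carry word) with no carry propagation, which is why such cells can be arranged into shallow reduction trees. Real designs are fixed-width hardware with carefully placed cells; this only models the arithmetic.

```python
def compress_3_to_2(a: int, b: int, c: int) -> tuple[int, int]:
    s = a ^ b ^ c                                # bitwise sum, no carries
    carry = ((a & b) | (a & c) | (b & c)) << 1   # carries, shifted left
    return s, carry

def reduce_addends(addends: list[int]) -> int:
    # Repeatedly compress three addends into two until only two remain,
    # then do one final carry-propagating add.
    while len(addends) > 2:
        a, b, c = addends[:3]
        addends = list(compress_3_to_2(a, b, c)) + addends[3:]
    return sum(addends)

assert reduce_addends([13, 7, 22, 5, 9]) == 56
```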
N
Yeah, and regarding the libp2p stuff: since we merged the Python ping-pong logic in py-evm and the POC repo and things, the Kademlia DHT and other functionalities are done in the p2p daemon, so we're more actively testing the libp2p daemon and working on the Python bindings for it, and we currently have deployment scripting in Ansible and Terraform on the way. Yeah, and that's our update.
D
Cool. Raul from libp2p couldn't be here, but he has a handful of updates in the agenda that he put there. Real quick: on the p2p daemon slash binding interface spec, they've approved and merged an initial spec for the p2p daemon. They've developed a Go binding implementation that adheres to the above spec; I think it's in review, and he'd love any feedback.
D
We use the libp2p IPFS bootstrap peers by default, but you can pass different settings and command-line options. Spec review and update: some of us, like the libp2p folks, are huddling up this week to review specs and bring them up to speed with implementations and the corresponding p2p daemon work. And Mike and Raul will be attending the meet-up, October 29th in Prague. If any of that was relevant to you, check it out on the agenda and respond to Raul there.
D
Right, yeah, I think it's probably reasonable to use it for SSZ and maybe, like, the shuffling algorithm and some of these little things, not, like, big chain tests and stuff. So in that sense I think I'm comfortable moving forward and then continuing the conversation and hopefully making some firm decisions in person in Prague.
K
Okay, I kind of changed my intention slightly, but I will tell you what I think now, because I've done a lot of research since the last meeting. I have actually quickly reviewed what Vitalik posted. I actually was aware of this optimization before; I think it was probably presented in a talk in June at some meet-up, or at least I thought so.
K
So there are three things, and I am starting to believe that we are very far from the optimal set of trade-offs in terms of how we've done things so far, and some of it has to do with the choice of the databases that we use and stuff like this. But basically, my current idea: I'm doing a proof of concept, which I'm hoping to finish in about three weeks, so maybe by the next meeting.
K
Then I can give you more data, which would be more convincing than words. But the main stages of this proof of concept: the idea is to fuse together, like, some sort of, it will not be a B-tree page exactly, but basically a key-value database with a temporal element, so that you can store the history, something that is kind of lacking in the current databases.
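A minimal in-memory sketch of a key-value store "with a temporal element": writes are tagged with a block number, and reads can target any historical block. The class and method names are hypothetical; the actual proof of concept targets an on-disk, B-tree-style layout.

```python
import bisect

class TemporalKV:
    def __init__(self):
        self.blocks = {}  # key -> sorted list of block numbers
        self.values = {}  # key -> values parallel to blocks

    def put(self, key, value, block_number):
        # Assumes writes arrive in increasing block order.
        self.blocks.setdefault(key, []).append(block_number)
        self.values.setdefault(key, []).append(value)

    def get(self, key, block_number):
        # Latest value written at or before block_number.
        blocks = self.blocks.get(key, [])
        i = bisect.bisect_right(blocks, block_number)
        return self.values[key][i - 1] if i else None

db = TemporalKV()
db.put("balance", 100, block_number=1)
db.put("balance", 250, block_number=5)
assert db.get("balance", 3) == 100   # historical read
assert db.get("balance", 9) == 250   # latest read
```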
K
Yes, in terms of efficiency, then with native support for some sort of tree hashing algorithms, and also with the ease of pruning. So for that, there are, like, four stages of this proof of concept, and I'm currently on stage two. The first stage is to implement weight-balanced trees with the tightest possible balance parameters, and to make them non-recursive so that there's efficient bulk update. I've done this already, and there are a couple of lessons I've learned, but I'm not going to go into them right now.
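A compact recursive sketch of a weight-balanced tree insert, for readers unfamiliar with the structure. The balance parameters below are the common delta = 3, gamma = 2 pair, not the "tightest possible" ones mentioned, and the speaker's implementation is non-recursive to allow efficient bulk updates.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.size = 1 + size(left) + size(right)

def size(n):
    return n.size if n else 0

DELTA, GAMMA = 3, 2  # illustrative balance parameters

def rotate_left(n):
    r = n.right
    return Node(r.key, Node(n.key, n.left, r.left), r.right)

def rotate_right(n):
    l = n.left
    return Node(l.key, l.left, Node(n.key, l.right, n.right))

def balance(n):
    if size(n.left) + size(n.right) <= 1:
        return n
    if size(n.right) > DELTA * size(n.left):        # right-heavy
        if size(n.right.left) >= GAMMA * size(n.right.right):
            n = Node(n.key, n.left, rotate_right(n.right))  # double
        return rotate_left(n)
    if size(n.left) > DELTA * size(n.right):        # left-heavy
        if size(n.left.right) >= GAMMA * size(n.left.left):
            n = Node(n.key, rotate_left(n.left), n.right)   # double
        return rotate_right(n)
    return n

def insert(n, key):
    if n is None:
        return Node(key)
    if key < n.key:
        return balance(Node(n.key, insert(n.left, key), n.right))
    if key > n.key:
        return balance(Node(n.key, n.left, insert(n.right, key)))
    return n
```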
K
But again, the lesson I learned from turbo-geth is that when you store the history in such a way that you record the updates relative to the past, to, like, one block in the past, for example, what I would call forward deltas, then it becomes really difficult to prune such a structure.
K
Stage three is basically just to get some numbers, and stage four, which is the most interesting for this kind of call, is, again similar to Vitalik's idea, the embedding of different tree hashing algorithms into the database. So, essentially, you can use Patricia trees or sparse Merkle trees, or maybe AVL trees, in the same place as the WBT, the weight-balanced trees that I'm using at the moment. And so you can try to record one hash per page to assist in computing hashes, which is again something the current databases are lacking: you can't do fast sync, or, like, light clients. So when I've completed this proof of concept, I will know whether this whole set of ideas works or not, and hopefully I could talk about it in a bit more detail then, but yeah.
D
Yeah, thank you. And to be clear, if you're happy with the numbers, what type of change would you be proposing?
K
Okay, so if I'm happy with the numbers: one of the outcomes of stage four, which is how you can essentially graft the different types of tree onto the weight-balanced tree, is that I would want to measure the overhead that you are going to get. So, essentially, the systems that are based natively on these weight-balanced trees will be the most efficient in their use of the storage, but the systems that use different hashing algorithms will pay some overhead, and I hopefully will be able to tell what kind of level of overhead it's going to be. And so, if the overhead is, let's say, reasonably large, then I would suggest considering using the WBT trees instead of the sparse Merkle trees.
K
Because
the
main
my
I
understand
all
these
optimizations
is,
you
can
do
an
SMT,
but
my
main
complaint
about
sparse,
more
poor
trees
with
all
this
elegance
and
everything
is
that,
in
order
to
maintain
the
Malon
balance
of
the
tree
in
adversarial
settings,
you
have
to
pre
hash
the
keys.
You
basically
have
to
randomize
the
key.
K
Otherwise, if you don't do that, an attacker basically will create, like, really long runs of sibling nodes, which will always have very long Merkle proofs. Very long, I mean, like, 256, not bits, but basically 256 items. So if your keys are balanced or randomized, then it's not a problem, but as we see in, like, the current Ethereum, we basically end up with double hashing or triple hashing of everything. For example, Solidity is hashing the keys of the mappings, and then you get these...
K
These
indices
on
storage
gets
hashed
again
before
they
insert
into
patricia
tree,
and
then
you
get
another
hash
thing
which
actually
happens
over
patricia
tree
so
there's
like,
but
these
particular
things
there's
a
triple
hashing
stuff.
I
would
like
to
have
less
hashing
so
for
again
for
the
performance
resistor,
but
again
now,
hopefully
have
numbers
later
I'm.
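The layering being complained about, sketched with sha256 standing in for keccak-256: a Solidity mapping key is hashed to a storage slot (hash one), the secure trie hashes that slot again to form its key (hash two), and computing the state root hashes the trie nodes along the path (hash three and up).

```python
from hashlib import sha256

def h(x: bytes) -> bytes:
    return sha256(x).digest()  # stand-in for keccak-256

mapping_slot = (0).to_bytes(32, "big")    # slot of the mapping itself
user_key = b"\x11" * 32                   # mapping key, e.g. an address

storage_slot = h(user_key + mapping_slot) # hash 1: Solidity mapping layout
trie_key = h(storage_slot)                # hash 2: secure-trie key
# hash 3 and up: every trie node along the path to trie_key is itself
# hashed when the state root is computed.
```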
K
The tree hashing is, basically, it's not only the performance. I understand it's not, like, the biggest performance hit, but at some point I had to optimize it away because it was coming out on top of my profiling. But one of the things is it creates inconvenience, because you have to keep the preimage database, and currently, let's say in an archive node, that preimage database is sizable.
D
The next thing on the agenda is just general 2.0 spec discussion. There are a lot of things changed and added, as we noted. Are there any questions or comments regarding any of it right now? Also note that, because we're using GitHub for this, there's a rich conversation going on via the issues and the PRs, and we would love more input there.
D
If
you
see
something,
that's
wrong
or
an
issue,
please
raise
an
issue
and
when
there's
PRS,
if
it's
something
that
you've
been
keeping
your
eye
on
or
you
understand
or
want
to
get
some
insight
and
feedback,
please
don't
hesitate
to
pop
in,
but
for
now
we
have
time
to
discuss
anything.
If
you
want
to.
E
One thing to ask about: the idea about tree hash functions that Vitalik has published. I would like to propose to store big structures, like the validator set, in a tree, but it seems to be a pretty obvious thing, and since nobody has mentioned it before, I wonder whether there are any difficulties with that, or just...
A
The reason why we did things this way, with indices, is because the validator set is potentially large, like, it could go up to a few hundred megabytes, and we wanted it to be maximally easy to just store the whole thing in RAM. So, basically, I would be worried that having any more complex structure, other than just a simple list, would be a huge inefficiency.
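Rough arithmetic behind "a few hundred megabytes": with an assumed record size (the bytes-per-validator figure below is an illustration, not from the spec), a few million validators lands in that range.

```python
VALIDATOR_RECORD_BYTES = 100   # assumed: pubkey, balance, status, etc.
n_validators = 4_000_000
print(n_validators * VALIDATOR_RECORD_BYTES / 1e6, "MB")  # -> 400.0 MB
```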
A
Definitely,
but
the
entire
thing
definitely
has
to
be
in
RAM,
because
literally
the
entire
validator
set
gets
updated
every
time.
There's
a
recalculation,
so
like
first
of
olders
like
there's,
basically
there's
not
that
much
benefit
to
a
data
structure
that
makes
it
easier
to
change
small
pieces
at
a
time
personally,
because
we're
teaching
everything
at
once.
E
Not sure, actually, because, yep, this is like, we will have to dump the validator set from RAM to disk from time to time when it's changed, and then, when the app is starting, load it from disk to RAM, right? So this is what I am worried about a bit: okay, yep, it's going to be a huge structure that should be stored on disk in one piece, like maybe a gigabyte or something like that.
D
That said, you're probably also taking, you might be taking, snapshots of the state, maybe every cycle or so, and storing them in your database, so that at least since the last finalized state... you could probably prune beyond that. But you're storing the current one in memory, while you probably have references to snapshots of it in the database.
D
Right, you could receive two conflicting blocks that cause a state transition, one of which would be considered the head, but the other maybe would be a close tie, I mean, a close second, and in that case you would have two conflicting crystallized states, probably locally in your database, that you would only prune later, once you've finalized which direction the chain would go in.
D
...the validator set, because the validator set can only change every, I think, four cycles, and only 1/32 of it can change at a time. So your validator set is essentially the largest component of the crystallized state at any given point, so you do have some generational things there.
A
I think the problem with that is that it would basically mean that we change one out of every 64 of the validators every time, and the Merkle tree... basically, we would be doing something like six times more hashing than we're doing right now, because we would have to update a bunch of extra Merkle branches, whereas currently we just reconstruct the entire tree.
D
Cool. I apologize for the technical difficulties and for my relative absence for the first half of this meeting. Stay tuned in the spec repo, a lot going on there. Also, because I just finalized the location for our event, I will be sending out details regarding the event, that is, the schedule and the proposed working groups, shortly.
M
Yes, this might be an appropriate moment: a few sessions back, someone brought up a question about aggregation of signatures, and especially the question of whether including a given signature several times introduces any kind of security problem. And I would just like to know whether it's a question of taste, that one would rather just prefer to have an aggregate signature which represents any given signature only once at most, or if there's any, how to say, security reduction or whatever, any kind of security problem with including any given signature multiple times, I mean.
M
So it might increase the data necessary to untangle the aggregate signature, but it might make the process of creating the aggregate signature significantly easier. So I'm just wondering, like, is it a hard no, or should we be avoiding this, or is there maybe a good compromise to be found, I think.
L
Go ahead. I don't think there are any security implications. I mean, one possible compromise would be to replace the bit field with a two-bit field, where every validator has two bits and a signature could be included at most three times, and that would keep the complexity and performance low and high, respectively.
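A sketch of the two-bit-field compromise just floated: each participant gets a 2-bit counter, so a signature can be counted at most three times, and verification scales each public key by its counter. Group operations are modelled with integers here; a real implementation would add BLS curve points.

```python
def group_pubkey(pubkeys: list[int], counts: list[int]) -> int:
    assert all(0 <= c <= 3 for c in counts), "two-bit counters only"
    # Point addition is modelled as integer addition for illustration:
    # each key contributes `count` times to the group public key.
    return sum(pk * c for pk, c in zip(pubkeys, counts))

def pack_counts(counts: list[int]) -> bytes:
    # Four 2-bit counters per byte, least significant pair first.
    out = bytearray((len(counts) + 3) // 4)
    for i, c in enumerate(counts):
        out[i // 4] |= c << (2 * (i % 4))
    return bytes(out)

counts = [1, 0, 2, 3, 1]
assert pack_counts(counts) == bytes([0b11100001, 0b00000001])
```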
D
Allowing
to
have
for
creating
some
multiple
of
complexity
for
calculating
the
group
signature
of
the
group,
public
key
I,
don't
know.
I
I
would
prefer
to
see
a
reasonable
solution
that
doesn't
have
multiple
aggregates
multiple
representations
for
aggregate,
but
I
build
I
agree
that
there
might
be
some
compromise
there,
depending
on
the
aggregation
strategy
offline,
and
maybe
some
of
the
real-world
results
around
that
yeah.
A
Basically,
that,
like
this,
allowing
multiple,
multiple
inclusions
in
his
signature
would
and
of
substantially
increase
efficiency,
and
so
on.
You
know
or
really
increase
efficiency
in
some
concrete
way.