From YouTube: Consensus Layer Call #71 [2021/8/26]
A
Stream should be transitioned, and we are starting the — and I'm sorry, Ben — the consensus layer call number 71. Great. So we will focus, I think, a lot on Altair today as usual, then we'll do some client updates. We can do a quick merge discussion, although we had quite a merge discussion over the past hour on the engine API, then research updates and any closing remarks, spec discussion, etc.
A
v1.1.0-beta.3 is now out and the test vectors are being uploaded. Currently — I will try to keep up; I have to upload each of those files independently or GitHub dies, so I will do them one at a time during this call, and hopefully they'll be done by the end of the call.
A
Thank you for your patience on that, and a huge shout-out to Alex for greatly, greatly increasing test coverage. So hopefully some good stuff will come out of that. Cool. Well — so that was the beginning of Altair. We do have a new release; there are some other goodies in there with the merge and sharding, if you're following along on those. And Pyrmont upgraded one week ago, and we do have some coordinated testing going on on that. I believe today we should be turning finality back on.
A
It looks like some of y'all have begun to do that. We're back at 60; we've been non-finalizing for 702 epochs. Very cool. And there are a number of other things that we'd like to get in there — specifically: get a bunch of deposits, get a bunch of exits, make sure we have all the different slashing types covered, and, you know, do weird things to your heart's desire.
B
Right. So, for full transparency, we ran into an issue where Prysm could not sync during the non-finality period, but the issue was quickly fixed and patched, and then we got a new release out, so yeah. Personally, I'm really grateful for this type of test that allows us to catch issues like this.
A
Gotcha, cool — and I mentioned this elsewhere, but we are looking into making a more concerted effort to do scenario-type testing in real, large, but kind of controlled testnets. This is just a bit of a taste of that. Hopefully we can get some of that going over the next couple of months to harden Altair stuff and get ready for the merge.
A
Okay. Prater will launch in one week minus two hours. I believe there is a config up; it was merged. If you haven't taken a look, please do a final sanity check on that. If you are listening to this call, keep an eye peeled for Prater Altair releases on your client of choice, and upgrade if you are on that testnet as we approach that one week from today. Are there any comments or questions on Prater?
A
Pari, you're gonna work on upping the validators on Prater — can you give us a quick update on that?
C
Yeah, I've just reached out to all the client teams to figure out who's gonna be taking part in that, and I'm just creating a small doc to figure out who would have to turn on the validators on which date. I guess I'll do the deposits over the weekend or early next week, and you should hear more from me next week.
A
Got it. So, obviously, after Prater we do want to target a mainnet launch, so I think we have a handful of things in the works: this continued Pyrmont testing; seeing Prater go well — and also probably not well; I guess we could turn off finality if we want, and I think there's a marginal gain to doing that — but hitting it with operations and different things; and the test vectors that just came out, hoping those don't uncover anything crazy. And then I think we need to be eyeballing a mainnet launch.
A
I don't think that we need to set a date today. I think we should get through what happens in a week and then set a date, but —
D
The main thing is that we give people plenty of time to upgrade between when we announce it and get releases out, and when it actually happens. We will depend on the community for this one, right, whereas we've controlled them all before.
A
Absolutely, and we'll certainly do blog posts with all releases and things, like we do and have done on the proof-of-work chain. What do you think — any suggestions?
F
I know — I was gonna pose a different question, which is basically: we've been considering whether to make two releases before mainnet Altair. Either just one release — like a big one with a bunch of features and stuff, and then just a small one with, you know, an epoch update — or two releases. And I'm kind of curious what people think about that.
A
So I think Lighthouse has probably done a version of that, in that they just put 1.5 out officially last week, which I think was a big release, and then Altair, obviously, would just be something more minor with an epoch. But as we approach, my intuition would be that doing a release within seven days of another one, or something like that, might cause more confusion than it's worth. But I don't know.
D
I'm pretty keen to see each client have an actual release with the Prater config in it, rather than an RC or so on. How that ties into big features for you is kind of a side effect, but once you actually have Altair merged to the main branch, it's a full release that clients are upgrading to, because then it's ready. The one change you have to put in for mainnet is just a config change, which, you know —
D
Hopefully you can't screw that up. Whereas anything else — kind of merging and doing other, kind of more complex stuff — it's much easier to introduce other bugs and then effectively not have tested it.
D
So that's kind of my view on it. I'd lean towards whatever gets you there. But beyond that, users are pretty slow to upgrade unless you tell them this is going to completely break if you don't upgrade. So I think it's really just going to come down to when you put out the release that says, you know, this has got the mainnet fork in it, you have to upgrade — that's when a lot of users are going to actually pull the trigger and apply it.
D
— new clients out to cancel it, and it went very smoothly. So it is possible to do it fast, but —
A
Yeah — and I think we might not cover all this in Altair, but I think there's a desire to define a bit more clearly what the bugs and disaster scenarios and things like that are, especially leading into the merge — you know, rather than very subjectively being refined, defining a bit more clearly what our, like, halts and errors are.
H
For some reason, what I had just in my head is a month after we fork Prater — so, like, a month from next Thursday. That probably could happen quicker, I guess, if we wanted to, but it seems like we're pretty much there in terms of engineering; it's just kind of waiting for it now. And perhaps giving, like, a little bit of extra time for people to move over, and a little bit of extra time for us to run these testnets, is good.
H
I'm not sure — at least for us, it seems — I'm not sure we need a deadline to push us at the moment; we're pretty much there. That deadline was the testnets, so yeah.
A
I mean, I think that puts us at the last day of September, which I think is a pretty good target. And then we need to subtract probably two and a half weeks for mainnet releases, and two weeks for really getting that blog post out — which, looking at the calendar, I think all adds up in a pretty reasonable way, assuming that we don't run into anything unexpected.
A
Oh, I just meant, you know, if we're gonna do mainnet on September 30th — which would be exactly one month after the Prater upgrade — subtract two and a half weeks for the deadline for mainnet releases, with the blog post going out a couple of days from there. So, like, you know, September 13th would be when everyone needs to have their mainnet releases out, and then the blog post comes out the 14th or 15th, which gives, you know, slightly more than two weeks of lead time.
A
Okay, back to the schedule: let's do a quick round of client updates. I suspect it has a lot to do with Altair — no need to hit us too hard — but we can start with Nimbus.
G
Hi. As you said, a lot of Altair upgrades, especially performance improvements to make processing blocks between slots, and also epochs, way faster.
A
Great
grand.
I
Yes. So, we still work mainly on the separate-forks-running-on-separate-runtimes thing, and even though we've passed all the Altair tests, there is still a lot of work that needs to be done in order to connect everything into one piece.
I
Running in separate runtimes comes with new challenges, really. For example — just an example, to understand the complexity — when the Altair runtime starts to work, it needs to get the history from before the fork point in order to have fork choice working.
I
So, basically, the initial idea that we had — that after the fork we would not have any history of the previous fork — is not actually valid, as we need to have at least some history to make fork choice on the Altair runtime work. So we are solving a lot of these interesting issues.
E
Hey everyone, Lion here. So, finally, we have released our browser light client prototype — super excited; it had a very good reception, and we're looking forward to adding more transports next. We have also hacked an alternative representation of bytes in JavaScript, which has allowed us to reduce the size of the hashing cache by half, and it's been great for garbage collection performance.
A
Thank you. And on that — I think maybe it goes without saying, but on the execution engine side, I think people are expecting to be heavily in kind of prototyping mode over the next month, and so beginning to shift a bit of allocation on your side to implement the latest consensus-layer merge specs will help unblock the other side of the aisle. Cool, great. And Prysm?
B
Yep, hello everyone. So, the last two weeks we've been mostly reviewing Altair changes and merging those changes into the develop branch. And then, as you know, last Friday there was a minor incident with the Lido validators — I think they were proposing blocks fairly late, and due to that we found a few deficiencies with how we handle the attestation mempool.
B
So, first, we were not re-inserting orphaned attestations back into the main pool, and that is fixed. And we found another deficiency where we handled attestations when the block gets verified, instead of when the block becomes canonical.
B
So we'll fix that as well. And on top of that, we're just working on the eth2 APIs and gearing up for the v2 release. Yeah, that's it.
D
Yeah, hi — this is Adrian. So, we've got a release that should be coming out tomorrow; it's just going through the final kind of staging process at the moment. That will be 21.8.2, and it will have the Altair upgrade epoch baked in for Prater.
D
It'll also have some really nice optimizations, particularly around reducing garbage collection time — so lower CPU and memory usage. And we've done a bunch of research with Teku working against load-balanced beacon nodes — particularly in Infura, but anywhere you have load-balanced beacon nodes — and so there's a new option to disable producing early attestations, because it's quite common to get, from one node, a head event saying it has the block, and then you produce the attestation against a different node and it doesn't have it yet.
D
So you wind up with a bad head vote. So we've seen performance dramatically improved against load-balanced beacon nodes in Infura with that option — so that's in there. And it will also include a change for the remote signer API so that we can send Altair blocks through to things like Web3Signer, which is a BlockV2 type.
D
So we'll get the REST API specs for Web3Signer updated with that. But yeah, that should be good to go with Prater then, hopefully — and that's it from us.
H
Hello everyone, Paul here. So, this week we released version 1.5.0; it seems to be going well. There are a couple of reports of, like, a lowered attestation performance happening, but I'm not sure if that's just limited to Lighthouse or it's network-wide — I'm still trying to figure out exactly what's going on there. We have a 1.5.1 release scheduled for Monday that's going to include the Prater fork; so 1.5.0 means that we have all of our Altair stuff in master.
H
We've also started working on remote signer support for Web3Signer, which is what Adrian from Teku mentioned just before. So that means, I believe, that Lighthouse and Teku will both be able to support the same remote signer from their VC, which should be pretty cool.
A
Okay — and number three, merge discussion. If there's anything people want to talk about, by all means. I think we hit the engine API pretty hard this morning, so I would defer additional conversation on that to Discord and some follow-up conversations rather than doing it here, but are there any other merge-related items we'd like to discuss today?
K
Yeah, I have some, yeah. We decided to continue our engine API discussion during the next call of that sort — great; we've spent a bit of time on it, yeah. With regards to other stuff on the merge: there is the post by Dmitry, who has been evaluating the precision of the terminal total difficulty computation that we will use for the actual transition from proof of work to proof of stake. A bit of context:
K
This terminal total difficulty is going to be computed during the merge fork, and it targets seven days after the merge fork as when the actual transition to proof of stake is expected to happen. And this research is about evaluating the precision of this prediction on the historical data. The key takeaway from it is that we might want to use a more precise value for the seconds-per-proof-of-work-block parameter than the 14 seconds which is used currently.
K
It will make the prediction more accurate. You may take a look at the comment below this post for the comparison table — but yeah, it will, like, make it more accurate. But the precision is such that we should expect the merge within a 20-hour interval around the target time.
K
You know, the target time, which is seven days according to the current spec. This is because of the difficulty fluctuations, and because of the stochastic process that drives the block building of the proof-of-work chain — I don't think we can do anything better here. And, as I understand it, we had the London hard fork slightly after the predicted time — am I right?
K
Okay, okay — anyway, so there is a 20-hour interval, but we might narrow it and make it more accurate if we change this seconds-per-block parameter. So I would suggest adding a new parameter to the spec, then getting back to evaluating the average block time on the historical data as we get close to the merge, and settling on some value that will be adequate to the state of the network by that time.
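The prediction approach described here — extrapolate the chain's total difficulty forward using an assumed seconds-per-block value to pick a TTD that lands near a target date — can be sketched roughly as follows. This is an illustrative reading of the discussion, not the actual spec computation; all names and numbers are made up.

```python
# Illustrative sketch (not the actual spec code): estimate a terminal
# total difficulty (TTD) that should be reached roughly `target_seconds`
# in the future, by extrapolating with an assumed average block time.

def estimate_ttd(current_total_difficulty: int,
                 current_difficulty: int,
                 target_seconds: int,
                 seconds_per_block: float) -> int:
    # Expected number of proof-of-work blocks mined before the target time.
    expected_blocks = target_seconds / seconds_per_block
    # Assume difficulty stays roughly constant over the window; each block
    # adds about `current_difficulty` to the chain's total difficulty.
    return current_total_difficulty + int(expected_blocks * current_difficulty)

# Made-up numbers, targeting ~7 days ahead:
ttd_14s = estimate_ttd(10**22, 10**16, 7 * 24 * 3600, 14.0)
ttd_13s = estimate_ttd(10**22, 10**16, 7 * 24 * 3600, 13.3)
# A smaller seconds-per-block assumption yields a higher TTD for the same
# target date — this parameter is what the discussion proposes to refine.
assert ttd_13s > ttd_14s
```

The residual uncertainty (the ~20-hour window mentioned above) comes from difficulty drift and the randomness of block arrival, which no fixed parameter can remove.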
A
Got it. I trust one confounding factor might be if we are approaching an ice age, which may be the case.
A
Okay. It looks like there will be some continued refinements that come out of that engine API discussion over the next couple of weeks, so keep your eyes peeled. And, like I said, as Altair wraps up, getting merge prototypes that are in the direction of the current merge specs and the EIP that is up will help.
A
No — thanks for that. Okay, cool. Any research updates people would like to share today?
H
I've been asked to mention something in the configs about the temporarily set fork versions — how people would feel about nulling them. I don't know the background for this, so I can't make an argument for it, but I'm doing what I'm asked.
H
Sure — there's an issue on the consensus specs, 2569. It's about using null instead of the u64 max value — sorry, two to the 64 minus one. Yeah, I'm just trying to see if people would be against that.
A
I'd like to see more of the justification here. I think the justification, at least some of it, is better clarity. I think the alternative argument is that you don't have to have any exceptional logic, because you can just use your basic comparison operator and not really worry about what the value is. I don't feel strongly here, but does anyone else want to jump in, or shall we just move to the issue?
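The comparison-operator point can be illustrated with a tiny sketch. Here 2**64 - 1 stands in for the "not scheduled" placeholder (often called FAR_FUTURE_EPOCH in the consensus specs), contrasted with a null sentinel; the function names are purely illustrative.

```python
# Illustrative sketch of the trade-off discussed: an unscheduled fork
# epoch represented as 2**64 - 1 needs no special-casing in comparisons,
# while a null sentinel forces exceptional logic at every call site.

FAR_FUTURE_EPOCH = 2**64 - 1  # conventional "not scheduled" placeholder

def fork_active_with_max(current_epoch: int, fork_epoch: int) -> bool:
    # A plain comparison works whether or not the fork is scheduled:
    # an unscheduled fork's epoch is simply never reached.
    return current_epoch >= fork_epoch

def fork_active_with_null(current_epoch: int, fork_epoch) -> bool:
    # With None as the sentinel, every comparison needs an explicit check.
    if fork_epoch is None:
        return False
    return current_epoch >= fork_epoch

assert fork_active_with_max(100, FAR_FUTURE_EPOCH) is False
assert fork_active_with_null(100, None) is False
assert fork_active_with_max(74240, 74240) is True
```

The null form arguably reads more clearly in a config file; the max form keeps the processing code free of branches — which matches the two sides sketched in the discussion.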
A
Okay, Snowman is going off in the chat — the YouTube chat is having a conversation. Okay, cool. I think that we are good. Anything else?
L
I have a question concerning the peer ID. When the nodes update their client, it seems that the peer ID changes to a new one, and I would like to know why. Is there any particular reason why this is like this, or is it just an arbitrary decision?
A
Update as in change their client version, or just, like, cycle their node? And is this all clients?
L
We have been using the crawler to see how, you know, people adopt new versions and how this evolves. But the thing is, what we see in the figures is that when a new version comes up, we just see kind of an increase.
L
So we see the new version — a lot of nodes picking up the new version — but we just see a lot of new nodes, and we don't see a decrease in the other ones. And this is, I think, because they changed the ID, and then it's difficult to track — you know, we just see them as new nodes. We don't see, like, "oh, these nodes just changed from the previous version to another version."
F
And the other thing — the last thing about crawlers that I will say — is that when our peer table is full, like when we have all the connections that the user has configured it to have, we will no longer accept connections on the socket. So we won't allow the crawler to come in. So those are, like, common sources of why.
A
Yeah, I mean, I think it's very reasonable to keep that as an optional design — to be able to cycle or persist — and it's, unfortunately, probably on the crawler to figure out what is stale and what is not, because I think there are very valuable privacy considerations for wanting to be able to cycle and move things around.
A
All right — and do you all have no target and a max, so that you can accept inbound connections and then kind of, like, prune them back down? There's just a strict max?
F
Well, we have a strict max for the simple reason that every time we accept a connection, we have to negotiate a key with those new connections, so that is actually one way to DoS a client: just keep opening connections. And we kind of want to save those resources, so when the connection table is full, it's full. I mean, we're not going to be talking to these people anyway.
F
But again, like, the only one that benefits is — and, again, you're, like, leaking —
A
— a little bit, yeah. I mean, you can imagine an extreme where you have, like, a lot of network rigidity, and it's difficult to join if everyone was following this strategy — as opposed to, say, target 50, max 55, allowing you to grow up to 55 and then, like, having some pruning strategy, which could still potentially handle the DoS as long as your pruning strategy wasn't really aggressive and you could always open new connections. But anyway, I don't think there's a strictly correct behavior here.
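The target/max scheme described here (e.g. target 50, max 55) can be sketched as: accept inbound peers while under the hard max, then periodically prune the lowest-scoring peers back down to the target so newcomers can keep joining. This is a hypothetical sketch of the policy under discussion, not any client's actual implementation; all names are made up.

```python
# Hypothetical sketch of a target/max peer-management policy: accept
# inbound connections up to `max_peers`, then periodically prune the
# lowest-scored peers back down to `target_peers`.

class PeerTable:
    def __init__(self, target_peers: int = 50, max_peers: int = 55):
        self.target = target_peers
        self.max = max_peers
        self.scores = {}  # peer_id -> score

    def try_accept(self, peer_id: str) -> bool:
        # Refuse outright only once we're at the hard max, which bounds
        # the key-negotiation work an attacker can force on us.
        if len(self.scores) >= self.max:
            return False
        self.scores[peer_id] = 0.0  # clean-slate score for a new peer ID
        return True

    def prune(self) -> list:
        # Drop the lowest-scored peers until we're back at the target,
        # reopening slots for newcomers.
        dropped = []
        while len(self.scores) > self.target:
            worst = min(self.scores, key=self.scores.get)
            del self.scores[worst]
            dropped.append(worst)
        return dropped

table = PeerTable(target_peers=2, max_peers=4)
for pid in ["a", "b", "c", "d"]:
    table.try_accept(pid)
assert table.try_accept("e") is False    # at max: hard refusal
table.scores.update({"a": 1.0, "b": -1.0, "c": 0.5, "d": -0.5})
assert set(table.prune()) == {"b", "d"}  # lowest scores pruned to target
assert table.try_accept("e") is True     # room for newcomers again
```

As noted in the call, how aggressively `prune` runs determines whether this stays DoS-resistant; and, as the next speaker points out, a predictable rotation rule is exactly what an eclipse attacker would try to exploit.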
A
Well, sure, you —
M
I think it's a tension: between wanting to rotate clients to prevent — like Danny said — the network being too rigid and getting stuck, with, you know, no one new able to join because everybody's at max; and, on the other side, wanting to prevent eclipse attacks, which definitely get opened up a lot if you have a well-known strategy for rotating clients — people can exploit that to force their own nodes to get rotated in and other nodes rotated out.
F
Yeah, so, like, we did two things to balance this. One thing is that we have quite a high peer limit compared to the others — 160-ish — and this works well because gossipsub itself, which is, like, the bandwidth hog, manages its own bandwidth usage through the mesh, right? So having lots of peers connected doesn't really affect bandwidth that much. And then, yeah —
F
There is a peer scoring system in place where we occasionally kick peers that are not pulling their weight. So, at the end of the day, we still have some dynamic behavior; it's just that the moment it's full, we no longer spend resources on new peers until we have determined that some peer is useless.
L
But if we update the peer ID so frequently, doesn't this damage the peer scoring algorithm a little bit?
F
So we start with a clean slate with the new peer ID, and we work on that reputation, right? And the reason is, I think, gossipsub also drops peer reputation on disconnect, if I remember right, and they motivated this with the fact that, you know, the moment that you restart, something completely different might be happening, even if it's the same peer ID connecting. So it's, like, difficult to reason about security and peer score.
A
You could have fixed a bug in your client that made your score bad, or you could have made your client malicious, trying to leverage what was previously your good score — but you could also, like, make that change to move towards maliciousness without cycling. But anyway — yeah, I mean, Leo, I think there's something at odds here, fundamentally. You know, it's not always in the best interest of a client to make things easy for a crawler, because —
A
— issues, and other types of issues. And so I think, as a crawler, you have to try to just navigate the emergent landscape.
L
Absolutely, absolutely — no, I completely understand, and we will figure out a way to just, you know, discard the connections that we don't see for a certain time; we will figure that out. I just wanted to bring up the discussion a little bit to understand. And another thing — just to remind all the clients, please, about the standardization effort —
L
I mean, the effort that we are doing — because we are trying to start building these dashboards with all the standard metrics that we agreed on. So, yeah, just keep an eye on that, so that the next releases have all those metrics, using the standard metrics naming system that we agreed on. Thank you.
A
Okay, anything else people would like to discuss before we close today?
F
Obviously you need to keep the peer ID, but, on the other hand, there's a privacy issue. So, I mean, right now, in the network design, the most private thing we can do is actually use a different peer ID for every connection that we open — and that is an extreme option that I don't think any client does right now, but it's certainly a possibility.
F
It would cost a little bit in terms of DHT lookups, and, like, it would be a little bit more difficult, but it's certainly possible.
F
The main issue is that, if you know which validator is using which beacon node, you can trivially DoS them in a targeted attack, and then, if they don't have a good backup strategy, then potentially you can, like, at low cost, grief individual validators, basically.