Description
We'll be discussing the network layer in Eth2. We'll explore how nodes communicate and why. This will include Discovery v5, libp2p, Gossipsub, and the Eth2 RPC.
B: So I'm a co-founder and director of Sigma Prime, and we're building an Ethereum 2 client in Rust. For the last two years we've been building Lighthouse, which is our implementation of an Ethereum 2 client. So today we'll be talking about the networking layer of Ethereum 2, seeing as we're going to mainnet, hopefully pretty soon — next month or so.
B: I'd like people to get a high-level overview before we go down to some of the deeper layers. In Ethereum 2 — I'll be talking about phase zero. Ethereum 2 has multiple phases; I won't go into those, but phase zero is the one that we'll be launching within a month or so. In phase zero there's the concept of the beacon chain, and essentially what happens in the beacon chain is that we get a chain of blocks.
B: Hopefully you can see this. The chain of blocks gets grouped into epochs, each of which is 32 slots long, and each slot is 12 seconds — that's where a block fits in. So essentially we're forming this beacon chain of blocks: a block every 12 seconds, with 32 of those forming an epoch.
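The slot and epoch arithmetic above is simple enough to sketch directly; this is a minimal illustration of the numbers from the talk (12-second slots, 32-slot epochs), with a genesis time of 0 used purely as an example:

```python
SECONDS_PER_SLOT = 12   # each slot is 12 seconds
SLOTS_PER_EPOCH = 32    # 32 slots form one epoch

def epoch_of_slot(slot: int) -> int:
    """The epoch that contains a given slot."""
    return slot // SLOTS_PER_EPOCH

def slot_start_time(genesis_time: int, slot: int) -> int:
    """Wall-clock (Unix) time at which a slot begins."""
    return genesis_time + slot * SECONDS_PER_SLOT
```

So one epoch lasts 32 × 12 = 384 seconds, i.e. about 6.4 minutes.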
B: Validators — if you're a validator on this network, you perform some extra actions compared with somebody that's just an observing node, something that's just participating on the network. So there are different networking implications for somebody that's validating versus somebody that's just running a node on the network. So, for validators:
B: Within this 32-slot epoch window, validators get split into committees — they get subdivided such that every validator has to make an attestation, which is essentially a vote on a block, at least once per epoch, so once per 32 slots. So if we have 20,000 validators, we group them all up into committees. Each slot there'll be one, or up to 64, committees, and each committee for each slot performs some attestation.
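The committee count per slot scales with the validator count, capped at the 64 mentioned here. As a sketch, this follows the phase 0 spec's formula; `TARGET_COMMITTEE_SIZE = 128` is the spec's value and is an assumption not stated in the talk:

```python
SLOTS_PER_EPOCH = 32
TARGET_COMMITTEE_SIZE = 128    # spec constant, not mentioned in the talk
MAX_COMMITTEES_PER_SLOT = 64   # "up to 64 committees" per slot

def committees_per_slot(active_validators: int) -> int:
    """Scale the committee count with the validator set, clamped to [1, 64]."""
    return max(1, min(
        MAX_COMMITTEES_PER_SLOT,
        active_validators // SLOTS_PER_EPOCH // TARGET_COMMITTEE_SIZE,
    ))
```

With the 20,000 validators used as an example above, this gives 4 committees per slot; only very large validator sets hit the 64-committee cap.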
B: So the networking challenge then is that, for each node on the network, if you're a validator, you need to form these little committees. You need to be able to send your votes to all the other people on the network. Of the 20,000 validators, one randomly gets selected to propose a block, and the person who's supposed to propose a block — for example, the first one or the second one — needs to get the votes from everybody else.
B: They need to group them all up, put them into a block, and then submit the block. The block then has to get submitted to everybody else on the network. So fundamentally, the challenges are: we need everybody on the network to make a vote once per epoch; we need the proposer — the person that's going to create the block — to receive everybody's votes made so far; and he or she needs to propose the block and then submit it across the network.
B: So those are pretty much the main areas of what I'm going to talk about: how we've achieved that and what protocols we use to do it. So I guess to start — have we got any questions so far? That's just kind of an overview of Ethereum 2, very fundamental. Thank you.
B
Okay,
so
I
think
the
easiest
way
to
start
is:
let's
talk
about
somebody
who's,
not
a
validate
somebody
that
just
wants
to
run
an
ethereum
2
client
on
phase
2
without
any
validators
attached
to
it.
So
this
kind
of
network,
or
this
kind
of
client
or
node
on
the
network,
needs
to
pretty
much
keep
track
of
which
blocks
are
being
proposed.
B
They
don't
need
to
kind
of
group
them
or
collect
them
all.
They
just
need
to
kind
of
participate
in
the
network.
So,
let's,
let's
start
the
like
a
life
cycle
of
a
node
that
kind
of
just
wants
to
passively
observe
the
network,
so
an
ethereum
2
node.
Let's
call
it.
This
guy
has
fundamentally
two
kind
of
transports
built
into
it.
One
is
udp,
so
it's
got
like
kind
of
two
output
ports.
We've
got
udp
and
tcp.
It's
pretty
much.
What
we're
using
at
the
moment
the
tcp
one
might
change.
B
We
might
use
a
different
transport
at
a
different
layer,
but
there's
two
fundamentally
ports
that
are
going
to
be
exiting
on
your
node
at
any
given
time
and
ideally,
if
you're,
if
you're,
behind
a
nat
or
a
router,
you
want
to
kind
of
forward
these
two
kind
of
udp
and
tcp
ports
so
inside
an
ethereum
two
client.
The
reason
that
there's
two
of
them
is
because
the
the
protocols
that
we
use
are
kind
of
split
up
for
different
reasons.
B: One is Discovery v5; the other one is libp2p.
B: So when a node first gets on the network — I'll go down the list from the top to the bottom, so we'll start with discovery. When a node first joins the network, this happens whether it's a node that's passively observing the network or a node that's also a validator.
B: When we want to join the network, we need to find the network of peers. We need to find everyone else that's proposing on the chain, that already has a long chain — we need to discover these guys, hence the word discovery. The protocol that we're using is called Discovery version 5. It's the next generation of Discovery version 4, which is used in Ethereum 1.
B: This was kind of the brainchild of Felix, and there are a number of reasons why we've used UDP here and haven't combined these things — mainly because Discovery v5 is chatty. The way it roughly works is that we communicate very often and very quickly with many different peers to try and find more peers.
B: That doesn't lend itself all that well to TCP, which requires a long-lived connection. We can just quickly send UDP packets; it's lossy, so implementations should be robust against packet loss. So the UDP choice in Discovery v5 works well for us in this particular case: it's fast and quick, and we don't have to keep track of lots of file descriptors, as you would if you were doing lots of TCP connections.
B: If you're familiar with Kademlia — Discovery v5, below the application layer, works very similarly to Kademlia, and Kademlia builds up a distributed hash table of peers. So what happens when our node first connects to the network? We have a set of boot nodes — trusted nodes that we initially connect to. We form initial UDP connections to them. Discovery v5 has its own wire and encryption protocol, so everything across the wire is encrypted.
B: Maybe I'll go — yes, I started at the application layer over here.
A: One thing — when we were implementing the network layer, we considered using Kademlia, but one of the issues we see with it is that the diameter of the network — the distance between two nodes — might be somehow larger than we expect. So when we were routing messages, it might take more time than expected. So how fast are messages being propagated?
B: Yeah, so locally — let me rub this out. Every node has a local routing table which, depending on the metric you use, gets split up into 256 different buckets. So you've got these buckets going from one, two, three, four, and so on, and they're labeled by the metric that you use between two peers — which is not geographic, in the sense of how far apart they physically are; it's just based on an XOR between the node identities.
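The XOR metric described here is easy to sketch: the distance between two node IDs is their bitwise XOR, and the bucket a peer falls into is the position of the highest differing bit (the "log2 distance"). A minimal illustration, with small integers standing in for 256-bit node IDs:

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia metric: XOR of the two node ids, read as an integer."""
    return a ^ b

def bucket_index(a: int, b: int) -> int:
    """Log2 distance: highest differing bit position (1..256 for 256-bit ids).
    0 means the two ids are identical."""
    return xor_distance(a, b).bit_length()
```

The metric is symmetric and a node is at distance zero from itself, which is what lets every node compute the same bucket placement independently.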
B
So
if
you
have
so
the
way
the
buckets
are
designed
in
academia
is
that
like,
if
you
were
to
randomly
sample
50
of
any
node
on
the
network,
50
should
go
in
the
first
one
25
should
go,
and
then
this
into
the
second
one.
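That halving distribution falls straight out of the XOR metric, and a quick simulation shows it — this sketch samples random 256-bit IDs against one fixed ID and counts which bucket each lands in (the seed is arbitrary, just to make the run repeatable):

```python
import random

def bucket_index(a: int, b: int) -> int:
    # log2 XOR distance, as in the Kademlia routing table
    return (a ^ b).bit_length()

rng = random.Random(0)
me = rng.getrandbits(256)

counts = {}
for _ in range(10_000):
    peer = rng.getrandbits(256)
    idx = bucket_index(me, peer)
    counts[idx] = counts.get(idx, 0) + 1

# roughly half land in bucket 256, a quarter in 255, an eighth in 254, ...
```

This is why the low-distance buckets are almost never full: a random ID has probability 2^-k of landing k buckets down from the top.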
B: Peers that are at greater distance are usually pretty good, because that helps you avoid eclipse attacks: if you have nodes in lower-distance buckets, then to fill up one of those buckets and replace a legitimate node, an attacker would need to generate far more identities — it's much harder to generate a random ID that lands in a low-distance bucket and replaces that node.
B: So in terms of network propagation and the speed of doing those things, the metric doesn't, to my knowledge, have much of an effect there; it's just that the difference in distance helps you. I guess it depends on how you're doing the node lookups. If I go and ask, say, "hey, always give me the first two buckets", and there's a whole heap of peers in the lower-end buckets, I'm not going to find those peers
B: — if I only ever ask for those buckets. So I'm not sure if it's worthwhile going over how Kademlia and FINDNODE work, but we've got the time, so maybe just as a high-level overview for people that haven't seen this stuff before — here's what happens in discovery. Say there are three nodes on a network; each of these nodes stores its own buckets.
B: They're called buckets, and they store peers that the node has seen. You can only fit 16 peers per bucket, so if you've got 100 peers, some of them are not going to be in your local routing table. The highest-distance bucket is usually mostly full; the lower ones are less likely to be filled, and it's harder to get into them.
B
These
guys
don't
get
all
that
often
so,
usually
most
people's
local
routing
table
kind
of
look
like
this,
where
the
buckets
are
filled
with
peers
that
they
know
about.
So
you
go
and
you
ask
one
of
your
neighbors
hey.
I
need
to
find
some
peers,
so
the
way
that
you
do
it
is
you
pick
some
random
identity,
some
random
node,
so
essentially
you
randomize,
which
bucket
you're
going
to
ask
them
for
to
getting
peers.
B
There's
a
there's
a
bit
of
theory
as
to
why
you
do
this,
but
this
is
just
fundamentally
how
it
works.
So
I
let's
say
this
is
us
here:
where
a
is
b,
here's
c
we
go
and
we
ask
b,
hey
what
peers
do
you
know
in
these
three
buckets,
for
example,
and
they'll
come
back,
they'll
return
these
lists
of
peers.
B
One
of
them
may
be
c
because
c,
because
c
is
in
this
bucket,
for
example,
then,
given
c's
c's
distance
from
us,
which
is
just
an
xor,
an
xor
thing
of
of
their
identity
versus
identity,
we
we
fill
it
into
our
bucket,
so
it
could
be
re,
it
could
be
in
the
lower
back.
It
could
be
in
a
higher
bucket,
and
essentially
we
fill
our
local
routing
table
the
same
as
everybody
else
which
gives
us
that
kind
of
a
list
of
peers
that
are
on
the
network.
B: Yeah, so I'll list these things here. So we have Discovery v5.
B: The RPC gives direct node-to-node communication, and gossipsub does a kind of network-wide propagation, so that handles some of the other things. So instead of going over how discovery works in general, let me just talk about the Ethereum-specific things in the discovery that we've implemented, because I think the other stuff you can look up quite easily on the internet.
B: For discovery, we have signed peer records. Essentially we have a container which all peers use to identify themselves, and this is what gets stored in the buckets I was talking about, in every node. For each individual peer, we have this thing called an ENR — an Ethereum Node Record — and it has an ID associated with it, which has a public key associated with it, and the record is signed. So I'll draw a key now.
B
So
just
this
whole
thing
here
is
is
signed
with
the
public
key.
That
represents
the
identity
of
this
thing,
so
so
that
prevents,
if
I'm
asking
somebody
for
a
particular
peer
id
that
prevents
somebody
in
the
middle.
Just
you
know
making
changing
changing
the
the
values
that's
stored
inside
here,
because
it's
signed
with
a
peer
id,
so
the
signature
would
break
if,
if
someone
should
sacrifice
doing
it
so
inside
here,
you
can
store
key
value
data.
B: So, for example, you can store your IP, UDP port, and TCP port — every peer record in discovery has these.
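The shape of a signed key-value record can be sketched like this. Note the hedge: real ENRs are RLP-encoded and secp256k1-signed (per EIP-778); the hash-based "signature" here is only a stand-in to show the tamper-detection property, and the field values are made up for illustration:

```python
import hashlib
import json

def sign_record(secret: bytes, fields: dict) -> dict:
    """Toy stand-in for an ENR: bind the key-value fields to an identity key.
    Real records use a secp256k1 signature over RLP content, not a hash."""
    content = json.dumps(fields, sort_keys=True).encode()
    return {"fields": fields,
            "sig": hashlib.sha256(secret + content).hexdigest()}

def verify_record(secret: bytes, record: dict) -> bool:
    """Recompute the binding; any change to the fields breaks it."""
    content = json.dumps(record["fields"], sort_keys=True).encode()
    return record["sig"] == hashlib.sha256(secret + content).hexdigest()

# illustrative field values only
enr = sign_record(b"node-identity-key",
                  {"ip": "203.0.113.7", "udp": 9000, "tcp": 9000})
```

A middleman who rewrites, say, the TCP port produces a record whose signature no longer verifies, which is exactly the property described above.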
B: So we can search for these particular things, and these are the things that we get back. When I'm looking for peers and I get a list of 16 in return, I get this signed container, which comes from the original peer, and it tells me what IP I can connect to that peer on, whether it supports discovery over UDP, and the TCP port — which means it supports the RPC and gossipsub, so we can connect via those protocols to it. It also gives us Ethereum-specific information.
B: We have a field called eth2. That field represents the fork ID that the node is on — we could have multiple testnets or different networks where the DHTs could kind of be combined. So if I ask up here and find, say, 16 peers in return, those peers could be on a different network than the one I'm looking for.
B
So
I
can
filter
those
out
based
on
their
on
their
enr
there's
another
field
which
I'll
have
to
get
to
a
little
bit
later,
which
is
called
attestation
nets
back
to
station
next,
and
essentially
we
subscribe
to
these
subnets
in
ethereum
too,
and
it's
useful
to
be
able
to
search
for
peers
on
those
particular
subnets.
So
inside
an
enr.
If
you
want
to
search
the
ethereum
two
network,
four
peers
for
a
particular
topic,
you
want
to
put
that
kind
of
information
inside
the
inner.
B
So
inside
enrs
we
can
put
information
that
allows
us
to
search
repair.
So
the
two
ethereum
specific
ones
are
the
fork.
So
we
can
search
for
peers
on
a
particular
fork
or
network
that
we're
looking
for
or
chain
and
the
other
one
is
the
annotation
nets
field
which
I'll
talk
about
later.
But
it's
represents
what
subnet
they're
on.
So
if
we
can
look
for
different
kind
of
subnets
so
back
to
just
give
a
recap.
So
far,
what
happens
is
when
a
node
first
starts
on
the
network.
B
It
runs
discovery.
Discovery
has
a
set
of
boot
nodes
from
those
boot
nodes.
We
ask
them
for
a
set
of
peers.
We
get
back
a
collection
of
enrs
and
we
try
and
connect
to
those
enrs
on
the
iptcp
ports
if
they're
on
the
the
correct
fork
id
that
we're
looking
for.
So
that
should
give
us
initially
a
connection
to
a
range
of
initial
peers
that
are
on
our
network
for
ethereum,
too
cool.
So
that
gives
us
discovery
and
we
use
yeah.
We
use
udp
to
do
that.
B: So when we connect via TCP, essentially what happens is that we start the negotiation framework that exists inside libp2p.
B: There's a protocol of sorts called multistream-select: every protocol inside libp2p that you support has an identifier name, called a protocol ID. When we establish an initial connection to another peer, we start this negotiation process where I say: hey, I support this, this, this, and this. Then, amongst the protocols, we choose which ones we both support and try to set up the initial connections for those protocols.
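The core of that negotiation is just an intersection that respects one side's preference order. A minimal sketch (the protocol ID strings are real libp2p-style IDs, but the function itself is an illustration, not libp2p's implementation):

```python
def negotiate(ours: list, theirs: list):
    """Return the first protocol id in our preference order that the
    remote peer also supports, or None if there is no overlap."""
    supported = set(theirs)
    for proto in ours:
        if proto in supported:
            return proto
    return None
```

Because the dialer's list is ordered, putting a preferred multiplexer first (say yamux before mplex) means it wins whenever both sides have it.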
B: For Ethereum 2, the handy thing about this is that it doesn't just support high-level application protocols — it can actually support different transport protocols. So if somebody supports TCP and QUIC or WebSockets, you can choose the preference, or the order, of which transport you actually connect on. Sorry, I should rephrase that: you get connected on one of the transports, and libp2p can then negotiate the higher-level applications on top of that transport.
B: So once somebody connects to you on a particular transport that you support, you then try to establish an agreed-upon encryption scheme — you want to have your traffic encrypted. For Ethereum 2, we're using Noise.
B: So in the libp2p framework, in the negotiation phase, we say that we support a protocol ID which represents the Noise version that we support. If the other person connecting to us — or the peers we're connected to — also supports it, we establish a Noise connection, which is our encryption layer. Once we do that, we then try to set up a multiplexer.
B
So
this
allows
us
to
set
up
across
one
kind
of
connection
we
can.
We
can.
We
can
multiplex
a
number
of
different
protocols
along
the
top
and
set
up
different
kind
of
substreams,
the
the
two
that
we
support,
one
in
ethereum.
Two,
we
always
support
all
client
support.
Mplx
is:
is
the
stream
multiplexer
that
that
everyone
needs
to
support
and
optionally?
We
can
support
yamax
so
depending
on
the
clients
and
whether
they
support
that
we
negotiate
the
multiplexer
and
then,
after
that,
we
we
start
negotiating
the
application
layout.
B: For Ethereum 2, as I said, there are two protocols at the application layer: one's an RPC and one is gossipsub. So at this point in the libp2p framework, we've got connected on a particular transport, we've negotiated an encryption layer, which is Noise, we've negotiated a multiplexer, either mplex or yamux, and now we're negotiating which protocols we support at the client level. Most clients, I think, currently support the RPC and gossipsub, so this will form two connections.
B: Yeah — the data that we've got in the application layer isn't itself entirely encrypted, so we encrypt at the transport layer, in general just for privacy reasons and to guard against people getting in the middle. I think we always want encryption at the transport layer, just to prevent anyone from looking at the Ethereum 2 traffic.
B: Yeah — even in Discovery v5 we decided to make even the headers encrypted; all the information is encrypted, and the way it's encrypted is based on the node ID you're connecting to. That prevents things like deep packet inspection — routers that decide: "I'm going to stop Ethereum 2 by blocking its discovery protocol; I can inspect the packets and figure out..."
B: "...oh, this is an Ethereum 2 header packet." If it's encrypted at that layer — the headers — then it's very difficult for a firewall or something to actually block specific traffic associated with Ethereum 2. Okay, so let me briefly — just making sure I'm not going too much over time.
B: Okay, let me briefly talk about the two different protocols and their primary uses. The RPC protocol — this kind of answers your question about connectivity and, I guess, the topology between the peers — is a direct peer-to-peer communication. So for a node on the network, once you've directly connected to the peers that you've discovered — let's say this is us (I'll try and make it bigger), and we've got four peers: one, two, three, four — we form two-way direct communications with these four peers.
B: Okay, so let me talk about what we need to do as a passive observer on the network. We've now connected to four peers that are on the Ethereum 2 network. We need to first make sure that we're up to sync with the current state of the chain.
B: If we're a new node that's just joined the network and we've just connected to four peers, we don't know whether our current view of the chain is up to date with all the other peers on the network. So, essentially, when we first connect, we have to directly ask each of them individually: what's the current state of the chain that you're looking at? For that we use the RPC protocol. The RPC protocol has four or five different methods.
B: It's essentially request-response, and we use it to figure out the current state of the chain and the status of these peers, and to get simple information directly from them. It's primarily used for syncing. So let me go through a scenario where we're not in sync — let me just go through the different request-responses that we have in the RPC for Ethereum 2. There's a status message.
B: The status message asks a peer: what's the current state of the chain that you're looking at? It tells us in response their current head slot — where the head of their chain is. In Ethereum 2 we have a concept of finalization, so it also returns their last finalized slot, and also the root hashes for both of those things.
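The decision a node makes after a status exchange can be sketched as a small comparison. This is a simplification of what clients actually do (the real Status message also carries finalized checkpoint fields, and the field names here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Status:
    fork_digest: str   # identifies the fork/network the peer is on
    head_slot: int     # the head of the chain the peer is looking at
    head_root: str     # root hash of that head block

def on_status(ours: Status, theirs: Status) -> str:
    """Decide what to do with a peer after exchanging status messages."""
    if theirs.fork_digest != ours.fork_digest:
        return "disconnect"   # wrong fork or network: drop immediately
    if theirs.head_slot > ours.head_slot:
        return "sync"         # they are ahead of us: request their blocks
    return "connected"        # same fork, nothing to download from them
```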
B: There could be two peers that have the same head slot, but they're different blocks — so they're on different forks. With that information, we can determine, roughly, the head view of the beacon chain for each of the peers that we connect to.
B: The status also responds with the fork ID, so we can tell if any of the peers we're connected to are on a wrong fork — in which case we instantly disconnect. I won't have the time to go into all the syncing logic, but here's an example.
B: Say this guy is up to block 10, this guy's at 11, this guy's at 10, and this guy's at nine. When we first connect, we look at the status, and each of the peers gives us their current view of the chain. If we haven't seen any of their blocks, and our current block is, let's say, five, then we know that we need to download the blocks that these guys have seen to be able to make an informed decision on the current state of the Ethereum 2 beacon chain.
B: That's when we start to invoke another RPC method, called blocks-by-range — I'm just going to write "BBR". The blocks-by-range RPC method allows us to request up to about a thousand blocks in a range, from one slot number to another. So in the particular case I'm using here, we want to get blocks from slot five to ten from this peer, five to eleven from this peer, and five to ten from this peer.
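Splitting a sync gap into such requests is a simple batching loop. A sketch, assuming a cap of 1024 blocks per request (the spec's value behind the "up to a thousand" mentioned here) and a `(start_slot, count)` request shape:

```python
MAX_REQUEST_BLOCKS = 1024   # spec cap per BlocksByRange request

def range_requests(start_slot: int, head_slot: int):
    """Split the slots we are missing into (start_slot, count) requests,
    each asking for at most MAX_REQUEST_BLOCKS slots."""
    requests = []
    slot = start_slot
    while slot <= head_slot:
        count = min(MAX_REQUEST_BLOCKS, head_slot - slot + 1)
        requests.append((slot, count))
        slot += count
    return requests
```

For the small example in the talk (we're at slot 5, a peer's head is at 10), this is a single request; a node joining months after genesis would emit many back-to-back batches instead.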
B: We can do this a little bit smarter, so that we don't double up and duplicate the requests, but fundamentally the response to that request will be that these peers give us the blocks of the beacon chain that they know. We collect all of these, we process them, and then we decide, using some fork choice algorithms, what the current state of the beacon chain is.
B: Yeah — so, of the peers that I connect to, I get roughly a view of the network based on that. The algorithms behind all this are a little bit more complicated than what I'm describing — I'm simplifying — but essentially, from the peers that you're connected to, you get a view of the beacon chain, and you can use the RPC to download blocks directly from those immediate peers.
B: You can get more heuristics, or extra information, from the network using the second protocol, but I haven't thrown that in yet because I haven't talked about it. So, fundamentally: we connect to all of our peers, we find them via discovery, and given the view from the status requests, we can use the blocks-by-range request to collect all of the blocks.
B: There's a goodbye message, so when we disconnect from a peer, we can gracefully say that we're leaving and give them a reason why. One of the main goodbyes that you'll probably see on the Ethereum 2 network is due to peer limits: peers have a specific maximum number of peers, and usually there are a lot of peers trying to connect to everybody else, so you'll get a goodbye because a node on the network has too many peers and is pruning you.
B: There's also blocks-by-root, which allows you to request individual blocks — I'll explain that one later; it kind of goes hand in hand with gossipsub. There's a ping protocol, so you can intermittently ping all the peers that you're currently connected to, to figure out whether they're still live — kind of a liveness check. And there's a metadata RPC request: each of these peers stores a small amount of data called metadata, which contains those attestation subnets that were in the ENR.
B: The reason we have that is that if a peer connects to us directly, we don't have access to their ENR, so we would not otherwise know the attestation-subnets value that's inside it; we have this extra protocol to request it when we don't otherwise know it, and I'll explain that in a second with gossipsub. Okay — so I'll move on to gossipsub, if that's all right. Does the RPC, the direct communication, roughly make sense? Yeah? Okay, so: gossipsub.
B: So the next thing we need — let's imagine we've connected to peers. I'll just do another recap, so that everybody's still with me. When the node first gets on the network, we use discovery; we connect to some nodes, and other people can connect to us at the same time. We connect via a different transport, TCP, using the libp2p framework. We negotiate these protocols, and we now have connected peers. We use the RPC protocol to determine the current state of the chain and figure out whereabouts we stand relative to that chain.
B: So let me explain — there are a number of very good talks and articles by the libp2p team on how gossipsub works and what it's designed to do.
B: Hopefully you can see this — you've got all these different kinds of connections. The very naive, quick way of sending a message to every peer on the network is: you send a message to all your peers, you get them to send it to all their peers, and then they send it to all their peers, and the message gets propagated really rapidly across the network, because everybody just sends the message.
B: So if one of the peers on the network says, "oh, I've proposed a block", they publish it to all of theirs, they publish it to all of theirs, and it just rapidly propagates across the network. The downside of that is massive message amplification: if every peer is connected to every other peer, you'll obviously find that the message gets duplicated quite rapidly, almost exponentially.
B: There are huge bandwidth costs associated with that. The next semi-improvement on this is that, instead of sending to all your peers, you randomly send to a subset of them. So if I have 50 peers that I'm connected to, I don't send my message to all 50 peers — I send it to a random 20 of them, and then they send it to a random 20, and they send it to a random 20.
B: Depending on the probability that you use to randomly select peers — how many you select — you can tweak the amplification factor. But the downside is that if the probability is too low for the network — let's say you use one percent, so you only send it to one peer, and then they send it to one peer — there's a chance that the message doesn't get propagated fast enough, or doesn't reach all the peers on the network.
B: The configuration parameters for setting up the mesh are configurable, but for Ethereum we use roughly eight as the ideal number of mesh peers. So if I'm connected to 50 peers, we form what's called a mesh with about eight of them; everyone else on the network also connects to eight of their peers, and they form this overlay mesh network. So, instead of connecting to every single peer, we form a mesh which is a subset of the connections.
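Building that mesh is, at its simplest, random sampling up to a target degree. A sketch of the idea (this is not libp2p's implementation — real gossipsub also grafts/prunes continuously and, in v1.1, weighs peers by score):

```python
import random

MESH_DEGREE = 8   # the "roughly about eight" target from the talk (gossipsub D)

def build_mesh(topic_peers, rng):
    """Pick up to MESH_DEGREE peers, from those subscribed to the topic,
    to graft into our mesh for that topic."""
    if len(topic_peers) <= MESH_DEGREE:
        return list(topic_peers)
    return rng.sample(topic_peers, MESH_DEGREE)

rng = random.Random(1)                      # seeded only for repeatability
peers = [f"peer-{i}" for i in range(50)]    # 50 connected peers, as in the talk
mesh = build_mesh(peers, rng)
```

Full messages are then forwarded only to `mesh`, not to all 50 connections, which is where the bandwidth saving comes from.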
B: Then, when a message gets forwarded to us, we don't forward it to every single peer — we only forward it to our mesh peers.
B: That way, if you design the mesh — depending on the number of connections you have — to fit the topology of how many nodes are on your network, you can minimize the amplification factor but still get quite good propagation across the network. To counteract the probability that a message doesn't get sent across the entire network, there's the idea of gossip.
B: So if a message comes to me, I send it on across all of my mesh peers, which is only a subset of the peers that I have. Then I randomly select from the remaining peers that are subscribed to this topic — the ones that are not in my mesh — and I tell them: hey, I've seen this message. The message is identified by an ID, and I randomly tell people: I've seen all of these messages over the last few seconds.
B: Then, if those peers get that gossip and go, "hey, I actually haven't seen this from my mesh", they can ask me directly to get that message, and then they forward it on. So it's kind of a correcting mechanism for any messages that don't get propagated across the whole network. So that's a five-minute overview. Yeah.
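The two halves of that correcting mechanism are gossipsub's IHAVE/IWANT control messages, and can be sketched in a few lines (an illustration of the exchange, not libp2p's implementation):

```python
def make_ihave(recently_seen):
    """IHAVE: advertise the ids of messages we have seen recently
    to a random sample of peers outside our mesh."""
    return sorted(recently_seen)

def make_iwant(advertised, seen):
    """IWANT: from an IHAVE we received, request any message ids
    that never reached us through our own mesh."""
    return [mid for mid in advertised if mid not in seen]
```

A peer whose mesh happened to miss a message learns its ID from an IHAVE and pulls the full message with an IWANT, then forwards it into its own mesh — which is the self-healing behavior described above.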
A: That was very interesting about gossipsub — it has its own healing mechanism. But I'm wondering: does it grow fully organically, or does it also have a mechanism to guarantee nice properties like fast propagation?
B: Yes — there are a few papers that I'd recommend having a look at, specifically about how its performance goes and how it grows and scales with the number of peers, and the propagation latency associated with that. Specifically, as I said, the mesh is very configurable: you can configure the mesh to be the size of the number of peers that you have, in which case —
B
You
know
it's
like
a
slider
it'll
it'll
go
down
towards
the
flood
sub-level,
where
you'll
get
very
rapid
propagation
and
very
small
latency,
but
huge
message.
Amplification.
Then,
with
the
configuration
parameters
you
can
scale
it
back
to
the
other
to
the
other,
the
other
side,
where
you
only
have
one
mesh,
pier
or
something
and
then
a
lot
of
it
kind
of
gets
properly,
go
through
gossip
sub,
but
you
get
an
increased
latency.
B
The
the
lib
p2p
guys
that
invented
this
had
a
upgraded
this
to
gossip
sub
1.1
in
the
process.
It
was
mainly
security
updates
to
to
pretty
much
address
a
number
of
different
security
issues
that
attackers
can
have
when
they
kind
of
join
the
network.
In
that
process,
they
built
kind
of
a
simulation
and
testing
framework
that
allows
you
to
simulate
these
networks
at
large
scale
and
based
on
the
parameters
that
they
have
and
the
networks
that
they
set
up.
B
They,
you
can
see
how
this
thing
performs,
what
it's,
what
its
general
properties
are
and
how
it
scales.
So
I'd
encourage
to
kind
of
have
a
look
at
that
for
ethereum.
Two,
the.
B
As
I
said,
it's
a
it's
a
pub
pub
sub
system,
so
you
you
subscribe
to
specific
topics
and
in
ethereum
too
there's
a
number
of
different
topics
and
they
have
different
network
properties.
So
the
parameters
kind
of
the
scoring
parameters
at
least
change
a
little
bit
based
on
the
topics
that
we're
subscribed
to.
But
for
our
use
cases
this
fits
the
need
for
for
the
block
propagation,
so
so,
okay,
so
yeah
so
for
block
propagation.
B
Fundamentally,
one
of
the
nodes
on
this
network
will
say:
I
have
published
a
block
and
then
I'll
send
it
across
their
mesh
piers.
It
should
all
get
propagated
across
the
network.
If
it
doesn't,
the
gossip
sub.
The
gossip
element
of
gossip
sub
should
should
pick
that
up
for
peers
that
haven't
seen
that
message
and
then
they
they
they
request
it
and
propagate
it
across
their
mesh.
B
So
this
has,
from
from
all
of
our
tests,
has
worked
quite
well
for
for
what
we
need,
so
so
that
roughly
makes
sense
so
far,
so
I
think
that's
all
the
as
I
still
had
a
high
level,
I
was
gonna
potentially
go
deeper,
but
at
a
high
level,
that's
roughly
for
an
observing
node,
an
observing
node
they
connect
to
the
network.
They
use
discovery
over
udp.
They
find
a
number
of
peers
which
are
enr's.
The
enr's
tell
them
how
to
connect
them.
What
fork
they're
on
once
we
connect
to
them.
B
we use the RPC protocol, which allows us to get a view of the current state of the beacon chain and download any blocks that we think we're missing. We then use gossipsub to subscribe to the topic that publishes blocks, and because we're subscribed on that topic, we'll regularly see blocks that get published through this propagation mechanism.
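The observing-node start-up flow just walked through can be summarised as a small orchestration sketch (the record fields and function names here are illustrative, not Lighthouse's actual API):

```python
# Schematic of the observer start-up flow: discover -> filter by fork ->
# status exchange -> sync missing blocks -> subscribe to the block topic.
def bootstrap_observer(discovered_enrs, our_fork, local_head_slot):
    actions = []
    # 1. UDP discovery hands us ENRs; each says how to connect and which fork.
    peers = [enr for enr in discovered_enrs if enr["fork"] == our_fork]
    actions.append(("discovered", len(peers)))
    # 2. The RPC status exchange tells us each peer's view of the chain head.
    best_head = max((p["head_slot"] for p in peers), default=local_head_slot)
    # 3. Download any blocks we're missing via the RPC.
    if best_head > local_head_slot:
        actions.append(("sync_blocks", local_head_slot + 1, best_head))
    # 4. Subscribe to the block topic so new blocks arrive over gossipsub.
    actions.append(("subscribe", "beacon_block"))
    return actions

plan = bootstrap_observer(
    [{"fork": "mainnet", "head_slot": 120}, {"fork": "other", "head_slot": 999}],
    our_fork="mainnet", local_head_slot=100)
```

Peers on a different fork are dropped before we ever sync from them, which is what the ENR fork field enables.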
B
B
Essentially the same thing happens for validators: they go through, they find peers, and they get their current view of the network, but they now have to do this extra step, which is voting. So, let's say this here is one epoch; there should be 32 of these slots, so I can put a little dot.
B
So every epoch, a single validator (you can have many validators per node, so if you've got a thousand validators, you need to multiply this step a thousand times), for every epoch we get shuffled into a committee and we need to attest to a particular block and essentially form an attestation.
B
So for every committee we get shuffled into for each slot, there can be up to 64 of these things, depending on the validator count. So I'll do these. These are committees that we get shuffled into; validators get grouped into these little sections per slot. So every slot there could be, let's say, 10 (that would mean there are a lot of validators), but validators get shuffled into these kinds of committees.
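The shuffling being drawn here can be sketched as follows. This is a simplification: the real spec uses a swap-or-not shuffle seeded from on-chain randomness, whereas this just seeds Python's shuffle, and it drops the remainder validators for brevity:

```python
import hashlib
import random

SLOTS_PER_EPOCH = 32

# Simplified committee assignment: deterministically shuffle all validator
# indices from an epoch seed, then slice them into committees per slot.
def committees_for_epoch(validator_count, epoch_seed, committees_per_slot):
    indices = list(range(validator_count))
    rng = random.Random(hashlib.sha256(epoch_seed).digest())
    rng.shuffle(indices)
    groups = SLOTS_PER_EPOCH * committees_per_slot
    size = validator_count // groups   # remainder omitted in this sketch
    schedule = {}
    for slot in range(SLOTS_PER_EPOCH):
        schedule[slot] = [
            indices[(slot * committees_per_slot + c) * size:
                    (slot * committees_per_slot + c + 1) * size]
            for c in range(committees_per_slot)
        ]
    return schedule

# 20,000 validators, 10 committees per slot -> 320 committees of 62 each.
sched = committees_for_epoch(20_000, b"epoch-7", committees_per_slot=10)
```

Because the slices are disjoint, each validator lands in at most one committee per epoch, matching the "attest once per epoch" property mentioned earlier.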
B
B
So, people on the network... let's say you've got a thousand validators and you group them all up and you get them all to vote. If they all vote on the same thing, you can group all those signatures into a single signature, called an aggregate signature, and then you can give that to the block proposer, and the block proposer then only needs to verify one signature, not a thousand. So that's the idea of these committees, these kind of sub-committees.
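The payoff of aggregation, one check instead of a thousand, comes from a homomorphism in BLS signatures. This toy stand-in (NOT cryptographically secure; in real BLS the public key hides the secret behind elliptic-curve pairings, whereas here it's exposed purely to show the arithmetic) demonstrates the idea:

```python
import hashlib

Q = (1 << 127) - 1  # a Mersenne prime used as the toy group order

def hash_to_scalar(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % Q

def sign(secret_key: int, message: bytes) -> int:
    # Toy "signature": secret key times the hashed message.
    return (secret_key * hash_to_scalar(message)) % Q

def aggregate(signatures):
    # Aggregation is just a sum, like adding signature points in BLS.
    return sum(signatures) % Q

def verify_aggregate(public_keys, message, agg_sig) -> bool:
    # One verification against the summed public keys replaces N checks.
    agg_pk = sum(public_keys) % Q
    return agg_sig == (agg_pk * hash_to_scalar(message)) % Q

secret_keys = list(range(1, 1001))            # a thousand toy validators
msg = b"attestation: vote for block A"
sigs = [sign(sk, msg) for sk in secret_keys]
ok = verify_aggregate(secret_keys, msg, aggregate(sigs))
```

The sum of all thousand signatures verifies in a single check, which is exactly what the block proposer relies on.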
B
We group all the validators into these sub-committees and get them to vote, and then, on average (it's a probabilistic thing), 16 of the people in each of these committees are randomly chosen to collect the votes and aggregate them. So for each of the committees, 16 of these people are randomly chosen, and so everyone votes, and they grab
B
all those votes and group them up into a single vote: the vote that wins, or the vote that the person that's aggregating them voted on, and you group them into essentially one aggregate vote. This happens for each of the committees, and then the person that aggregates them sends them on a kind of public gossipsub topic, similar to a block. So a block proposer who needs to create a block just listens on a gossipsub topic which represents the aggregate signatures.
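The "on average 16 aggregators" selection is probabilistic and locally computable: each validator hashes a signature over the slot and checks it against a committee-size-dependent modulus. A sketch, roughly along the lines of the spec's rule (the signature strings here are stand-ins for real slot signatures):

```python
import hashlib

TARGET_AGGREGATORS_PER_COMMITTEE = 16

def is_aggregator(committee_size: int, slot_signature: bytes) -> bool:
    # With modulo = committee_size / 16, each member self-selects with
    # probability ~16/committee_size, so ~16 are expected per committee.
    modulo = max(1, committee_size // TARGET_AGGREGATORS_PER_COMMITTEE)
    digest = hashlib.sha256(slot_signature).digest()
    return int.from_bytes(digest[:8], "little") % modulo == 0

# Empirically check the rate for a committee size of 128 (modulo = 8,
# so each of many simulated signatures should pass about 1/8 of the time).
chosen = sum(is_aggregator(128, f"sig-{i}".encode()) for i in range(10_000))
selection_rate = chosen / 10_000
```

No coordinator picks the aggregators; any peer can recompute, from the published proof, that an aggregator was entitled to the role.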
B
So all these committees group their signatures, and then they publish the grouped signature of the group vote on the gossipsub aggregate topic. So the proposer just watches that, gets all of these aggregate votes in, checks them all, throws them into a block, and that forms the next block, fundamentally. So, the topics that we have in gossipsub: there's a beacon block topic. I'll just talk about the main ones. So there's a beacon block topic.
B
So if we subscribe to this topic in gossipsub, we'll get the regular blocks that come through, as I was explaining before. There's a beacon aggregate topic, it's called beacon_aggregate_and_proof, and this is what block proposers should listen to; they get the aggregates from all of these committees.
B
There are a few others. So the next important one: from a networking layer, when you form people into these committees, you don't want each of these individual votes being sent across the network to every single peer on the network, because only these aggregators, or the people in these committees, really care about those individual votes in order to group them, right? So each of these committees gets split into its own gossipsub topic, which we call subnets.
B
So I won't write out the full name, it's beacon_attestation_{subnet}, but we can just have, for example, topic one, topic two, topic three, and on mainnet there'll be up to 64 of these subnet gossipsub topics.
B
So if you're on subnet one, for example: for every slot, let's say this is one, two, three, four, you get shuffled up and you find out you're on subnet one, so you need to subscribe to gossipsub topic one, and you publish your vote on gossipsub topic one. All the other people in this subnet, I guess, or committee group,
B
if you will, will also subscribe to gossipsub topic one, so the messages that get sent don't get sent to, you know, the entire network; they only get sent to all these people here. The aggregators then group them all up, as I was saying, into a singular vote and then submit it onto the beacon aggregate topic in gossipsub, so that, as I was saying, the block proposer just listens to these two main ones and gets the aggregated votes.
B
Yeah, so you form a mesh, as I was saying, this mesh of peers, and there's a mesh per topic. So yeah, to simplify it: let's say I'm connected to 10 peers; of those 10 peers, two of them could be subscribed to this beacon block topic.
B
So if they subscribe to that beacon block topic and I only have those two peers, I'll chuck them into my mesh for the beacon block topic. But if I have 10 peers and they're all subscribed to beacon aggregate, I'd probably chuck them into my mesh but then prune some off, so that some are left over and some are in the mesh. But there is a mesh per topic.
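The graft-when-short, prune-when-over behaviour just described can be sketched as a per-topic heartbeat (simplified; the parameter names mirror gossipsub's D, D_low and D_high, but the selection logic in the real protocol also weighs peer scores):

```python
import random

# Per-topic mesh maintenance: aim for D mesh peers, grafting when we fall
# below D_LOW and pruning when we exceed D_HIGH.
D, D_LOW, D_HIGH = 6, 4, 12

def maintain_mesh(mesh: set, topic_peers: set, rng=random) -> set:
    mesh = set(mesh) & topic_peers            # drop peers gone from the topic
    if len(mesh) < D_LOW:                     # GRAFT back up to D
        candidates = list(topic_peers - mesh)
        rng.shuffle(candidates)
        mesh |= set(candidates[: D - len(mesh)])
    elif len(mesh) > D_HIGH:                  # PRUNE back down to D
        keep = list(mesh)
        rng.shuffle(keep)
        mesh = set(keep[:D])
    return mesh

subscribed = {f"peer{i}" for i in range(20)}             # peers on this topic
small = maintain_mesh({"peer0", "peer1"}, subscribed)    # 2 peers: graft to 6
big = maintain_mesh(set(subscribed), subscribed)         # 20 peers: prune to 6
sparse = maintain_mesh(set(), {"peerA", "peerB"})        # only 2 available
```

With only two peers on a topic, both end up in the mesh; with ten-plus, the surplus gets pruned, exactly the two cases described above.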
B
So when I subscribe to... okay, yeah, so you form these kind of dynamic overlay networks for each subnet. Yeah, and there's a bit of complexity in there, in the sense that, let's say the time period within which you need to shift between subnets is 12 seconds. If you have all these peers shifting every 12 seconds, then the stability of whether you can find peers that are on those subnets quickly enough, within that 12-second shifting, is difficult.
B
So we have something like: if you have a validator, you subscribe to a subnet for a very long period of time, so we have these kind of backbone peers that sit on that subnet that you can form meshes with. So this, I guess, brings me to the part that I was talking about way back at the start, where I said there was an attnets key-value entry inside the ENR and also in the metadata.
B
So this thing is a bitfield: it contains a whole heap, up to 64 for mainnet, of zeros and ones, which tell you which subnets they're subscribed to long term. So you can use discovery to find peers that are subscribed to these subnets in advance. I know one epoch in advance, so 32 slots in advance, which one I need to subscribe to.
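The attnets bitfield and the discovery-side filtering it enables can be sketched like this (the dict-shaped ENRs are a stand-in for real ENR records; the 8-byte little-endian layout is an assumption of this sketch):

```python
ATTESTATION_SUBNET_COUNT = 64

# Pack a set of subnet ids into a 64-bit bitfield, one bit per subnet.
def encode_attnets(subnets) -> bytes:
    bits = 0
    for s in subnets:
        assert 0 <= s < ATTESTATION_SUBNET_COUNT
        bits |= 1 << s
    return bits.to_bytes(8, "little")

def subscribed_to(attnets: bytes, subnet: int) -> bool:
    return bool(int.from_bytes(attnets, "little") >> subnet & 1)

# Discovery-side filter: keep only the ENRs advertising the subnet we need,
# so we can dial backbone peers for it before our duty slot arrives.
def find_peers_for_subnet(enrs, subnet):
    return [e["id"] for e in enrs if subscribed_to(e["attnets"], subnet)]

enrs = [{"id": "n1", "attnets": encode_attnets({3, 17})},
        {"id": "n2", "attnets": encode_attnets({5})},
        {"id": "n3", "attnets": encode_attnets({3})}]
peers_on_3 = find_peers_for_subnet(enrs, 3)
```

Because duties are known an epoch ahead, this filtering can run well before the mesh for the subnet actually has to exist.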
B
So I can use discovery to search for peers that are already subscribed, and will still be subscribed when I connect, to, for example, subnet 3. I find those peers, I connect to some on subnet 3, that allows me to form a mesh, and other peers do that as well, so that these small individual votes can get grouped into the larger aggregate voting system.
B
I think that covers, at a kind of high level, the dynamics of how we're splitting up the individual votes. So instead of the proposer, if we've got twenty thousand validators on there, having to verify twenty thousand votes, we can split them up into these subnets: the individual votes get propagated within a subnet using the gossipsub protocol, and they get aggregated by a subset of that committee.
B
The resulting aggregate gets sent to the beacon aggregate topic, which most observers and so on will just listen on, and you can see the aggregates coming in, and the block proposer, whoever that is, can chuck them into a block, and then all the votes get included in the block as we go.
A
Okay, it sounds great. I have a lot of follow-up questions, but yeah, too little time left, that's okay. So one of the things we are struggling with at NEAR is upgrades regarding the network. How do you handle that in Ethereum, like, for example, when there is an upgrade that requires some network changes?
B
Yeah, okay, so upgradeability. So I guess for consensus-level things, as I mentioned, I kind of skimmed over it because it wasn't a hugely important detail, but inside the ENR is a fork ID. So we can upgrade the fork version, essentially, of the entire chain, and here we can then find peers on a new fork.
B
This fork number, if you will: if the chain hard forks, for example, it's built into Ethereum 2 that all the signatures become essentially invalid on the old fork. So all the new signatures and everything will kind of shift across for peers that agree on a new fork. The RPC status message, when we connect to a peer, also tells you what fork they're on, from a networking layer.
B
I didn't actually mention this, but firstly, in libp2p, when we negotiate the protocols... So actually, let me just split this up logically. The fork ID that I was just talking about handles consensus-level upgrades: the signatures change, we could change some core-level parts of the specification, and nodes will be able to identify other nodes on the network, and old signatures kind of become invalid.
B
The next part is that we need to split out all of the networking communications on a different fork as well. Discovery usually wouldn't change; we can still talk to other peers on old forks or old, outdated code, but we can discover new peers for a specific fork based on the ENR. For libp2p, the way that we do the negotiation of protocols that I was mentioning earlier is that we have essentially a protocol ID, a string that says which protocols we support.
B
If we want to upgrade one of those protocols, we can change that string inside the protocol ID for our RPC protocol, and gossipsub has a version number. So if we upgrade the version number of, for example, one of the RPC messages, say the status message, then we can support both versions if we want.
B
So when we do the negotiation through libp2p: if one node supports status version one and another one supports versions one and two, status version one will get negotiated and we'll use that. But if both nodes have upgraded, or let's say one supports only version two and one supports both, you can order which one you prefer, which one has preference. So we would negotiate based on the version and the ordering preferences in libp2p, and we can do that with gossipsub as well.
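The negotiation being described can be sketched as follows. The version-1 status protocol ID matches the Eth2 spec's format; the version-2 string is a hypothetical upgrade, and the dialer-preference rule is a simplification of multistream-select:

```python
# Simplified protocol negotiation: the dialer proposes protocol IDs in
# preference order and the first one the listener also supports wins.
def negotiate(dialer_prefs, listener_supported):
    listener = set(listener_supported)
    for proto in dialer_prefs:
        if proto in listener:
            return proto
    return None   # no overlap: the stream is closed

STATUS_V1 = "/eth2/beacon_chain/req/status/1/ssz_snappy"
STATUS_V2 = "/eth2/beacon_chain/req/status/2/ssz_snappy"   # hypothetical

# Old peer only speaks v1: we fall back to the version both sides know.
fallback = negotiate([STATUS_V2, STATUS_V1], [STATUS_V1])
# Both peers upgraded: the preferred (newer) version is chosen.
upgraded = negotiate([STATUS_V2, STATUS_V1], [STATUS_V1, STATUS_V2])
# No common version at all: negotiation fails.
none = negotiate([STATUS_V2], [STATUS_V1])
```

Rolling out a v2 this way lets upgraded and non-upgraded nodes coexist until everyone has shifted across.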
B
So if we upgrade the RPC, or any of the RPC methods, or gossipsub, then libp2p will handle that negotiation for us. The final part is that, for each of these topics that we're subscribing to, I only gave the name of the actual topic, but the full topic contains the fork ID that we're on and the encoding that we use. So if we decide to change the encoding, then the topic itself fundamentally changes, so you would subscribe to different topics for different encodings or different versions.
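Concretely, the full topic string bakes in both the fork digest and the encoding, following the Eth2 spec's /eth2/ForkDigest/Name/Encoding layout (the digest values below are made-up examples, not real fork digests):

```python
# Build a full gossipsub topic string: changing the fork digest or the
# encoding produces a disjoint topic, so old and new nodes naturally separate.
def gossip_topic(fork_digest: str, name: str, encoding: str = "ssz_snappy") -> str:
    return f"/eth2/{fork_digest}/{name}/{encoding}"

block_topic = gossip_topic("01020304", "beacon_block")
subnet_topic = gossip_topic("01020304", "beacon_attestation_3")
# After a hard fork (new digest), the same logical topic is a new string:
forked_topic = gossip_topic("a1b2c3d4", "beacon_block")
```

Peers that haven't upgraded simply never see traffic on the post-fork topics, which is the separation mechanism being described.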
B
So I think, based on the libp2p modularity and the way that it negotiates protocols, we can fairly easily upgrade and change versions, and for old code it kind of shifts everything across. We won't communicate with other peers on different versions that we don't support.
A
Yeah, okay, sounds nice. It's very good, this whole way you have the versions inside the topics, all together. And about sharding: will you use the topics, for example, when you are routing, and you're submitting new transactions and you need to route them to new block producers?
B
Yeah, yeah, so there are further issues, especially in the networking, when we introduce sharding; there are more challenges that come in. One thing that we're kind of working on at the moment is data availability. So we need to do kind of random sampling at the network layer, but the shards should fundamentally come into subnets, similar to these committees. So we can still split things up; I guess we're calling things subnets, but it's based on...
B
B
A
B
having me. Hopefully it was somewhat useful to get a general idea of how these things work, and whether there are some crossovers we can have with NEAR that make things applicable, or, if you guys solve some problems that we're having, maybe we can borrow some of those ideas as well.