From YouTube: Mina Community Node Working Group Meeting | 05.03.22
Description
The weekly Node Working Group session serves as a platform for the community, ecosystem partners, and the Mina Foundation to interact.
This meeting is a time to discuss the latest core protocol releases and for members to propose suggestions, raise issues, and provide and get feedback.
The issues discussed can be found on GitHub: https://bit.ly/NWG-issues
Chapters
0:00 — Intro & Preamble
1:07 — O(1) Labs - Andrew - GitHub issues updates and overview
2:29 — Docker changes and logging - (docker entrypoint changes: https://github.com/MinaProtocol/mina/pull/10841)
5:50 — Simultaneous restarts
10:21 — Berkeley testnet redeployment timeline
Mina Protocol – the world's lightest blockchain
A
All right, hello everybody, welcome to the node working group session. We'll start off with a preamble and then have some updates; it's a small session today, so if anyone has anything they'd like to discuss, please just shout it out. Here's the preamble: the node working group sessions are a time when members come together to discuss and collaborate with ecosystem partners and the Mina Foundation. Topics discussed generally include the latest releases, node operations, and other technical matters related to the protocol, although this is at the discretion of the node working group. My name is John Robinson and I'm the community operations manager at the Mina Foundation; I'm here to support and facilitate these sessions, so please let me know if there's anything I can do to help. The meeting will be recorded and posted on YouTube, so if you don't want your face on there, please turn off your cameras.
B
Yeah, hello everyone. So we actually have a variety of changes coming in that I think we're ready to release. There should be a beta coming out later this afternoon with some of the most critical changes of the bunch from the last six weeks or so. One is a fix for an edge case in super catchup that would cause crashes; we're trying to mitigate that as quickly as possible and get it out on mainnet. There are also some improvements to the uptime submitter client, which we also want people running on mainnet sooner rather than later, so that we can start getting even better data on that front, as well as an update to the Docker entrypoint, which I just want to test more thoroughly. But I think a beta is sufficient, so that will be coming out today, and then we're going to be working on a slightly more experimental alpha release off of sort of the best of what we have from compatible over the last... These should both come out this week; I think the beta will almost certainly be later today.
B
So, just furthering on the changes that I made in the previous release to clean up the entrypoint: we're also removing the mina.log temporary file, because we were noticing in Kubernetes and various other environments that you will eventually run out of disk just by creating an infinite stream of logs into that file. So we're removing that in favor of the existing mina.log.* rotated logs (mina.log.0, mina.log.1, and so on) and just outputting to standard out, and then you can rotate them in Docker or store them on your host machine, or what have you.
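For reference, here is a minimal sketch of how you might handle rotation on the host side once the daemon logs to standard out, using Docker's built-in json-file log driver. The image tag, container name, size limits, and daemon invocation are illustrative placeholders, not recommendations:

    # Rotate the container's stdout/stderr logs on the Docker side
    # (image tag and limits below are illustrative placeholders).
    docker run -d --name mina \
      --log-driver json-file \
      --log-opt max-size=100m \
      --log-opt max-file=3 \
      minaprotocol/mina-daemon:<tag> daemon

    # Follow the logs that now go to standard out:
    docker logs -f mina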
B
So that's the biggest change, but I'm also trying to standardize how we pull environment variables in and how these different flows are used, to make it easier to get it right on the first attempt, as opposed to the sort of issues that folks were having in Discord getting it set up when the change was originally made. Go ahead, Jonathan.
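As a rough illustration of the kind of standardization described, the sketch below reads configuration from environment variables with explicit defaults and hands everything to the daemon. The variable names, defaults, and flags here are assumptions for the sketch, not the actual entrypoint from the PR:

    #!/usr/bin/env bash
    # Hypothetical entrypoint sketch, not the real one from the PR.
    set -euo pipefail

    # Illustrative variable names and defaults.
    MINA_CONFIG_DIR="${MINA_CONFIG_DIR:-/root/.mina-config}"
    PEER_LIST_URL="${PEER_LIST_URL:-https://example.com/peers.txt}"

    # Pass the values to the daemon as flags; extra arguments flow through.
    exec mina daemon \
      --config-directory "$MINA_CONFIG_DIR" \
      --peer-list-url "$PEER_LIST_URL" \
      "$@"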
D
Is that mina.log gonna go away for all, or just under Docker?
B
It already doesn't exist in the Debian world; mina.log.*, something like that, exists for sure. I mean, the one that Docker makes is an additional file currently. Whatever you're dealing with in Debian is going to exist for everyone, including Docker, and is still going to exist.
B
I'll resend the specific PR with those entrypoint changes into the node working group channel, because I know you guys are at least following all of that.
A
So
we
don't
have
a.
We
don't
have
a
lot
of
stuff
today.
So
if
anyone
has
any
other
questions
or
anything
for
andrew,
that's
probably
a
good
time
to
ask.
A
D
So it may be premature because I haven't done much looking, but it still seems like there are occasions where I see simultaneous restarts on multiple distributed nodes, and I remember there was an old issue about that. I just wonder if everybody else has been experiencing that, and to what degree. I've noticed it maybe a couple of times in the last couple of months, and I'm just wondering if that's still been observed by others.
B
But the point is: there's an edge case in how we are catching up to a very particular sort of block, and it was causing, yeah, mass crashes for anyone who saw both of these blocks within a particular period of time. Nathan, maybe you want to speak more to that?
E
Yeah, I guess to that point: I've already traced down one race condition which I am pretty confident is a culprit of this, but I've also discussed with another engineer on the team some other steps that we can take to help eliminate other race conditions in the same part of the code. This is really sort of a category of race condition we didn't see before, because previously, due to the prior CPU performance issues, catchup would run a lot slower and wouldn't produce these race conditions. Now that catchup runs very quickly, there's a scenario where, if you learn about a chain that you were missing and attempt to catch up to it, and you quickly download those blocks, then it has the potential to crash your node.
E
We don't have any reason to believe that there is malicious behavior intended here, but we currently already have one fix that has been merged in and is ready to go out in the next release, and we're also working on trying to better bulletproof the system against these kinds of errors in the future. I'm hoping to have that in the upcoming release after this one.
E
I need to check, but I think 9381 is a different issue that has already been fixed; I need to dive into the logs here to double-check, it seems.
C
Is
there
an
update
for
redeploying
the
berkeley
testnet.
B
I think there was already a plan to redeploy, potentially by the end of the week, but I don't know for sure; I'm not tuned into that effort. There's a lot of work going into trying to get the archive node properly working before we deploy again, just for the utility to all the folks who are testing on it now.
A
Does
anything
anyone
else
have
anything
like
to
discuss
now?
If
not,
what
I'd
like
to
do
is
stop
the
recording
and
if
you
could
just
stay
around
for
a
second
after
I'd
like
to
ask
you
guys
a
couple
of
questions.
A
All
right
well
we'll
stop
this
now,
then,
if
there's
no
other
discussions
and
if
you
could
just
hang
around
for
a
second,
I
I
do
have
a
question
for
you
guys
kind
of
unrelated
to
this
all
right.
Everyone
thanks
for
watching
and
we'll
see
you
next
week,
I'll
just
stop
the
recording
now
here.