From YouTube: Hive Core Dev Meeting #36
A: Cool, okay. So on that sync: I figured out all of the issues I had with the pipeline on hivemind, and I can now run all the tests locally. I found that it's not that bad. It's mostly that, since I'm adding new tests, the ids of all of the individual posts get shuffled a bit, and when we compare the JSON we compare the ids directly, one to one.
A: The ids are not correct, so on the surface 200 tests are failing, but in reality it's fine.
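The comparison problem A describes (structurally identical responses failing only because database-generated ids shifted) could be worked around with something like this minimal sketch; the field names and helper functions are illustrative assumptions, not hivemind's actual test code:

```python
import json

def mask_ids(value, key_names=("id", "post_id")):
    """Recursively replace id-like fields with a placeholder so two API
    responses can be compared structurally even when the database has
    assigned different ids. The key names here are assumptions."""
    if isinstance(value, dict):
        return {k: ("<id>" if k in key_names else mask_ids(v, key_names))
                for k, v in value.items()}
    if isinstance(value, list):
        return [mask_ids(v, key_names) for v in value]
    return value

def responses_match(expected_json, actual_json):
    # Compare everything except the volatile id fields.
    return mask_ids(json.loads(expected_json)) == mask_ids(json.loads(actual_json))
```

With this, a response pair differing only in ids compares equal, while a real content difference still fails.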
A: I don't think that works, because... oh, I guess maybe... no, I cannot, because the indexing runs first and then the API tests are executed. So if I add even one extra thing... I guess maybe I could add it all the way at the end.
C: Where did you put your mock commands, your mock operations? It is quite complex stuff, unfortunately, and to be sure that you don't have any side effects on other tests, you should create a new account that you operate on, and put all the operations specific to your new tests into the last blocks, or into new blocks.
C: I mean new blocks that you add artificially at the end. Because if you attribute your mock operations to some earlier blocks, you can always impact existing tests, and especially if you add your operations to some existing accounts, you can also impact API results specific to those accounts. So if you would like to simplify your life and eliminate the risk of changing other test results, it's best if you create a new account. So search the mocks for an account_create operation, similarly create your own testing account, and execute all of your specific operations from it.
C: As I remember, right now we have 26 artificial blocks, and what's important: if you decide to add another one, you need to change, in the script running the continuous integration, the maximum block we sync to, for example to 30 blocks if you would like to add four more. This is the safest way of adding completely new tests.
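The advice above (new account, new operations appended as extra artificial blocks at the end) could be sketched as follows; the block and operation layout is illustrative, not hivemind's exact mock-file schema:

```python
import json

def add_mock_block(mock_file_text, block_num, new_account, operations):
    """Append one artificial block that first creates a fresh account and
    then runs the new test operations, leaving earlier blocks and existing
    accounts (and so existing test results) untouched.
    The JSON structure here is an assumption for illustration."""
    mocks = json.loads(mock_file_text)
    account_create = {
        "type": "account_create_operation",
        "value": {"new_account_name": new_account},
    }
    mocks[str(block_num)] = {
        "transactions": [{"operations": [account_create] + operations}]
    }
    return json.dumps(mocks, indent=2)
```

As C notes, adding a block past the current 26 also means raising the CI script's maximum synced block accordingly.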
C: Also, you said that some ids were changed. As I remember, our pattern tests in some parts ignore changes in ids, I mean ids generated by the database itself, because sometimes we put such ids into API responses for several reasons, usually because of historical needs and to stay compatible with previous responses. But we also had similar problems in the past, and usually such ids are completely ignorable for API clients.
C: Okay, cool. But the safest way is to add a new account, and to put every operation you would like to generate data for API calls with into the last blocks. This way you eliminate side effects on other tests.
A: My main problem is with one of those changes, the one preventing people from being set as, let's say, moderator accounts unless they're subscribed to the community. If I don't add new ops here and there, what happened previously would not work correctly, because one of the mock requests does that. So that means the test suite where it's like: oh, create test-safari, then set that account as a moderator, and then let's have it mute someone...
A: ...all of that would fail, because test-safari wouldn't be able to be a moderator, or whatever, unless he subscribed. So that's where a bunch of stuff changed for communities v2. But yeah, I think I'll add everything at the end to make it easier to work on. Apart from that, I looked again a bit at the API, but we'll talk about that later. I saw your comment on the vr merge request, but I don't think there was a reply to the related comment from anyone else.
A: Oh yeah, that too, but I mean, I'll talk about that. I think I have a thing about the explorer; we can talk about it now.
C: You remember the other one... yeah.
A: Yeah, I think you asked about whether we needed the legacy form for the newly created operations. That was a week ago, but I don't think there was a response. No, there wasn't, so we'll see, but I can update that one if needed, generally.
C: My question was related to the fact that, a few months ago, we implemented some mechanisms in hived which allow us to eliminate the explicit data structures for legacy operations. I'm going to eliminate the need to define substructures for the operations we are adding right now, just to complete the algorithm related to condenser_api, account history, etc., which are serializing them the old way, in the old fashion.
C: So probably all we need is to just remove the structures specific to the legacy form of the new operations which you added, yeah.
A: And pass it directly, I guess. Yeah, okay, I'll change the merge request to have that, and then we should be good, I guess. And I guess that's it for my... yeah, I have to go backwards.
B: Okay, so let's see where we're at. A bunch of stuff going on. Doing a lot of validation of the existing code is probably the major effort right now. As part of that validation, we've been trying to get the mirrornet stuff working, and we keep finding problems and fixing them, and hopefully we'll have that to a point where we can start doing some testing soon. I'm not sure what the current blocker is; maybe bartek?
B: Okay, sounds good. On other testing, as I mentioned in my posts, we did a lot of testing of the peer-to-peer changes, and so far those have come out pretty well. I'm still running it in production; the hived node we're providing is using that code right now, and it seems to be having no problems.
B: So hopefully we'll have something in a few more days to try again with. Let's see... so that's hived. Other things going on in hived: we're still working on the RC stuff. There was a merge committed to it recently, but I haven't actually looked to see what it did.
B: So I can't comment much on that. If andre... no, it doesn't look like andre's here; he might have been able to comment, but otherwise I can't say what that's about. Other stuff in hived: we're working on the finality changes, the block finality changes. That's still in kind of an early phase, but it's progressing pretty well so far, and like I said, I'll write a post about that, probably this week if everything goes well, if not then next week. Let's see... so that's hived.

B: Let's see, we've been doing a lot more benchmarking of HAF in various configurations.
B: One thing I found interesting today, as I was trying to speed things up: when we first do a sql_serializer run, to basically take all the data from the blockchain and stick it into HAF, there are two times involved. One is sticking in the blockchain data itself, and then there's also creating indexes afterwards, to basically make that data quickly accessible via SQL queries.
B: So lately I've been doing some experiments with tuning postgres, to see if we can speed up the creation of those indexes, because they can take a while on machines without really fast IO systems. In our benchmarking I think they take around three hours on our super-fast systems, which have something like four fast NVMes RAIDed together; it gets significantly longer on slower machines. I've been testing on two other machines and have seen times of around six to seven hours.
B: So I'm doing some experiments to cut that down now, and I did have a little progress today: I cut it down from, I guess, eighty-four hundred seconds to what looks like sixty-seven hundred seconds (everybody's trying to translate that into hours, which isn't easy), and that was just by increasing the work memory that's used for index creation. So I think what we can probably do is temporarily increase the work memory.
B: Just during the time we do the initial sync, that is, and then set it back to a more normal value afterwards. Let's see, what else. There's been a lot of other benchmarking of that sort; I've probably run about 20 or 30 HAF sql_serializer runs over the past couple of days. One of the things I've been looking at a lot is how it works on ZFS, because with the ZFS file system we can compress the data, and we get it down to about 1.3 terabytes, which is obviously a lot better than 2.7 terabytes, especially since that allows it to fit on smaller two-terabyte drives. So I think that all looks really good. I haven't seen any serious degradation there. There's probably a slight performance loss on the API side, as far as I can tell, but nothing significant enough to make it not worth the gain in terms of cheaper storage.
B: Let's see, what else. So that's hived and HAF... trying to think what else is going on. There's still more work going on on the wallet API code, and I guess we're close to finishing that. How are we looking there, bartek?
A: Cool. Yeah, just regarding the indexes: are you making sure that it doesn't... because we had that issue a while ago where, during sync, the disk requirements would go very high and then kind of break some configurations.
A: Are you making sure that what gets allocated during the index creation won't go too high? I guess it's memory, so we could just add some swap for that time.
B: Yeah, I mean, the memory isn't going too high, that's for sure. Basically I'm increasing the default, and the default is something ridiculously low, like 64 megabytes.
A: Also, I don't think many people are using postgres to the extent we are.
B: Well, I mean, I think it's actually common. In fact, I found this recommendation by looking at some other places, and it's not an uncommon thing for people to do: when they want to fill a database fast, they first turn off all their indexes, fill the data into the tables, and then create the indexes afterwards, which is exactly what we're doing with the sql_serializer too. It's just faster than leaving the indexes around while filling in the data. So this is a common need.
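The load-then-index pattern B describes can be shown in a few lines; sqlite is used here purely so the sketch is self-contained and runnable, while sql_serializer applies the same idea to postgres at a much larger scale:

```python
import sqlite3

def bulk_load(rows):
    """Bulk-load rows with no indexes in the way, then build the index
    once afterwards, which is typically faster than maintaining the
    index during every insert."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE operations (block_num INTEGER, body TEXT)")
    # 1) insert all the data first, with no indexes defined
    db.executemany("INSERT INTO operations VALUES (?, ?)", rows)
    # 2) create the index in one pass over the finished table
    db.execute("CREATE INDEX idx_operations_block ON operations (block_num)")
    db.commit()
    return db

db = bulk_load([(1, "op_a"), (2, "op_b")])
count = db.execute("SELECT COUNT(*) FROM operations").fetchone()[0]
```

The table and index names are made up for the example; only the ordering (fill, then index) is the point.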
B: This ability to then rapidly create the indexes afterwards matters, I think, and I was just reading through a paper that gave different ways to speed that process up, trying the different suggestions they had. The one that seemed to give us the most bang for the buck right now was that increase of memory. Now, I've only tested it in combination with one of their other changes, which was to increase the number of parallel workers.
B: So I don't know if it will be as effective without increasing the parallel workers; that's the next experiment I need to try. By default you only get two parallel workers for creating an index, and I've got it set to four now. But I had set it to four without raising the memory and it didn't seem to have much impact, so I think it's going to turn out to be the memory that's the critical feature. It still needs to be tested, though.
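The two postgres knobs discussed above are `maintenance_work_mem` (which governs CREATE INDEX memory and does default to 64MB) and `max_parallel_maintenance_workers` (default 2). A minimal sketch of the "raise during initial sync, restore afterwards" idea, generating the session-level SQL; the 4GB and 4-worker values are examples, not benchmarked recommendations:

```python
def index_build_settings(mem="4GB", workers=4):
    """SQL to run before the index-creation phase of an initial sync.
    Example values only; tune to the machine's actual RAM and cores."""
    return [
        f"SET maintenance_work_mem = '{mem}';",
        f"SET max_parallel_maintenance_workers = {workers};",
    ]

def restore_settings():
    """SQL to return both settings to their configured defaults
    once the indexes are built."""
    return [
        "RESET maintenance_work_mem;",
        "RESET max_parallel_maintenance_workers;",
    ]
```

Since `SET` is session-local, simply closing the sync session also restores the defaults; `RESET` makes the intent explicit.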
A: Okay, sounds good. So I had a thing on the HBD progressive lockup. I don't know if we should talk about it today, because I'm not...
A: We have raised it to 20 percent, and there are a bunch of discussions in the community regarding HBD, and to a bigger extent HP, basically looking at whether, instead of locking it up for three days, you could lock it up for a longer term for higher returns, and be guaranteed to get 20 percent for that time. That's the kind of idea that's floating around, and I put it on the agenda because I thought it would be interesting.
A: I mean, for marketing... "progressive" can encompass many ideas. We could have three days or one year, and have both at the same time. Yeah, sure, it could be like preset options, and the same could be said for Hive Power, where you could lock it up long-term, or lock it up for less time but with way fewer benefits.
B: It's simpler to do things with HBD; it's not as critical to the system. And in a way I thought his idea, not only the one-year lockup but also locking the rate for that year, was kind of clever, because for people that are concerned about a variable rate, it gives a way to supply a fixed rate. I think that in itself is a valuable thing to do. So that's really...
A: And yeah, I agree with gandalf, who says keep it super simple, because HBD as a system is already getting a bit complex. When people want to convert back and forth, some of it can get complicated, and we should try to make the pitch as simple as possible: it's three days or a year, and that's it.
B: Yeah, that's my view too; that's why. So yeah, rosetta.
B: Yeah, it sounded like, from what gp said, there might be other issues to resolve before we could get there, even if we implemented it. So I think we need to get those questions answered before we go ahead and work on that.
A
Oh,
oh,
I
know
why
stuff
is
going
like
this:
it's
because
we
no
longer
have
a
zoom
premium,
so
this
meeting
will
end
in
10
minutes.
So
with
that
in
mind,
yeah
the
last
point
it
was
about
the
hiveminefloy.txt.
A: I found out that it was inaccurate for at least the community mocks, so I ran a small proof of concept where I basically parsed the mocked blocks to generate the flow.txt, and I found a bunch of differences. I'm basically wondering whether it would be worth always generating it rather than trying to edit it by hand, because hand-editing obviously creates discrepancies, and then when you look at the flow you don't actually get what actually happens in the mocks.
A: Bartek was more on the fence about this and wanted things to still be done by hand, but the mock files are getting bigger and bigger, and I don't know if that's really feasible at this point.
A: I can't share my screen, I don't have it right here, so long story short: the flow.txt is just a file that describes, in very simple terms, what happens in the mock file. The mock file is a JSON that contains all of the blocks, I mean the mocked blocks, and it reads like: oh, we created an account...
A: ...then we did this, and then we did that. So the flow.txt is just one line per op, and it goes on like that. The issue I found is that the flow is updated by hand, and there are errors in the flow.txt: when you look at the JSON in the mocks and then at what's going on in the flow, the two don't match.
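The "one line equals one op" generation A proposes could look something like this sketch; the mock-block layout assumed here is illustrative, not hivemind's exact schema:

```python
import json

def generate_flow(mock_json_text):
    """Walk the mocked blocks in block order and emit one human-readable
    line per operation, the way flow.txt describes the mock file.
    The JSON shape assumed here is an illustration only."""
    mocks = json.loads(mock_json_text)
    lines = []
    for block_num in sorted(mocks, key=int):
        for tx in mocks[block_num].get("transactions", []):
            for op in tx.get("operations", []):
                lines.append(f"block {block_num}: {op['type']}")
    return "\n".join(lines)
```

Regenerating the file this way keeps it consistent with the mocks by construction, leaving only hand-written annotations to carry over.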
A: Yeah, basically the flow is mostly everything that happened in the mocks, described in a more human-readable format. So I did a proof of concept to basically convert from the mocks to that more human-readable format. I don't think it would be something fully automated; more like we'd basically use that script more often, to regenerate the flow.txt whenever we update the test flows.
A: I guess I could do something there too. The only thing missing from that is the annotations. Yes, but I guess when it's regenerated we can look at the git diff and then re-add the annotations that were erased by regenerating it. I just wanted to know whether that sounded like an interesting idea or not, and whether I should pursue that proof of concept.
A: So I guess I'll look into it a bit more, so that I can expand it to the other mock files and see how that goes, and then I'll make it into a separate merge request so we can discuss it directly. Okay, that's everything from me. Guiltyparties wanted to speak about renaming HBD to something else. Yeah, I don't think we can...
A: I don't think we can do that in three minutes, so I guess we'll postpone it. But I think the short answer is going to be no. I mean...
B: Personally, I think it's probably a lot of work. I certainly don't see us changing the code, so I think that's too much work to do; all we'd really be talking about is a marketing change, and then I think we're going to have confusion underneath, because everything is still HBD.
B: Hive dollars. If somebody wants to, they are already called that; one of their names is "hive dollar", and I think that's clear enough. So I don't know. I'm not even sure if the debate is mainly about the name or the symbol; it felt more like it was about the symbol than anything else. They wanted to match something that looked like tether, which, again, I think is only confusing.