From YouTube: Hive core developer meeting #27
A
Oh, so GitLab is back, I guess, but I didn't write the issue there. So, in terms of dev sync: RC delegations are now in a state where I'm actually going to push the merge requests this afternoon for review by Bartek. I spent quite a bit of time on an issue which was also present with the RC pools, and which I don't really know how to solve. Long story short:
A
The issue is that I'm afraid we create a performance hog: if you, say, power down and you can no longer sustain your RC delegations, all of those delegations need to be removed, and if that happens, it has to happen within the block.
A
It's not like a transfer, where you can just delay it to the next time; all the RC values need to be at the right value for the next block, because otherwise you end up in a situation where you can make a transaction in the next block when you shouldn't be able to. The issue is something like: let's say you delegate RC to 10,000 people, and then you delegate all of your Hive Power to someone else.
A
Then you no longer have that RC, and so within one block 10,000 delegations will have to be removed. I'm not sure how to solve that effectively. That was also the case with the pools, although to a lesser extent, because if multiple people delegate to a pool, the pool objects will usually still have enough RC. But yeah, that's one thing where I spent quite a bit of time looking around, and I'm not really sure how to solve it.
B
I can think of one thing, though maybe not the ideal solution: we could potentially delay the power down itself, or hold it until the delegations are removed. I mean, it's far from ideal and it's just an idea, but that's at least one way we could do it.
A
Yeah, I guess we could kind of prevent that, like when you want to power down but you don't have any voting power left.
B
I mean, there are multiple options. You can reject the power down completely; that's done in other places when you've delegated. If you've delegated HP, for instance, and you want to power that down, you can't do it. So we could just force manual dropping of the delegations beforehand; that was the solution that was used for HP.
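A minimal sketch of that precedent, with hypothetical names rather than hived's actual validation code: the power down is rejected while it would leave the outstanding delegations uncovered.

```python
from dataclasses import dataclass

@dataclass
class Account:
    vesting_shares: int            # total VESTS owned
    delegated_vesting_shares: int  # VESTS currently delegated out

def validate_power_down(account: Account, requested_vests: int) -> None:
    # Reject a power down that would leave less vesting than is delegated
    # out, forcing the user to manually drop delegations first.
    remaining = account.vesting_shares - requested_vests
    if remaining < account.delegated_vesting_shares:
        raise ValueError("cannot power down delegated vesting shares; "
                         "remove delegations first")
```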
A
Yeah, I think we'll have to do something like this. One way around it: right now the index is only in alphabetical order, so I just remove them in alphabetical order, but we could do it in a way where we go from the biggest delegation down to the lowest delegation.
A
That way, if you power down, maybe the biggest delegation can tank it, so that it only gets modified to be less instead of having to remove 10,000 delegations. But that still leaves a potential performance attack, basically.
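A rough sketch of the ordering idea, assuming a simple in-memory map of delegations rather than hived's actual index; the largest delegation often absorbs the whole deficit, though the worst case can still touch every entry, which is the attack described above.

```python
def cover_rc_deficit(delegations: dict[str, int], deficit: int) -> None:
    # Walk delegations from largest to smallest, shrinking or removing
    # them until the freed RC covers the deficit created by the power down.
    for delegatee in sorted(delegations, key=delegations.get, reverse=True):
        if deficit <= 0:
            break
        take = min(delegations[delegatee], deficit)
        delegations[delegatee] -= take
        deficit -= take
        if delegations[delegatee] == 0:
            del delegations[delegatee]  # fully removed within this block

# Example: one big delegation absorbs the hit; the 10,000 small ones survive.
d = {"big": 1_000_000, **{f"user{i}": 10 for i in range(10_000)}}
cover_rc_deficit(d, 500_000)
```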
A
So I'll probably do that. In terms of testing: I also ran into issues with the unit tests. I don't know if it's a long-standing issue, but basically VESTS are not represented correctly, and the same for RC. If you power up in a test, you end up with, I don't know, it's not max int, but it's like a random number that I'm pretty sure is randomly set in memory, because it changes every time.
A
And it's a very big int, and I don't know why, because literally when you instantiate the object, the constructor sets it equal to zero. So it shouldn't pick a random value, and I think it's some strange effect of the testing environment. I haven't dug too much into it, but yeah, something's off there.
B
Yeah, probably file an issue on that and we can try to research what's going on. This is in the unit tests, yes?
A
Okay. No, it doesn't happen when you do a proper test, like a testnet test with the CLI wallet. Okay, yeah, finishing up on that: I updated the CLI wallet to be able to delegate, and find_rc_accounts has been updated. I've also added listing delegations, working both "from" and "to". So if you search "to" you, you get everything delegated to you.
A
If you search "from" you, you get all of your delegations. And yeah, the one thing I also need to do is update everything to use the test tools, because right now it's using beem for the CLI wallet tests. I don't know if it's mandatory that the functional tests use the test tools, so I've socialized that enough, I think.
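For illustration, querying the two directions over JSON-RPC might look like the sketch below. The method name and parameter shape are assumptions here, since the API was still awaiting review at the time.

```python
import json, urllib.request

def rpc(node: str, method: str, params) -> dict:
    body = json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": params}).encode()
    with urllib.request.urlopen(node, body) as resp:
        return json.load(resp)["result"]

# Hypothetical call shape: delegations made *from* alice. A "to"-direction
# query would mirror this, per the CLI wallet change described above.
outgoing = rpc("https://api.hive.blog", "rc_api.list_rc_direct_delegations",
               {"start": ["alice", ""], "limit": 100})
```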
B
I think the test tools should be fine. Bartek can maybe add something about that, but...
B
Sounds good. Okay, so it's been a while since we've had a meeting, so a bunch of stuff has happened. One of the most notable things, of course, was that we had all the additional traffic going through Hive now, and so we spent a fair amount of time analyzing that traffic, first of all just to get the API infrastructure sped up again. As we've discussed previously, we've now reduced the number of broadcast_transaction_synchronous calls being used, and we've seen that have a lot of benefit throughout the system. So at this point the load on our servers is looking quite light.
B
In fact, surprisingly, despite the increased traffic, we're actually getting faster response times now for a bunch of the API calls I've been looking at. I've been using the Jussi traffic analyzer to check the performance of different calls, looking at the ones that take up the most time overall. For instance, for get_account_posts we get about 30% more calls now, but 3x faster response times than previously.
B
I'm comparing data from back in February, and the worst-case response time for that call now versus before is about 4.4 versus 6.5 seconds, so again a pretty good speedup. Then get_profile is about five times faster, again with about 100% more call traffic, and the worst-case response is about six times faster.
B
It's just been kind of that way across the board. This one's really surprising: get_trending_topics is 28 times faster, even though we're getting 100% more calls. Those are primarily hivemind calls, but I've also looked at a couple of the hived calls, and again we're seeing pretty good speedups. We have 40 percent more calls to get_ops_in_block, but it's about 30 percent faster, with a worst case of about 0.9 seconds versus two seconds in the past.
B
So there's not a lot that's been done to get these improvements. For instance, we haven't added any servers, and we're not running on any faster servers. It's just been a couple of changes that I think are responsible for the speedups.
B
One has been some modifications we did to the Jussi caching, so we did get some improvements from Jussi caching, and that was done a while back, probably a couple of months ago. But the other big one has really just been the reduction of broadcast_transaction_synchronous calls, and also rerouting the remaining calls of that type that we have to a separate server, so that it serves just that type of traffic.
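That rerouting amounts to dispatching on the method name; here is a tiny illustrative sketch of the idea (not Jussi's actual configuration schema, and the upstream URLs are placeholders):

```python
# Route slow, state-changing broadcast calls to a dedicated upstream so
# they cannot stall the read-only traffic.
UPSTREAMS = {
    "condenser_api.broadcast_transaction_synchronous": "http://broadcast-node:8091",
    "": "http://read-node:8091",  # default for everything else
}

def pick_upstream(method: str) -> str:
    # Longest matching method prefix wins.
    best = max((p for p in UPSTREAMS if method.startswith(p)), key=len)
    return UPSTREAMS[best]

assert pick_upstream("condenser_api.get_blog") == "http://read-node:8091"
```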
B
So I was really surprised when I was getting some of these numbers today, surprised that we saw quite so much improvement across the board. But I had kind of noticed, even on Condenser, on hive.blog, that things seemed faster, and I've been doing some eyeball measurements there of API call times and seeing the speedup. So that's all good news.
B
I think the other interesting thing is that when I look at the load on our API servers, it's almost unnoticeably higher. We're really not doing any more CPU loading with all this increased traffic, and I think we're probably talking about an overall traffic increase of three to four times, yet really no significant CPU loading increase. Which tells me that we could scale up quite a bit without really hitting any kind of bottleneck there.
B
Along similar lines, I checked the traffic going to hive.blog, so this is web traffic versus API traffic, and another thing that kind of surprised me: over the past few months we've had about a 400% increase in traffic to hive.blog, and again very little increase in the CPU loading. Four times more requests to hive.blog, and it's just flying along without any real impact at all. So again, I think we could scale up there quite a bit now.
B
Some of that traffic increase is coming from bots, specifically search bots, you know, like Google bots and things like that.
B
So I think what's going on is we've had more organic traffic coming to the site, and that's making the search bots more interested in indexing it. That's good, because it means our long-term SEO should improve as a result. Let's see, what else. So that's kind of what I've been doing most recently, which is analysis of the traffic, but some of the other guys have been working on other things.
B
One of our guys has been working on further memory reductions for hived. He's removed the link from a comment to its root post; that was needed in older hard forks, but it's not needed now. He managed to come up with a way to temporarily generate that extra data when it's needed for the past blocks, so it effectively gets temporarily stored and then dropped, and it doesn't impact our long-term memory usage. In doing that, he saved more memory than I was expecting.
B
It looks like he's reduced shared memory usage from about 20 gigabytes at the head block down to probably 16 gigabytes at that block. I've still got to get confirmation on that, but it's somewhere between a three and four gigabyte drop in memory requirements, so that was surprisingly good. Beyond that, the other big thing... oh, another thing that was done, kind of a minor one: we're removing the "active" field out of hivemind.
B
That field is generated by some of the queries, and removing it allows us to speed up the post-massive-sync startup time for hivemind. That mostly comes into play when somebody has to resync hivemind from scratch. And I guess the other big work that's going on... well, before we get to that:
B
One more, smaller thing: we're looking into replacing the legacy coin types for HBD and HIVE with the NAI symbols. We're still kind of trying to get a feel for how much work that's going to be, but I'm hoping we can include that in the next hard fork.
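For reference, the two serializations of the same kind of value look like this; the NAI strings for HIVE and HBD are fixed protocol identifiers:

```python
# Legacy symbol form vs. NAI (numeric asset identifier) form.
legacy_hive = "1.000 HIVE"
nai_hive = {"amount": "1000", "precision": 3, "nai": "@@000000021"}  # HIVE

legacy_hbd = "2.500 HBD"
nai_hbd = {"amount": "2500", "precision": 3, "nai": "@@000000013"}   # HBD
```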
B
And so finally, I guess the biggest thing that's been going on, that most people have been working on, has been the HAF work, and we had a good milestone.
B
As of earlier today, we've completed a full end-to-end test, where we've had the sql_serializer filling a Postgres database with the block log data. Then we've got a C++ extension, basically, that computes impacted-accounts data from that data, which just says which operations affect which particular accounts. And then finally, using that data plus the block log data, we've got basically an example HAF application that acts as an account history provider: it provides an account history API.
B
The account history API is normally provided by a hived node, and this is the first working example of a HAF application that replaces that functionality. So at least as a proof of concept we've got that going, and as the next step we're going to do some benchmarking, I think in the next few days, just to see if the performance is as good as I expect it to be.
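The interface being replaced is hived's account_history_api, so the same request should be answerable by the HAF app; that is what the benchmarking will compare. A minimal sketch of the request, with a placeholder node URL and account:

```python
import json, urllib.request

def rpc(node, method, params):
    body = json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": params}).encode()
    with urllib.request.urlopen(node, body) as resp:
        return json.load(resp)["result"]

# Same request whether a hived node or the HAF-based app answers it.
history = rpc("https://api.hive.blog", "account_history_api.get_account_history",
              {"account": "alice", "start": -1, "limit": 100})
```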
B
The other thing we've been looking at in HAF recently: we've been analyzing how to have HAF apps utilize each other's data more effectively, so that several HAF apps don't have to basically accumulate the same data. A practical example is account history or account balances: have one application that generates that data, and then have the other HAF applications use it instead of having to keep their own copies. So that's one of the areas where we're doing work right now.
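A sketch of that sharing idea under assumed names (the schema and table are hypothetical): one app maintains a balances table, and the other HAF apps read it through a psycopg2-style cursor instead of re-deriving balances from the raw operations.

```python
def get_balance(cursor, account: str) -> int:
    # balance_app.current_balances is a hypothetical table maintained by a
    # single balance-tracking HAF app and shared with the other apps.
    cursor.execute(
        "SELECT balance FROM balance_app.current_balances WHERE account = %s",
        (account,))
    row = cursor.fetchone()
    return row[0] if row else 0
```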
B
So I guess that's pretty much it; that's where we're at right now. Unless... Bartek, did you have anything you could think of to add to that, or did I cover pretty much everything?
C
Actually, there was also a number of lower-level tasks, related, for example, to refactoring the regression tests to use the TestTools library, and also to improving the TestTools library itself, which we are trying to develop in a much more efficient way than, for example, beem. It should also allow us, in the future, to automate the setup of hived nodes, build faster testnets, and, for example, emulate forking networks just for testing, etc.
A
Cool, well, lots of stuff to talk about. Okay, so I guess the main thing I wanted to talk about... well, first of all, do you guys think you'll have some bandwidth to do some reviews on direct delegations at some point? And, I mean, while we're at it, we can probably discuss right now
A
what we think would be the best solution for the performance issue regarding powering down. Because if, Bartek, you tell me now that you think the way it's currently done is a no-go, then I might as well implement the fix right now, instead of waiting for you to review and then doing it.
C
When we see the code and get to know the problems directly and in detail, we will see what to do next. Right now, it's clear to me, okay, that there are some problems, and actually I don't know what I can say. Okay, we'll focus on that at least a little and think out what can be done.
A
Okay, yeah. Well, I'll probably push it this afternoon and I'll ping you, along with Kendall maybe, so that you'll know it's there. Meanwhile, the following topic would basically be HAF and communities.
A
I didn't think about it, but yes. Oh yeah, but then, okay, that's quite a huge task, isn't it?
B
It's not. I mean, truthfully, we've sort of done part of that work already. I consider hivemind going to be... it'll certainly be one of the bigger tests of HAF, but we've already kind of got a lot of that work done. So I think we'll have it (Bartek will probably kill me for saying this), but I think we'll have it relatively soon, assuming no problems pop up.
C
That's a way to create some death points at deadlines.
B
Yeah, yeah, relatively soon. So I think that's probably the way to go about the whole communities issue, and then at that point, if you want to, you could... I mean, that aside, you could of course just go straight to creating a HAF app that supplies the new API calls, you know.
A
The API nodes will run a HAF app anyway, yeah. But then I would still need to import most of the stuff, because most of the modifications I want to do are modifications in the core.
B
In other words, there's like one point at which you have to change it to a HAF app, and that's the point at which you're sort of saying what data you're operating over. But beyond that, all the general queries and stuff will mostly port as-is.
A
Only members can comment and post, and only members can... yeah, yeah. And all the stuff like: whenever you make a comment, a beneficiary cut goes to an account set by the community setup. That's all stuff that's pretty straightforward code-wise, where you just add either a new field or you set it in the metadata somewhere.
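For illustration: updateProps is hivemind's existing community operation, but the two props below are hypothetical stand-ins for the settings described here, not an implemented interface.

```python
# Hypothetical custom_json carrying the proposed community settings.
proposed_op = {
    "id": "community",
    "required_posting_auths": ["hive-12345"],
    "json": ["updateProps", {
        "community": "hive-12345",
        "props": {
            "members_only_posting": True,         # hypothetical new field
            "comment_beneficiary": "hive-12345",  # hypothetical new field
        },
    }],
}
```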
B
In practice, the way that would work in HAF, just to sort of lay it out: in HAF you've got two sets of tables, essentially. You've got the base HAF-level tables, which are being filled by the hived plugin, and then you've got all these auxiliary tables, basically, that your application defines.
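Roughly, and under assumed names rather than HAF's exact schema: the base tables are populated by the hived plugin, and an application creates its own tables next to them.

```python
# Base tables filled by the hived plugin (names illustrative).
BASE_TABLES = ["hive.blocks", "hive.transactions", "hive.operations"]

# An application-defined auxiliary table living in its own schema.
APP_SCHEMA_SQL = """
CREATE SCHEMA IF NOT EXISTS community_app;
CREATE TABLE IF NOT EXISTS community_app.memberships (
    community VARCHAR NOT NULL,
    member    VARCHAR NOT NULL,
    PRIMARY KEY (community, member)
);
"""
```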
A
Okay. So I guess, right now on my to-do list, it's either work on communities, which is highly requested, and I feel like there are a lot of quick wins there that we've been putting off forever, because RC delegations are, like, not the quickest win, but the biggest win we can have in terms of low-hanging fruit in Hive.
A
On this stuff, I have a bunch of things to push, like the CLI wallet thing, and then maybe automated actions, but that selection is not really a priority right now. So yeah, I was probably going to go work on communities afterwards, and I just wanted to know if it's going to interfere with HAF or not. I don't see it, I don't think so at all. Okay, great.
B
Cool. I mean, you know, we're actually making changes to hivemind now; that's why that "active" field is getting dropped. Okay.
A
Cool. Yeah, basically, speaking of reviews and so on: so GitLab is back. Did we lose data or did we not?
B
Okay, it sounded like a bee talking. So yeah, I don't expect we will lose any actual code, but I'm afraid we will lose some issues, and that's probably the worst of it. Once I know exactly where it's at, I'll post some more information on where things stand. So, I mean, we just got GitLab...
B
Okay, when I talk, is it... no? Okay, it's not feedback from me or anything. So, what I was going to say: yeah, I think we've probably lost issues from the past little bit, but I think we have most of the active ones in our heads, so I don't expect any major problem from that. We just got GitLab back up really just before the meeting.
A
Okay, great. So, basically, you restored a backup from earlier, right?
B
I'll get the date; I can't name the day off the top of my head, but I'll get that for you, I'll post it.
A
So yeah, okay, cool, so that's great. And yeah, let me check my notes that I have somewhere... one question about...
E
GitLab, oh yeah. I made some merge requests a few weeks ago, and I see they are not there anymore. It's about Hive.NET, so I guess I have to merge them again.
B
Yeah, it's every library, it's everything, it's every repo. But they should be in your repo itself, so the repo changes are still there; it's just the metadata, like the merge requests, that we won't have. Your actual code changes that got merged: if the merge request was merged, the change is there, you know, it'll be in your local repo, and if you push it, it'll get back in there. But for the exact procedure I've got to talk to the sysadmin.
D
Actually I hit the wrong button, but I can say: hive and hivemind were configured with a push mirror to GitHub, so they are at the latest version already, without any manual action, once someone pushes them back to GitLab. But any other repo that we had on GitLab has to be updated too: you need to check whether your local copy is newer than what's in GitLab.
D
Yeah, but a good thing for the future is that if we want decentralization, we really hope that everyone who is interested in a particular code base keeps their own local copies and forks of the code.
D
That's actually a side effect of the battle between GitHub and GitLab, because otherwise it would be too easy to move between those two. If we could have the metadata inside the repo... there were some projects that allowed that, for example for wiki pages or issues.
B
Okay, so yeah, to answer the original question: yes, realistically, we're certainly looking for more ways to make it impossible to lose data.
A
That's... oh, never mind. Okay, yeah. Well, one thing I wanted was to get some feedback from you guys on associations.