From YouTube: Filecoin Miner Show & Tell
Description
Welcome to the 3rd Filecoin Miner Show & Tell! During this event, we explore hardware, architecture, and share tips and best practices. Above all, we’re hoping to spark conversations, relationships, and themes that bring us all forward together.
This week we have presentations from BenjaminH, Charles Cao of NBFS Canada, and Hlib Wondertan!
00:00 Intro - Angie Maguire
2:44 BenjaminH
18:56 NBFS Canada
38:29 Wondertan
A
Really excited for today's event, this is our third Miner Show & Tell, which is very exciting. So pop in the chat, let us know where you are tuning in from, and we'll get underway very shortly.
A
Hi Laurence from Canada, a very warm welcome. We've got some of your fellow Canadians presenting today. Hi Edison, Edison is in China, very awesome. Emily from Washington DC, always good. Welcome from Austin, Texas, wonderful, very exciting. Okay, let's get cracking!
A
Thank you, everyone, for joining us for the third Filecoin Miner Show & Tell. This week we'll be hearing from some of our top miners in North America and Europe. Tonight's event is part of a series of events throughout Space Race, which is our incentivized testnet competition.
A
You
can
see
what's
coming
up
on
space,
race.filecoin.io
and
catch
up
on
anything
you've
missed
on
our
youtube
channel,
which
is
linked
below
we've
got
a
fabulous
lineup
of
speakers
today.
So
let's
get
started.
Our
first
miner
to
present
is
benjamin,
who
was
also
named
a
file
coin
community
champion
in
last
week's
space.
Race
report
welcome
benjamin.
B
Thank you very much. I'll just keep my video on here. So, should I just get started?
B
Yep, yeah. So I guess I represent some of these smaller-sized miners; I think last time we saw some of the bigger ones. What I wanted to show and tell about today is how to mine efficiently on a smaller mining setup, on the terms of running an unmodified Lotus. So basically, I haven't been doing any custom coding, any tweaks or anything; I just rely on whatever is released and hope for the best from there.
B
That
also
means
that
the
setup
that
I'm
running
is
is
kind
of
treat
against.
You
know
meeting
the
least
amounts
of
problems
with
the
unmodified
builders,
so
just
to
dive
right
into
it.
My
own
setup
does
not
look
exactly
like
this,
but
this
is
more
like
a
representation
of
what
what
it
basically
could
look
like.
So
what
I've
done
for
these
smaller
setups
is
basically
try
to
keep
each
of
the
of
the
individual
processes,
jobs
as
separate
as
possible.
B
So,
as
you
see
here
on
the
top,
I
have
my
miner.
We
don't-
which
holds
the
demon,
the
miner
and
also
the
storage.
So
that's
the
one
goes
on
top,
then
I
have
a
dedicated
worker
only
doing
a
pc
one.
B
I
have
a
dedicated
worker
only
doing
pc2,
and
then
I
have
the
last
one
doing
yeah.
That's
so
mainly
the
c2
job
all
connected
with
the
tin
kick
or
better,
but
this
is
kind
of
the
the
base
setup
for
having
a
an
environment
that
mine's
pretty
efficient.
B
So
I
know
that
there
are
a
lot
of
small
miners,
partly
complaining
about
having
a
single
miner
setup,
partly
saying
it
works.
Fine
they're,
also
minus
having
multiple
single
minor
setups.
I
find
that
this
setup
is
probably
one
of
the
yeah
least
problematic
setups,
to
have
also
because
I
have
pretty
good
control
over
the
the
individual
processes,
so
making
sure
my
lotus,
daemon
and
the
miner
is
not
clogging
up,
because
it's
doing
other
stuff
that
is
not
supposed
to.
B
Of
course,
it
is
possible
to
run
it
as
one,
but
I
just
find
it
much
easier
to
to
try
to
separate
things.
So
at
least
this
is
a
this
is
kind
of
the
base
for
what
it
looks
like.
I
also
put
some
prices
on
here
just
to
give
you
an
idea
about
what
is
the
size
of
this.
So
you
see
it's
it's
a
little
bit,
I
think
over
16
000.
There
are
also
some
parts
missing.
B
Feels
like
a
more
stable
platform,
so
I
would
say
that
I
have
seen
very
few,
like
hardware
related
compute,
related
errors
on
on
this
setup,
and
I
guess
in
the
end
that's
also
pretty
nice
that
you
don't
have
that
additional
factor
of
what,
if
you
have
like
a
big
corruption
and
stuff
like
that
on
top
of
it.
So
you
know
trying
to
minimize
that
the
other
part
is
that
it's
not
like
amd
epic
cpus
are
really
you
know
super
expensive,
so
so
it's
still
doable
now.
B
The
setup
here
is
basically
again
taking
into
account
that
you
have
the
the
different
individual
workers.
What
you
see
is
that
actually
the
miner
in
itself
is
pretty
beefed
up
and
well.
We
saw
on
the
the
emulated
attack
here
this
morning
that
that
is
actually
a
good
idea,
because,
even
though
that
you
might
have
a
miner
that
can
follow
the
chain,
then
it
gets
really
stressed
when
trying
to
do
this
under
attack
and
so
on.
So
so
this,
at
least
for
my
setup,
has
been
working
pretty
well.
B
Another thing that I wanted to point out is the setup I chose here for PC1. A lot of people are talking about the PC1 worker as the only really important thing, and I'm pretty sure it can be difficult to make work, but I actually find it easy just to have a single EPYC here with half a terabyte of memory and then about six terabytes of NVMe disk in RAID 0. So you want as much I/O as possible.
B
So
that's
one
thing,
and
the
other
thing
is
that
when
you
have
the
six
terabyte
you
could
ask
you
know.
Why
do
you
need
that
much?
If
it's
only
600
gigabytes
per
sector,
you
can
have
10
sectors
on
there.
Why
do
you
need
that
much
you're
not
going
to
have
10
sectors
in
parallel
here?
B
So
the
thing
is
that
that
it
takes
some
time
before
this.
This
storage
is
released
again.
So
you
basically
need
to
make
sure
that
you
have
enough
storage
here,
that
you
can
keep
on
patching
new
sectors
but
at
the
same
time
keep
that
until
they
have
been
committed
to
the
network.
So
that's
why
there
is
some
additional
on
it.
I
do
not
have
time
to
run
through
this
in
detail,
but
the
thing
is
that
I
can
share
this
afterwards.
If
somebody
wants
some
inspiration.
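The sizing rule Benjamin describes (scratch space is only freed once a sector is committed, so the pool must cover every in-flight sector) can be sketched with his own numbers. This is a back-of-the-envelope check, not his actual configuration; the per-sector scratch figure and pool size are the ones from the talk.

```python
# Rough sizing of the PC1 scratch pool (a sketch using the figures
# from the talk: ~600 GB of sealing scratch per 32 GiB sector, and a
# 6 TB NVMe pool in RAID 0).

SCRATCH_PER_SECTOR_GB = 600   # scratch held by one sector until committed
NVME_POOL_GB = 6000           # 6 TB of NVMe

# Scratch is released only after the sector is committed on-chain, so
# the pool bounds the number of sectors in flight, not just the ones
# actively running PC1.
max_sectors_in_flight = NVME_POOL_GB // SCRATCH_PER_SECTOR_GB
print(max_sectors_in_flight)
```

With six sectors in PC1 in parallel, the remaining slots are the headroom for sectors still waiting to be committed.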
B
Basically,
I
have
10
gigabit
network
on
all
of
them,
but
I
think
that
most
of
you
sooner
or
later
knows
that
it's
basically
between
pc1
and
pc2,
that
you
have
this
big
bottleneck.
You
need
to
move
those
600
gigabytes
of
files
yeah.
So
let's
just
continue
here.
This
is
actually
the
one
that
I
wanted
to
talk
most
about,
because
I
pretty
sure
that
this
looks
pretty
complex,
but
this
is
actually
the
key
to
efficient
mining
on
a
setup
like
this.
Basically,
you
have
every
hour,
so
you
have
from
zero
hour
to
six
hour.
B
Here
you
have
is
just
about
every
hour.
You
do
an
app
piece
or
catch
a
new
sector,
so
you
have
a
process
running
there.
Every
hour
you
have
this
kind
of
rolling
setup
where
you
have
about
six
sectors,
doing
pc1
in
parallel,
so
the
miner
is
able
to
do
about
seven,
but
it
does
not
really
make
sense
in
this
setup.
So
that's
why
I
put
it
to
six,
because
it
also
has
to
balance-
and
that's
kind
of
the
key
point
here
is
that
you
need
to
make
things
balance.
B
So,
while
you
have
the
six
sectors
here,
you
would
see
that
you
also
have
six
sectors
going
down
here.
So
you
have
six
sectors
working
in
the
pc2.
You
have
six
sectors
on
the
weight
seat.
Actually
you
can
have
a
bit
more
because
we
don't
care
about
them
when
they're
there,
but
you
can
also
have
about
six
sectors
competing
c2
and
then
they
go
into
proving.
B
So
the
point
is
that
if
you
do
this
and
really
tweak
it-
and
you
know
in
reality,
I
might
do
something
like
five
to
six.
But
if
you
really
tweet
this,
then
basically
you
can
have
this
system
spitting
out
a
new
32
gigabyte
sector
every
hour.
So
so
that,
actually
is
not
that's
a
lot
more
than
500
gigabytes
per
day.
B
It's
actually
more
close
to
770,
but
the
thing
is
that
in
reality
you
know
we
have
all
these
deal
errors
syncing
the
chain,
lotus,
updates
everything
and,
as
I
said
here,
it
takes
about
half
a
day
to
gain
momentum,
because
you
don't
want
to
pitch
all
six
sectors
in
once.
You
want
to
have
it
rolling
like
this,
and
that
is
also
to
optimize
how
much
storage
you're
using.
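The arithmetic behind the steady-state throughput claim is worth making explicit. A minimal sketch, assuming the figures from the talk (32 GiB sectors, one sector finishing per hour once the pipeline is full):

```python
# Steady-state throughput of the rolling sealing pipeline described
# above. Assumptions from the talk: one 32 GiB sector is pledged per
# hour, and once the pipeline is full, one sector also finishes per hour.

SECTOR_GIB = 32          # sector size
SECTORS_PER_HOUR = 1     # steady-state output of the rolling pipeline

gib_per_day = SECTOR_GIB * SECTORS_PER_HOUR * 24
print(gib_per_day)  # 768, matching the "closer to 770" figure
```

The half-day ramp-up he mentions is just the pipeline depth: the first sector pledged still has to traverse PC1, PC2, WaitSeed and C2 before the hourly cadence of finished sectors begins.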
B
So
I
can
definitely
tell
this
one
and
I
do
think
that
it's
a
little
bit
complex
to
look
at.
But
if
you
see
the
green
line
here,
you
basically
see
that
at
a
point
in
time
you
have
six
running
in
parallel.
Here
you
have
a
pc2
running,
you
have
something
in
weight
seat
and
you
have
a
c2
running.
So
that's
the
idea
and
just
to
give
you
an
idea
about
what
it
looks
like
in
in
bluetooth.
B
You
have
not
that
many
here,
but
you
basically
have
your
c2
jet
running.
You
have
your
pc2
job
running.
You
have
a
bunch
of
c1s
running
and
you
will
see
a
lot
of
stuff
over
here
because
you
have
weight
deals.
You
have
something
that
goes
packing.
You
have
a
weight
seat.
You
have
something
going
to
piece
two
and
so
on,
so
you
want
to
see
that
it's
got
like
widely
spread
all
over
the
place
here.
So
that's
kind
of
the
idea
to
make
this
this
work.
B
So
this
is
just
some
pictures
from
my
basement
and
I
already
showed
this
to
a
few
other
miners
which
find
it's
pretty
fun.
But
but
the
thing
is
that
this
is
my
my
basement,
where
I
just
have
just
done
stuff
thrown
around.
This
is
my
miner.
This
is
actually
my
pc
one,
which
is
a
hpe
server.
Basically
using
a
non-hp
memory,
non
hp,
nvme
disks,
just
using
it
like
a
regular
computer
and
it
works
perfectly.
Then
we
have
the
pc2
and
the
c2
up
here.
B
So
over
here,
that's
my
normal
networking
rack
where
I
have
the
firewall.
I
have
my
switch
and
then
because
this
is
actually
a
deep
loading
setup.
It's
actually
have
some
pretty
powerful
switches
going
on
here.
So
this
is
actually
one
of
these
kind
of
rocky
setup,
where
you
can
do
a
really
fast
file
transfer
over
almost
a
lossless
connection,
so
that
also
helps
a
lot
now.
B
The
other
thing
that
I
wanted
to
point
out
here
is
that
we
also
have
some-
I
I
basically
use
liquid
cooling
to
to
cool
the
computers,
so
my
basement
is
only
about
25
degrees,
but
I
basically
pumped
this
over
in
my
in
my
flow
heating,
so
I
remove
about
200
2000
watts
here
at
about
40
degrees,
and
it
is
pumped
with
the
this
is
cyclical
and
is
pumped
over
into
the
the
dirty
water
going
into
my
my
heating
system
in
the
house.
B
So
so
that's
also
a
way
to
to
utilize
the
energy
coming
out
of
this.
So
that's
basically
what's
easier
heating,
my
house
right
now-
and
this
is
just
a
look
inside
one
of
these
when
you
have
filled
it
all
up
so
another
thing
that
I
or
this
is
actually
the
last
thing
I
just
wanted
to
point
out
is:
I
also
have
this
other
one
called
the
gogo
intel
miner
and
the
thing
is
I
wanted
to
see
how
crabby
hardware
I
can
actually
run
this
on.
B
So
the
only
thing
where
I
check
it
out
is
on
the
miner
itself,
where
you
see
it
has
a
pretty
beefy
c
and
gold
and
a
and
a
2080
ti.
So
that's
pretty
awesome
to
to
do
that
part,
but
the
thing
is
that
I
really
wanted
to
do
some
really
low-key
stuff
for
the
other
things.
So
my
pc2
worker
here
is
a
super
old
intel
xeon,
which
is
totally
beat
up,
but
it
can
still
do
a
pc2
in
a
little
bit
over
an
hour
and
then
this
one
is,
I'm
really
proud
of.
B
That's
also
a
totally
old.
You
know
totally
underpowered
ceon
and
it
does
not
have
much,
but
you
know
still,
you
can
do
it
in
two
and
a
half
hours
and
what
I
did
was
basically
just
put
two
in
to
do
c2.
So
now
I
have
one
to
do
a
pc,
pc2
and
I
have
two
to
do
the
c2
and
I
end
up
having
about
a
sector
coming
out
a
little
bit
over
every
hour.
So
this
is
really
you
know
the
kind
of
basement
stuff
that
you
know
some.
B
Some
miners
would
work
and
I
think
that
it's
really
interesting
to
see
how
low
can
you
actually
go
because
it
comes
down
to
that
the
pc
one,
which
is
the
one
that
you
saw
before?
That's
the
one
doing
pc
one,
but
that
is
basically
the
only
one
that
really
needs
the
amp
and
for
the
rest
of
it.
You
know
you
can
be
a
lot
more
creative
in
how
you
set
it
up.
So
I
guess
that
was
pretty
much.
What.
B
So maybe there are some questions here.
A
Yeah
we've
got,
we've
got
one
question
here
in
the
chat:
benjamin
have
you
from
edison
he's
asked:
have
you
tried
using
worker
to
do
p1
plus
p2.
B
So, when you have a new sector coming in: basically right now, if you set your wait-for-deals to something like every hour, which is how mine is set up, then it would just keep on pushing in new PC1s and it wouldn't do a PC2. Obviously you are able to do this on a single-miner setup.
A
Yep, that makes sense. Thank you so much, Benjamin. First of all, I'm just completely blown away by your mining setup heating your house.
A
That seems so amazing. And a quick question for you: how did you first come across Filecoin? What was it that motivated you to mine on Filecoin in the first place?
B
I think it's the general idea that, you know, if you can do trustless storage and, in the end, maybe not really have the cloud like we have today; if we're using it for heating our houses instead of heating, you know, ridiculous nothing, then I think that would be much better. And if you can even run it on a mobile phone and use the storage there, then we're really getting somewhere.
A
Absolutely
absolutely
well
listen!
Thank
you
so
much
again
for
taking
the
time
to
share
your
your
experience
and
your
expertise
with
us.
Also,
congratulations
on
being
a
falcon
community
champion.
Thank
you
so
much
for
helping
your
fellow
miners
throughout
space
race
and
congratulations
on
being
number
seven
on
your
continental
table
very,
very
exciting.
A
Awesome
all
right!
Thank
you
so
much
benjamin
okay!
Next,
our
next
presenter
is
charles
carr,
representing
the
nbfs
canada
team,
who
are
currently
ranked
second
on
the
north
american
leaderboards
over
to
you,
charles.
C
Yeah
so
yeah
thanks
everyone
to
give
us
the
opportunity
to
present
our
staff.
Yes,
we
are
running
freikon
cluster
mining
in
canada,
montreal
so
so.
First
of
all,
I
would
like
to
introduce
ourselves.
Well.
Everybody
was
curious
about
what
we
call
mbfs.
Actually,
the
nebula
ai
file
system.
We
were
doing
the
decentralized
ai
cloud
computing
in
the
past
days.
C
You
know
our
white
paper
in
2017
were
saying
that
we
would
like
to
have
the
storage
solution
based
on
ipfs,
so
we
can
doing
the
decentralized
storage
with
the
ai
together.
So
we
are
waiting
the
day
when
there
are
a
good
solution
can
provide
the
storage.
At
the
end,
we
found
that
the
fragrance
will
be
a
very
good
solution
potentially
for
us,
so
we
jump
into
the
mining
industry
to
find
how
it
works.
So
we
have
been
working
in
the
project
for
about
three
years.
C
We
get
a
one
million
dollars
funding
from
canada
government.
So
it's
quite
interesting
we're
going
here
and
as
those
are,
the
servers
currently
we're
using
way
when
directly
using
the
servers
from
hp,
high
performance
computing
servers
and
we
are
hp,
official
part
business
partners.
So
we
with
the
p1
we're
using
the
md
cpu
server,
basically,
is
the
hp
385
type
and
for
p2
worker
we're
also
using
hp
servers.
Basically,
it's
a
hp
server
with
the
dual
core
external
intel
server,
but
we
added
one
gpu
and
2080
ti
into
it.
C
So
for
the
screen
server,
it's
just
a
normal
storage
server,
micro,
super
micro
or
any
types
it
will
be.
Okay
and
vending
has
already
introduced
lots
of
details
about
how
to
assembling
servers,
how
to
doing
either
those
and
storage
plug
plus
how
to
scan
them.
So
I
want
to
repeat
his
work
basically
like
we
have
very
similar
design,
except
that
we
didn't
split
the
p2
and
the
c1
c2,
so
p2
and
72.
We
are
in
sandbox
with
one
gpu
card
by
the
way,
and
then
we
put
everything
in
a
data
center.
C
So
we
are
using
data
center
for
mining
and
we
have
a
third
center
in
montreal,
which
is
it
can
support
30,
kilowatts
per
rack
so,
basically
like
with
certain
vehicles,
we
can
have
all
the
servers
up
to
20
in
the
same
rack
and
on
the
rack
server.
We
have
one
node
server
running.
We
have
one
miner
and
about
three
to
four
p1
servers
with
p2
servers
working
together.
C
So,
according
about
that,
we
added
the
money
monitoring
scheduler,
which
can
help
you
monitoring
any
service
downtime.
Anything
like
overuse
used.
We
have
an
anti-spend
module
because
early
is
the
date.
When
was
there
the
number
one
in
north
america?
We
get
an
attack
from
traffic.
At
that
time
we
build
a
new
anti-spam
shield
which
can
detect
a
malicious
behavior
and
add
to
the
federal
policy
to
block
them
under
the
schedule
running
every
day
to
get
detect
those
behavior
from
script
and
logs
and
lots
of
other
strings.
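The scan-logs, block-offenders loop Charles describes can be sketched in a few lines. Everything here is an illustrative stand-in, not NBFS's actual shield: the log format, the threshold, and the iptables-style rule syntax are all assumptions.

```python
from collections import Counter

# Toy anti-spam pass: count request-log lines per source IP and flag
# anything over a threshold for blocking. A real shield would parse
# real daemon/miner logs and feed a real firewall.
THRESHOLD = 3

def offenders(log_lines, threshold=THRESHOLD):
    # Assume the source IP is the first whitespace-separated field.
    hits = Counter(line.split()[0] for line in log_lines if line.strip())
    return sorted(ip for ip, n in hits.items() if n > threshold)

def firewall_rules(ips):
    # Emit one drop rule per offender (iptables-style, for illustration).
    return [f"-A INPUT -s {ip} -j DROP" for ip in ips]

log = [
    "10.0.0.5 GET /rpc", "10.0.0.5 GET /rpc", "10.0.0.5 GET /rpc",
    "10.0.0.5 GET /rpc", "192.168.1.9 GET /rpc",
]
print(firewall_rules(offenders(log)))
```

Running such a pass on a schedule, as he describes, turns yesterday's logs into today's block list.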
C
At
the
end,
we
have
authentication
services
built
on
top
of
the
fargo
infrastructure,
so
for
people
who
want
using
a
mobile
to
monitor
each
of
the
servers
which
they
want
to
rent,
they
can
see
how
it
works
using
the
username
password
to
get
a
credential
to
access
it.
So
this
is
the
software
in
architecture
reviewed
and
those
are
the
dashboard
we
are
using
by
the
way.
This
is
not
the
real
dashboard
we're
currently
using
now.
This
is
a
conception
that
we
use.
C
You
can
show
the.
If
you
have
a
multitude
location,
you
can
see
how
many
servers
running
the
traffic
flow
and
up
and
down
time-
and
this
is
the
one
screenshot
is
taking
from
our
production
server.
So
we
can
see
that
there
was
place,
for
example,
this
one
we
have
a
99.99
percent
is
used.
Obviously
this
is
the
p1
worker.
It
has
the
multi-threads
parallel
about
16
to
17
parallel
processing,
so
it
depicts
about.
So
we
need
to
be
careful
about
this
one.
C
We
need
to
distribute
load
to
other
servers
and
we
can
see
that
we
have
the
partition
used
about
80
or
70
percent.
We
need
to
take
care
when
it
arrives.
90
percent
were
certain
alerts
to
our
notification
and
mobile.
So
our
engineer
will
take
a
look
release
some
space
or
decide
if
we
need
to
add
a
extra
hard
disk
for
help.
C
See
the
throughput
per
hour
per
day,
you
have
about
30
terabytes
input
and
output,
so
sometimes
it
goes
too
high
means
some
service
inside
the
smart
has
problem.
Those
are
the.
If
we
want
to
check
any
of
the
services
working,
we
can
check
the
gpu
usage.
Usually
when
we
have
a
gpus
like
this,
which
means
it's
healthy,
which
the
pipeline
is
not
broken.
You
continue
doing
the
job.
This
is
that
big
gaps
between
those
spikes,
which
means
that
you
have
too
much
idle
time.
C
You
need
to
check
your
schedule
to
see
if
there
are
some
stacks,
for
example,
message
points
for
or
something
like
you
have
lots
of:
p1
jobs,
idling
or
something
will
stack
somewhere.
You
don't
know
you
need
to
log
in
and
check
it
and
we
have
a
dev
development
environment
just
for
testing.
So
writing
a
customized
scheduler
to
grab
all
the
status.
C
But we had a lot of issues, or let's say challenges, during the setup. Due to COVID-19 in Canada, the supply chain was a big problem: it already takes two to four weeks from ordering to shipping, and when parts arrive, sometimes you need to customize and upgrade them yourself, because the HP servers usually come in the original design.
C
But
if
you
want
to
add
new
pieces,
for
example,
you
want
more
memory
or
you
want
a
more
cpu
kit-
it's
not
just
expensive,
but
it
takes
one
month
or
two,
even
two
months,
shipped
to
you,
but
for
mining.
If
you
want
to
catch
up
with
the
meetings
there's,
basically
it
won't
work.
So
you
need
to
manage
this.
However,
testing
is
another
heavy
job.
We
have
so
many
servers,
lots
of
different
cables
and
lots
of
components
and
some
components
which
was
remarkable
by
the
community,
something
like
a
nvm
or
something
like
that.
C
Operation is extremely difficult, because all the hardware is working under high pressure, at almost 100 percent utilization. You might have a broken SSD, you might have a broken network card, and you need to set up lots of monitoring. And the data center is not in charge of maintenance; for those things you need to work in visits, you need to schedule the people and the service you have, and you need to have backup parts. So you need a management budget to say: okay, we have 100 servers running now.
C
We
need
to
have
about
10
to
backups
or
20
backups.
Those
things
is
a
very
complicated,
unique
monument
and
because
of
the
the
current
space
race,
we
don't
expect
it
that
we
didn't
expect
it
that
the
team
were
released
on
daily
basis.
So
originally
we
are
not
a
design
for
this,
we
think
is
that
we
have
one
thing
fit
up
and
then
we
can
just
running
after
21
days.
Everything
is
done
well,
okay,
but
actually
it's
not
like
that.
We
basically
have
every
race
per
day,
so
we're
struggling.
C
Do
we
need
to
follow
the
upgrades
or
just
waiting
a
few
release,
then
we're
doing
upgrade
that
will
save
lots
of
time
and
effort,
but
at
the
end
I
decided
to
follow
the
official
releases
page
where
to
upgrade
every
day,
even
though
we
lost
about
20
to
30
percent
of
the
output,
but
I
think
it's
worth
it
because
from
those
things
we
found
that
we
can
hear
lots
of
people.
We
found
a
lot
of
bugs
in
the
day
one,
and
we
found
a
lot
of
things
like
people
also
incumbent
need
to
help.
C
But
if
you
see
that
no
I'm
sorry.
C
Using
0.2
5.1,
so
I
cannot
answer
questions.
I
don't
think
that's
helpful
for
the
community,
so
we
decide
instead
of
gain
high
rewards.
We
decide
to
find
more
bugs,
find
them
have
more
community
members.
So
in
that
case
we're
doing
that
one
please
become
challenges.
You
need
to
management.
So
there's
this
is
some
solution.
C
We've
found
so
first
of
all
we're
building
an
erp
system
for
trading
the
inventory
and
the
quotations
for
when
the
conditions
comes
in
from
different,
renders
we
immediately
put
in
the
erp
system,
and
we
manage
our
equipment
inventory
to
see
how
much,
how
many
ssd
we
remaining,
how
many
hdd
will
remaining
and
can
we
need
some
quotations
fill
up
fulfill
our
inventory.
So
this
is
not
done
only
automatically.
We
have
invoice
system.
We
have
quotation
system,
everything
added
together.
So
when
there
is
something
happens,
there
will
be
trigger
alert
as
us
to
fulfill
the
equipment.
C
And reconnecting the worker to the miner: I think miners are also aware that workers occasionally go offline for whatever reason, disk I/O, or just a kill signal received. In any case, it will be found: we have two levels of monitoring, a watchdog running on the server and a watchdog monitoring service on a remote worker, which is our own work.
C
So
basically,
we
have
another
distributed
system
working
with
a
fraction
distributed
system
together,
it's
a
little
bit
complicated
and
also
we've
been
using
lots
of
container
based
technology
for
continuous
delivery,
so
building
the
anti-spam
system
to
monitor
those
management
behavior
to
block
those
servers.
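The two-level watchdog mentioned above can be sketched as a simple poll-and-restart loop. A minimal sketch under stated assumptions: the health check, the restart action, and the failure threshold are all hypothetical stand-ins for NBFS's in-house service.

```python
import time

# Toy watchdog: poll each worker's health check; after N consecutive
# failures, invoke a restart action. Names and thresholds are illustrative.
MAX_FAILURES = 3

def watch(workers, check, restart, rounds, sleep=0.0):
    failures = {w: 0 for w in workers}
    restarted = []
    for _ in range(rounds):
        for w in workers:
            if check(w):
                failures[w] = 0          # healthy: reset the counter
            else:
                failures[w] += 1
                if failures[w] >= MAX_FAILURES:
                    restart(w)           # e.g. reconnect worker to miner
                    restarted.append(w)
                    failures[w] = 0
        time.sleep(sleep)
    return restarted

# Simulated run: worker "pc1-2" is down the whole time.
down = {"pc1-2"}
log = watch(["pc1-1", "pc1-2"], check=lambda w: w not in down,
            restart=lambda w: None, rounds=3)
print(log)  # ["pc1-2"] after three consecutive failed checks
```

The second level he describes would be the same loop run from a remote machine, so the watchdog itself is not a single point of failure.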
C
So
after
that
we
didn't
suffer,
the
problem
of
the
log,
misses
the
wind
post
or
lost
low
power
due
to
the
attack
to
the
message
attack
to
your
miners
and
in
order
to
find
the
best
way
to
manage
those
hardware
p1p2
how
to
working
them
together,
I
go
to
a
simulator
which
can
accelerate
the
process
using
deep
learning
technology,
so
you
can
simulate
thousands
of
times
how
the
workers,
working
with
each
other
and
exceptional
handling,
use
the
software
instead
of
you
defining
one
configuration
you
put
it
on
running
and
you
spend
another
12
hours,
then
you
know
that
okay,
that
works
or
not.
C
No,
we
don't
do
that,
we're
using
dividing
technology
to
involve
those
systems
and
it
works
very
efficiently.
It
can
self
involve
a
self
evolved
with
a
job
process.
C
Yeah
and
here
comes
our
future
plans,
so
we
would
like
working
with
china
suppliers
for
a
lot
of
important
parts
and
to
scale
up
the
money,
and
we
want
to
fund
the
partners
in
china
doing
the
marketing
and
the
coal
mining
in
north
america,
because
north
america
for
now
is
still
low
in
storage
compared
to
china
and
asia.
So
we
wanted
to
working
with
them
together
to
building
another
mining
location,
a
bigger
nine
location
in
canada
or
even
us
so,
and
we
have
very
good
relationship
with
the
local
government.
C
We
can
help
them
to
working
with
the
local
government
and
how
to
pass
the
compliance,
follow
the
law
and
everything
locally,
and
we
would
like
to
contribute
to
code
for
operating
and
testing,
for
example,
those
anti-spam
software,
those
continuous
monitoring
software
components.
We
want
to
contribute
as
soon
as
it
is
mature
enough
for
open
source,
and
then
we
want
to
working
more
with
other
management
committee.
C
We
get
lots
of
help
from
the
community,
we
get
lots
of
interesting
information
and
they
have
us
a
lot
in
the
early
days
and
we
want
working
with
them
and
contributes
yeah.
This
is
our
team,
and
this
is
me
actually.
Last
year
I
was
visiting
china.
This
is
our
office.
Now
our
people
install
the
servers
and
developers
coding,
testing
everything
yeah,
that's
pretty
much
for
our.
A
Awesome, thank you so much, Charles. That was really, really interesting, and I think it's especially valuable to hear about some of the problems you faced and how you've overcome them, the solutions that you implemented. So that's really, really wonderful. And I know that we've seen a lot of bug reports come in from your team that have really helped improve the system, so thank you so much for that.
C
Yeah, for our system now, we have about twenty percent of redundancy. Usually, if we have one or two P1 workers offline, the other workers will take more jobs; we usually reserve twenty percent of the capacity. That's on the software side, and on the hardware side we have components for replacement as well. Usually, if a system is down, we need about six hours to get it from offline back online.
A
Wow,
okay,
great,
thank
you
so
much.
Another
question
here
from
ask
ender
when
p2
plus
c1
plus
c2
will
p
go
to
another
machine
to
continue
c2.
If
c2
comes
first,
will
it
start
c2
first.
C
Yes,
that's
exactly
the
problem
we
are
facing
now,
so
that's
some
people
was
asking
us
to
rewrite
some
scheduled
customized
scheduler.
To
so
can
better
arrangement,
those
sequence
and
that's
something
the
next
step
we
want
to
do
after
the
space
race
is
completed.
We
would
like
to
write
some
code
more
deeper
about
the
scanning
service.
C
I
think
why
is
working
on
that
is
that
he
has
something
is
interesting,
can
reduce
the
p1
to
101
119
minutes,
so
that
will
be
interesting,
but
I
I'm
thinking
about
that
because
for
me
I
really
don't
want
to
touch
this
part
of
code.
I
think
the
team
can
do
the
best
job
for
that,
but
if,
when
magnets
are
launched-
and
we
still
don't
have
that
one
ever
service
considerably
write,
it.
A
Awesome
well,
it's
like,
like
we
say
it's
like
a
community
effort,
so
yeah
definitely
like
always
welcome
contributions
for
everybody.
Okay,
one
last
questions
for
you,
charles:
how
are
you
monitoring
infrastructure.
C
So,
for
instance,
we
have
also
software
working
together,
mostly
was
created
by
ourselves
previously
in
the
cloud
computing.
So
we
have
the
software
monitoring
the
rack,
the
hardware
so
with
basically
you
they
provide
because
they
are
servers,
they
have
ipmi.
So
you
can
read
the
disk,
the
temperature,
everything
you
can
read
it,
and
so
the
data
matrix
in
collection
is
using
graphing
for
sure,
but
for
the
service
jobs.
That
is
another
service.
Writing
by
myself.
We're
writing
a
service.
C
We
create
a
database,
so
we
go
through
all
the
logs
with
tokens
using
the
api
talking
to
the
bluetooth
api
and
uses
a
minor
api.
We
gather
the
real-time
information
into
the
installation
database.
Then
we
can
doing
the
data
analyze
we
won't
take
the
to.
We
won't
describe
it
as
a
big
data,
but
we
do
using
lots
of
big
data
technologies
to
extract
those
informations
to
building
the
graphs.
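The collection loop Charles describes (poll the miner API, stamp each sample, append it to a time-series store for graphing) might look roughly like this. The API response shape and metric names are invented for illustration; the real system talks to the actual Lotus and miner APIs.

```python
import json
import time

# Toy metrics collector: poll a (stubbed) miner API and append
# timestamped samples to an in-memory series, the shape a time-series
# database behind Grafana would ingest. fetch_status is a stand-in.

def fetch_status():
    # Stand-in for a real miner API call; fields are hypothetical.
    return {"sealing_jobs": 6, "gpu_util": 0.93}

def collect(series, fetch, now=time.time):
    sample = {"ts": now(), **fetch()}
    series.append(sample)
    return sample

series = []
collect(series, fetch_status, now=lambda: 1000.0)
print(json.dumps(series[0]))
```

Injecting the clock (`now=`) keeps the sampler deterministic under test, the same reason a production collector records the poll time rather than the store's insert time.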
A
Awesome
very
cool
charles,
listen!
Thank
you
so
much
again
for
taking
the
time
to
share
with
us
today
and
very
best
of
luck
to
you
and
the
team
for
the
rest
of
the
competition.
A
Okay,
our
final
presenter
for
today's
show
and
tell
is
wondertan
who
is
competing
in
the
european
continental
board,
with
an
impressive
five
six
terabytes
in
storage
added
to
the
network.
Wonderton.
Are
you
with
us.
D
So let me start. I'm Hlib, Wondertan. I live in Ukraine, and I decided to go into Space Race for many reasons. This talk will be much different from what you've seen before: there won't be any kind of special hardware, or a big team working on updating the code.
D
This
is
kind
of
like
just
just
a
simple
miner
who
wants
to
commit
some
storage
for
file
point
and
maybe
help
it's
to
evolve
right
now
or
see
any
future
perspective
from
it
kind
of
this
thing.
So
I
would
like
to
tell
you
why
file
point.
D
Then I saw the Space Race event and decided to join, even though I had a really short time frame for that, because I wasn't in Ukraine at the time.
D
To
do
that,
and
I
had
like
actually
two
weeks
before
space
race
to
prepare
for
them,
so
those
like
last
month
for
me
was
kind
of
intense
to
manage
all
of
this
while
file
coin
more.
I
really
think
that
file
point
is
has
great
value
for
a
crypto
community
and
for
like
technical
evolution
of
humanity
at
all,
it's
kind
of
different.
They
have
really
strong
economic
model
having
power
having
storage
as
your
stake
in
the
network,
provable
state,
approvable
storage.
D
This
is
really
cool,
so,
like
I'm,
really,
I'm
sure
that
file
coin
is
will
be
really
helpful
and
demanding
in
future
and
be
very
popular
crypto
projects.
So
I
decided
to
join
it,
that's
kind
of
well,
so
the
team
and
investor.
That's
a
mistake.
Team.
D
I
would
like
to
say
here
so
the
team
is
only
one
person
me
who
decided
to
do
that
and
found
some
investors
for
this
to.
I
convinced
them
that
this
is
a
really
great
idea.
One
of
them
is
my
father
and
other
is
my
like
my
good
friend
also
I've
put
some
money
on
it
and
we
bought
a
total
like
seven
rigs.
For
this,
that's
that's
kind
of
a
big
amount
of
money
for
just
mining
in
your
basement.
I
think,
but
that's
that's
all
so
about
hardware
and
architecture.
D
Getting
professional
server
hardware
in
ukraine
is
is
an
issue
to
do
this
in
two
weeks,
so
the
fast
decision
was
to
to
buy
a
lot
of
available
hardware
and
available,
like
imd
imd
processors
like
gpus,
that
are
popular
in
gaming
sphere.
D
Seven
machines
that
have
same
specs,
they
have
imd
cpu
3790x.
They
have
geforce
gtx
2008
ti
three
terabytes
of
ssds,
like
that.
This
is
very
popular
setup
that
was
around
on
slack,
so
I've
just
decided
to
use
it
and
to
mine
about
architecture.
So
this
is
architecture
I
want
to
show
you.
This
is
just
a
photo
of
custom-made
shells
for
this,
so
this
is
seven
machines,
seven
rigs
that
are
interconnected
with
the
10
gigabytes
switch
and
that's
all
so.
D
Currently,
there
are
just
seven
separate
miners
that
are
combining
space
race,
but
in
future
like
before
may
natas,
I
think
I
will
still
have
time
to
find
a
like
best
way
to
use
this
hardware
efficiently
for
all
the
committing
steps
not
to
have
any
of
the
hardware
just
sitting
around
without
work
to
combine
them
in
one
minor.
That
would
be
a
very
interesting
thing
to
resolve
so
yep.
D
Next,
so
also,
I
would
like
to
tell
you
about
their
experience
in
a
space
race,
because
when
you
start
to
dive
into
and
try
to
understand,
like
how
everything
works
and
how
to
set
up
everything,
you
have
only
two
options
for
space
phrase:
there's
dogs,
documentations,
maybe
specs
and
slack
so
docs
and
specs,
are
aren't
very
helpful.
D
They
just
tell
you
how
to
do
some
basic
thing
on
how
to
install
dependencies,
maybe
and
initialize
your
miner,
but
nothing
more
special,
because
when
you
dive
deeper
into
sectors
how
she
doing
works,
what
are
the
steps?
D
How
this
all
works?
What
time
it
takes
for
every
step
to
complete
how
to
improve
work
distributed
in
scheduler?
What
workers
to
do
how
to
set
up
workers,
and
do
you
need
them?
There
is
a
lot
of
things
you
need
to
understand
before
actually
starting
to
mind,
so
I've
got
a
lot
of
nights
just
reading
the
and
finding
the
answer
for
my
questions
since
like
if
somebody
already
asked
them
to
understand
what
and
how
everything
works
like
take
everything
by
pieces
and
to
somehow
like
create
a
whole
picture.
D
In
your
mind,
also
as
a
go
developer,
I
had
got
some
experience
and,
like
just
I
just.
D
Code,
because
it
was
this
simply
then
understanding
it,
some
concepts
from
slack,
and
even
there
is
a
lot
of
misinformation
there.
D
There
are
three
like
four
parameters
in
configuration
for
ceiling
like
the
max
deal
sectors,
all
this
stuff
and
they're
really
not
intuitive
from
like
from
the
first
side,
you
see
them
and
how
this
work,
what
what
they
are
doing
and
a
lot
of
people
telling
different
stuff
on
slack
that
are
not
meshing,
and
I
just
decided,
like
okay,
I'll,
read
the
code
and
understand
what
it
does,
and
that
took
me
much
less
time
to
understand
that
trying
to
to
get
this
from
slack
like-
and
I
I
got
my
hardware
like
two
days
before
the
start,
so
there
was
a
huge
rush
and
I
got
many
mistakes
to
actually
stop
all
of
this.
D
First
of
all,
like
I
understood
that
I
can
do
one
minor
in
my
in
my
setup
and
just
to
run
workers
on
every
machine
and
the
scheduler
will
be
able
to
like
do
the
whole
work
itself,
but
then
I
realized
that
that
isn't
working
and
I
haven't
no
other
options
just
to
set
up
like
a
minor
on
every
worker.
D
So
I've
done
a
lot
of
mistakes
and
lost
a
lot
of
time
to
actually
gain
some,
some
more
terabytes
and
and
actually
like.
My
current
position
in
europe
would
have
been
higher
for
the
reason
I
said,
like
I've
lost
some
time
on
understanding
things
and
right
now,
like
I've,
lost
a
lot
of
power
through
post
submission
failures.
For
various
reasons.
D
I
think
you
know
them
on
slack.
So
I'm
like
my
current,
like
power,
is
about
like
12
13,
not,
I
think
12
terabytes
is
visits.
D
So yes, and thanks, that's all I wanted to tell you. This is, I think, not the usual miner-show presentation, but I hope you liked it. So, thanks!
A
Yeah,
that
was
that
was
awesome
and,
I
think,
actually
ben
our
first
speaker
had
a
kind
of
similar
approach
as
a
kind
of
independent
minor.
A
So
it's
just
great
to
see
like
a
real
variety
of
approaches
to
falcoin
mining
and
also
great,
to
see
you
moving
back
up
the
table
again,
like
you
said,
you
lost
power
there
for
a
while,
but
it
seems
like
you're
and
there's
stop,
there's
still
quite
a
few
days
to
go
in
the
space
race,
so
I'll
be
excited
to
kind
of
watch
you,
as
you
continue
to
progress
in
the
competition.
A
There's
a
question
here
from
you
from
ben.
Actually,
if
you
run
the
servers
as
seven
individual
miners,
then
one,
why
did
you
set
up
a
10
gigabit
network.
D
Good question. I originally bought it because I thought they would exchange some data, but with this setup, just to be honest, that switch just sits there without work. It's for a future upgrade, where I'll combine this all into one miner, and then the 10-gigabit switch there is a must.
A
I
think
I
think
benjamin's
gonna
have
to
win
the
community
champion
award
again
this
week.
He's
he's
really
been
awesome,
and
I
know
that
he's
super
active
in
the
slack
channels,
and
I
know
that
you've
been
helping
people
to
leave.
So
thank
you
so
much
okay.
A
Well,
unfortunately,
that's
the
end
of
the
minor
show
and
tell
today
I'd
like
to
close
the
session
by
saying
a
huge
thank
you
to
our
wonderful
presenters,
ben
charles
and
leave
for
sharing
their
falcoid
mining
expertise
and
experience
with
us
and
to
all
of
you
for
tuning
in,
if
you'd
like
to
present
at
the
next
show
and
tell
them
we'll
be
posting
a
sign
up
form
in
the
file
coin
slack
shortly.
A
You
can
also
catch
up
on
all
of
the
events
that
have
been
taking
place
throughout
space
race,
on
our
youtube
channel,
which
we'll
post
in
the
chat
below
and
we'll
also
link
to
in
the
far
coin,
slack
we'll
be
announcing
the
next
minor
show
and
tell
very
soon
in
the
meantime,
enjoy
the
rest
of
your
week
and
your
weekend
and
happy
mining.
Thank
you.
So
much
take
care.