From YouTube: Textile Office Hours - Dec 2, 2020
Description
Have questions on Textile tech like Powergate and Buckets? Ask here!
Keep up with events for the Filecoin community by heading over to the Filecoin project on GitHub:
https://github.com/filecoin-project
Check out the Filecoin community resources:
https://github.com/filecoin-project/c...
And stay connected on Filecoin Slack:
https://app.slack.com/client/TEHTVS1L6
A
So my thinking for today was just to kind of open it up. If anybody here — you, or John, who's joining — has any questions or specific things to get into, I'm happy to jump into those. Otherwise I was going to demo this new tool that we're working on to help people get data into Filecoin more efficiently.
B
Well, I'll just give you the background. My full-time job now is working with our Filecoin rigs, and also doing some distributed storage for a startup company branched off CBT Nuggets. I was a CBT Nuggets trainer for years, and now I'm working on servers again. One of the things we want to do is leverage IPFS for virtual lab storage and also, to be honest, make some money with Filecoin.
B
So looking at the Textile stuff, Buckets seems to be the perfect tool for that type of idea. We want to be able to have our client — our software client — pull data from IPFS, and storing it in a bucket seems ideal. But I haven't had the time; this is all about two weeks old for me, so I have not had the time to even look into what the logistics are.
B
I
know
that
my
boss
would
not
let
us
use
the
hub
that
exists,
but
I
see
that
you
can
install
the
hub
locally
and
use
your
own
hub.
I
was
hoping
there
would
be
a
bunch
of
people
here
asking
better
questions
than
I
could
and
I
could
be
a
fly
on
the
wall,
but.
A
Yeah, well, one thing just to direct you toward the solution you're looking for: running the hub on your own is kind of a complete product platform, so it's not what you want if you just want Buckets. But Buckets actually has a daemon, so you can run Buckets all on your own, in front of whatever kind of user-management or bucket-ownership system you want, and it will handle all the file syncing, folder updating and management, pinning to IPFS, all of that, for you. And so you can find that. Are you familiar with Go? Do you have any Go experience?
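To make that concrete, here is a minimal Go sketch of the "your own system in front of buckd" idea. All the names here (BucketsClient, PushPath, Ownership) are hypothetical stand-ins, not the actual buckd API; the point is only the separation of concerns, where buckd does the syncing and pinning and your layer decides who owns what.

```go
package main

import (
	"context"
	"errors"
)

// BucketsClient stands in for a client talking to a locally running buckd.
// The method name here is illustrative, not the actual buckd API.
type BucketsClient interface {
	PushPath(ctx context.Context, bucketKey, path string, data []byte) error
}

// Ownership is whatever user-management system you already run; buckd does
// not need to know about it.
type Ownership interface {
	BucketFor(userID string) (bucketKey string, ok bool)
}

// Service is your thin layer in front of buckd: it decides who may write
// where, then delegates the syncing/pinning work to the daemon.
type Service struct {
	owners  Ownership
	buckets BucketsClient
}

func (s *Service) Upload(ctx context.Context, userID, path string, data []byte) error {
	key, ok := s.owners.BucketFor(userID)
	if !ok {
		return errors.New("user has no bucket")
	}
	return s.buckets.PushPath(ctx, key, path, data)
}

func main() {}
```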
A
It's
okay
programmer
with
it,
yet
you
don't
have
to
know,
go
to
use
buckets
it
actually
in
the
textile
in
our
main
textile
repo
textile
dash
text
style,
there
there's
a
bunch
of
builds
that
are
spit
out
of
every
release,
and
one
of
them
is
actually
the
bucket
stamen,
so
you
can
run
it
as
a
binary.
A
If
you
want
right
now,
I
was
just
mentioning
that,
because
that
package,
it
is
its
own
sort
of
self-contained
package,
but
it
lives
in
that
repo
and
that's
just
because
of
the
way
that
go
packages
are
often
architected
where
they
just
live
in
subfolders
within
a
larger
architecture.
So
there's
a
folder
in
there
called
the
buckets
daemon
bhakti.
A
I
think,
and
that's
where
that
build
spits
out
of
so,
if
you
ever
want
to
like
tweak
the
code
or
like
get
into
it,
that's
where
that
exists,
but
you
don't
have
to
jump
right
into
there
because
you
could.
You
can
just
go
grab
that
binary
and
run
buckets.
However,
you
you
envision
running
them
all
right
cool
and
we
we
also
have
a.
We
have
a
couple
slack
channels
in
the
file
coin:
slack
if
you're
there.
A
Slack
as
well,
okay,
sweet
yeah,
so
just
ping
us
if
you
have
any
questions
like
just
even
navigating
and
you're
looking
for
stuff
or
or
you're,
trying
to
think
about
the
white
right
like
way
that
these
pieces
could
fit
into
what
you're
imagining
okay
always
happy
to
like.
I,
I
might
take
you
up
on
that
bounce
ideas
off
of
or
try
to
point
you
other
two
other
examples
or
or
whatever
so
cool,
and
I
see
john
finally
made
it
in
so
john.
A
I
was
just
mentioning
I'm
going
to
I'm
going
to
kind
of
open
it
up
for
any
questions
you
guys
have
here
for
a
minute,
but
if
we
don't
have
many
questions,
then
I
was
gonna
jump
into
a
demo
of
a
new
tool.
I
think
might
be
interesting
to
to
you
actually.
C
Okay,
is
it
in
lieu
of
hub
or
it's
just
like
an
add-on
to
hub.
A
It's
it's
going
to
be
an
add-on
to
hub
it's
or
it's
in
development,
and
I
think
the
first
release
may
not
be
directly
useful
on
the
hub,
but
by
the
second
release.
We'll
have
it
something
that
you
could
use
on
the
hub
as
well.
A
Cool, cool. All right, so why don't I jump into that, if you guys want? The reason I wanted to demo it today is that it's in the earliest phase of development, where what would be really valuable is if you have use cases or workflows where you're hitting challenges and you think this could fit.
A
We'd love to know what features it needs in order to be really complete for your needs. The origin of this project is that in the first phase of Filecoin launching, we had a lot of teams trying to build pipelines for getting data onto the network. Those pipelines often involved many different storage deals that they wanted to create, as parallel as possible, with as many miners as possible. They needed to manage how much data they were caching in IPFS and how many resources they were using, and they needed to manage potentially errored deals and recovering from those. We also saw use cases where developers were trying to blast a single deal at ten different miners at a time, taking whichever ones succeeded and then filling in the others. That flow could be zipped up and made more optimal.
A
And
so
this
tool
is
we're
right
now,
calling
it
the
filecoin
data
transfer
service,
and
so
it's
just
gonna,
be
called
fts
in
the
command
line,
and
the
idea
is
that
you
can.
You
can
feed
the
fts
any
sort
of
format
of
of
tasks,
and
it
will
take
those
tasks
and
start
making
them
conform
to
whatever
storage
you
want.
So,
for
example,
you
can
take
a
folder
of
a
folder
that
are
a
directory
that
contains
many
different
folders
and
potentially
large
files,
and
it
will
look
at
that.
A
It
will
look
at
that
directory
and
turn
every
single
one
of
those
into
a
new
new
storage
steel
request.
It
will
build
a
pipeline
and
it
will
start
queuing
those
up
to
to
move
them
into
file
coin
and
and
ipfs.
Actually,
if
you
want
so,
the
idea
is
that
this
would
actually
use
buckets
on
the
hub
in
the
in
the
near
future.
Right
now,
it's
using
powergate
directly.
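As a rough illustration of that directory-to-tasks step, here is a small, self-contained Go sketch assuming the behavior described above (every top-level folder or loose file becomes one task). The names are illustrative, not fts's actual internals.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Task is one unit of work destined to become a storage deal request.
type Task struct {
	Name  string // top-level entry name
	Path  string // path to the folder or file
	IsDir bool
}

// discoverTasks lists only the first level of dir, mirroring the behavior
// where every folder (or loose file) in the root becomes its own task.
func discoverTasks(dir string) ([]Task, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var tasks []Task
	for _, e := range entries {
		tasks = append(tasks, Task{
			Name:  e.Name(),
			Path:  filepath.Join(dir, e.Name()),
			IsDir: e.IsDir(),
		})
	}
	return tasks, nil
}

func main() {
	tasks, err := discoverTasks("./tasks")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, t := range tasks {
		fmt.Printf("would create storage task: %s (dir=%v)\n", t.Name, t.IsDir)
	}
}
```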
A
We
actually
want
it
to
be
able
to
do
either
depending
on
what
a
developer
wants,
but
it
the
developer,
doesn't
need
to
switch
tools
in
order
to
change
the
back
end
that
they're
using,
but
the
idea
would
be
that
it
would
use
buckets
really
effectively
push
all
push
the
data
for
a
deal
into
a
single
bucket.
A
Wait
for
that
to
confirm
and
then
kick
off
the
archiving
flow
and
then
actually
the
another
really
cool
thing
it
does.
Is
it
compiles
all
the
results
along
the
way,
and
so
you
have
a
nice
structured
data
at
the
end
or
or
actually
along
the
along
the
way
for
what
jobs
were
created.
What
deals
came
out
of
it?
What
are
all
the
cids?
How
do
they
map
to
the
original
tasks,
and
so,
like?
A
I
said
right
now:
if
you
pointed
at
a
directory
in
the
future,
you
could
pipe
deals
to
it
and
have
it
be
a
continuously
running
process
or
you
could
you
could
you
know
feed
it,
a
csv
of
urls,
for
example.
So
let
me
jump
in
to
show
you
that
what
we've
got
here
all
right,
so
quick
preview
of
the
file
coin,
data
transfer
service,
so
just
called
fts
in
the
command
line,
and
you
can
kind
of
see
the
it's
mvp
here.
A
So
very
simple:
there's
just
the
one
command
to
run
this
thing,
so
it
takes
a
lot
of
different
knobs
for
tuning
how
your
pipeline
runs,
and
so
that's
just
because
a
lot
of
this
is
running
on
your
system.
So
how
many
different
concurrent
deals?
Do
you
want
to
run?
A
What
will
the
back
end
support
so
that
you,
you
kind
of
want
this
client
to
know
about
what
the
limits
of
your
either
your
hub
account
are,
or
the
powergate
endpoint
are,
and
so,
but
the
primary
input
method
here
is
just
to
pass
it
a
folder
with
some
organized
tasks,
and
so
I
have
that
here.
So,
let's
just
open
this
folder
up
and
I'll
show
you
so
here
I
have
a
bunch
of
different
folders
each
containing
a
data
set.
A
You
could
also
do
you
can
also
do
let
me
just
you
can
also
do
files
right
in
in
the
in
this
root
here
and
it
will
they'll
treat
either
one
as
a
as
a
different
task.
So
right
so
so
there
are
all
my
tasks
that
I
want.
I
want
to
push
to
filecoin,
so
all
I'm
going
to
do
is,
do
ff
fts
run
and
then
I'm
gonna
give
it
my
folder
and
I
wanna
yeah.
A
So
I
wanna
pipe
these
to
the
standard
out,
because
I
just
wanna
see
the
results
here
and
I'm
just
gonna.
Do
a
dry
run
to
take
a
look
at
what
this
would
do
and
you
can
see
that
it?
It
basically
is
going
to
start
a
storage
task
for
each
of
those
folders
as
well
as
we
should
see,
the
file
becomes
its
own
as
well,
and
then
it
completes
the
tasks
and
spits
out
the
outputs
here.
A
If
I
didn't
pipe
it
to
the
standard
out,
I
could
also
have
it
output
csvs
right
now
we
could
do
csvs.json
whatever,
obviously,
but
right
now
it
spits
out
csv
for
each
of
the
output
types.
The
output
types
are
errors
jobs,
so
every
every
sort
of
task
becomes
a
job
and
then
deals
and
every
every
every
task
can
become
multiple
deals.
So
I
split
those
out
into
three
different
output
files,
and
so,
if
we
remove
the
pipe
flag,
that's
how
that
would
happen.
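For a sense of what that three-file split could look like, here is a minimal Go sketch using encoding/csv. The file names and column headers are assumptions for illustration, not fts's exact schema.

```go
package main

import (
	"encoding/csv"
	"log"
	"os"
)

// writeCSV writes one header row plus data rows to a named file.
func writeCSV(name string, header []string, rows [][]string) error {
	f, err := os.Create(name)
	if err != nil {
		return err
	}
	defer f.Close()
	w := csv.NewWriter(f)
	if err := w.Write(header); err != nil {
		return err
	}
	return w.WriteAll(rows) // WriteAll flushes for us
}

func main() {
	// One job per task; each job can fan out into multiple deals.
	jobs := [][]string{{"folder-a", "job-abc", "bafy...cid"}}
	deals := [][]string{{"job-abc", "f0xxxx", "bafy...proposal"}}
	errs := [][]string{}

	if err := writeCSV("jobs.csv", []string{"task", "job_id", "cid"}, jobs); err != nil {
		log.Fatal(err)
	}
	if err := writeCSV("deals.csv", []string{"job_id", "miner", "proposal_cid"}, deals); err != nil {
		log.Fatal(err)
	}
	if err := writeCSV("errors.csv", []string{"task", "error"}, errs); err != nil {
		log.Fatal(err)
	}
}
```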
A
Okay,
so
what
is
this
doing?
It's
taking
the
limits
of
how
many
different
parallel
tasks
do
I
want
to
be
running
and
it
is
spinning
up
a
pipeline
in
each
of
those
in
each
of
those
concurrent
task
queues
and
then
it's
going
to
each
task
in
my
folder
and
putting
them
into
one
of
the
queues.
The
first
step
is,
it
goes
and
moves
this
data
onto
ipfs.
It's
doing
that
with
a
the
remote,
a
remote
ipfs
node,
in
this
case
I'm
using
powergate
as
a
back
end.
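The queue-and-workers pattern described here is standard Go. Below is a generic, minimal sketch of bounded concurrency — not fts's actual code — in which a fixed number of workers drain a shared task queue, so only that many storage pipelines run at once.

```go
package main

import (
	"fmt"
	"sync"
)

// runPipelines processes tasks with at most maxConcurrent workers.
func runPipelines(tasks []string, maxConcurrent int) {
	queue := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < maxConcurrent; i++ {
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			for task := range queue {
				// Stage to IPFS, then kick off the Filecoin deal here.
				fmt.Printf("worker %d processing %s\n", worker, task)
			}
		}(i)
	}
	for _, t := range tasks {
		queue <- t
	}
	close(queue)
	wg.Wait()
}

func main() {
	runPipelines([]string{"folder-a", "folder-b", "file-c"}, 2)
}
```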
A
So
it
moves
it
to
the
powergate's
ipfs
node,
and
so
that's,
if
you're,
if
you,
if
you,
if
you
have
used
powergate
before
that's
equivalent
to
the
staging
step
when
this
runs
on
the
hub,
that
will
be
equivalent
to
creating
a
bucket
and
pushing
the
bucket
for
each
of
these
tasks.
A
A couple of nice things it does: it will actually manage that IPFS caching layer for you, so that you don't blow up all of your caching space. It will stay under a certain level of used IPFS staging space and wait for it to clear up, and the way it clears up is by data getting moved to Filecoin.
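One common way to implement that kind of cap is a byte budget that staging must acquire and that is released once the data lands on Filecoin. The sketch below is a hypothetical Go illustration of the idea, not fts's implementation.

```go
package main

import "sync"

// StageBudget caps how many bytes may sit in the IPFS staging area at once.
type StageBudget struct {
	mu   sync.Mutex
	cond *sync.Cond
	used int64
	cap  int64
}

func NewStageBudget(capBytes int64) *StageBudget {
	b := &StageBudget{cap: capBytes}
	b.cond = sync.NewCond(&b.mu)
	return b
}

// Acquire blocks until size bytes fit under the cap.
func (b *StageBudget) Acquire(size int64) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for b.used+size > b.cap {
		b.cond.Wait() // wait for other tasks to reach Filecoin and free space
	}
	b.used += size
}

// Release is called once the staged data has been moved into Filecoin.
func (b *StageBudget) Release(size int64) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.used -= size
	b.cond.Broadcast()
}

func main() {
	budget := NewStageBudget(1 << 30) // e.g. a 1 GiB staging cap
	budget.Acquire(100 << 20)
	// ... stage the data, wait for the deal to complete ...
	budget.Release(100 << 20)
}
```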
A
So
it's
it's
going
to
move
each
of
these
to
staging
get
the
cid
kick
off
a
job
if
you
again,
if
you're
familiar
with
powergate
the
way
that
this
is
doing
it
under
the
hood
is
with
the
storage
config
and
so
there's
a
quite
a
few
more
knobs
that
I'll
add
here
for
how
you
want
these
deals
to
move
on
to
the
network.
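For context, a Powergate storage config separates a hot (IPFS) section from a cold (Filecoin) section. The Go struct below approximates that shape from memory; treat the field names as assumptions and consult the Powergate docs for the real schema.

```go
package main

import "fmt"

// StorageConfig is a rough approximation of a Powergate storage config;
// field names here are assumptions, not the exact Powergate schema.
type StorageConfig struct {
	Hot struct {
		Enabled bool // keep a live copy on IPFS
	}
	Cold struct {
		Enabled  bool
		Filecoin struct {
			ReplicationFactor int      // how many miners should hold the data
			DealMinDuration   int64    // minimum deal length, in epochs
			TrustedMiners     []string // optionally pin deals to known miners
			MaxPrice          uint64   // price ceiling per deal
		}
	}
}

func main() {
	var cfg StorageConfig
	cfg.Hot.Enabled = true
	cfg.Cold.Enabled = true
	cfg.Cold.Filecoin.ReplicationFactor = 1 // the demo stored with one miner
	fmt.Printf("%+v\n", cfg)
}
```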
A
I
think
the
default
way
is
going
to
be
that
every
every
sort
of
task
becomes
one
storage
deal
first
and
then
progressively
moves
to
more
storage
deals
up
to
whatever
limit
you
want,
as
they
are
successful.
So
it
won't
be.
It
wouldn't
be
one
task
trying
to
push
out
to
ten
different
miners.
At
the
same
time,
it
would
be
one
task
pushing
out
to
one
or
two
and
then
and
then
queuing
up
more
as
those
are
successful
and
then
moving
those
out
of
the
way
and
moving
to
the
next
ones.
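Here is a tiny Go sketch of that progressive fan-out policy, with hypothetical names: a task starts with a single deal and only attempts more, up to a target replication, as deals succeed, instead of blasting all the miners at once.

```go
package main

import "fmt"

// tryDeal stands in for proposing one deal to one miner and waiting on it.
func tryDeal(task string, attempt int) bool {
	fmt.Printf("%s: deal attempt %d\n", task, attempt)
	return true // pretend the deal succeeded
}

// replicate proposes deals one at a time and stops once target deals have
// succeeded or maxAttempts is exhausted, rather than proposing all at once.
func replicate(task string, target, maxAttempts int) int {
	succeeded := 0
	for attempt := 1; attempt <= maxAttempts && succeeded < target; attempt++ {
		if tryDeal(task, attempt) {
			succeeded++
		}
	}
	return succeeded
}

func main() {
	got := replicate("folder-a", 3, 10)
	fmt.Printf("successful deals: %d\n", got)
}
```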
A
But
you
can
see
here,
it's
it's
jumped
through
and
it's
it's
getting
a
bunch
of
successful
storage
deals
on
the
network,
moving
those
out
of
the
queue
moving
on
to
the
next
tasks
and
we're
done,
and
so
I
have
the
sort
of
all
this
debug
output
going
on.
But
the
key
thing
is
that
it's
created
these
these
outputs,
and
so
we
can.
A
We
can
see
those
right
here,
so
every
task
became
a
job,
and
so
these
are
outputs
that
a
lot
of
teams
were
trying
to
manage
how
they
created
in
their
own
pipelines,
and
so
I
sort
of
just
automated
that
you
can
see
actually
that
the
stage
didn't
get
updated
a
little
fix.
I
need
there,
but
so
every
every
every
sort
of
path
within
my
task-
folder
got
calculated.
Bytes
became
a
job
id
and
then
every
job
became
a
deal
on
the
network
and
the
way
this
worked
is
again.
A
It
used
a
storage
config
here
I
had
it
set
up
to
just
store
the
data
with
with
one
miner
and
so
every
everyone
here.
Every
successful
one
had
just
one
one
minor
success
and
the
output
here
is
a
bunch
of
nice
structured
data
again
showing
you
all
the
information
that
you
need
to
get
that
data
back
off
the
network.
A
What
minor
it's
on
with
the
proposal
see
ideas
everything
there,
and
so
so
that's
cool
and
that's
and
that's
basically
it,
and
so
you
can
take
this
fire
it
up
on
a
bunch
of
tasks.
A
Leave
it
come
back
in
a
few
days
and
check
the
results
of
moving
all
that
data
to
the
network
and
then
what
I
think
will
be
really
cool
is
making
it
so
that
you
can
just
run
this
system
and
keep
pushing
new
tasks
at
it
as
you
need
them,
and
a
lot
of
other
other
really
neat
things
that
we
hope
to
make
possible.
So
again,
the
goal
here
is
to
make
this
run
on
custom,
powergate
instances
or
on
the
textile
hub
and
really
help
you
move
lots
of
data
to
the
filecoin
network.
C
Hey
andrew
awesome,
is
it
same
thing
like
hub?
Will
it
create
will
create
links
so
that,
like
let's
say,
a
user,
you
you
push
a
whole
bunch
of
stuff
and
user?
Could
just
click
on
the
link
and
then
open
up
the
different
yeah,
the
different
okay
yeah.
A
Yeah
totally
so
the
hub
will
have
some
some
new
configurations
that
I'll
all
I'll
work
on
which
would
be
like.
Do
you
want
the
data?
Do
you
want
the
bucket
to
remain
live
on
ipfs?
So
that's
a
that's
like
the
hot
storage
in
powergate.
Do
you
want
the
bucket
to
remain
live
on
ipfs
after
you've
got
if,
after
you've
got
it
in
filecoin,
or
do
you
want
it
to
archive
and
and
and
sort
of
collapse,
and
wait
for
you
to
pull
it
back
out
of
filecoin
later?
A
And
so,
if
you
leave
it
on
ipfs,
then
all
those
links
would
be
available
to
you
exactly
and
we
can
make
those
part
of
the
output
files
as
well
for
sure.
A
Well,
there
is
a
storage
cost
on
the
hub
because
that's
like
data,
that's
like
real
data,
just
on
on
disk
on
ipfs
nodes.
C
Okay,
so
was
that
kind
of
like,
like
the
I
open,
the
account
that
you
sent
an
email
I
think
was
last
week,
yeah
and-
and
I
opened
up
an
account,
so
I
guess
like
charges
for
that
to
keep
the
stuff
on
ipfs
would
come
out
of
something
like
that.
Yep.
A
Yeah, that's possible. Actually, explain what you mean by users: are you using the Textile API keys to create users with different public keys, or is this something custom?
C
Well,
well,
something
custom
like
people,
you
know
using
my
curated
data
set
and
pulling
stuff
out
of
it,
and
so
ultimately,
I'm
gonna,
I'm
gonna,
try
and
monetize
it.
So
I'm
thinking
of,
if
I
have,
if
I
have
the
cost
of
a
of
adding
this
on
to
it
and
having
to
pay
for
ipfs,
can
I
fold
it
into
what
I
would
charge
people
to
use
the
curated
data
set
yeah.
C
Okay,
all
right
cool
in
this
new
release
or
in
releases
coming
shortly.
Will
there
be
some
way
that
I
can
sign
and
verify
my
addresses
so
that
it
would
be
included
in
sr2
phase
phase
phase
two.
A
Yeah
fully,
I
I
think
it
might
already
be
there,
but
if
it's
I
I
saw
aaron
from
my
team
sharing
the
command
to
do
it.
I
don't
recall
if
it's,
because
the
command
is
actually
already
there
or
because
it's
coming
out.
He
has
the
pull
request
out
right
now
for
a
bunch
of
changes
for
how
those
commands
work.
So
I
think
it
should
be
coming
out
shortly
if
it's
not
already
in
there.
I
I'm
fairly
sure
it's
in
there,
but
but
we
can
double
check
after
the
call
all.
C
Very, very cool. If I have any other — I know there's a question I'm forgetting, but maybe I can hit you guys up on Slack.