From YouTube: Filecoin Master Class // Intro to Buckets
Description
@andrewxhill will discuss how to use Textile Buckets, a cloud storage system built on IPFS with Filecoin backup, and best use cases.
Stay connected on Filecoin Slack:
https://app.slack.com/client/TEHTVS1L6
And dive into the Textile docs:
https://docs.textile.io/
A
Hello and welcome, everybody. Thank you for joining us today. My name is Jason, and it's my pleasure to introduce Andrew Hill from Textile. Today Andrew will be taking us through the Textile Buckets master class. And with that, Andrew, do you want to take over?
B
Sure, good morning again, everybody. I'd imagine there are a few people watching today's session who were also part of Monday's session on the Powergate.
B
Today I'm going to present something very related to the Powergate, but a totally different technology. We'll actually play with it a bit, and I'll show you how it leverages the Powergate through an API that's built on Filecoin. So if you didn't watch that master class introducing the Powergate, I definitely recommend reviewing it.
B
Just a quick summary: the Powergate is a technology built to integrate into systems that want to use Filecoin for data storage. It does a lot of things for deal management, wallet management, and deal lifecycle management, and it doesn't have strong opinions about what applications you build on top of it. It's really meant to be built into systems that already have their own user models, security models, and ideas, and then layer those on top of Filecoin-based storage. One really good example is the Textile Hub.
B
The Textile Hub is an API and platform that we build, and we'll use it a little bit today. You'll actually see Filecoin APIs enabled through the Hub, and buckets, that are actually coming from the Powergate. They will feel slightly different from the Powergate because they're just enabling the model of the Hub, and that's really the power of the Powergate: other products like it could build it in too.
B
Buckets have more of an opinion about how you store data. That opinion comes from our past work with IPFS: what are some of the difficult pieces for developers to figure out in IPFS, and our thinking about how to make that easier.
B
So let's take a look at buckets and start playing with them. I think the reason a lot of people are on this call is that they want to know how to use Filecoin, and one of the really cool things about buckets is that we built them to make some things in IPFS easier and more intuitive for developers.
B
So what are buckets? I'm going to start at the beginning, show you the kinds of things buckets solve on the IPFS network, and then jump into Filecoin, which is literally one extra command when you're building things, which is pretty cool. Buckets are dynamic directories stored and synchronized over IPFS. They build on IPFS, and they build on another technology we created called Threads.
B
For Filecoin capabilities, they build on the Powergate. When you use buckets, if you've used, say, Amazon S3, the remote nature of buckets should feel very familiar.
B
Buckets really help with the synchronization and persistence of folders on IPFS, and they do that by using the internal structure of data in IPFS, the DAG, to figure out what data you need to move to different peers without having to consume new bandwidth. So buckets actually do the diffing locally and push just the things that need to change, and then there are a lot of really useful features built around the idea that you can think: I have a directory here.
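The local diffing he describes can be sketched in a few lines: keep a manifest mapping each path to a hash of its content, and on the next push ship only the paths whose hashes changed. This is an illustrative sketch, not Textile's implementation; the real client diffs the IPFS DAG itself, and these function names are made up for the example.

```python
import hashlib

def manifest(files):
    """Map each path to a hash of its content (a stand-in for a per-file CID)."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def changed_paths(old_manifest, new_manifest):
    """Paths worth pushing: anything new or modified since the last push."""
    return sorted(p for p, h in new_manifest.items() if old_manifest.get(p) != h)

# First push: everything moves.
v1 = manifest({"a.txt": b"hello", "b.txt": b"world"})
# Second push: only the edited file needs to move.
v2 = manifest({"a.txt": b"hello", "b.txt": b"world!"})
print(changed_paths(v1, v2))  # → ['b.txt']
```

The point is that the comparison is pure local computation over hashes; no bandwidth is spent re-sending `a.txt`, whose content (and therefore hash) is unchanged.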
B
The strength of buckets really comes down to the way they can track and move your data over IPFS efficiently and intuitively. We have a really nice post on the Textile blog about the technical pieces of how this is done, but I think a really cool piece to showcase here is how buckets move data. When you create a folder on your desktop and then initialize it as a bucket, that's going to be one command.
B
The first step buckets take is to start calculating the DAG, so the structure of that data, the graph of it, in a way that IPFS knows how to communicate. The DAG breaks up your directory, and breaks up each of the files, into pieces with their own persistent identifiers. In the IPFS world, that DAG never changes: it has an address, and every time you want to reference that directory, you would use that address.
B
You need to move data, and when you move data the first time, that's easy; we do it all the time. But when you move data a second time, you can add efficiency by not moving data you've moved before, and buckets do this by using that internal structure, the DAG used in IPFS, to calculate only the pieces that have changed.
B
So if I have a bucket, I push it, and then I change a file within it, buckets will recalculate the DAG, look only at the changed nodes within it, and push those changes to the remote. So they can very efficiently move and change the state of these folders for your peers, or for the remote services you're pushing data to.
B
I keep mentioning that you push data to other places. Buckets are really good for managing IPFS data that's going to live in multiple places, and if you've built on IPFS before, this is a pretty common pattern.
B
You've probably used pinning services to do this in a simple way, but buckets do it with all the advantages I've been pointing out, the diffing and synchronization and ID management, all in one nice, simple API. One place you can push your buckets to, as I mentioned before, is the Textile Hub, our hosted API.
B
The Hub knows how to speak buckets, so you can go grab an account on the Hub, and you'll get an API key; with that API key, you can start pushing your buckets there. Why would you want to? Well, one of the biggest reasons is that you might want the data you're creating locally on your computer to also be available on the IPFS network. IPFS is great that way: people can pull it directly from your local computer.
B
But the moment you close your laptop, the network is going to have a hard time ever finding that data again if nobody else is persisting or replicating it. The Hub provides that for you: you can push these buckets, and these diffs, to the Hub so that a remote IPFS node is also hosting and publishing the data.
B
Technically, you can think of buckets as wrapped layers of protocols that make all this pretty simple. The protocols themselves are actually pretty complex, but you don't really have to know all of them in depth in order to use the tools and start moving data across the network.
B
It's the same with all really great protocols: they can be pretty layered and pretty complicated, but you don't have to know every piece of them to get a nice, simple UI at the end. Think of putting a domain name in your browser and pressing enter: it goes through many steps, moving through the layers of the protocols, to find you the content you want. Buckets provide that kind of simple UI.
B
They sit on top of this stack of protocols in the same way. At the center of the stack is IPFS and all the protocols that are packed inside of it. Around that we have a protocol called Threads; you can think of it as our database protocol on IPFS, and it helps us with moving data and permissioning data across peers. Then, finally, we have buckets and the buckets APIs for moving the binary data.
B
That binary data is associated with these databases. But again, you don't have to know each of those things in depth; that's just the map of what's happening. When you push data to remote peers, buckets can take advantage of all of these protocols to get it there, and you can use the same protocols to build buckets into other things or to get data back out, which is really cool. So you can use the CIDs of IPFS.
B
You can use the persistent identifiers of IPNS. You can use the Threads database to treat these synchronized data sets as collections in a database. You can use the permissioning of that database to share specific paths within your folders, and lots of other good stuff.
B
We were having a discussion yesterday about how much our documentation actually lags behind the features that are available, because this stuff has been coming online so fast, and it's so cool, but we're catching up with it. So you'll have to dig in a bit to find all the magical pieces, but there's a lot there. The documentation goes pretty far as far as getting you grounded, so you can navigate all this, but then you can dig in further and find a lot of really neat features.
B
I mentioned that buckets are this stack of protocols, and buckets will use those protocols to move data, but you can also use those protocols to get data back out. The data in your buckets is available over Threads. Every bucket you create, especially when you're using the Hub, will give you a DNS endpoint, a domain for always grabbing the latest version of the bucket.
B
You can use the IPFS CIDs to do things like snapshotting your bucket, or pushing those CIDs to Filecoin, which is what we'll look at today. You can use the IPNS addresses. And then, most recently, there's this final layer of protocols supported by buckets, and that's archiving buckets to Filecoin.
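Because a bucket's root CID pins down its exact contents at one moment, archiving can be pictured as keeping an append-only log of snapshot records: each archive request records the bucket's current root, and a later, changed bucket needs a new archive to be recoverable. A toy sketch of that idea; the names and the hash-based stand-in for a CID are invented for illustration and are not the Hub's API.

```python
import hashlib

def root_cid(files):
    """Toy stand-in for a bucket's root CID: hash over sorted (path, content) pairs."""
    h = hashlib.sha256()
    for path in sorted(files):
        h.update(path.encode() + b"\x00" + files[path] + b"\x00")
    return h.hexdigest()

archives = []  # append-only log of snapshots sent to long-term storage

def archive(files, at):
    """Record the bucket's exact current state at time `at`."""
    archives.append({"at": at, "cid": root_cid(files)})

bucket = {"shells/abra_alba/1.jpg": b"..."}
archive(bucket, at="2020-08-19")
bucket["shells/abra_alba/2.jpg"] = b"...."  # the bucket changed after archiving,
archive(bucket, at="2020-09-19")            # so the new root gets its own snapshot
assert archives[0]["cid"] != archives[1]["cid"]
```

Each record is immutable: the first CID still identifies the August state even after the bucket moves on, which is exactly what makes a snapshot recoverable later.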
B
This piece of buckets isn't really designed to run fully locally. If you check out the Powergate master class from two days ago, with the video online, you'll see that what we've designed is actually a set of Filecoin services inside the Powergate that's really meant to run in hosted or long-running systems.
B
With buckets, the support for archiving to Filecoin is done by pushing your bucket to the Textile Hub and saying: archive a snapshot of that. At that moment in time, it will grab the CID from IPFS, the exact current state of that bucket, and create a deal for it on Filecoin. And what's really cool about this? I think there are a lot of things.
B
There's a lot I really like about this idea for using Filecoin, but one of the really cool things hidden in here is the way it actually lets you rethink your relationship with an API. Because at the end of the day, the Textile Hub is a set of APIs, and, you know, for the last 20 years of APIs...
B
...they've really been a sort of wall that you can't see over. You push data to or pull data from an API, but you are stuck in that relationship. With this archiving component of buckets, you can use the buckets technology and all the protocols within it, and you can lean on the Textile Hub APIs to get scaling, to get access, to get offline data, and now you can use it to get Filecoin.
B
You could go to the Filecoin network directly and always get your data back out, which is very cool in that we've disintermediated our own service from between you and your data. In my mind, that's the right way to imagine the future of APIs: make them available to help you scale and build your applications quickly, but then get them out of the way as far as owning your data, or blocking you from getting your data in the future.
B
This is your data, and I think Filecoin is really the way to make sure that you'll always have it.
B
So buckets already have pretty broad language and platform support. The command line tool we'll use today is the one bundled with the hub command line tool.
B
There's also a standalone buckets-based command line tool and daemon that you can run locally, but we're not going to get into that today. Those are all built for all major platforms. Then for language-based support: everything at Textile is written primarily in Go first, so there's really complete support on the Go side; you can use the library directly, or you can use the client to do things over the API. And then in JavaScript...
B
...we have the client libraries, so you can build things on buckets, and when you're using buckets over the Hub API, these can be under all kinds of permissions. It can be you as a developer, or you can create API keys to then create buckets on behalf of your users, your end users. And as of now, all user models on the Hub have a Filecoin address, so you can use that address as a developer.
B
That's your own address; or again, you can create users in your application, and each of those users will get their own address with their own balances. They can create buckets over our APIs, or you can store data in your app for these users in their own buckets, and then give them the ability to fund and create deals on Filecoin to archive their data, to put their data on this decentralized network.
B
And the methods to use buckets are available in all those languages. Here are a couple of examples. We'll play with the first one today, which is using the command line tool (there's a stripped-down, pure buck-based CLI as well, but again, we'll use the buck services available in the hub CLI). That's the first example. The second example is Go-based, and then finally there's doing the same thing in JavaScript: just pushing a file or pushing a directory.
B
There are a bunch of different features in buckets, and as you start digging around, navigating, and figuring out how to use them, you can lean on each of these depending on what you need and what the needs of your app are. You might come and try buckets for these different reasons. You know, dynamic data on IPFS comes with challenges.
B
That's because it's sort of a mind shift, thinking in terms of content addresses, CIDs, which are static: you can never change the data behind a CID. So when you're moving folders and dynamic data, you have to constantly manage these CIDs, and buckets really help you out with that; they're multi-protocol.
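That mind shift is that a content address is a pure function of the bytes: edit the data and you don't update the old CID, you get a brand-new one, and the old address keeps pointing at the old content forever. A toy illustration, with a plain SHA-256 hash standing in for a real CID (real CIDs carry multihash and codec prefixes, but the property is the same):

```python
import hashlib

def toy_cid(data: bytes) -> str:
    """Stand-in for a content address: determined entirely by the bytes."""
    return hashlib.sha256(data).hexdigest()

v1 = toy_cid(b"shell photo, version 1")
v2 = toy_cid(b"shell photo, version 2")

assert toy_cid(b"shell photo, version 1") == v1  # same bytes, same address
assert v1 != v2  # changed bytes, new address; the old CID still means version 1

# Mutable data therefore needs a layer that tracks the "latest CID" for you;
# keeping a pointer like this up to date on every push is the bookkeeping
# buckets take off your hands.
latest = {"my-bucket": v1}
latest["my-bucket"] = v2
```

That last mutable pointer is the piece that constantly changes while the CIDs underneath it never do.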
B
I already mentioned that the API on the Hub will get you really far, really fast, for free, and this is one of the first APIs to be integrated with Filecoin, so it's already ready to start testing today. And then, obviously, when mainnet launches, hopefully we'll have worked out all the kinks and made this a really awesome production API from day one. And one thing I think we'll touch on just quickly...
B
...if I have time today, are the path-based permissions, which are really sweet. Basically, you can create encrypted buckets and then push role-based permissions to remote peers, which then tell them how to expose the data within a path. You can do that based on other people's public keys, or you can do it based on a token; we'll do it with a token today, just as a quick example. And then finally, it's open source and peer-to-peer.
B
So you can run it any way you like. With that, I'm going to jump into the actual tour of the technology now and show you how to do some cool things. The last thing I want to note in these slides is where to find us. If you are part of the Filecoin Slack already, you can find us in the bucket users channel, and we're there to support you, answer any questions, give you feedback, and debug things with you. And then a reminder:
B
If you were part of the master class on Monday, we also have a powergate users channel, so people going straight to the Powergate can find us there. And then we're going to have weekly office hours as well. I forget when ours is this week, but I think it's tomorrow, so just look out for the Filecoin Slack announcements about office hours and find me there, I think tomorrow.
B
So with that, why don't we jump over to doing some things. As always, the first thing to do is point out that you can find everything I'm going to do today written up in our documentation. If you just go to docs.textile.io, you can go into the getting started section, learn about the Hub, account setup, and installation, and then you can dig into buckets in particular later in the workshop.
B
So the first thing we need to do is install the command line tool to start working with buckets. There are a few different ways to install it, and both of them involve going to the GitHub page for the client; this is all written up in the installation and setup section as well. The two ways you can install the hub CLI: one is to go to the releases page and download the appropriate binary.
B
The buckd and hubd builds there are not for today, and you don't really need to worry about them until you get pretty deep. The only thing you need for today is one of the builds for the hub, and this is the hub CLI. We have Mac, a bunch of different Linux flavors, and Windows, so you can just download that, and I'll show you really quick.
B
It will contain a few different things. This is the CLI that you'll end up running. You can run the install script, which will just move the hub binary onto your path, so you can call hub from anywhere on your computer. So that's one way to do it. The other way is to build it from source.
B
This is not always the best way to do it, because if you build from source you may be using our latest code, which can sometimes get ahead of what we've released on the APIs. But I'll go ahead and show you this anyway, and then I'll show you how to get back onto a production release.
B
Actually, so if I go ahead and clone that, then we're going to jump in there. The way you install it now is pretty easy: there's just a Makefile in here that makes all the methods pretty straightforward. You'll only be able to do this if you have Go set up on your computer. So again, if you haven't developed with Go and you haven't set Go up on your computer...
B
...definitely just go with the binary install; I recommend that for 95% of users. But I wanted to show this for now. So I can just do make install, and it's going to go through the process of collecting all the correct dependencies for the library and downloading all those pieces. Then it's going to use Go to compile the binary CLI, and finally it's going to move it into my path, so I'll have the hub command available on my computer to start using.
B
Where was this... let me cd, let's go look in this one too, and I'll just try to do the same thing. So now I should have a hub installed. Okay, I can just do which hub, and you can see that it's installed into my Go path. And if I do hub version, it's not going to like that, but that's fine.
B
It gives me this warning that I'm not running a production build, and so I should be able to run a hub update and it should pull the latest version. But let me just show you this install: this was the version I downloaded from the releases, so I just call install on it, and now, if I do which hub... oop, let me just remove this guy.
B
You can see that it put that release into my binaries, so it's also available. And now, if I go check out the version... okay, this is actually a pretty nice piece to work through. On Macs, the OS doesn't like the fact that we're not signing these binaries yet, which we'll fix in the near future. But for now you just need to say: okay, I downloaded this from a verifiable source, which is our GitHub releases, so I'll trust it.
B
So when you get this warning (and this is all in the documentation again, in the installation section of the docs, so you can go read how to do this and follow it a bit more slowly), you need to go into Security & Privacy, where it's going to have this warning down here that says the app was blocked. You'll just tell it to allow it anyway, and then you're going to go ahead and try to run a command again, and it's going to give you a final warning.
B
This one looks a little different; it's just making sure you're sure you want to open it, and I'm going to say sure. And you can see now I'm running version 2.1, so cool, we're all set up. Let's start messing around with the buckets APIs. The hub has a bunch of different stuff in it that you can use. You're going to need to actually create an account, so you'll call hub init, but I already have an account.
B
So don't worry about that, or I'm not going to worry about that today. But if you haven't done this before, you're going to need to go through the account setup steps, which again are all here: you'll call hub init, you'll give it an email address, and you'll verify it. There's also organization management.
B
Let's take a look at the Powergate methods, because a lot of this will potentially be very useful for people on this call.
B
I can see that it's called pow, so hub pow, because it's actually passing you APIs directly from the Powergate.
B
So you can take advantage of a bunch of different Filecoin capabilities that the Powergate enables with Lotus, and use those from your command line. You can also build them into your applications. And so I can just do...
B
Okay, so these balances are in attoFIL, so no, I'm not really a quadrillionaire or whatever. But cool. So definitely check out the Powergate capabilities a bit more; I'm not going to get into them today, but they'll help you do a bit with the Filecoin network right away. So now we're going to jump into buckets.
B
I wanted to show you how to use some different data sets with a bucket. So why don't I go ahead and just create a new folder; I'm going to put my bucket in this folder.
B
So now I should have a few different folders. If you were part of the Powergate master class, this is data I was showing over there. It's a pretty fun data set, and it kind of matches the slingshot-competition type of data sets that you'll want to be bringing into Filecoin. Essentially, these are a bunch of scientific images of different shelled organisms, some different mollusks here. These are the genus and species, and then each one has a bunch of various images. Yesterday I showed how I collected and organized this data.
B
Right, so I was saying in the previous master class that there's this step that I think everybody here will need to take, which is: you'll have some data set that you want to be part of, you know, the slingshot competition, or that you're just building into your application, but you're probably going to need to do some processing to get it into a good format for the loading step. And so here...
B
...I gave a good example, and I published this on Slack as well, so you can see the code. Basically, this shell data came from a bunch of researchers, and I have the link online as well. It presents us with 60,000 images of 8,000 species, and when I downloaded the data, it was actually one big flat list of all these images, so I wanted to add a bit of structure to the images.
B
They have a bunch of indices here as well that I didn't go through the process of loading yet, but I wanted to load all of those images in a more structured format and then, finally, to create a few indices of that data as JSON files. So what I do is read through the full directory and split up the names of the files. Let me show you an original name of a file.
B
I don't have them handy, but basically each is just a concatenated name that's like genus underscore species underscore some image identifier. So I break those up, and then I create indices of the genus and indices of the species, and I nest the data into a bunch of folders: a folder for every genus, and then inside of that a folder for every species name, and then finally the image index. And here, let me actually show you.
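The restructuring step just described, splitting `genus_species_imageid` names and nesting them into genus/species folders with a JSON index, can be sketched like this. It is an illustrative reconstruction, not his published script, and the file names below are made up for the example:

```python
import json
from collections import defaultdict

def organize(filenames):
    """Turn flat 'genus_species_id.jpg' names into nested paths plus an index."""
    paths = {}
    genus_index = defaultdict(set)
    for name in filenames:
        stem, _, ext = name.rpartition(".")
        genus, species, image_id = stem.split("_", 2)
        # Nested layout: one folder per genus, one per species inside it.
        paths[name] = f"{genus}/{genus}_{species}/{image_id}.{ext}"
        genus_index[genus].add(species)
    # JSON-friendly index: genus -> sorted list of species.
    index = {g: sorted(s) for g, s in genus_index.items()}
    return paths, index

flat = ["abra_alba_001.jpg", "abra_alba_002.jpg", "mya_arenaria_001.jpg"]
paths, index = organize(flat)
print(json.dumps(index, indent=2))
```

Writing the index out as a JSON file next to the images is what lets a static page inside the bucket (like the index.html shown later) browse the data without any server.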
B
Why don't I replicate this a bit, so I can show you something kind of cool here. So if I go in here to my shared images, I'm just going to create one more folder and put everything in there, because we're going to come back to this.
B
Okay, so I've got my shared images in here, and I'm just taking a subset so that we can load them pretty quickly for the workshop. All right, so I'm in this folder with a bunch of those images, and now I want to create a bucket for them.
B
The command line tool for the hub has the hub buck commands, and it will give you a bunch of information here about the different commands that you can run.
B
So I've already cd'd into my shared images, and this is the folder that's going to become a bucket and that I'll be able to push remotely. To initialize the bucket, I just do hub buck init, and I'll create a bucket called shells; let's just make this one not encrypted to start. And finally I'll put it into... so for this part, don't worry too much; you can just use the default in most cases, and that's to put it into a thread.
B
That means choosing which thread you're going to manage the metadata in, so there I'm just putting it in the thread with all the rest of my demos. Cool. So now I've initialized it, and you can see it gave me back a bunch of links, links corresponding to those different protocols I mentioned before.
B
The hosted Hub has a bunch of different gateways for these different protocols, so you can go visit this data set through any of those gateways. So, if I want to...
B
...I can go check out the bucket here, which you can see has nothing in it yet. So the bucket was initialized, but now I want to push this data remotely. All I have to do is call hub buck push, and, like I mentioned before, it's going to go figure out what this folder looks like and start creating the DAG. Here it's figured out what all the files in that folder are.
B
It says there are 70 of them and confirms that I want to push them, and now it's just going to go through the process of pushing each of those files remotely. And that's it: just with that, we'll have our remote bucket.
B
Now, I moved this around a little bit at the beginning here because I wanted to show something pretty cool, which I think is the way a lot of projects for the slingshot competition are going to want to think about publishing data. When I publish these images, they become a bunch of URLs, a bunch of IPFS hashes, but they're not meaningful; they're not really necessarily useful for people who might find them.
B
However, in yesterday's master class I showed an example where, instead, what I did is create a bunch of structured data, so here are the full set of genus and species, and with that I then also created a single-page application: this index.html, plus the indices created by that Python script from before. The index.html can pull locally from these indices and populate a simple data visualization straight from the files here. Now, I didn't get into any real data visualization...
B
...I just populate a table that is searchable and sortable and all that good stuff. The reason that's really interesting is that then anybody who ever accesses this data again has an interface they can use to go explore the data, and I want to show that to you, because I think that's the right way for many projects to be thinking about this. So, after this completes pushing the 70 files... my network's a bit slow right now, because I'm running too many things, so that'll take a second.
B
All right, a couple more seconds on that. So then the next thing that I wanted to... oh, let me actually just show you.
B
Let me show you one other thing here. Okay, so basically, when I created this bucket, I created what is called a public bucket. When you create buckets, you can also create private buckets.
B
So this other bucket, when I initialized it, I actually initialized as a private bucket, which means all the data within it will be encrypted. And if I do a hub buck push, it's going to tell me everything's up to date, because, if you remember, I just added these folders here; when I started this tutorial, I had taken them right back out to populate my shared images, and I just added them back, so everything matches. So that's that step.
B
I was talking about diffing, so while we're waiting for this one, maybe I'll do a little diff example over here; that's pretty easy. So here's my encrypted bucket that I was mentioning, and if I go ahead and add a new file here, and then go and try to...
B
...if I try to push this bucket again, see, it's detected that there's just one file different. And, interesting, it doesn't like that. So basically, if you've used git at all, this might look familiar: it's saying that the bucket I pushed remotely doesn't match the working directory that I'm in.
B
Let's actually just stick to the first one for a second; I'll go back and show you this in a minute. Okay, so we've successfully pushed all of these images in our shared images example. Now, if I go back to the published bucket, let me just pull up the links.
B
If I go to the web page version of the bucket, there's no web page in there; it's just a folder with some images. So instead I need to go to the thread URL, which I'll navigate to here, and you can see that all my files have successfully pushed to this remote bucket as well. And so I can drill into any of those and just pull up the image directly.
B
Now, as I mentioned before, if anybody else found this data as part of the slingshot competition, it's useless; it's just a bunch of images. So why don't we add this little UI I created the other day, which will also publish an explorer for this data at the same time. So I'm just going to move this into here. I'll publish this all online; it's just...
B
So again, there's nothing in this bucket that can render it as a web page, so using this website URL, it's going to come up blank until this application completes its loading. I also want to bring up the thread-based link, because we'll show you all this data in there in just a second. Cool, I think that's it. So if I refresh that, you can see, cool, here's my bucket, now full of files, and if I go to the website URL, you should see this.
B
It seems to be something with it not liking the jQuery loading.
B
Refreshed... okay, okay, it hates it! I'm going to skip this for now; there's something kind of funky going on here.
B
Right, okay, anyway, I'm going to move on, because the thing I really want to show you is next. Pretend that that file worked fine, because what we want to do now is get it onto Filecoin. I can show you another example of this working, just asynchronously: I'll share it as part of the results of the workshop so that you can explore what I mean by pushing the application with your data.
B
So the final step I want to do is get this onto Filecoin. I'm already in the bucket folder, and there's a command for buckets called archive. Archive will create an archive on Filecoin: a snapshot of this bucket right now. So suppose I start creating deals on Filecoin for this bucket, and then a month from now the bucket has changed significantly.
B
I may want to go and create a new set of storage deals on Filecoin to get the latest snapshot stored there, so I can recover it later. There's a bunch of interesting commands associated with archiving: some around the config, some around info and statuses. So, a quick tour of those. If I just pull up the archive's default config...
B
It's going to print out a bunch of information about how this data is going to be stored on Filecoin. This isn't something that you need to modify: these defaults are set up specifically for Slingshot, so when you start archiving buckets, they will immediately be formatted and stored with the correct miners to be part of the Space Race Slingshot competition.
B
You can set specific country codes if you want to find only miners in those countries, set up how you want these archives to renew over time, and set the max price that you would pay to store that archive. You can find all the information about these configs in the Powergate documentation. We'll move a subset of this over to the bucket documentation soon, but this all comes straight from Powergate.
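The knobs mentioned here (country codes, renewal, max price) live in Powergate's cold-storage config. Below is a sketch of such a config expressed as a Python dict; the field names follow Powergate's FFS config as I understand it and may differ between versions, so treat this as illustrative and check the Powergate documentation for the authoritative schema:

```python
# Illustrative Powergate-style cold-storage config; field names are based on
# Powergate's FFS config and may differ by version -- this is a sketch, not
# the authoritative schema.
cold_config = {
    "enabled": True,
    "filecoin": {
        "repFactor": 1,             # how many miners should hold a copy
        "dealMinDuration": 518400,  # minimum deal length, in epochs
        "countryCodes": [],         # e.g. ["US", "DE"] to restrict miner location
        "excludedMiners": [],
        "trustedMiners": [],
        "maxPrice": 0,              # max price you'd pay to store the archive
        "renew": {
            "enabled": False,       # automatically renew deals before expiry
            "threshold": 0,
        },
    },
}

def miner_allowed(miner_country, config):
    """A miner qualifies if no country filter is set, or its country is listed."""
    codes = config["filecoin"]["countryCodes"]
    return not codes or miner_country in codes

print(miner_allowed("US", cold_config))  # True: an empty filter admits any country
```

The point of the default config in the talk is that all of these are pre-set for Slingshot, so you don't normally touch them.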
B
So you don't need to tweak that; it just exposes the cold storage config, and you can read about each of these fields here to learn more about how this is going to happen. The next thing I wanted to show is the hub buck archive status command, and this is something we're going to come back to in a second, because there is no status right now. Same thing with info: it should be pretty empty, because there's no archiving happening.
B
It's going to fire this bucket off to a Powergate using my Filecoin address, and it's going to kick off a bunch of deals with different miners around the world in order to archive and store this bucket on Filecoin.
B
So if I proceed, all of that's going to start kicking off. One caveat here: our Filecoin portal, the Powergate and Lotus nodes running on the hub, is just getting slammed right now with people testing the Filecoin network, which is awesome, but we've implemented a queue in order not to overload our network connection there. That queue is growing at the moment, and we'll work on setting up new nodes, or finding different ways to improve the flow of this queue.
B
But right now the queue is pretty long, and creating deals on Filecoin takes a while itself, so getting an archive of your bucket from IPFS all the way onto the Filecoin network can take upwards of a couple of days right now. Just be aware of that: fire your archive and be patient. You just need to wait for it, and it will get there. And if I call info, you can see it's not been archived yet.
B
If I call status, though, it'll just tell me to be patient, and I can come back and call status again and watch this flow go forward, with all the logs filling in as the different steps happen: negotiating with different miners, and then closing the different deals. But like I said, we'll work on improving this queue speed.
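Watching an archive progress is essentially a poll loop over the status command. A minimal sketch of that loop, with a stubbed status function standing in for a real call to the hub's archive status endpoint (the function and state names here are hypothetical, for illustration only):

```python
import time

def make_stub_status():
    """Stub that pretends the archive advances one stage per call.

    The state names are invented; a real client would parse whatever
    the hub's status command reports.
    """
    states = iter(["queued", "queued", "executing", "done"])
    return lambda: next(states, "done")

def wait_for_archive(check_status, poll_seconds=0.0, max_polls=100):
    """Poll until the archive reports done (or we give up), collecting the states seen."""
    log = []
    for _ in range(max_polls):
        state = check_status()
        log.append(state)
        if state == "done":
            return log
        time.sleep(poll_seconds)  # in real use this would be minutes, not seconds
    raise TimeoutError("archive did not complete within the polling budget")

log = wait_for_archive(make_stub_status())
print(log)  # ['queued', 'queued', 'executing', 'done']
```

Given the multi-hour deal flow described in the talk, a generous poll interval and a long timeout are the sensible defaults.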
B
So you should start seeing status changes more quickly over the coming weeks, but the deal flow itself is always going to be kind of slow. Even if it jumped right into the queue now and started making deals, it would take upwards of eight to ten hours just to move through the process. Negotiating the deal is quick, and moving the data to the miners can be quick, or medium quick, but then the miners need to get it into their storage sector, seal the data, and confirm that it's stored, and that can take eight to ten hours.
B
So even past the queue, it will take time. And that's it: once this is done, my data set and my images will all exist on the Filecoin network, which is super cool. Let me just do this really quickly to see if we can do one other thing: I just removed the bucket metadata from the folder.
B
So now hub buck commands won't know that this is an existing bucket, and I can just create a new one, though I don't think I'll have time to push this before the end of the workshop. I also realized the reason the web app wasn't working: this was an encrypted bucket, so those links weren't working because they didn't have the ability to open the encrypted data.
B
But if I had just made it open, all of these files would have worked. So once this is done pushing, I'll share the link to this bucket in our bucket-users Slack, and I'll share all these links, so you can all see what it looks like to have a remote bucket that has the data, plus a little web app, plus the ability to browse that data with an application and UI that's published with it on IPFS and, pretty shortly, on Filecoin. So with that, I think that's kind of a basic tour of buckets and the things that you can do.
B
I mentioned that you probably want to make buckets with organizations, and you can invite collaborators to buckets, so you can have multiple people pulling and adding data to buckets. And I didn't get at all into how you can use this with JavaScript in your applications, but we have a bunch of great examples and documentation online for you to check out there.
B
So with that, I'm going to take one minute to answer any questions, if there were any, and then I'll set you all loose, because in five minutes there's an office hours that I'm sure a lot of people want to take part in: the Lotus office hours. They'll be helping a lot of you figure out some of the core Lotus things, and I don't want to get in the way of that.
B
I don't see any critical questions here, which means all of you should join the Slack channel and send us your questions after this call; we'll be happy to take them over there. So let me just share that one more time. Here you go: bucket users on the Filecoin Slack. And if you're not part of the Filecoin Slack, it's just...
B
Oh, actually, we might successfully have this web app up. If you give me one more minute, I'll show you what this looks like at the end of the day. I think we have just six more images, or six more files, to push, and those should be pretty quick ones. So let's go back up to the top of all these pushes and find that URL again.
B
Where was it... okay, many files. All right, let's skip that. Okay, cool, there they are, actually. So in this case I pushed a non-encrypted bucket, which is handy if you want to publish it as a web app. If I go to this thread again, you can see it's all the same data, but here it's not encrypted, which is handy, so I can go into here and see any of my images.
B
I can see any of my images again, but like I said, I can now use the HTML to render and explore this data directly, which is super cool. And again, I didn't do any complicated data visualization here; I just created a table that lets you search and filter this data, so that you can find any of the 60,000 images.
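The explorer being described is, at its core, a text filter over an index of image entries. A toy version of that search-and-filter step (the entries and fields here are invented for illustration, not the actual data set from the demo):

```python
def search(index, term):
    """Return entries whose name contains the term, case-insensitively."""
    term = term.lower()
    return [entry for entry in index if term in entry["name"].lower()]

# Hypothetical index entries: a name plus the content identifier on IPFS.
index = [
    {"name": "cat_0001.jpg", "cid": "<cid-1>"},
    {"name": "dog_0001.jpg", "cid": "<cid-2>"},
    {"name": "cat_0002.jpg", "cid": "<cid-3>"},
]

print([e["name"] for e in search(index, "cat")])  # ['cat_0001.jpg', 'cat_0002.jpg']
```

Because each entry carries a content identifier, a result row can link straight to the file on the IPFS gateway, which is exactly what makes the table a useful front end for the bucket.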
B
This is excellent for the Slingshot competition, because now you can hand this over as not only the data set being on Filecoin, but also a simple user interface for people to explore the data. If you build those indices, it might actually be useful for people, and people can verify what each of these data points is on the IPFS network as well as on Filecoin. So that's the complete idea here. Hopefully that helps people out, and I'm going to leave it there for the day.
B
One last question is on size caps. I don't recall a total file size cap, but I think it's only limited by the size of the buckets, which are currently capped at four gigabytes per bucket. So you'll need to create multiple buckets: if you are trying to store data sets that are larger than four gigabytes, you'll need to break that data up. And then I think files are only limited by that, so when you push a file, it's going to stream...
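Given the roughly four-gigabyte per-bucket cap mentioned above, splitting a larger data set across buckets is just a packing pass over the files. A simple greedy sketch of that planning step (the cap value comes from the talk; the approach itself is illustrative, not a Textile API):

```python
GIB = 1024 ** 3
BUCKET_CAP = 4 * GIB  # per-bucket cap mentioned in the talk; confirm in the hub docs

def plan_buckets(file_sizes, cap=BUCKET_CAP):
    """Greedily assign {name: size} files to buckets so no bucket exceeds the cap."""
    buckets, current, used = [], [], 0
    # Place larger files first so they don't get stranded at the end.
    for name, size in sorted(file_sizes.items(), key=lambda kv: -kv[1]):
        if size > cap:
            raise ValueError(f"{name} alone exceeds the bucket cap")
        if used + size > cap:
            buckets.append(current)
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        buckets.append(current)
    return buckets

# Example: three 3 GiB files cannot share a 4 GiB bucket, so we need three buckets.
sizes = {"a.bin": 3 * GIB, "b.bin": 3 * GIB, "c.bin": 3 * GIB}
print(len(plan_buckets(sizes)))  # 3
```

A greedy plan like this isn't optimal bin packing, but for archiving purposes it's usually good enough, and each resulting group maps one-to-one onto a bucket you'd create and push.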