A: Hello everyone again, my name is Francisco. I'm a DevOps engineer at NearForm, and I've been working with Protocol Labs on the Elastic IPFS project. NearForm, the company that Paulo and I work for, is a remote-first company. We have lots of interesting projects, some of them in the web3 ecosystem, and today I want to take a deeper dive with you into the cloud architecture we have created for Elastic IPFS, leveraging AWS services.
A: You can see here lots of components that are very familiar in AWS: Lambdas, a cluster, DynamoDB, a messaging system using SQS and SNS, and of course S3 buckets. Those are all managed by infrastructure as code. We have Terraform and Terraspace managing this whole ecosystem for us, which makes it very easy to create new environments.
A: So when you want to start this from scratch, put it in your own account, and spin up dev, staging, and prod environments, one, two, three, four, it doesn't matter, it's very easy. We have the Terraspace framework on top of Terraform, which helps all of this get created with just a small set of commands. One thing that was very interesting is that yesterday I talked with some people here at the conference who had actually cloned the infrastructure repo and were able to create everything just by reading the docs and the comments. I didn't even know they were doing that.
A: I only just found that out, and it was awesome feedback. Okay, so we have these logical subsystems, the indexing, the publishing, and the peer, and each has objectives it is focused on; we're going to zoom in on those. It all starts with this client up there, which sends a message to the indexer queue, and that message just says what the location is of the CAR file that's supposed to be indexed.
A: This client can be any component, any application that has access to send that message to the queue. The client that we are using in the web3.storage and NFT.Storage projects is the bucket-to-indexer one, which is just a Lambda that receives an event whenever there is a new CAR file inside the bucket, and then transforms the message from the event JSON that AWS gives us into a simpler message that the indexer is capable of understanding and acting on.
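As a rough sketch of what such a bucket-to-indexer Lambda can look like (the queue URL variable and the message fields below are illustrative assumptions, not the actual Elastic IPFS code): it maps each S3 object-created record onto a small "index this CAR" message and sends it to the indexer queue.

    import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
    import type { S3Event } from "aws-lambda";

    const sqs = new SQSClient({});
    // Hypothetical environment variable pointing at the indexer queue.
    const INDEXER_QUEUE_URL = process.env.INDEXER_QUEUE_URL!;

    // Triggered by S3 "object created" events: forward each new CAR file's
    // location as a small, indexer-friendly message.
    export async function handler(event: S3Event): Promise<void> {
      for (const record of event.Records) {
        const bucket = record.s3.bucket.name;
        const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
        if (!key.endsWith(".car")) continue; // only CAR files get indexed

        await sqs.send(
          new SendMessageCommand({
            QueueUrl: INDEXER_QUEUE_URL,
            // Illustrative message shape: just "where is the CAR file".
            MessageBody: JSON.stringify({ region: record.awsRegion, bucket, key }),
          })
        );
      }
    }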
A: On that side we have just one single bucket, but the whole Elastic IPFS thing is agnostic to the location where the file actually is. We don't store the CAR file, we don't store the blocks; we only store positions, the coordinates of where things are. So on that side we could have multiple other buckets sending stuff to the same SNS topic, or we could have other regions with that same client doing the same things here.
A: From the Elastic IPFS point of view, that is not important. One thing that is important, though, is that even though we don't manage the buckets, once we have indexed the files you cannot delete or move them anymore; otherwise we lose the link to where the content really is. So, zooming in on the indexing subsystem: the objective is just to read through all the blocks inside the CAR file and store in these DynamoDB tables what the CIDs inside that CAR file are and what each block's position inside it is, so that the peer subsystem can later come, read these same DynamoDB tables, and provide the actual content. As soon as that Lambda is triggered, it starts reading, saving stuff here, and then sending events to both of these places. We have notification topics for whenever events are emitted from the system, so other external components can just subscribe and do whatever they want with them, and we also have a queue.
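To make that data model concrete, here is a minimal sketch of what one such index record could look like. The attribute names are illustrative assumptions, not the actual Elastic IPFS schema; the point is only that each CID maps to coordinates (bucket, key, offset, length) rather than to the block bytes themselves.

    // Illustrative shape of an index record: the CID is the key, and the value
    // only locates the block inside some CAR file in some bucket.
    interface BlockLocation {
      cid: string;        // partition key, e.g. "bafy..."
      carBucket: string;  // S3 bucket that holds the CAR file (not managed by Elastic IPFS)
      carKey: string;     // object key of the CAR file
      offset: number;     // byte offset of the block inside the CAR file
      length: number;     // byte length of the block
    }

    // Example record the indexer would write and the peer subsystem would later read.
    const example: BlockLocation = {
      cid: "bafybeigdyrzt5example",
      carBucket: "some-client-bucket",
      carKey: "uploads/file.car",
      offset: 512,
      length: 262144,
    };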
A: One thing that is interesting here about the elasticity is that for that Lambda we have a limit from AWS: it can scale up to 199 concurrent runs. But with the load that we currently have, we are way below that; we run around 15 to 20, so that's pretty fine, it's handling it very well. Okay, so the peer subsystem: now that we already have all of the blocks and their positions here, everything is available instantaneously.
A: As soon as the indexing finishes, the peer subsystem is ready to provide the content, because they share the same tables: indexing writes and the peer reads. The way it works is that we have here this EKS auto-scaling cluster that runs a minimal implementation of the Bitswap peer, and it's capable of doing HTTP byte-range requests to the S3 bucket that contains the CAR file.
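A minimal sketch of that kind of byte-range read with the AWS SDK, reusing the illustrative BlockLocation shape from above (this is not the actual Bitswap peer code):

    import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({});

    // Given the coordinates stored by the indexer, fetch only the block's bytes
    // from the CAR file with an HTTP byte-range request.
    async function readBlock(loc: {
      carBucket: string;
      carKey: string;
      offset: number;
      length: number;
    }): Promise<Uint8Array> {
      const res = await s3.send(
        new GetObjectCommand({
          Bucket: loc.carBucket,
          Key: loc.carKey,
          // Range is inclusive on both ends.
          Range: `bytes=${loc.offset}-${loc.offset + loc.length - 1}`,
        })
      );
      // In SDK v3 the body is a stream; transformToByteArray() collects it.
      return res.Body!.transformToByteArray();
    }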
A: The peer makes that request, and it goes through a load balancer; the client connection itself is a WebSocket connection over libp2p. This solves the problem of being able to handle a lot of load and scale horizontally: when there are many requests we just scale up, and when there are fewer requests we can improve our cost efficiency and scale down. From the point of view of the IPFS network, this whole cluster is actually just a single node, because all the nodes share the same peer ID and they're all behind a DNS name attached to that load balancer. So yes, we can just kill nodes and create nodes according to need. This is what makes it elastic.
A: The elasticity is based on CPU and memory usage, so more active connections should increase the number of pods. At the moment it's handling the load very well without scaling too much, but it's prepared for whenever a bunch of requests arrive. I won't talk too much about the publishing subsystem, because there's a specific talk about it in the content routing track today at 4:30 p.m. with Paulo and Alan.
A: That's because this piece is one of the most challenging ones, so it deserved a separate talk. For now, what you need to understand is that this is the piece that actually advertises to the other nodes, to the DHT and to the indexers, that we have the block available in Elastic IPFS, and it knows the DNS name and the peer ID it's supposed to advertise. The nodes themselves that are running inside the cluster don't advertise anything to the DHT.
A: Just sharing some of the challenges we had, and some we still have. The first one, like I said, is that content advertisement is challenging. Then there's the minimal required implementation: when this project started we knew pretty much nothing about IPFS, and we had to read the documentation, especially that guy over there, awesome guy.
A: He was able to read the details and understand what was needed and what could just be thrown away, and that became the minimal implementation of what we needed, which we now see being used in other products as well, so that is very interesting. We have millions of blocks being indexed per day, and we need to handle infrastructure limits from AWS, like the write capacity units and read capacity units of DynamoDB.
A: We had to make some optimizations, like starting to use batch inserts and not validating too much, for example not checking whether we already have a given block or not.
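A rough sketch of that batch-insert approach with the DynamoDB Document client, again with illustrative table and item names, writing blindly instead of first checking whether a block already exists. DynamoDB caps a batch write at 25 items, so the writes are chunked:

    import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
    import { DynamoDBDocumentClient, BatchWriteCommand } from "@aws-sdk/lib-dynamodb";

    const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
    const TABLE = "blocks"; // hypothetical table name

    // Write block-location items in chunks of 25 (the BatchWriteItem limit),
    // overwriting existing items instead of checking for them first.
    async function batchInsert(items: Array<Record<string, any>>): Promise<void> {
      for (let i = 0; i < items.length; i += 25) {
        const chunk = items.slice(i, i + 25);
        await ddb.send(
          new BatchWriteCommand({
            RequestItems: {
              [TABLE]: chunk.map((Item) => ({ PutRequest: { Item } })),
            },
          })
        );
        // A production version would also retry any UnprocessedItems in the response.
      }
    }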
A: We could just batch insert everything, which made the indexing speed much better, super fast, and kept us from running into those limitations; after some optimizations we're way below them. And we have this read-from-anywhere strategy: like I said, the bucket could be anywhere. That is awesome because it decouples the clients, the code, and the storage itself, but it also comes with new challenges, like the egress costs, which is something we've been talking about a lot: whether we should just put content in places where egress is free, like Cloudflare, or start migrating things to bring them closer to Elastic IPFS.
A: The idea is to keep this decoupled, and a lot of good solutions for it are starting to show up. Some trade-offs we had to make: whenever we're dealing with infrastructure in AWS we get 99.lots-of-nines availability, but sometimes things fail, so we need to retry. We have two retry strategies: one is inside the code and the other is at the SQS level.
A: Usually most of the problems are solved in the code, and we had to decide what to do when, say, we're indexing and there's a specific block that cannot be inserted into DynamoDB because DynamoDB just throttled us or just wasn't available at that moment. So what do we do? We decided to do linear backoff instead of exponential, because it's not a good idea to do exponential backoff inside Lambdas: it makes them run for way too long and costs more.
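A small sketch of that in-code retry with linear backoff: a fixed wait between attempts rather than an exponentially growing one, so a throttled DynamoDB write doesn't keep the Lambda alive for too long. The attempt count and delay here are illustrative:

    // Retry an operation a fixed number of times, waiting the same delay between
    // attempts (linear backoff) instead of doubling it each time.
    async function withLinearRetry<T>(
      op: () => Promise<T>,
      attempts = 3,
      delayMs = 100
    ): Promise<T> {
      let lastError: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await op();
        } catch (err) {
          lastError = err;
          if (i < attempts - 1) {
            await new Promise((resolve) => setTimeout(resolve, delayMs));
          }
        }
      }
      throw lastError;
    }

    // Example: wrap a DynamoDB write that might get throttled.
    // await withLinearRetry(() => ddb.send(putCommand));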
A: In the SQS queues, whenever there's a failure, an exception inside the Lambda, the message that started the indexer's processing goes back to the source queue and gets tried again. For that we need to specify the max receive count, which is the maximum number of tries the Lambda gets before the message is moved to the dead-letter queue. We decided to only try twice and then go to the dead-letter queue.
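That "try twice, then move it to the dead-letter queue" behaviour is the queue's redrive policy (maxReceiveCount), which presumably lives in the project's Terraform/Terraspace configuration. As a sketch, the equivalent setting through the AWS SDK looks roughly like this, with placeholder queue URL and DLQ ARN:

    import { SQSClient, SetQueueAttributesCommand } from "@aws-sdk/client-sqs";

    const sqs = new SQSClient({});

    // After maxReceiveCount failed receives, SQS moves the message to the
    // dead-letter queue for manual inspection instead of retrying it forever.
    await sqs.send(
      new SetQueueAttributesCommand({
        QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/indexer-queue", // placeholder
        Attributes: {
          RedrivePolicy: JSON.stringify({
            deadLetterTargetArn: "arn:aws:sqs:us-east-1:123456789012:indexer-dlq", // placeholder
            maxReceiveCount: "2", // try twice, then dead-letter
          }),
        },
      })
    );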
A: Then someone just takes some manual action, which also avoids spending too much on retrying things that aren't going to go through anyway, because most of the infrastructure problems are already handled in the code. As for the database, we had some discussions: should we use DynamoDB, should we use Redis? We chose DynamoDB because it's very easy to do multi-region replication of the data. We're still not doing that, but it leaves us well prepared for it, and it's been fast, it's been super fast.
A: So we still don't see the need to change that part. And there was the discussion about which container orchestration tool we should use: EKS, ECS, or Lambda. We ended up with EKS because it's the one we trusted the most when we started this. I had heard from several people who had worked with ECS that it was bad, it was terrible, but that was actually old experience of theirs.
A
I
started
reading
close
to
now,
like
some
other
benchmarks,
that
it
seems
like
it's
been
way
better,
so
might
be
something
that
we
might
change
if
it
makes
sense
sometime
and
the
alarm
does
is
totally
out
of
the
of
of
the
game
for
the
the
bitswap
pier,
for
example,
because
of
the
limitation
of
the
1999.
A: This is something we want to be able to scale, and we want to be able to control how much it scales. And just a very quick demo, just 20 seconds, of uploading a file to web3.storage. This is my personal account, and I'm already connected in my terminal there, with swarm connect, directly to the node.
B: You mentioned the database migration. What was the performance like?
A: Right, right, yeah, definitely, especially when handling big files. It was interesting: we made a database change that optimizes the batch inserting, but we also changed the schema to better decouple the blocks from the CARs and how they are associated, and when we did that, the average duration of the Lambda didn't really change that much.
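Purely as an illustration of what decoupling blocks from CARs can mean at the table level (a guess at the idea, not the actual Elastic IPFS schema): the CAR's location is stored once, and each block item only points at the CAR it lives in plus its position there.

    // Hypothetical decoupled layout: one item per CAR file, and one small item
    // per (block, CAR) association that only stores the position.
    interface CarItem {
      carId: string;   // partition key, e.g. a hash of bucket/key
      bucket: string;  // where the CAR file lives
      key: string;     // object key of the CAR file
    }

    interface BlockInCarItem {
      cid: string;     // partition key
      carId: string;   // sort key, referencing a CarItem
      offset: number;  // position of this block inside that CAR
      length: number;  // byte length of the block
    }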
A
We
were
like
between,
like
350
milliseconds,
to
500
milliseconds,
but
we
had
like
a
big
deviation
whenever
it
comes
like
a
big
file
that
could
go
like
over
one
minute,
sometimes
two
minutes
to
to
index
the
whole
thing,
and
this
is
not
a
reality
anymore.
So
it's
better
is
even
hard
to
see
the
difference
whenever
it's
a
hard
file,
a
big
file
or
a
small
file,
it
all
kinds
of
it
stays
in
the
average.
The
deviation
it
just
went
way
down
like
minutes
to
a
millisecond.