From YouTube: Bifrost Working Group 2020 Recap
Description
George Magras of the Bifrost Working Group joins us to give an update on all the things the infra team accomplished in 2020.
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
The core team is made up of the three of us: myself, Michael Burns, and Tommy Hall, but we've seen contributions from many more people, whom I've listed below.
Probably the biggest accomplishment for our team this year has been keeping up with the ever-increasing requests, unique visitors, and data flowing through the gateways. We're sort of a victim of our own success, because we quickly realized that just spinning up gateways on bigger boxes results in more people using the gateways.
All the gateways have an 800 GB hard drive for caching, and our observed SLIs are around a 150 ms average time to first byte and 99.97% uptime for this year.
We've accomplished these numbers by applying lots of tweaks: go-ipfs garbage collection thresholds, OS-level tweaks, routing tweaks, for example.
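The exact thresholds aren't given in the talk, but for context, these are the stock go-ipfs datastore/GC knobs involved, set through the real `ipfs config` CLI; the values below are illustrative assumptions for a box with an 800 GB cache drive.

```go
package main

// Illustrative values only: the exact thresholds Bifrost uses aren't
// given in the talk. These are the stock go-ipfs datastore/GC settings
// you would tune on a gateway with an 800 GB cache drive. Note that GC
// only runs when the daemon is started with `ipfs daemon --enable-gc`.

import (
	"log"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"config", "Datastore.StorageMax", "700GB"},                // keep headroom below the 800 GB disk
		{"config", "--json", "Datastore.StorageGCWatermark", "90"}, // start GC at 90% of StorageMax
		{"config", "Datastore.GCPeriod", "1h"},                     // how often periodic GC runs
	}
	for _, args := range cmds {
		if out, err := exec.Command("ipfs", args...).CombinedOutput(); err != nil {
			log.Fatalf("ipfs %v: %v (%s)", args, err, out)
		}
	}
}
```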
We noticed recently that having nodes in the Frankfurt region basically made most of the traffic get sucked into that region. That wasn't great for user experience, because we would get users from, say, Asia hitting Frankfurt just because it was the closest region in terms of network hops. We've since distributed the load better, but yeah, we've seen a big increase in unique visitors and in gateway traffic.
We've also switched from using static IPs in the go-ipfs configs to using dnsaddr suffix matching. The way that works is: when you resolve the dnsaddr for bootstrap.libp2p.io, you get a list of peers which are themselves also dnsaddrs. So if you recursively resolve those, you get multiple entries for the same peer, on different ports and describing other protocols. You can easily have a nested way of describing multiple peers from the same top-level dnsaddr.
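As an aside from the talk, here is a minimal sketch of that recursive resolution in Go, relying only on the published dnsaddr convention (TXT records at `_dnsaddr.<domain>`); the peer-ID suffix matching that go-ipfs performs is elided.

```go
package main

// A minimal recursive dnsaddr resolver. Per the dnsaddr convention,
// TXT records live at _dnsaddr.<domain> and look like:
//   dnsaddr=/dnsaddr/sjc-1.bootstrap.libp2p.io/p2p/Qm...
//   dnsaddr=/ip4/.../tcp/4001/p2p/Qm...
// Entries starting with /dnsaddr/ point at another domain to resolve;
// everything else is a concrete multiaddr (TCP, QUIC, WebSockets, ...).

import (
	"fmt"
	"net"
	"strings"
)

func resolveDnsaddr(domain string, depth int) []string {
	if depth == 0 {
		return nil // guard against resolution loops
	}
	txts, err := net.LookupTXT("_dnsaddr." + domain)
	if err != nil {
		return nil
	}
	var addrs []string
	for _, txt := range txts {
		ma, ok := strings.CutPrefix(txt, "dnsaddr=")
		if !ok {
			continue
		}
		if rest, ok := strings.CutPrefix(ma, "/dnsaddr/"); ok {
			// Nested dnsaddr: recurse into the referenced domain.
			next := strings.SplitN(rest, "/", 2)[0]
			addrs = append(addrs, resolveDnsaddr(next, depth-1)...)
		} else {
			addrs = append(addrs, ma)
		}
	}
	return addrs
}

func main() {
	for _, a := range resolveDnsaddr("bootstrap.libp2p.io", 3) {
		fmt.Println(a)
	}
}
```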
This dnsaddr setup allowed us to easily add support for WebSockets and QUIC.
Also, we've spun up an IPFS Collab Cluster. This is a three-node cluster, 80 terabytes each, so lots of room to grow there. It's currently hosting a bunch of PL websites, as well as the Filecoin parameters, which are similarly multiple terabytes.
As far as monitoring, we didn't have a lot going in; we just used plain old SSH to log in and tail logs, so we've since set up a monitoring pipeline. We use Prometheus for our metrics. For machine-level metrics like overall memory usage, network, and CPU usage, we use node_exporter, and then go-ipfs itself provides metrics in the Prometheus format.
So we get all sorts of good info, such as number of peers, Bitswap stats, QUIC stats, all sorts of very good things.
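go-ipfs serves those metrics on its API port at /debug/metrics/prometheus, so a quick way to eyeball what Prometheus would scrape (assuming the default API address) is:

```go
package main

// Dump the Prometheus metrics that go-ipfs exposes on its API port.
// Assumes the default API address of 127.0.0.1:5001.

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:5001/debug/metrics/prometheus")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(body)) // peer counts, Bitswap and QUIC stats, etc.
}
```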
And for the HTTP proxy we're operating in this case, we use mtail to parse the logs and expose the percentages of successful requests, failed requests, and so on, which we use to alert on and also to graph in Grafana.
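mtail programs are written in their own DSL; as a rough Go sketch of the same idea, assuming a combined-log-style access log at a hypothetical path: scan each line, count requests by status class, and expose the counters in Prometheus text format on mtail's usual port.

```go
package main

// Rough sketch of what the mtail setup does: read proxy access-log
// lines, count requests by status class, and expose the counters in
// Prometheus text format. The log path, log layout, and metric names
// here are assumptions, not the team's actual configuration.

import (
	"bufio"
	"fmt"
	"net/http"
	"os"
	"strings"
	"sync/atomic"
)

var succeeded, failed atomic.Int64

func scanLog(path string) {
	f, err := os.Open(path)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// In combined log format, field 8 (0-indexed) is the status code.
		fields := strings.Fields(sc.Text())
		if len(fields) < 9 {
			continue
		}
		switch fields[8][0] {
		case '2', '3':
			succeeded.Add(1)
		case '4', '5':
			failed.Add(1)
		}
	}
}

func main() {
	go scanLog("/var/log/nginx/access.log") // hypothetical path
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "gateway_requests_succeeded_total %d\n", succeeded.Load())
		fmt.Fprintf(w, "gateway_requests_failed_total %d\n", failed.Load())
	})
	panic(http.ListenAndServe(":3903", nil)) // 3903 is mtail's default port
}
```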
Some things that we would like to work on in the next year are, obviously, keeping up with increasing user demand. That mostly means scaling IPFS instances and working closely with the go-ipfs team on identifying memory leaks or tweaks that we can make.
We would like to make our repos public. We've basically extracted most of our secrets from the repo, so at some point we would like to open source it and start accepting contributions.
Also, a big one has been streamlining DMCA requests, because we've been getting lots of those. We actually have work underway to enable a double-hashed denylist that can be easily shared with other gateway operators, which would be great to ship at some point.
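The talk doesn't spell out the list format, but one plausible shape of the double-hash idea is to store sha256 digests of CIDs rather than the CIDs themselves, so the list can be shared without republishing the blocked content identifiers:

```go
package main

// One plausible shape of a double-hashed denylist: store sha256(CID)
// digests rather than the CIDs themselves, so the list can be shared
// between gateway operators without republishing the blocked CIDs.
// The normalization and entry format here are assumptions.

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

type Denylist map[string]struct{}

func digest(cid string) string {
	sum := sha256.Sum256([]byte(cid))
	return hex.EncodeToString(sum[:])
}

func (d Denylist) Add(cid string)          { d[digest(cid)] = struct{}{} }
func (d Denylist) Blocked(cid string) bool { _, ok := d[digest(cid)]; return ok }

func main() {
	dl := Denylist{}
	dl.Add("bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi")
	fmt.Println(dl.Blocked("bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi")) // true
	fmt.Println(dl.Blocked("bafkreisomeothercid"))                                         // false
}
```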
Also, we have to find better ways of testing configs before deploying to all of the gateways. Right now we have staging boxes and we do canary deploys to them, but some things are still not obvious when they break.
So we would like to have a better approach to testing these before they hit live. We'd also like to experiment with running IPFS on Kubernetes at scale. The nature of the Kubernetes networking layer makes it challenging, but if we solve that, it really makes up for it in terms of orchestration across multiple…