From YouTube: Saturn Technical - Diego Rodriguez Baquero
A: I'm gonna go a bit more technical into the L1 network and all the components that it has: first an overview of the L1 network, then the L1 node, the orchestrator and finally a bonus, which is the strn npm package for Saturn.
A: So the L1 network is a decentralized, server-backed web 2.5 CDN. Web 2.5 because it still runs on a lot of web2 tech, not completely web3, but it serves both web2 and web3. Some slides about the L1 network: as of this morning there are 66 nodes worldwide. Here's the map. Most of them are in the United States and North America, and you can see that we're lacking a lot in APAC, Asia and Africa.
A: So if you do have nodes there, we would love for you to join. The L1 network is ultra fast: it's 2.9 times faster than the ipfs.io gateway at the median and 1.8 times faster on average, with an under-400-millisecond response time. At the 95th percentile, all requests take under 1.1 seconds, and for the users with the fastest internet in the world, the top five percent, it's under 20 milliseconds, which is five times faster than the ipfs.io gateway. And we just saw that with Sydney.
A: That was incredible. On traffic: right now we have over 2,000 requests per second, 181 million requests per day, and we serve over 80 terabytes of data per day. In terms of reliability, we have a 99% cache hit ratio because we're over-provisioned with capacity, so we would love for you to abuse Saturn and really exercise it. Zero downtime; auto-updates that are rolled out to the network in an hour or so; and an under 0.6% error rate, which is 70 percent less than the ipfs.io gateway. And over a lot of iterations we got an A+ SSL Labs rating in terms of security.
A: So how do we access the L1 network? Retrieval clients do discovery through DNS; we're using Bunny DNS for the geolocation routing, and just through HTTP you can request any CID content that you want. We use strn.pl as the main domain for the main Saturn network, and saturn-test.network, which utilizes the test network. It has fewer nodes with less capacity, but it's always there for us to run tests and experiments. Right now we support HTTP/1.1 and HTTP/2 with TLS 1.2 and 1.3; that covers 99% of all internet clients.
A: We do not support unencrypted HTTP. And it's a drop-in replacement for the ipfs.io gateway: just switch the domain and you get the benefits of Saturn.
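The drop-in switch just described can be sketched in a few lines; the `/ipfs/<cid>` path layout is assumed to follow the usual public-gateway convention, and the helper name is mine, not Saturn's:

```javascript
// Sketch of the "drop-in replacement" claim: swap the ipfs.io gateway
// hostname for Saturn's strn.pl domain, leaving path and query untouched.
function toSaturnUrl(gatewayUrl) {
  const url = new URL(gatewayUrl);
  url.host = 'strn.pl'; // main network domain; the test network has its own
  return url.toString();
}

console.log(toSaturnUrl('https://ipfs.io/ipfs/QmExampleCid/cat.png'));
// → https://strn.pl/ipfs/QmExampleCid/cat.png
```

Any HTTP client that worked against the public gateway should then work unchanged against Saturn, per the talk's claim.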
A: In terms of the network, we're looking into: connecting with L2s and L3s for cache-miss routing; routing over HTTPS for the smart clients that can run some logic in JS or Rust or Go, and we'll see this in a bit; IPv6 support, since right now the nodes can only be discovered through their IPv4 address; and HTTP/3, which is going to be huge to improve the time to first byte. We've run some experiments with nginx, HAProxy and Caddy to compare them all, because we use nginx under the hood.
A: We did some experiments with nginx-quic, which is done by the nginx team, and quiche by Cloudflare, also running BoringSSL and quictls, but it's just not ready yet and it impacted time to first byte, so we decided to remove HTTP/3 for now. We also want it to be super private, so that not even we can see which traffic came from which IPs. For that, we're going to double-hash the IPs, from the L1s and from the clients, directly on the L1 node.
A: The L1 node has just one task, which is to serve verifiable, content-addressed CAR files. Some basics: it's a Docker container, as Ansgar already mentioned. It's super easy to set up, it takes about 15 minutes, and it can be done through bash, just bash commands, or Ansible.
A: It has only two inputs: your FIL wallet address and an email, so that we can send you notifications in case anything goes wrong. And there are two main components: the nginx reverse-proxy and caching component, and node.js for the cache-miss fallback to the ipfs.io gateway and some logic. That's the overall architecture of the L1.
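The split just described, an nginx cache in front with a node.js fallback to the ipfs.io gateway on a miss, can be sketched as a tiny request flow. This is a hypothetical model, not the actual implementation: the function names are mine and a `Map` stands in for nginx's on-disk cache.

```javascript
// Hypothetical sketch of the L1 request flow: serve from cache when
// possible; on a miss, fetch from the origin (the ipfs.io gateway in
// Saturn's case) and cache the body for the next request.
function makeL1Node(originFetch) {
  const cache = new Map(); // stands in for nginx's cache

  return async function handle(uri) {
    if (cache.has(uri)) {
      return { source: 'cache', body: cache.get(uri) }; // cache hit
    }
    const body = await originFetch(uri); // cache miss -> gateway fallback
    cache.set(uri, body);
    return { source: 'origin', body };
  };
}
```

With this model, the first request for a URI comes from the origin and every repeat request for the same URI is a cache hit, which is what the over-provisioned 99% cache hit ratio above relies on.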
A: We see nginx and node.js being the two biggest components, but we also have the log ingestor, which just sends the logs of who retrieved what, and at what times, to the log service. There's the registration component, which says: hey, I just joined the network, please send me some traffic and add my IP to the DNS of the strn.pl domain. Two endpoints that the node.js service has are the registration check and the health check, to make sure that your node is still running healthily and can serve CIDs.
A: We also have, outside of the Docker container, the auto-updater, which is just a bash script, and it runs smoothly after so many tries. The current requirements for running an L1 are just a Linux server with an IPv4 address, the two main ports (HTTP and HTTPS) free, Docker installed, a CPU with six cores, 10 gigabits of bandwidth (as Ansgar mentioned, hopefully it will go down), 32 gigabytes of RAM and a terabyte of SSD storage.
A: This is the source of truth for those requirements, and hopefully we will lower them. We got nginx to be super fast thanks to these technologies, by enabling them, testing them and experimenting with them. If you're interested, please reach out to me; if you want to improve your own nginx setup, I'm happy to help you do that.
A: In terms of the L1 node, what we want to do next is just improve the operator experience, making it easier to set up and allowing more customization, because that's the most frequent feedback we have received from the early operators: they want more customization, so we're going to give you that. And lower requirements, because the more we can smartly route all the users around the world to the fewer L1 nodes, the lower the requirements we can set. Now the orchestrator, which is the magic behind the scenes.
A: It handles node registration, geolocation, TLS certificate management, DNS, health checks and stats collection, and it provides live stats. The geolocation is always performed upon registration of a new L1 node; we use IPinfo. It allows DNS geo-routing and the mapping of the nodes worldwide, as we saw, and it's pretty much what allows us to discover where in the world we need more L1 nodes and where operators could earn more FIL by serving traffic in those regions.
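The DNS geo-routing just mentioned can be sketched as a simple region filter. This is a hypothetical model: the region labels, node shape, and fallback-to-everyone behavior are my assumptions, not the orchestrator's actual logic.

```javascript
// Hypothetical sketch of DNS geo-routing: answer a lookup with the nodes
// registered in the requester's region, falling back to all nodes when a
// region has no coverage yet (as with APAC today).
function nodesForRegion(nodes, region) {
  const local = nodes.filter((n) => n.region === region);
  return (local.length > 0 ? local : nodes).map((n) => n.ip);
}

const nodes = [
  { ip: '1.1.1.1', region: 'NA' },
  { ip: '2.2.2.2', region: 'NA' },
  { ip: '3.3.3.3', region: 'EU' },
];
console.log(nodesForRegion(nodes, 'EU'));   // → [ '3.3.3.3' ]
console.log(nodesForRegion(nodes, 'APAC')); // no local nodes -> all of them
```

The fallback branch is also why under-served regions see worse latency, which is the motivation for recruiting operators there.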
A: In terms of TLS management, every node has a key and a certificate for strn.pl, and in case they go bad or turn fraudulent, we revoke their certificate and remove them from DNS, and that's the end for them: they don't earn any more FIL. The certificate is provided upon registration, and they are 90-day certificates that are renewed two days before expiration.
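The certificate lifecycle above reduces to a small piece of date arithmetic. A minimal sketch, assuming the 90-day lifetime and 2-day renewal window from the talk; the function names and millisecond math are illustrative:

```javascript
// Sketch of the lifecycle: 90-day certificates, renewed two days
// before they expire.
const CERT_LIFETIME_DAYS = 90;
const RENEW_BEFORE_DAYS = 2;
const DAY_MS = 24 * 60 * 60 * 1000;

function daysUntilExpiry(issuedAtMs, nowMs) {
  return CERT_LIFETIME_DAYS - (nowMs - issuedAtMs) / DAY_MS;
}

function needsRenewal(issuedAtMs, nowMs) {
  return daysUntilExpiry(issuedAtMs, nowMs) <= RENEW_BEFORE_DAYS;
}
```

So a certificate issued on day 0 is renewed on day 88, well before the day-90 expiry.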
A: The orchestrator also does health checks every minute, just checking that the nginx service and the node.js shim are responding. It tracks the time to first byte and the download time of a test CID, and if you're down, it just removes you from the DNS and impacts your weight for up to 24 hours.
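The weight penalty just described can be sketched as follows. This is a hypothetical model: the talk only says a failure "impacts your weight for up to 24 hours", so the linear recovery curve and names here are my assumptions.

```javascript
// Hypothetical sketch of the health-check penalty: after a failure the
// node's routing weight is reduced, recovering over up to 24 hours.
// A linear recovery curve is assumed; the real one may differ.
const PENALTY_HOURS = 24;

function routingWeight(baseWeight, hoursSinceFailure) {
  if (hoursSinceFailure >= PENALTY_HOURS) return baseWeight;
  return Math.round(baseWeight * (hoursSinceFailure / PENALTY_HOURS));
}

console.log(routingWeight(100, 0));  // → 0   (just failed: no traffic)
console.log(routingWeight(100, 12)); // → 50  (halfway recovered)
console.log(routingWeight(100, 24)); // → 100 (fully restored)
```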
A: It does some stats collection, aggregated stats: it tracks the time to first byte for the past hour every five minutes, and for the past day every hour. It also receives, upon every registration, your CPU, memory, disk and bandwidth metrics on the interface, the NIC.
A: If you go to that website, orchestrator.strn.pl, you can see live stats on how many nodes are actually live right now. You can also get the map of where they are, and some nice stats that Ansgar already showed, containing everything that you would want to see about those nodes. In the future, we want the orchestrator to be globally distributed.
A: This allows for better routing, both DNS and non-DNS, with the eventuality of routing over HTTPS; node operator notifications in case they go down or are running an older version; smarter routing; and just better UX: operator experience, developer experience and integration.
A: Lastly, the strn npm package, which is just the easiest way to get started in node.js and JavaScript. It allows you to install it and use it as if it were fetch, and just retrieve content (CIDs) from Saturn super easily. It gives you the HTTP response once it has verified the content, and then you can do anything you want with it, as you would normally do with IPFS.
A: How to get involved: run your own L1 node and earn FIL with Saturn for your retrievals, and work with us; we have open roles in the team and the company. You can reach out to us in the Filecoin Saturn channel in the Filecoin Slack, and to me on the internet as Diego Rodriguez Baquero.
B: Yeah, my question is: is nginx caching files, or is it caching blocks? Meaning that if I request, for example, a CAR file, and then I do another request, but for a specific file that's in that CAR file, would it be a cache hit or a cache miss in that situation?
A: We're not caching complete CAR files and then unpacking them as needed from the shim. That's an ideal that we would love to have, but the way that nginx caches things is by their URI, so it's not possible right now. But what we could do is go back from the shim to nginx to get the CAR file again, unpack it and give you what you want. We just haven't built it yet, but we have thought about it.
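The URI-keyed behavior behind that answer can be shown in two lines. A minimal sketch, assuming gateway-style paths (the `?format=car` query is illustrative); nginx's real `proxy_cache_key` also folds in scheme and host, but the point is the same:

```javascript
// Why the second request misses: nginx keys its cache by (roughly) the
// request URI, so a whole-CAR request and a request for one file inside
// that CAR are different cache entries, even though the bytes overlap.
function cacheKey(uri) {
  return uri; // stand-in for nginx's URI-based proxy_cache_key
}

const wholeCar = cacheKey('/ipfs/QmRootCid?format=car');
const oneFile  = cacheKey('/ipfs/QmRootCid/photo.jpg');
console.log(wholeCar === oneFile); // → false: second request is a cache miss
```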
C: I'm just wondering what the caching strategy is in terms of, like, geolocation; you mentioned nodes are all over-provisioned. Are you prefetching all of the ipfs.io gateway content, or how do you plan to scale that in the future and get feedback to improve the caching? I'm assuming that will lead to better performance, but curious about your thoughts on the future there.
A: We have thought about caching on second hit, or just different strategies based on performance and usage, but we have not reached that point, so we will tackle those questions and challenges whenever we hit them. For now I don't have a better answer.
D: What do you mean? Are there other ways to find the L1 nodes, maybe, than a central DNS?
A: Oh right. Because we want to enable just the quickest way to get up to speed, which is replacing ipfs.io with strn.pl, and the only way to do that is with DNS. That's why the second largest performance improvement would be to not route by DNS, because it's not perfect, it's far from perfect, and to really give the clients a list of nodes in their own region and then test against them.
A: But that's an optimistic upgrade, because you first do DNS (we want you to have the content as fast as possible) and then we upgrade you to routing through HTTPS, which has already tested latency against your closest nodes, and that's only for the smart clients. But if you use curl from the CLI, then you're out; you have to use DNS. That's why it's a web 2.5 CDN.
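That optimistic upgrade can be sketched as an endpoint-selection rule. A hypothetical model, not the smart client's actual code: the node shape, latency fields and function name are my assumptions.

```javascript
// Hypothetical sketch of the optimistic upgrade: start on the DNS domain
// so the first retrieval is immediate, then switch to the lowest-latency
// node once the smart client has probed its closest nodes.
function pickEndpoint(probedNodes, dnsDomain) {
  if (!probedNodes || probedNodes.length === 0) {
    return dnsDomain; // plain clients (e.g. curl) stay on DNS routing
  }
  // smart clients upgrade to the fastest node they have measured
  return probedNodes.reduce((a, b) => (a.latencyMs <= b.latencyMs ? a : b)).host;
}

console.log(pickEndpoint([], 'strn.pl')); // → strn.pl (no probes yet)
console.log(pickEndpoint(
  [{ host: 'node-a.example', latencyMs: 30 },
   { host: 'node-b.example', latencyMs: 12 }],
  'strn.pl'
)); // → node-b.example (lowest measured latency)
```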