From YouTube: Why Filecoin
Description
Join us for Filecoin Liftoff Week, an action-packed series of talks, workshops, and panels curated by the web3 community to celebrate the Filecoin mainnet launch and chart the network’s future. https://liftoff.filecoin.io/
Events take place all week, October 19-23, 2020.
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
A
All right, and we are back, and we're here with whyrusleeping, otherwise known as Jeromy Johnson offline. So Why has been working towards Filecoin for many years. His work started with IPFS, building the technical foundations for the protocol and the implementations, and he's been through every single project in the network.
A
He's been involved in a number of open source communities over the years, including Ethereum, a lot of different package managers, Arch Linux, and others. More recently he's been the tech lead for Filecoin; he was also the team lead for Lotus, the Lotus implementation, and nowadays he's working on all kinds of different areas of the project, all over the place. So, yeah, it's a great honor, and I'm very excited to be here chatting with you.
B
Yeah, so the Filecoin network today is honestly really cool. There's, you know, a long way to go, but what's there right now is pretty huge in terms of the volume and, you know, the magnitude of what we've done. We have well over 600 petabytes of data right now on the network. Not all of that is, you know, real data.
B
It's a lot of committed capacity, but that's, you know, how much capacity the network has at this moment, which is, what, a couple weeks after launch? Pretty crazy. One of the things I'm really proud of with this network is how impressive the amount of computation this network is doing is: everybody's computer that has a synced Lotus node right now is verifying, daily,
B
all of that storage, 600 petabytes of storage. Everybody's computers are verifying that that's correct, it's there, it works, it's, you know, the right data. And that's ridiculous when you think of, like, my laptop with a node running on it. You know, it is using one or two CPUs a lot, but hey, it's verifying a ridiculous amount of data. The number of SNARKs that we are using on this network, I think, is greater than the number of SNARKs on any other system ever, so we're pushing the boundaries of scale there pretty significantly.
B
Additionally, if you were to take Filecoin and remove all of the zk-SNARK proofs and all the storage proof stuff and just look at the raw capacity of the network, we could handle 10,000 messages, like sends of transactions of money, per block. And then, since the way that Filecoin works has multiple blocks per round, you could have, you know, 50,000 transactions per tipset.
B
So that's, you know, 50,000 transactions every 30 seconds, which is, like, what, 1.6 thousand transactions per second, which is ridiculous. And so most of that space right now is taken up by submitting those SNARKs on chain, but there's so much slack space for as much, you know, value transfer as you need, and I'm really proud of how efficient we've made that whole setup. Yeah, that's right.
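
(As a quick sanity check of those figures, here is a back-of-the-envelope sketch in Go. The five blocks per tipset is inferred from the 10,000-messages-per-block and 50,000-per-tipset numbers quoted above; the 30-second epoch is the one mentioned in the talk.)

```go
package main

import "fmt"

func main() {
	// Figures quoted in the talk; blocksPerTipset is inferred from
	// 10,000 msgs/block scaling to 50,000 msgs/tipset.
	const (
		msgsPerBlock    = 10_000
		blocksPerTipset = 5
		epochSeconds    = 30.0 // one tipset roughly every 30 seconds
	)

	msgsPerTipset := msgsPerBlock * blocksPerTipset
	tps := float64(msgsPerTipset) / epochSeconds

	fmt.Printf("%d msgs/tipset = %.0f tx/s\n", msgsPerTipset, tps)
	// Prints: 50000 msgs/tipset = 1667 tx/s, i.e. the ~1.6k quoted above.
}
```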
A
Really a huge, monumental feat of engineering. And, you know, I'm curious: right now a lot of the SNARKs and the proofs are taking up a huge portion of the capacity on the chain, but is that going to continue that way? Is that going to scale in a different way? Can you talk to us about the future of that?
B
Yeah, so the SNARKs right now are, you know, the obvious problem that we're working on, and when building Filecoin, we really had to, you know, thread the needle between getting it working now and making it work for the long future. And so we made a bunch of shorter-term decisions, such as, like, using Bellman as it exists now and not looking for, you know, new, more exciting SNARK proving libraries, because Bellman worked.
B
There's a really exciting setup that allows us to do recursive SNARK proving, which allows you to take as many SNARKs as you want and compress them down into a single SNARK, which really, in a lot of ways, kind of just solves the, you know, too-many-SNARKs-on-chain problem. And so some of our next steps are going to be looking at new proving systems like this to allow us to increase the scale and capacity of the network just dramatically, and so I'm really excited to see where that goes.
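(To make the aggregation idea concrete, here is a purely illustrative Go sketch of pairwise recursive proof folding. `Proof`, `proveVerification`, and `aggregate` are hypothetical stand-ins, not Bellman or Lotus APIs; in a real recursive SNARK system the recursive step is a circuit that runs the verifier over the two child proofs.)

```go
package main

import "fmt"

// Proof is a hypothetical stand-in for a zk-SNARK proof.
type Proof struct{ claim string }

// proveVerification is the hypothetical recursive step: one proof
// attesting that proofs a and b both verify.
func proveVerification(a, b Proof) Proof {
	return Proof{claim: "(" + a.claim + " and " + b.claim + ")"}
}

// aggregate folds any number of proofs down to one by pairwise
// recursion: n proofs become ceil(n/2), then ceil(n/4), ... then 1.
func aggregate(proofs []Proof) Proof {
	for len(proofs) > 1 {
		var next []Proof
		for i := 0; i+1 < len(proofs); i += 2 {
			next = append(next, proveVerification(proofs[i], proofs[i+1]))
		}
		if len(proofs)%2 == 1 {
			next = append(next, proofs[len(proofs)-1]) // odd one carries over
		}
		proofs = next
	}
	return proofs[0]
}

func main() {
	ps := []Proof{{"p1"}, {"p2"}, {"p3"}, {"p4"}, {"p5"}}
	// A single on-chain proof now stands in for all five.
	fmt.Println(aggregate(ps).claim)
}
```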
A
Yeah, that's super exciting, and I look forward to getting a lot more of that capacity on the chain and the scalability that will come there. Let's maybe shift towards some of your vision for the future. What are the kinds of applications that you're excited about? You know, what does this make possible that wasn't possible before?
B
Yeah, I mean, one of the things that isn't necessarily, like, the most exciting application, but one of the things that personally affects me a lot, is just having package managers and a lot of dev tooling and all of this be put onto Filecoin, into, you know, IPFS and Filecoin, and into really more distributed, peer-to-peer settings, where, you know, I spend...
B
I don't know how much time, cumulatively, in my life I've spent downloading packages and waiting for these things to update because, you know, the thing is stored on some crappy server in some university basement somewhere, when it really is used by so many people that it should just be peer-to-peer. Like, I guarantee somebody else in my house, or, you know, around me, has this data already.
B
Why can't I just fetch it from there? And so seeing Filecoin be adopted in these different use cases, where it's huge amounts of data that is already, you know, effectively content addressed, and just having that all be put on Filecoin for, you know, everybody to use, is a really front-of-mind thing. Another cool thing that I'm excited about is, like, with the rise of machine learning, you end up getting a lot of data.
B
Like, you know, everybody says big data, big data, but the scales of what people mean when they say big data are usually only in, like, hundreds of terabytes, really maybe into petabytes, and when you start to go beyond that, like, how do you even think about that much data? How do you store, how do you manage, how do you, like, retrieve that massive amount of data? And the cool thing is, the way we've designed Filecoin, it can handle exabytes and zettabytes of data.
B
It'll just do the thing and work, right? So I'm excited to see, like, uptake of Filecoin usage for storing data for these huge, big machine learning projects.
A
Yeah, that's super exciting, and, you know, earlier today we had a talk from a group that's already writing a tool for people to do machine learning over Filecoin data, so it's already getting going, which is great. Now, when we have, you know, these massive-scale data sets, you know, terabytes to petabytes in size, that is really difficult to move over the internet, right?
A
So it's a huge amount of data that would occupy many, many, many disks, and putting that kind of throughput through the internet would take many months on most connections. So talk to us about how that would work with Filecoin.
B
Yeah, the poor throughput of the internet itself has been, like, a recurring theme of work at Protocol Labs. I remember early designs of Filecoin: we had this assumption where we're like, okay, we're going to shuffle the data around to prove that everybody has it, and then we did some back-of-the-envelope math and were like, no, that's actually a thousand times more than the carrying capacity of the internet, so we can't do that. Yeah, a huge, huge problem that many people designing computer systems have is that, like, well,
B
actually, the internet is really slow. And so with Filecoin, we've designed it, you know, we've designed it so that it doesn't rely on a particular way of moving data. It doesn't care how the data gets there.
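
(The shape of that back-of-the-envelope argument, with purely hypothetical numbers that are not from the talk: compare the sustained bandwidth of re-shuffling all stored data daily against keeping data in place and moving only succinct proofs.)

```go
package main

import "fmt"

func main() {
	// All numbers are hypothetical, chosen only to show the shape
	// of the argument, not to reproduce the original math.
	const (
		stored      = 1e18    // pretend the network stores 1 EB, in bytes
		day         = 86400.0 // seconds per day
		proofSize   = 192.0   // rough size of one SNARK, in bytes
		dailyProofs = 1e6     // pretend number of proofs posted per day
	)

	shuffleBits := stored / day * 8                // re-send all data daily
	proofBits := proofSize * dailyProofs / day * 8 // send only the proofs

	fmt.Printf("shuffling: %.0f Tbit/s sustained\n", shuffleBits/1e12)
	fmt.Printf("proofs:    %.0f bit/s sustained\n", proofBits)
	// Roughly 93 Tbit/s versus ~18 kbit/s: moving data around to prove
	// possession dwarfs moving proofs, hence proving storage in place.
}
```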
B
You can make a deal with anybody, and then, if they eventually get the data, great, the deal can go through. And so we've initially designed it so that you can, you know, ship somebody hard drives filled with data, maybe even, like, a big huge box of disks, and have the miner on the other side import all of that, and then your deal can go through, because they've, you know, received all of this data.
B
And I think more interesting things are going to adapt over time because, as demand for moving this data grows... Basically, the goal of Filecoin is to reduce the actual cost of the storage down to nothing, but that doesn't yet account for, like, the cost of moving that data. So as we reduce the actual long-term cost of storing the data, there becomes more of an incentive to figure out how to make moving it faster and cheaper.
B
And so I think we're going to see a lot of interesting, you know, ways pop up to move data more efficiently, whether that's, like, interesting backhaul; there's, you know, optical backhaul, where you can move, like, terabits of data over shorter distances just through wireless optical links, or, who knows, maybe there's some interesting satellite technology that can do similar stuff. There's a lot there.
A
Yeah, that's super exciting. And let's touch a little bit more on the dev tools and package managers use case that you described. So dev tools are a huge, you know, part of developers' lives, of course, and package managers specifically: people are constantly installing different versions of software, either from source or package binaries or various things like this.
A
How do you see that evolving? So there's a lot of registries out there, and then there's kind of source code on GitHub and a few other areas. There's a lot of people always, you know, frustrated with the centralization of these kinds of registries, and people want to be able to have higher levels of security and assurance around their code and their binaries. People have been trying to build decentralized versions of these things, but a big hurdle has always been:
A
you know, who's going to manage all of this data? Who's going to store all of this data? When you start adding up the terabytes, and you deal with the bandwidth costs, that can be pretty big. So how do you see that kind of thing evolving, and what do you think kind of kicks us off and gets us there?
A
Is it, like, one of the big package managers that exist now kind of moving over and kind of having mirrors, or is it kind of like a new package manager? How do you think it's gonna play?
B
Yeah, I think this is a hard space to move in, because it's entirely built around, like, adoption and network effects. Like, I think the package manager space is so tightly built around network effects that it's hard to move it. I think one of the key things to start in that direction of breaking up those network effects is to kind of separate the data layer of packages from the actual registry layer.
B
So, you know, a registry just contains a mapping of names to hashes, and that's it, right? Names to hashes, and, like, a version list or something simple like that, and then from there you can go get the data from wherever it happens to be. Separating this out, and then making it so that the data is stored somewhere safe, somewhere where it's, you know, backed up nicely, means that I, as a user of packages, can skip the registry layer once I've already resolved the package once.
B
So, like, you know, for example, npm installs something, and locally it references, you know, an IPFS hash, and then, if npm goes down or npm has problems or whatever, I don't need npm to get my data. I don't need npm to keep installing my packages. I can go straight to the network: I can go search in IPFS, and if it's, you know, not there, I go retrieve it from the Filecoin network, where, you know, all the packages can just be stored for long-term storage, to make sure they're there forever.
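
(A minimal sketch of that separation in Go: the `Registry` type, the name-to-CID mapping, and the fallback order from local cache to IPFS are illustrative assumptions, not any existing package manager's API.)

```go
package main

import (
	"errors"
	"fmt"
)

// Registry is the thin, swappable layer: it only maps package names
// to content hashes (CIDs); it never stores or serves the data itself.
type Registry map[string]string // "name@version" -> CID

// A Store fetches raw bytes by CID from one possible source.
type Store func(cid string) ([]byte, error)

// fetch resolves a name through the registry once, then tries each
// content-addressed source in turn. Because the data is content
// addressed, any source that has the CID is as good as any other.
func fetch(reg Registry, name string, stores []Store) ([]byte, error) {
	cid, ok := reg[name]
	if !ok {
		return nil, fmt.Errorf("unknown package %q", name)
	}
	for _, get := range stores {
		if data, err := get(cid); err == nil {
			return data, nil
		}
	}
	return nil, errors.New("no store had " + cid)
}

func main() {
	reg := Registry{"left-pad@1.3.0": "bafy...examplecid"} // placeholder CID

	localCache := Store(func(cid string) ([]byte, error) { return nil, errors.New("miss") })
	ipfsPeers := Store(func(cid string) ([]byte, error) { return []byte("package bytes"), nil })

	// Resolve once via the registry, then fetch from wherever is closest;
	// a Filecoin retrieval could be appended as a further fallback.
	data, _ := fetch(reg, "left-pad@1.3.0", []Store{localCache, ipfsPeers})
	fmt.Println(string(data))
}
```

Once a package is resolved to a hash locally, the registry drops out of the critical path entirely, which is exactly the resiliency property described here.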
B
And that gives us a lot more, you know, safety and resiliency in our software ecosystem over time. And I think there's definitely room for new package managers to be built, new package managers that focus on this, you know, trusted hashing and, you know, moving the data around, like, effectively building a registry
B
that is maybe, you know, curated by some token, for example, or has some way to have better curation around it than just, like, whoever lays claim to the name first. And if you have that separated from the full package manager suite, then it becomes easier to swap those out and use a different curated registry for different things.
A
Yeah. I think that idea of separating the data layer from the registry is a really, really good one, and we should work on getting that propagated out there. Great. So maybe let's move to some of the kinds of applications that may be coming ahead, especially with new kinds of interfaces. So I know that games and virtual reality are pretty interesting to you.
A
They are to me, and to many other folks in our community. Yeah, talk to us about some of that kind of stuff. What are the kinds of games that you think are now possible in this web3 world? And maybe talk a bit about kind of where virtual reality is going.
B
You know, you have this persistent virtual world, and one of the biggest questions you get into when you have this persistent virtual world is the question of ownership of space. Because when you have a shared space, you have to worry about, like, who owns what where, who's allowed to make changes where, and really, like, blockchains are a natural fit for managing that sort of thing. And so having some sort of, you know, curation of a space through the blockchain, and having management of ownership, of, like, you know, this,
B
you know, Black Sun place in the metaverse is owned by me, and I'm allowed to make changes to it. That sort of thing is really, I think, a good fit there. And then this brings up the other side of VR, in that, as you get into more and more compelling, immersive, you know, landscapes,
B
the amount of data you have to move around there is huge. Like, you know, you imagine how big just a video game that you play on your TV is, and those are, like, you know, 60 to 100, 200 gigabytes worth of files, and that's not something...
A
Yeah. And, you know, some of the problems that emerge in VR once you start, you know, scaling experiences around the world: you suddenly have millions of people in there, and if you start generating any kind of content in large volumes,
A
not only do you have a large amount of data to kind of broadcast out, but suddenly you have generation of vast amounts of data all over the edges. So a peer-to-peer network is set up in a way to support that kind of application and that kind of platform, in a way that other centralized systems are not. How do you see this sort of evolving? You know, how do you expect the internet to evolve to adapt to that kind of throughput?
B
Yeah, I think we're definitely gonna see a lot more bandwidth from, like, edge to edge. So, like, things like 5G are interesting because they're such short-range technologies; they're, you know, about the same range as Wi-Fi, really, where it gives you extremely high bandwidth, but just to the people nearby. And I think, as these things evolve, like, with the bandwidth to move data from where you are to nearby,
B
I think people will have more reasons to just share huge amounts of data with the people that are closer to them than necessarily farther away, just because of, well, the way that experiences are going to be shaped. Like, for example, I'm trapped inside in quarantine right now, because I can't go outside, because there's evil viruses out there that are trying to kill me. Well, my friends are still nearby.
B
My friends are still in the same city, right? And if I want to spend time with them, and spend quality time with them, like, the internet connection required is just between me and there. It doesn't need to go, like, overseas or anything. And so as we get these faster and higher-bandwidth edge connections, it allows you to have, like, more interesting virtual worlds with, you know, people you would normally see in person. And I think that's a thing I'm kind of interested in right now: like, how do I, you know,
B
how can we have more, how can we have a better experience online with our social connections? You know, maybe through VR, maybe through something like that. But the way that bandwidth is going to evolve affects that significantly.
A
How do we stand in the way of falling into the same kind of lock-in that we have today with the social networks, as opposed to the much more open web that we had early on? I know this is something that you care a lot about. Can you talk to us about, you know, kind of how you see the possible futures? Right, like, what's the bad future that...
B
Yeah, I mean, the bad future we're trying to avoid is really, you know, Facebook mind-controlling everybody with their fancy Oculus devices. You know, that's the boogeyman that everybody worries about, and I think it's a very tangible concern, and making sure that all these platforms and protocols are open and free and usable by everybody is really important here, for, you know, the data you need to access in order to, you know, participate in these new worlds.
B
You know, I recently got one of the Oculus Quest 2s, and it forced me to make a Facebook account, which I'm very concerned about. I made a fake Facebook account for it anyway, but, you know, who knows what it's going to require of me in the future and what restrictions it's going to try to impose on the things that I do with it.
A
Deeply so. And, you know, thankfully there's been kind of a more recent awakening to the, you know, problems that the social networks have created, in terms of the ability to manipulate people through this environment, the ability to spread fake news and misinformation, and all kinds of issues. But really, what you're talking about is an even deeper level of lock-in and manipulation, right?
A
So it's not just kind of a reality created by information flow on a kind of low-quality social network page; we're talking about the 3D environment that people will spend a huge fraction of their time in being fully controlled and designed by potentially a few gatekeepers, who then decide who gets to see what. That kind of thing, you know, is pretty scary. How do we avoid it? So what do we need to do between now
A
and, you know, the 10 years out, the five to ten, probably 15 years, that it's going to take for those technologies to develop? How can we paint the vision of the good future, and how do we get there?
B
Yeah. I mean, I think a vision of the good future is, well: the access you have to all of these future devices and all of these future experiences and interactions is the same as or better than what we have on the internet today, where you just go to a website, it's an open protocol, it's open access. Like, you know, your browser vendor generally doesn't get in the way of what you're seeing and what you're doing. You know, Firefox isn't going to tell you, no,
B
you can't go to that website. Although sometimes it does if their HTTPS cert is bad, but, you know, that's for different reasons. And, you know, the tools you use to access information shouldn't ever be able to have a say, should never be able to have a say, in what you see and view.
B
And so, you know, getting people in to build on these protocols, and build with open protocols, and build experiences that everybody wants, to show that there is, like, demand for this, I think, is important here. Like, getting in and building out the tooling, and building out just actual things that are using open protocols, as a way to start cementing that in place as how this works, is kind of how we avoid that. Because, you know, you see how much time people spend on Facebook and Twitter and so on; now imagine how much time you're going to spend, and how immersive it's going to be.
A
Yeah. And so, you know, for the Filecoin community members out there, what should we prove out, you know, this year and next year and the year after, to get started making web3 and the open web ready for that kind of future?
B
Yeah, I think we need tooling. I think people who are excited about this stuff should start building: like, build apps and demos, and, you know, even build games that use Filecoin and use IPFS as the place to get their data from. And in doing so, I think we'll discover a lot of different things that will help us build the right tooling and build the right, you know, knowledge about how to do this in a free and open way.
A
Great. Well, with that, let's end there, and let's lead the charge on making sure that that next platform is open, stays open, and is locked open. Thank you very much for sharing your time with us, Why.