From YouTube: Developer Community Call #24
Description
Today is the Developer Community Call day ✨
🗓️ June 29th at 6:00 pm CET / 11 pm CT
Here's what we've got planned:
• Progress highlights from the core tech team;
• Upcoming plans & events;
• Q&A session.
Mark your calendars and join us today!
We meet in Zoom 🔗 https://us06web.zoom.us/j/82025547767
A
Okay, so I guess it's already time to start. So again, hello, everyone! It's so nice to see you here. This is the Fluence Developer Community Call number 24, and let's see what we have on our agenda.
A
So today we will talk about the proof of capacity update we will be rolling out by the mainnet launch. We will also talk about progress highlights from our core tech team and have a Q&A session with them. We'll also share our Consensus and Compute Over Data Summit highlights and have Q&As on them, and also share our future events and, of course, hand out our POAPs. So, first things first: Guinea is going to tell us about proof of capacity. Guinea, hi, hello!
C
Hello! Yeah, can you just switch to the next slide? Yeah. I would probably give a very short introduction, just to remind everyone: a big part of the Fluence project is creating the marketplace of compute providers, right, because we don't run servers ourselves. We are not a centralized company; we're not trying to replace clouds with another company that runs servers. So we create a protocol and the marketplace.
C
Proof of processing and proof of execution are related to, basically, the security of computations that happen in Fluence and the verifiability of computations that happen in Fluence. Whenever customers — developers — deploy their applications, their functions, to providers, then as part of executing those applications, the providers have to create and submit proofs that are verified on chain. The designs of these things are still not fully finalized, but proof of processing is related to Aqua: it's basically about the ability to verify certain particles that were issued by applications.
C
Proof of execution was designed — was named — as a name for the verifiability of Marine function calls. So basically, when a particle is verified, it could be verified in very different ways. It could be audited for whether the particle corresponds to the script — the AIR script — or, for example, the AIR script plus the peers that are supposed to execute it, or the AIR script plus the peers plus the service call results.
C
So these proofs cover the security of computations in the Fluence network, and the whole idea is that you don't need to trust that providers do their work correctly: you have this probabilistic verification that allows you to use the network in a trustless fashion. And then another big thing is proof of capacity. What is proof of capacity? It's basically the idea of incentivizing providers to bring compute capacity to the network, even if there is not yet enough demand from customers who want to use all this capacity.
C
So basically — if you switch to the next slide, yeah — proof of capacity is some sort of work that the providers have to perform constantly to prove that they are allocating resources to the network at the moment. So they have to put up some collateral; they have to lock some collateral. They have to promise that they will be serving, like...
C
...you know, some CPU and memory resources to the network over a particular period of time, and then during this period of time they have to constantly say: yeah, I'm still allocating these resources to the network, they're not allocated to something else, I haven't switched them to something else. So they need to constantly prove that they are still allocating these resources. And all of this we call proof of capacity, and we're considering different ways to approach it.
C
But basically, it's a kind of proof of work — a little bit similar to proof of work — and the whole idea is that it should be ASIC-resistant, because we don't want ASICs on the network instead of real servers that are ready to switch to serving user demand. And this mechanism should include token collateral, because we are interested in providers committing to providing resources for a long period of time.
C
So they have to put some stake up front, and they should earn some rewards for providing this capacity, so it economically makes sense for them to do it. And this approach is actually, like...
C
...you know, used in the Filecoin network, for example. Filecoin basically rewards storage providers for providing storage capacity to the network even if it's not used by useful data at the moment, and then gradually this capacity migrates to serving useful data. This is very similar logic. A lot of these things are still work in progress — a lot of details.
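The collateral-plus-epochs logic described here can be modeled in a few lines. This is a minimal sketch under stated assumptions: the class name, the slashing rule and all numbers are illustrative, not actual Fluence protocol parameters.

```python
# Toy model of the capacity-commitment logic just described: a provider
# locks collateral, must prove allocation every epoch, earns rewards for
# proven epochs and is slashed otherwise. All names and numbers are
# illustrative assumptions, not actual Fluence protocol parameters.

class CapacityCommitment:
    def __init__(self, collateral, epochs, reward_per_epoch):
        self.collateral = collateral        # stake locked up front
        self.epochs_left = epochs           # promised commitment duration
        self.reward_per_epoch = reward_per_epoch
        self.earned = 0.0
        self.slashed = False

    def submit_epoch_proof(self, proof_ok):
        """Each epoch the provider must prove the resources are still allocated."""
        if self.slashed or self.epochs_left == 0:
            return
        if proof_ok:
            self.earned += self.reward_per_epoch
            self.epochs_left -= 1
        else:
            self.slashed = True             # missed/invalid proof forfeits stake
            self.collateral = 0.0

cc = CapacityCommitment(collateral=100.0, epochs=3, reward_per_epoch=5.0)
for _ in range(3):
    cc.submit_epoch_proof(True)
print(cc.earned, cc.collateral)             # 15.0 100.0
```

The point of the sketch is the incentive shape: rewards accrue only for proven epochs, and the stake makes abandoning the commitment costly.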
C
That's why we're not publishing any kind of documents on it yet, but we consider this an important part of the mainnet launch. So basically, the Fluence mainnet should start by incentivizing compute providers to join — to expand the supply side, the compute capacity side — and then start gradually onboarding customers, developers, to this marketplace. I think that's a rough overview — just a rough overview of this.
A
Hey, thank you so much! So yeah, right now it's the part where we discuss the highlights from our core tech team, so I can give you a brief overview of what we're going to talk about. First of all, we have added support for Docker images with the ARM64 architecture. It means that right now developers can use M1 and M2 Macs and get native binaries for them.
A
We're also releasing particle signatures, and Mike is going to tell you about this a little bit more shortly. And we're also introducing some new updates into Fluence CLI, and we are going to sunset Aqua CLI: we have transferred most of the functionality into Fluence CLI, and we're going to stop supporting Aqua CLI in future releases. So right now I'm going to pass the mic to Mike.
D
But before we talk about proof carrying data, let me briefly explain what we're doing in, like, more high-level terms: how our network is structured and how we want to actually submit these proofs to the on-chain part. On this slide you can see a scheme with several AquaVMs. AquaVM is an interpreter that operates on AIR code, which is the result of compiling Aqua.
D
And could you please switch to the next slide? Actually, it's important that every peer in the Fluence network contains — hosts — this AquaVM. And there's another concept, the particle: you could consider it a network packet that goes from one peer to another, and this particle is basically handled by AquaVMs. So all peers, all runtimes — all of this is kind of controlled, or ruled, by AquaVM.
D
So AquaVM tells how exactly the network should behave. And could you please switch to the next slide? Yes. So, regarding proof of processing: basically, we'll have a division into two parts, off-chain and on-chain. Now we're thinking that the on-chain part will be on Filecoin, and on this slide you can see a principal scheme of how it could be...
D
...how it could work. So basically, every peer — or every AquaVM on a peer — mines a particle. By mining I mean that every time a new particle is created, the peer checks its hash against some target — like in general proof of work, you know, whether this hash satisfies a complexity, or is below some other hash, it doesn't matter. Then the peer has mined this particular hash and can submit it to the on-chain part — not just the hash, but the data...
D
...that verifies that this execution was done correctly — that this was actually done. And on the right you can see that one of the three peers submits the so-called golden network packet, or golden particle — it submits the proofs on-chain — and it's important that every peer that participated in forming this particle, in this particular particle's execution...
D
...will be rewarded. So for every peer it's important to check particles against the complexity and submit the ones that satisfy the complexity, and it's also important to send the particle on to the next peers, because all the next peers have the same probability to mine this golden particle.
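The per-particle lottery described above — hash each particle, compare against a difficulty target, submit on success — can be sketched as follows. The hash function, encoding, and target here are assumptions for illustration, not the actual protocol choices.

```python
import hashlib

# Illustrative "particle mining" check: hash the particle bytes and
# compare against a difficulty target, as in ordinary proof of work.
TARGET = 2 ** 252          # digests below this "win" (~1/16 of particles)

def is_golden(particle: bytes) -> bool:
    """Return True if this particle wins the on-chain submission lottery."""
    digest = hashlib.sha256(particle).digest()
    return int.from_bytes(digest, "big") < TARGET
```

A peer would run this after executing each particle and, on success, submit the particle data (not just the hash) to the on-chain part; forwarding the particle gives every subsequent peer the same chance to win.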
D
So that's how it should work in general, but there were some missing pieces in this scheme. Could you please move to the next slide? Yeah. Basically, there are some requirements that we hadn't met before. First of all, peers must not be able to tamper with data. That's connected to the last point on this list...
D
...peers must not be able to grind here. If a peer can somehow change the data whose hash is supposed to satisfy some complexity, then the peer can grind it: it could iteratively change something in the data and just do mining — not useful work. Peers could also tamper with, for example, the results of calls that were done on previous peers...
D
...the peers that ran before this execution. Also, the on-chain verifier should be able to verify a particle according to its AIR script — it should be able to check these proofs against the submitted particle — and, for computing the proof-of-work hash, there should be a reliable source of entropy. Before these last updates we didn't have this, but now, with this proof carrying data, you can use signatures as a source of entropy.
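The grinding requirement can be made concrete with a sketch: if any hashed field is freely choosable, a peer can simply retry until the lottery is won, which degrades the scheme into proof of work on junk data. Everything below (the nonce field, the ~1/16 target) is an illustrative assumption.

```python
import hashlib

# Why grinding must be prevented: if a peer may freely choose any field
# that goes into the mined hash, it can retry until the lottery is won,
# doing cheap useless work instead of real compute. Illustrative only.
TARGET = 2 ** 252          # ~1/16 chance per attempt

def grind(base: bytes) -> int:
    """Number of nonce retries a malicious peer needs to 'win'."""
    nonce = 0
    while True:
        h = hashlib.sha256(base + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < TARGET:
            return nonce
        nonce += 1
```

With a tamperable field, the expected cost is only about 16 cheap hashes here; deriving the hashed bytes from peer signatures removes this degree of freedom, since a peer cannot cheaply produce alternative validly-signed data.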
D
Could you please go to the next slide? Yep. A couple of thoughts about signatures and about how AquaVM works internally. Basically, AquaVM has no saved entry point, so at each start of execution — when a particle comes to AquaVM — execution starts from the very beginning. It means the VM starts from the first instruction and checks whether it was already executed; it doesn't check that the results are correct...
D
...and if not, it tries to execute it. And to prevent executing the same instruction — for example a call, or any other instruction — twice, the execution trace is analyzed and serialized in some form. On this slide you can see the previous incarnation of this form. Basically, we had states for only four instructions — par, call, fold and ap — we had states for these in the trace, and in the two pictures on the right side you can see how they're structured for par and call.
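The replay-plus-trace behaviour Mike describes — start from the first instruction every time, but never execute the same call twice — can be sketched roughly like this (a deliberate simplification of AquaVM's real trace states):

```python
# Sketch of the replay-with-trace idea: the interpreter always starts
# from the first instruction, but consults the trace of previous states,
# so an already-executed call is reused instead of run twice. This is a
# deliberate simplification of AquaVM's actual trace handling.

def interpret(calls, trace, execute):
    """Run `calls`, reusing results recorded in `trace` and extending it."""
    results = []
    for i, call in enumerate(calls):
        if i < len(trace):            # this state is already in the trace
            results.append(trace[i])  # reuse the recorded result
        else:
            value = execute(call)     # first encounter: actually execute
            trace.append(value)
            results.append(value)
    return results

executed = []
def run(call):
    executed.append(call)             # track which calls really ran
    return call.upper()

trace = []
interpret(["a", "b"], trace, run)       # executes "a" and "b"
interpret(["a", "b", "c"], trace, run)  # replay: only "c" is executed
print(executed)                         # ['a', 'b', 'c']
```

The trace travels with the particle, which is exactly why the next section's signatures matter: a peer must not be able to rewrite the recorded states it received.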
D
Right — we heavily refactored our trace, and now it's heavily CID-based. In the picture on the right side you can see how all of this links together. It's rather complex and we won't spend much time on it — if you have any questions, please ask in the Q&A session — but this scheme is important for proof carrying data; for example, these CIDs are used for signatures. Okay, so basically, after particle execution on every peer, AquaVM signs all the necessary data, and the data is identified by CIDs.
D
So basically, we combine all the CIDs, sort them, and then sign this final list; and then before — and even during — execution on every next peer, we check that all signatures are correct and that no data was tampered with. So these signatures are quite important, also as the source of entropy...
D
...that I already mentioned, because it's actually a natural source of entropy: it's signed with secret peer keys, and it's not so easy to grind particles — hashes of particles — this way. So basically, now we're trying to release this to the network.
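The combine-sort-sign flow for CIDs, and the verification on the next peer, might look roughly like this. HMAC over a shared secret stands in for the real per-peer public-key signature, and `cid` is a stand-in content identifier — both are assumptions for illustration only.

```python
import hashlib
import hmac

def cid(data):
    """Stand-in content identifier for a piece of execution data."""
    return hashlib.sha256(data).hexdigest()

def sign_cids(peer_key, cids):
    """Sign the canonical (sorted) list of CIDs produced by an execution."""
    payload = ",".join(sorted(cids)).encode()
    return hmac.new(peer_key, payload, hashlib.sha256).hexdigest()

def verify_cids(peer_key, cids, signature):
    """Next peer's check: same CIDs from the same signer => no tampering."""
    return hmac.compare_digest(sign_cids(peer_key, cids), signature)

key = b"peer-secret"
cids = [cid(b"call result 1"), cid(b"call result 2")]
sig = sign_cids(key, cids)
assert verify_cids(key, cids, sig)                         # untouched data passes
assert not verify_cids(key, cids + [cid(b"forged")], sig)  # tampering detected
```

Sorting makes the signed payload canonical regardless of the order results arrived in, and the signature doubles as the hard-to-grind entropy source mentioned above.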
D
It's almost done in AquaVM — by "almost" I mean that we have a final pull request that's not yet merged but has been made — and now we are also simultaneously working on releasing it to the network, and we hope that next week we'll finally have it on the network.
A
Right — thank you so much! I guess we can have a Q&A session after the whole tech updates part. Tom, would you like to tell us about the CLI updates?
G
Okay, so I will tell you a little bit about the recent updates to Fluence CLI — what we did last month. One of the things is checking for updates: this is a feature for the CLI to check for new versions.
G
You will still have to install the CLI yourself, but at least you will know that a new version came out. There is also a new property in fluence.yaml that lets you pin a project to a particular CLI version. It ensures that the versions of every component are compatible with the exact setup of your project. You can also check the version of the rust peer that is used with your project. So this is just a way to make sure that your project keeps working in the future.
G
There is also a no-build flag for deploy and for some other commands. It allows you to not build the project when you deploy, because by default it is built before deploying, and this lets you do some things in between — if you want to, for example, sign a Wasm module before deployment.
G
We also did a little bit of testing, and now our core commands and core workflows with the CLI are tested and working in CI. It makes everything more stable, because we can check that everything works on each commit, and that's very good.
G
There is a small feature that supports .yml extensions, which were previously not supported. Also, we added auto-commits in the Fluence CLI repo — that is just for developers, so it is easier for them to contribute, because the GitHub Actions bot does all the linting and doc generation for you. And speaking of docs, we found a better library for config docs generation.
G
What this allows is: it brings all logs from Aqua function calls as they happen. So basically, when the Aqua is compiled, special callbacks to your own peer — the one inside Fluence CLI — are made, and you can see the logs of functions happening in real time.
G
So we added air beautify into the CLI. This is a command that prints AIR scripts in a human-readable, Python-like representation. It's not executable, but it can be used to easily read and understand AIR code.
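As a rough illustration of what such a beautifier does (the real air beautify output format differs), here is a toy pass that turns an s-expression-shaped AIR tree into an indented, Python-like rendering:

```python
# Toy beautifier: turn an s-expression-shaped AIR tree into an indented,
# Python-like rendering. The real `air beautify` output differs; this
# only illustrates the idea of a readable, non-executable view of AIR.

def beautify(node, depth=0):
    pad = "    " * depth
    if isinstance(node, str):              # a leaf: arguments, variables
        return pad + node
    head, *children = node                 # an instruction with children
    lines = [pad + head + ":"]
    for child in children:
        lines.append(beautify(child, depth + 1))
    return "\n".join(lines)

air = ["seq", ["call", "peer service fn"], ["par", ["null"], ["next", "i"]]]
print(beautify(air))
```

Indentation replaces nesting parentheses, which is exactly what makes the compiled form readable at a glance.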
G
There is also a quick-start template in Fluence CLI. It streamlines the user journey — our main journey of deploying and running what you deployed — so you can just init, do a deploy right away, and run the default service, which is part of this template; it's just a Hello World service. And the most recent feature...
A
Tom, thank you so much. Thank you so much for sharing the updates with us live. Any questions for Tom and Mike?
H
Well, yeah, I never looked at it, so that's something already.
G
Okay — that's what is actually running when you do aqua run on your Aqua code: it is compiled into this AIR representation, and to understand it more easily…
H
Hey — about the fixed CLI version and the updates: you said something about it also checking the rust peer version, or, let's…
G
…say. It does — well, it will not check for the rust peer version itself, but if you use… are you asking about this, right?
H
No — so suppose I have an Aqua version tied to my CLI version, and I do nothing for a year, and then all the nodes in the network have a higher Aqua version.
H
Yes — will it then say, like, "sorry"…?
G
I understand — no, not that. But if you pin the version of the CLI, you effectively allow this project to be used only with this version of the CLI, and if you use a particular CLI version, you can run the fluence dependency versions command, and there you will be able to see all the components and all of their versions, including the rust peer version that was used together with this project at the moment you used it.
G
So if you wanted to run your project, you could spin up your own part of the network — like in Docker Compose — with this version, and check how your project was working. It's only maybe, like, useful…
H
Well, there's lots of time to — you know, I don't know, maybe it's not even useful on my part. Mike, sorry — I'm going to review everything you said and try to understand it at another time, because today it was not working.
D
I can share — everything that I have said is written in a so-called proposal. We have internal ones.
E
My question — excuse me — is, of course, about support. It's not about plans; it's about library support, because the library that I tried before doesn't support colored output when compiled to Wasm, and the beautifier is in a utility…
E
With a library that supports colored output both for Wasm and for a console environment, it would be possible to produce colored output.
F
We had a lot of talks and participated in the main event of Consensus, having a booth there, and in a lot of Filecoin events — one of them was Filecoin Network Base, and also a decentralized compute community meetup, with everyone sharing their thoughts on everything that's going on in the space. That was pretty good for us, and Tom also had two talks, which are on our YouTube right now.
F
So you can check those out and learn more about the recent updates to the project from these videos as well. After Consensus, we participated in the Compute Over Data Summit, which was not as huge as Consensus, but still very productive for us.
F
Actually, we don't have any pictures from there, but we have Bernhard's talks, which you can also find on our YouTube channel. Yeah, we can go to the next slide. And of course we had our internal summit in Belgrade, which was very, very intense: we spent the whole week all together planning and working — a lot of strategy sessions and a lot of brainstorming — and, yeah, planning our mainnet and the future roadmap as well.
F
Yes, that was pretty good. And as for future events: we're planning to go to EthCC Paris in July, also to DappCon in Berlin, and to Token2049 in Singapore. We'll also probably be in Istanbul for the Devconnect week, and maybe something else, but these are the closest ones, and Paris is coming pretty soon. Let's go to the next slide. Here is a really, really brief agenda of what we are planning: we're going to be at the main event of EthCC.
F
We're going to have our booth there for the first two days, and Dmitry will probably also come and give a talk. So if you're interested and if you will be around, please join us there at the main event and, of course, at a lot of side events — we are mostly concentrated on the Filecoin…
F
…events, so yeah, check us out at Filecoin Network Base and Launchpad — we're going to participate in Launchpad activities, Cryptocon Day, and many more. Also, we're going to organize our decentralized compute summit, and I believe we'll have more information next week on our Twitter, so stay tuned — I believe it's going to be huge this time. And as for ETHGlobal Paris: this is a pretty famous hackathon.
F
We are not really deeply participating there, but you have an opportunity to go there with a scholarship. Can we go to the next slide? So, the same as we did for ETHDenver, we're also doing for ETHGlobal Paris, together with the Developer DAO community. So anyone who would like to go to Paris can apply for the scholarship — all you need is to have a team and the ability to build on top of our project.
F
So if you are able to go: scan the QR code and apply, and you'll be able to go to Paris for free and, yeah, win the prize. That's pretty much it. I believe we're going to share more about the upcoming events during the next community calls; I think it's too early to share the plans for Berlin and Singapore yet. That's probably it from my side, yeah.
A
Okay, thank you so much — we'll be waiting for that POAP link. So, guys, it looks like we have run out of content for our community call. Thank you for attending, thank you for joining. Any questions while we still have time?
A
Well, if there are no questions, then let's conclude our call. Again, thank you — thank you for joining, thank you for your participation and for your questions — and see you on the next community call, guys. Thank you, bye!