From YouTube: Aurora Update [2021-04-30]
Description
Follow the latest from NEAR Protocol on:
Website: https://near.org/
Discord: https://near.chat/
Blog: https://near.org/blog/
Twitter: https://twitter.com/NEARProtocol
GitHub: https://github.com/near https://github.com/nearprotocol
#Blockchain #FutureIsNEAR #NEAR #nearprotocol
A: Okay, looks like we are live right now. Hello everybody, welcome to the Aurora update, the weekly update that happens here on Fridays, as usual. Today we are going to discuss the news of the Aurora project, update all of you on what we are working on, what we have achieved during the last week, and what our plans are for the future.
A: Cool. As always, we have a pretty straightforward agenda: we discuss the status and the current goal, then we go to updates on what we are doing, and then we go straight to the discussion of what we are going to do next week.
A: Our status is really simple: the whole team is working hard on preparing everything for the Aurora release, which is going to happen pretty soon. I believe it is safe to say that it will happen in the next month, or even in the next couple of weeks. There are lots of things that need to be done, but everybody is going to give an update on their own work right away, and we had some planned things.
A: So when you are doing your update, please maybe refer to some of the things that were planned during the last call. I will start with myself. I was planning to do hiring, the legal structure, supportive docs, and bridge talks; and yeah, I was working quite a lot on the legal setup of Aurora.
A: I believe we have an approach for how to do this; some things are not yet fixed, but we are going to fix them pretty soon. Lots of supportive docs, especially the preparation presented during the town hall: this Wednesday was the town hall, so I was preparing a deck for it. I also started to work on the investor pitch deck; it's still work in progress, and I have already started to have the first calls with potential investors and partners.
A: Unfortunately, I can say that from these updates onwards, for the next several weeks, I believe my updates will be pretty weak in terms of the work with potential investors and partners, because, as you know, all of this is pretty sensitive stuff, so I cannot share it publicly until the moment we have secured all of the investors and partners. Also some various management tasks, but not a lot of those. Arto, what's on your side?
A: One short comment here: I was speaking to one of the partners, and they were interested in having WebSocket support rather than the HTTP API.
B: Yes, I know. On the other hand, for the HTTP API, almost every method that can be supported is now supported; I believe we are at something like 95-percent-plus compliance, or implementation status. In particular, eth_getLogs, maybe the most complicated of these methods, is now fully implemented, and that unblocks a number of partners. Some partners also had requests for non-standard APIs; in particular, Geth and Parity both have their own tracing APIs.
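For context, eth_getLogs takes a single filter object over a block range, an address, and topics. The request below follows the standard Ethereum JSON-RPC shape; the contract address is a placeholder, not anything from this call.

```python
import json

# Standard eth_getLogs request shape (Ethereum JSON-RPC).
# The address below is a placeholder; topics entries may be None ("match any").
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getLogs",
    "params": [{
        "fromBlock": "0x0",
        "toBlock": "latest",
        "address": "0x0000000000000000000000000000000000000001",  # placeholder
        "topics": [None],  # wildcard on topic0
    }],
}

# Serialize as it would be POSTed to an RPC endpoint; None becomes JSON null.
body = json.dumps(payload)
print("eth_getLogs" in body)  # True
```

A relayer implementing this method has to scan the requested block range and filter stored logs by address and topics, which is why it is among the hardest methods to support.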
B: We already have some people hitting those endpoints, even without any announcement of the URLs, and of course we've communicated those URLs to some key partners directly. So after today I will be quite comfortable letting people at the relayers. More broadly, we have good quality-of-service limits there, rate limiting and the like, and it's a pretty robust implementation by now, so we can just open that up. Also, continued work on the documentation website: Joshua and I took that forward in the past week.
B: There's a lot more to do on that; there'll be more of a focus on it next week. So in general the focus is now shifting towards validation and documentation. Those are tied together, because a lot of the previous documentation we had is out of date, incomplete, and potentially has errors, so we have to actually test everything that we have documented; therefore the two are tied together. Now, in terms of the important design discussions, this week we had a couple of things that are worth mentioning.
B: One is that we had to figure out how we're going to emit the event logs. These are the LOG opcodes in the EVM: how are we going to emit those from the EVM? We had been doing it using the NEAR logging facility, as in the log_utf8 host functions, and that's not a good solution for several reasons.
B: One is that those functions are defined as only taking UTF-8, whereas we need to log arbitrary binary data. Secondly, they have a 16-kilobyte limit, and we don't have any such limit on the EVM: people can log more if they are willing to pay for it.
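The UTF-8 restriction is not cosmetic: arbitrary EVM log payloads simply are not valid UTF-8, so they cannot pass through a UTF-8-only host function at all. A two-byte illustration:

```python
# EVM LOG payloads are arbitrary bytes. A UTF-8-only logging host function
# cannot carry them: for example, the byte 0xFF never appears in valid UTF-8.
data = bytes([0xFF, 0xFE])
try:
    data.decode("utf-8")
    valid_utf8 = True
except UnicodeDecodeError:
    valid_utf8 = False
print(valid_utf8)  # False
```

This is why the logs needed a binary-capable path rather than the UTF-8 logging facility.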
B: So we just figured that out, and that's currently under review and will be merged; after that, access to the event logs will go through the relayer.
B: So that's all happening still today. And then we had a big EVM block hash discussion, which is maybe better suited for the discussion segment, but it took a lot of people to figure out the path forward there. And, of course, a bunch of reviews of pull requests; we have a lot of pull-request flow now, so to speak.
A: Great beginning. What's on your side?
C: I fixed an issue that caused the bridge to not accept Ethereum blocks: the client that's running on the NEAR blockchain did not accept Ethereum blocks. This was caused by a deep reorg. We implemented a hotfix; we think that on mainnet there won't be such deep reorgs.
C: It was over 100 blocks deep. And there was another issue with the NEAR proofs of withdrawals not verifying on the Ethereum chain; we now think this is because our front end was storing old proofs, and after we redeployed the smart contract on the Ethereum chain, it can't verify the old proofs. But it should be possible to regenerate the proofs of all withdrawals so that they would verify. So it's not that old withdrawals can't be verified at all: they can, but the proofs themselves need to be regenerated.
C: There is no need for this check in the current connector, but in the future, if we redeploy it, we want to make use of this check to avoid accepting old proofs, so that we will not have to copy the list of already-used events; instead, we just set a minimum height, so all the older events are automatically invalid. But actually, this check is not currently implemented correctly.
C: It is checking the height of the reference block of the proof, and not the height of the block where the actual event took place, and because of this it is possible to generate proofs using new blocks as the reference, so essentially this check can be bypassed. But now we know about this issue, so by the time we actually need this check, we will fix it to make it actually work.
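The bug described above can be sketched in a few lines. This is a toy model, not the connector's actual code: the point is that a minimum-height cutoff must inspect the block of the event itself, because an attacker can always pick a fresh reference block when building a proof.

```python
# Toy model of the minimum-height check described above (illustrative names).
MIN_HEIGHT = 1000  # events below this height should be rejected after redeploy

def check_buggy(proof):
    # Buggy version: looks at the reference block used to *build* the proof,
    # which the prover is free to choose to be recent.
    return proof["reference_height"] >= MIN_HEIGHT

def check_fixed(proof):
    # Fixed version: looks at the block where the event actually took place.
    return proof["event_height"] >= MIN_HEIGHT

# An old event proven against a brand-new reference block:
old_event = {"event_height": 10, "reference_height": 2000}
print(check_buggy(old_event))  # True  -> the old proof slips through
print(check_fixed(old_event))  # False -> the old proof is rejected
```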
A: Okay, cool. What's on your side?
D: I completed the new functionality of the ETH connector, and additionally I completely removed the JSON dependency; currently it's Borsh only. I want to say that the contract size was reduced, not by a lot, but by about 100 kilobytes. After that, Kirill and I started our testing, and currently all integration tests related to the ETH connector complete successfully.
D: But I should mention that the parts related to precompiles and the parts related to calling an ERC-20 contract are not yet tested. So I think when that is completed, we should decide how to test it; I'm not sure it should be integration tests, but it is really related to the integration tests. Currently that's all completed, and I want to say that Kirill found, as I understand, a critical issue related to the proof: a field that allows skipping verification in the bridge.
D: So we found that if we send that field, it's possible to skip the verify-log-entry check, and that will really help me fix the issue. My current activity, not yet completed, is drawing a new diagram for pull requests and some formal specification for our reviewers, as we discussed with Arto.
A: How can I put it in the list? Excuse me: how can I put the last point in the list over here?
A: Can somebody please input it in here, so that I don't have to and we are not losing time here? So, let's go to Frank. Frank, how's the bully?
E: Yeah, I got pulled in different directions this week. As was already mentioned, the block hash issue basically got a hard priority interrupt, and so I implemented a stats command for the bully, trying to figure out how often the BLOCKHASH opcode is actually used in contracts. Then I ran into issues with the disassembler from go-ethereum, because the transactions are contract-construction transactions, not the actual runtime code, and the disassembler chokes on the arguments to these contracts. And then I started talking to Guillaume from the Ethereum Foundation.
E: They were trying to figure out how often BLOCKHASH and DIFFICULTY are used in production, because with their move to Ethereum 2 those lose their meaning there. And it turns out that they are actually not used that often; the only place where they are really used is one Bitcoin implementation on top of Ethereum, where people use the block hash to create randomness, which isn't the right thing to do anyway.
E: Sure, yeah, okay. And then I wanted to continue with the error I had last week: I updated my near-core, and my test scripts weren't working anymore. near-cli gave me a page of error stack about deleting an account with large state, and first I thought there was a bug in near-cli, so I just implemented that command in the bully directly.
E: But I got the same error, and then, with the help of Bowen, I think, I figured out that it is actually a limitation in near-core: we cannot delete accounts with large state.
E: When you just deploy the EVM and delete it, it's fine; but when you run the bully for a while, the account state becomes so big that you cannot delete it anymore. So I changed my local test script so that we basically always create random accounts. And one point for the discussion would be: is this actually a problem for us in production, and how do we deal with it?
E: Yeah, that was basically it; I'm basically now at the point where I can continue again, with these things fixed.
F: My bad. Yeah, like everyone else this week: the block hash discussion, which I guess we'll talk about on this call, maybe, but the result of that is just that we're going to return zero for now. The second item: I started work on the math API, as I said last week, but I pulled myself off of that, because the EVM is still the top priority, of course, until it's released. Then, after that, I reviewed the backend engine implementation, because I felt like there was something not right about it.
F: There was a lot of dummy data, so I just wanted to make sure that all the stuff we were returning made sense. The only things that really resulted from that were the big block hash discussion and the gas limit: what should we return for that? Then, after that, I started on the docs a bit lately, filling in some gaps and adding a bit more detail.
F: Then, following that, I worked on fixing the LOG opcodes, because they were pushed to the NEAR SDK but were not really showing up, because they can fail; so we now return them through the Aurora engine call itself, which is now called submit. And then after that I reviewed Michael's PR. That's it, in a nutshell.
G: Yeah, so at the beginning of this week I worked on the EVM precompiles with Marcelo, and also had a couple of calls with Joshua and Arto helping us figure out the precompile design in Aurora; thanks for that. I also worked with Evgenii on verification, fixing and testing the Aurora ETH connector and its new design.
G: I also updated the ETH-to-NEAR relayer to the release installer as well. And during my code walkthrough through the Aurora engine I found and fixed what I would say is a quite serious security issue in our ETH connector: it allowed anyone to pass the bridge-call flag in the proof. That means that any valid proof from Ethereum would have been accepted.
G: So I think it's really important that this thing is already gone, and we then tested this part together with Evgenii; now it works, it seems to work as it should.
H: Yeah, well, the main topic of this week was building the ERC-20 connector. I would say right now it's like 70 or 80 percent done: all the components are built, but I need to connect them to each other, so it's not done yet. On the other side, I got some help from Kirill, mostly on the tracking and polling side, which I appreciate.
H: I was working closely with Evgenii on some of the issues he already described about the reorgs. Also, I spent some time giving support in the bridge-related channels; we still need to improve our UX in several places, but for now some manual intervention still helps. And I had one important discussion with Mario from our SRE team that will help us set up a proper release process for the bridge and will improve the development experience in general, I expect. So that's it.
A: Okay, just want to say: do not hurry in publishing the CLI to npm, because it needs to be reflected. Okay, sorry: do not hurry in publishing the CLI.
A: Okay, cool. Okay, probably, oh yeah, okay: do we actually have the ERC-20 connector in the discussion here? No? Okay, no problem. Cool. Matt, what's on your side?
I: This week I was finishing the Aurora bridge UI prototype and then handing that over to Pierre, and then the design and implementation of the Aurora website.
I: And then there were some management activities related to the Aurora launch, and then some minor things. We just set up a Basecamp project for Alex B. and myself, because the number of little tasks related to the website kind of exploded as things came together; so we set up a little Basecamp project to manage those tasks, and started a final punch list for the launch on the Trello board.
I: So this week I was working mostly on the benchmarks. I think those are in good shape; there's a comment from Alexei Kladov that I'll address today, but that should get merged. The summary of that is: doing precompiles in Wasm directly is too expensive, so the math API PR is pretty important.
I: Thanks. So the first part of this week I was working on the NEAR side; that's almost done. The front end is ready; the only thing that's missing is testing the unlocking and burning of tokens, which I couldn't test because we had the issue with the Rainbow Bridge. So then I started working on Aurora, and I'll have to get back to it sometime next week. After that I was working on Aurora.
I: No, that's it, I think.

A: In the discussions I put some items; it's just a couple of things that I want to make sure of and check whom to ask, if needed.
A: Yeah, well, potentially, maybe just to spend ten more seconds to give everyone a little bit of stats on how the bridge is working right now.
A: So the total amount of assets bridged over the bridge is now more than 800 thousand US dollars, recalculated obviously from all of the assets, and the ERC-20 locker is now holding almost, well, 686 thousand. So, pretty good stats.
A: Cool, so let's go straight into the discussions. Arto, this is yours, isn't it? Or is this Pierre's? Pierre: things required for the Aurora web app integration, yeah.
I: Yeah, so basically it's just a short list of things that I'll need next week for completing the integration. The first one will be the MetaMask RPC: having a way to connect MetaMask to Aurora directly.
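Connecting MetaMask to a custom EVM network is done with the wallet_addEthereumChain RPC method (EIP-3085), which a dapp submits with parameters like the ones below. The chain ID and RPC URL here are placeholders, since the call does not state Aurora's actual values:

```python
# EIP-3085 wallet_addEthereumChain request, as a dapp would submit it to
# MetaMask. chainId must be a 0x-prefixed hex string; the values below are
# placeholders, not Aurora's real parameters.
add_chain_params = {
    "chainId": "0x539",                                # placeholder chain id
    "chainName": "Aurora (example)",
    "rpcUrls": ["https://example-relayer.invalid/"],   # placeholder endpoint
    "nativeCurrency": {"name": "Ether", "symbol": "ETH", "decimals": 18},
}
request = {"method": "wallet_addEthereumChain", "params": [add_chain_params]}
print(request["method"])  # wallet_addEthereumChain
```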
I: Okay, great, cool. The next item: on the front end, once a lock or burn is made, we're just waiting, and so we need a way for the front end to know: okay, has this transfer finalized yet? I'm not sure about the details of how the relayer is working on this.
A: At the destination, so the deposit transaction, actually, yeah. So I believe this is something that Kirill is working on. Kirill, you're working on the finalization of transactions, aren't you?
G: The finalization transaction is already there, but what Pierre has mentioned, I believe, is that he wants some way to get the status of a specific transaction: was this already transferred or not? And for that, for example, for sure we can add some API to the relayer, or we can add some subscription feature to notify about specific events as they are relayed. But okay, for sure we will discuss with Pierre what is the best way for him.
G: We didn't have any yet at the moment; we mentioned this only last week with Marcelo, and we were also thinking maybe about the addition of a specific function that will check whether the proof already exists in the token, so that it will be easier for you.
A: Yeah, well, from my point of view, this is something additional. So let's not change the contracts right now; we can fix this with other means, means that are outside the blockchain. So, if we can, keep it outside the blockchain, yeah.
H: The problem with fixing this outside of the blockchain is that most of our components right now are stateless, and almost any fix this close requires having some state, either on the relayer or on the contract.
A: An indexer for NEAR, so...
I: Yeah, well, we definitely need a view method to check the address.
I: That, and then the last one, the burn contract: yes, that's just the API for burning tokens when sending them back from Aurora to Ethereum.
A: So this is done through what we call the exit-to-Ethereum precompile.
G: I believe what Pierre mentioned is the burn contract call itself. When we discussed precompiles with Marcelo, Marcelo offered to modify the vanilla ERC-20 implementation and to add some function to have burn functionality.
A: No, we discussed this; this is not the way to go. We need to have a separate precompile for this. The reason is the ERC-20 implementations: actually, today I was speaking to a partner; they have their own token and they would like to deploy this token to Aurora, and they do not want to modify the ERC-20 logic, because they have the ERC-20 logic plus some additional pieces.
A: So instead, what we are implementing is the exit-to-Ethereum precompile, which is actually going to take a bridge token address, and then a recipient and an amount.
A: So this is what this precompile takes; then this precompile checks that the bridge token address is actually an address of a bridge token, then it goes directly into the state of this token and removes the specific amount from the message sender, and then Aurora does the transfer to the factory.
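The flow just described can be sketched as a toy state machine. All names here are illustrative; the real precompile lives inside the Aurora engine and operates on EVM storage, not Python dictionaries:

```python
# Toy model of the exit-to-Ethereum flow described above (illustrative names).
balances = {("token_A", "alice"): 100}   # (token, holder) -> balance
bridge_tokens = {"token_A"}              # addresses recognized as bridge tokens

def exit_to_ethereum(token, sender, recipient, amount):
    # 1. The target must be a known bridge token.
    if token not in bridge_tokens:
        raise ValueError("not a bridge token")
    # 2. The sender must hold enough of it.
    if balances.get((token, sender), 0) < amount:
        raise ValueError("insufficient balance")
    # 3. Burn on the Aurora side, then hand an exit record to the factory.
    balances[(token, sender)] -= amount
    return {"recipient": recipient, "amount": amount}

event = exit_to_ethereum("token_A", "alice", "0xrecipient", 40)
print(balances[("token_A", "alice")])  # 60
```

The key property is that the burn and the exit record are produced in one transaction, which is what the single-transaction design below argues for.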
C: Yes, so it should sound similar to sending a token to the locker, basically. The reason is that we want to have, at some point in the future, a unified interface: all tokens should be bridged in the same way, no matter whether they're native to the EVM in Ethereum, to the EVM in NEAR, or to the NEAR native runtime. So the idea is that when you want to bridge your token, you always have to actually send the tokens to the appropriate contract, not just...
C: Similarly to how this is done in Ethereum (well, not this exactly, but generally): if you want to send a token and then perform some operation, you first approve this token to the contract, which in this case would be this precompile; then you make a call, and this call calls transferFrom normally. So it doesn't need to be able to directly modify the state of the token; it shouldn't need this ability. It should behave like any regular Ethereum contract that accepts tokens for something.
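The two-step pattern C describes is the standard ERC-20 approve/transferFrom flow, modeled minimally below. The names and data structures are illustrative, not the connector's code:

```python
# Minimal model of the standard ERC-20 two-transaction pattern:
# tx 1: the owner approves a spender; tx 2: the spender pulls via transferFrom.
balances = {"alice": 100, "bridge": 0}
allowances = {}  # (owner, spender) -> remaining allowance

def approve(owner, spender, amount):
    allowances[(owner, spender)] = amount

def transfer_from(spender, owner, to, amount):
    if allowances.get((owner, spender), 0) < amount:
        raise ValueError("allowance exceeded")
    if balances.get(owner, 0) < amount:
        raise ValueError("insufficient balance")
    allowances[(owner, spender)] -= amount
    balances[owner] -= amount
    balances[to] = balances.get(to, 0) + amount

approve("alice", "bridge", 50)                  # transaction 1
transfer_from("bridge", "alice", "bridge", 50)  # transaction 2
print(balances["alice"], balances["bridge"])    # 50 50
```

This is the design trade-off debated next: two transactions through the standard interface, versus one transaction with a precompile that touches token state directly.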
A: What's your take on this? What do you think: should we keep the one-transaction precompile, or two transactions?
I: So this is a specific thing: having only one transaction, you do not need to do an approve and then call the precompile which does transferFrom. Anyway, we need to delete these tokens from the precompile in order for the ERC-20 to correctly show the total amount.
C: So the problem with this approach is that maybe in the future we implement the ability to bridge native tokens from the EVM back to Ethereum: tokens which are native to the EVM, which will probably exist at some point, because some of the contracts that want to be deployed there have their own tokens, and we want to bridge those tokens to Ethereum.
A: Yeah, I understand what you're saying. So let's probably think about this a little bit later, when there are Aurora-native tokens that we want to bridge to Ethereum; it's absolutely the same thing as what we have now. So let's delay the solution of this problem, because it's the same as with bridging NEP-141s, and we're not supporting that right now. Though the value of the current approach is already there: we have the NEAR blockchain getting some tokens. But yeah, I hear you.
A: Let me put it here; I'm going to put it down as an idea.
A: Okay, so just fixing the current solution: go with a single transaction for now; later on we will be able to update the precompile, so no problem.
A: This precompile will actually help us make use of the composability of transactions, literally; there will be an interesting way to do it: some contract executes and is actually calling the precompile, without an approve, without an additional transaction.
B: Let me sum up. The problem started from the BLOCKHASH opcode, which is the most difficult opcode to implement in the EVM, because it gives you access to the last 256 block hashes. We had this stubbed out for a long time as returning zero, but for full, 100-percent compatibility we need a credible story on that.
B: Let me read from here the requirements we formulated. First of all, we need to be able to return those most recent 256 blocks; but also the block hashes must be unique and distinct for every distinct EVM chain, shard, and contract deployment.
B: We should endeavor to take into account the rare situation of a short-term NEAR fork, which is why Max suggested we wouldn't want to use the block height; we ought to use the block hash. And the Aurora launch should highly preferably use a finalized block hash scheme, so we don't have to change it later, which would be quite painful and a mess. An additional weak requirement would be that the way we form the scheme for the block hash should not be that easily predictable.
B: This is not a hugely important requirement, because nobody should be using the block hash for pseudorandom number generation, although people do, of course. So, given those requirements, we wanted to implement it as a hash of the chain ID, the NEAR block hash, and the EVM contract account.
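The scheme just described can be sketched as a single hash over the three inputs. This is only an illustration: the actual hash function, input encoding, and field order in the Aurora engine are not specified in this call (sha256 here is an assumption).

```python
import hashlib

# Sketch of the scheme above: hash(chain id || NEAR block hash || EVM account).
# sha256 and the 8-byte big-endian chain id encoding are assumptions,
# not the engine's actual choices.
def evm_block_hash(chain_id: int, near_block_hash: bytes, account_id: str) -> bytes:
    h = hashlib.sha256()
    h.update(chain_id.to_bytes(8, "big"))
    h.update(near_block_hash)
    h.update(account_id.encode("utf-8"))
    return h.digest()

near_hash = b"\x11" * 32  # stand-in for a 32-byte NEAR block hash
a = evm_block_hash(1, near_hash, "aurora")
b = evm_block_hash(2, near_hash, "aurora")
print(a != b)  # True: distinct chain IDs give distinct EVM block hashes
```

Mixing in all three inputs is what delivers the uniqueness requirement: the same NEAR block yields different EVM block hashes on different chains and deployments.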
B: It looks like somebody's phone here is ringing. So a host function needs to be added; that's been tested out, and Michael is working on that. We initially thought that this would be a high priority for launch, but we lowered the priority after Frank gave us some data on how frequently, or infrequently, this instruction is used.
B: That way we will have the correct scheme from the beginning, with the assumption that the contract will catch up once it has that protocol upgrade, eventually. So that pretty much summarizes the plan here: launch with this one instruction broken, basically; based on the data we have, it shouldn't be a blocker for anybody, we hope, and then we fix that up over the summer.
C: So if you want to manipulate the block hash, you may need to discard some valid nonces, and that costs a lot; in NEAR it costs much less. Basically, for a block producer it's free to change the current block's hash by generating a different signature, so using the NEAR block hash won't get you even the same level of security, in terms of randomness, as the Ethereum block hash.
C: Then, as some people noted, it is possible to get a block hash from Ethereum and then use it in the web3 API to look up the block by hash. This can be implemented with any scheme; we just need our web3 API, our RPC servers, to actually use the same block hashes as the keys and to return the appropriate blocks for those block hashes. It doesn't matter which values are used as block hashes, as long as they're unique, so you can use this value to identify the block to return.
B: Let's be clear that the randomness question is separate. I don't care if BLOCKHASH will not give people randomness; we are not aiming to do that. So let's leave that out, yeah.
B: Use cases, yeah. The problem with that was a couple of things, I remember: one was that it doesn't give us uniqueness as easily, uniqueness of the resulting hashes, and the second one was...
B: Global uniqueness, as in: if we hash the chain ID, we hash the block height, and we hash the contract address, the contract name, then we are going to have this.
B: The shard, yeah. And it's also not the case that that's actually unique: we use the same chain ID, for example, on local deployments as on BetaNet; we reuse that ID. So that's one reason. The other reason is Max's objection: this had to do with short-term forks of the chain, and he felt that the hash would be a far better solution than the height.
C: And what bad thing happens if the block hash on the short fork happens to match the block hash on a different fork?
A: Correct, okay. So let's not stop here for a long period of time; let's go to the next item.
A: This is something that I added here: the custom ERC-20 contract. As I briefly mentioned, I had a chat with a partner, and in order for his solution to work, he needs the token deployments on Aurora to be an extended version of the ERC-20 standard. This standard is called ERC-665, I believe, or 667; I'm not quite sure, I need to check, probably 665. And so that's why they require this for all of their other contracts to be able to work, which is...
A: ...important. So obviously, once we deploy the tokens through the logic of the factory and of the Aurora logic, we will deploy some kind of vanilla implementation; but I just want to mention that we need the ability to deploy a custom ERC-20 implementation, so we need to have a way to deploy it.
A
Probably,
this
is
a
separate
function
too
or
separate
method
in
or
a
contract,
and
this
method
needs
to
deploy
a
new
ear,
c200
token
and
changing
the
mapping
of
the
and
changing
the
mapping
in
between
ndp,
141s
and
and
the
euro
c20s
in
aurora
change
the
contract
to
which
it
is
pointing
out.
A
But
potentially
this
is
also
something
that
can
be
done.
Just
wanted
to
mention
this
because
we
obviously
probably
this
is
not
something
that
we
are
going
to
do
right
here
right
now,
but
this
is
something
that
we
are
going
to
encounter
very
quickly
like
maybe
in
couple
of
weeks,
so
if,
if
we
are
able
to,
if
marcelo
marcel,
what's
your
take
on
this,
if
you
since
you're
working
right
now
on
the
on
the
nap
141
bridge
into
two
aurora,
what's
your
what's
your
first
reaction
on
this.
H: Well, first of all, I need to understand how this new ERC, the new standard, works in general.
A: We do not want to use any of the other functions from that standard; it is up to the clients to use these functions.
A: Yeah, the problem is that we have a function to deploy an ERC-20, yeah, deploy-bridge-token. This is the same function that we have, in fact, in the factory right now for NEP-141s, and this function deploys a vanilla implementation of NEP-141; for Aurora it's going to deploy a vanilla implementation of ERC-20, probably from OpenZeppelin, yeah.
H: From the top of my head, what I can think of is that he can actually deploy the new ERC-20 with an extra function that allows him to exchange, like to send tokens from the vanilla implementation to the new implementation, and then you interact with the other implementation. It's a little bit more cumbersome.
A: It's much more cumbersome, because then we are again forcing people to change their contracts, and...
A: And this, Marcelo, is also very connected with the way some stablecoins are working. Yeah, remember, we started a discussion about the fact that stablecoin issuers need to be able to blacklist other people, and they have...
H: ...some tools. That's a separate discussion, and I have my opinions on that; but in this regard, I don't think it would be easy to deploy other than the vanilla implementation, because otherwise we need to inspect these custom implementations ourselves, and it may be easy to misread or miss something there. So in general I think it's on the developer and the user of that contract to actually handle this, like interact with our interface, right now.
A: Okay, I think it's not going to be a very complicated thing at all, the enablement of custom ERC-20 contracts. So maybe we can discuss it with you.
A: Well, it depends on the governance structure. If Aurora is governed by a DAO, which is probably going to be the case, then this is a decentralized decision.
H: Well, I already told you my take on this: I don't think we should go through that, and I personally don't like it. Okay, but technically it is possible.
C: It can't be just automated, and I think, if we need some of those bridge tokens to have some special behavior, maybe we want to have either different connectors for them, or at least the tokens themselves shouldn't be deployed by the connector normally; it's some special process.
A: So here we have less versatility in comparison to the NEAR runtime, just because the bridge is located in the NEAR runtime; the Rainbow Bridge landing happens in the NEAR runtime. So...
A: It is not enough, because in those transactions there are always init methods: there is some kind of initial supply of the tokens and an initial distribution. Most probably, people do not want to, you know, create new tokens during the deployment of the contract on Aurora; so there may be many, many additional small things that are not going to work absolutely the same as with the first deployment on Ethereum.
A: Okay, let's discuss this; I believe this is going to be a hot topic, so cool. It's super cool that we have different opinions on this; let's discuss, and maybe we can generate a good idea of how to handle these cases. Cool, let's go further: how do we deal with the near-core limitation on the deletion of the contracts? Is that a problem for us?
B
So 10 kilobytes: that's the limit, and anything larger we can't delete.
B
And, of course, Bowen's suggestion of deleting state from inside the contract itself, as in managing our own storage, is a little bit complicated, because we can't iterate over the contents of the storage from inside the contract. So it somewhat needs to be externally facilitated in that regard.
A
Well...
B
Yeah, I think it's mostly a non-issue for us on maintenance. If we had to do some kind of upgrade of the storage format, we would have to do it without actually deleting everything, so it would have to be incremental.
E
But it means, if you run it on testnet: the way I test on testnet is that I have to deploy my own contract, because it has the special functions in there, and that kind of creates this, you know, toxic waste, in the sense that I will never be able to delete those contracts and the test tokens again. That is bound by the contract size and could potentially become... I mean, we're still in the early phase, where we don't, you know, play that much, but it can become extremely large once we play more.
B
Yeah, so locally it's workaroundable, given multiple ways to work around it, including changing the configuration and recompiling, and we don't really see an issue other than what you mentioned: toxic waste on testnet.
B
I think, for cleaning up something like that: as long as you have a full access key to the account, it's always possible to clean it up. You just need a contract which does nothing except delete storage keys, you know, specified keys one by one. Somebody should write a script to do that. That would be cool.
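The cleanup idea described here, a trivial "delete these keys" contract driven by an external script, can be sketched as follows. This is a simulation of the batching logic only; the real thing would enumerate keys via an RPC state view and submit delete transactions, and all names here are hypothetical.

```python
def batched(keys: list, batch_size: int):
    """Split the key list into chunks so each delete call stays within per-transaction limits."""
    for i in range(0, len(keys), batch_size):
        yield keys[i:i + batch_size]


class ToyAccountState:
    """Stand-in for on-chain account storage (hypothetical)."""

    def __init__(self, entries: dict):
        self.entries = dict(entries)

    def view_keys(self) -> list:
        # In reality: an RPC state-view query against the account.
        return list(self.entries)

    def delete_keys(self, keys: list) -> None:
        # In reality: a function call to a minimal contract that only
        # deletes the specified storage keys.
        for k in keys:
            self.entries.pop(k, None)


state = ToyAccountState({f"key{i}": b"v" for i in range(10)})

# Drive the cleanup externally, a batch of keys per call, until the
# account state is empty and the account can finally be deleted.
for batch in batched(state.view_keys(), batch_size=3):
    state.delete_keys(batch)

assert state.view_keys() == []
```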
B
Right, just because it's 30, you know... yeah, sure. So, in any case, I think it sounds like we're not blocked on this. We have a workaround, and also there's not too much we can do about it on our side, let's say.
B
Right, so Joshua is beginning on this already, and I've also asked Evgeny Kuzyakov. He, I believe, is back from vacation now, and he promised to take a stab at it as well.
B
So the questions here concern which other eyeballs we want to pull in, and when would be a good time to do the review. From my point of view, given what Kuzyakov indicated about where he is, maybe Monday would be a good time to start doing an active review of it.
B
There's also a question of how to collaborate on such a review, because it will be useful for reviewers to be able to collaborate, and it might make sense to raise a new PR specifically between the master branch and the eth-connector branch. It's big, but the PR is not so much necessary for the review itself, which I think is more likely to be successful if you locally review it on your computer and maybe test it as well; it's for keeping track of everybody's comments in the same place and being able to have a conversation around a particular piece of code, and so on. Those are further inputs to this review.
B
That's not gonna happen in this timeline, but in any case, any kind of specification like this, a formal specification, is going to help reviewers understand what's going on, and diagrams as well.
B
And for the fourth reviewer: of course, I have to review it myself. And did we have somebody else in mind? Let me see who we had... yeah. Of course, Michael already reviewed it substantially previously.
B
So it'll be good if he can take a second pass at it once we have the diagrams and the updates. Correct, so that's already... yeah, there are already five people at that point, so maybe that will be enough.
B
I think this meeting was long enough. You can probably start letting people go soon.
A
Cool, yeah. So, very briefly, let's close this with our plans for the next week. I'll start with myself: I'm going to do the general management with regard to the Aurora release, so we need to prepare, we need to make sure that everything is there and that everything is cool. I'm going to work on the pitch deck for investors, and we'll start to reach out to these people.
A
These are going to be my biggest things to do during the next week. What's on your side?
B
Yeah, so the biggest focus is shifting to QA and validation, and also, of course, the developer experience and documentation to enable that. There are a few loose ends on the relayer that I want to tie up, most importantly the WebSockets, and we can always improve the robustness, given that this new version will be in production from today on.
A
Cool; begin, Yohana.
D
So next week, as Arto mentioned, I should complete four small specs and diagrams. I'm not familiar with TLA+, but the main goal is to make it easy to understand for people who don't know how all these things work; in short, we should help our people understand how it works and validate the pull request on the eth connector. After that, I should test the eth connector with Kirill, I believe, and probably there will be some additional tasks.
E
And with this extended test script, keep testing and bring it to higher block numbers, and fix issues there.
A
Cool. Joshua?
F
Yeah, documentation: that includes the online documentation and also the Aurora Engine inline documentation. I want to get into the math API, but that might not be until the end of the week. Then, after that, the connector logic review; that's gonna take some time, because it's a lot of lines of code. Then the general review of the Aurora Engine as a whole.
F
I want to go through it all. I already went through part of it during this week, and that turned up the block hash issue and the other one. Yeah, that's what I'm planning on doing next week.
B
Yeah, I must emphasize again the math API additions, the host functions we need to implement the precompiles.
B
We have to land those in the mid-June mainnet upgrade; that's business critical. So yeah, just check with Bowen to make sure that you will not miss the deadline for when that needs to get merged.
A
Cool. Matt?
A
Okay, cool, so lots of plans for the next week. In case you're feeling that the schedule is a little bit too tight for the next week and you think you will not be able to implement everything that you are planning...
A
...don't hesitate to take one additional day from this weekend. I'm not saying that you need to do it, but just take a look at the speed at which things are getting done. Myself and, as far as I understand, Arto are going to work on Sunday, so if you want, you can obviously reach out to us on that day.
B
Yeah, we're already operating past the gas limit, so to speak, on the EVM contracts; it would be good to not add things that add to the gas cost. We have a protocol upgrade on the books; it will be delivered in the middle of June. Everybody is happy with it: Bowen is happy with it, Max is happy with it. I don't really see a problem with it.
B
We can access the last 256 hashes, and we're good. So I would suggest taking it up in the channel if you want to push this, but really, we all already spent a lot of time on this this week, and I don't think there's too much more benefit to be gained here. This is already a dead horse, as far as I'm concerned.
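For context on the 256-hash point: the EVM's BLOCKHASH opcode only returns a hash for the 256 most recent ancestor blocks (excluding the current one); anything outside that window yields zero. A small sketch of that window check (illustrative only, not engine code):

```python
def blockhash_available(current_height: int, queried_height: int) -> bool:
    """EVM BLOCKHASH rule: only the 256 most recent ancestor blocks are visible."""
    return current_height - 256 <= queried_height < current_height


assert blockhash_available(1000, 999)       # parent block: visible
assert blockhash_available(1000, 744)       # exactly 256 back: visible
assert not blockhash_available(1000, 743)   # too old: BLOCKHASH returns zero
assert not blockhash_available(1000, 1000)  # current block: not visible
```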
A
Yes, okay. So again, thanks for the input; please continue async, but before you continue, check what solution is used right now. Okay, so we have had a couple of people joining us and commenting during the stream: shout out to them. Thanks for joining, thanks for your activity; I was answering the questions async. It's cool that you are watching us and supporting us, and see you all next week.