From YouTube: Focus On #06 | Data Insights Core Unit
Description
For the 6th episode of Focus On, we will host the Data Insights Core Unit @data-insights-cu.
The Data Insights team will share an update on their Core Unit’s work over the past several months, highlighting the tools and services the Core Unit supports for the community.
Following the publication of several new MIPs, the Data Insights team will also explore what the future holds for the Core Unit, and welcomes any questions or comments regarding their strategy to accomplish their goals.
B
Hello, everyone, welcome to another Focus On call. This edition, number six, will have the pleasure of receiving the Data Insights Core Unit, where Tadeo, Piotr, Tomek and Leo are going to be explaining a bit more about what they do, how they provide value to the community and the other core units, and potentially discussing some of the updates to their budget.
B
Just as an introduction for those that don't know me, my name is Juan, the facilitator of the Sustainable Ecosystem Scaling Core Unit at MakerDAO. If anyone has any questions, please intervene politely; we'll have a lot of time at the end, but if there's something you would like to say and it's relevant, you can potentially interject. I don't think the guys at Data Insights will mind too much. So yeah, if you want to take it away, guys.
A
And we'll try to very briefly discuss our mission, our strategy, our products and our plans. We have a short presentation; it shouldn't take longer than 30 minutes maximum. Then hopefully we'll have some time for questions and any discussion that you want to have. So, if you can, please advance the slide for me, because you are controlling this. Yeah, the next one, maybe.
A
It's not that simple to use, and our mission is to make it accessible: to remove the complexity from it and to transform it in a way that it can be used, analyzed and utilized by people even without deep knowledge about the complexities of blockchain data. And how do we approach this, how do we do it? First of all, we use lots of sources of on-chain and off-chain data.
A
We integrate those sources together and then we use protocol semantics to create enriched and contextualized data sets, which means that we are transforming the raw data into something that is much easier to use: instead of working with transactions and events, you have business actions like opening a vault, generating DAI or bidding in an auction. Those data sets we share through many channels for different users; we utilize different channels for sharing this data, and we will come back to this later and give you a few examples.
A
Basically, it's through database sharing using something called Snowflake, the cloud data warehouse that we use for storing and sharing the data that we have. We have a set of dedicated APIs we've built, and we are going to build more dedicated GUIs for users, and we constantly try to work with people, empowering them by giving ad hoc support and creating trainings and documentation, so helping people use the data that we have. So, if we can go to the next one, yeah.
A
So let me very briefly introduce our team. I'm Tomek; Piotr and Tadeo are data engineers, and Leo just joined us a few days ago. He's a full-stack engineer and will help us to extend the scope of things that we do. So at this moment there are four of us, and yeah, I think that's it for the brief introduction. So I'll hand it over to you; you can continue with more about our products.
D
Great. We wanted to give an overview of the products and services that we provide to the community, as we realize that it's important not only to build them but also to communicate them. So, as an overview: our main way to share the data, as Tomek mentioned, is through an API. This is really well suited for technical users and application integrations.
D
You can bring it into your scripting or your application whenever you work programmatically, which makes it fairly easy, and again, you don't really need to worry too much about maintenance; we handle all of that. We're constantly improving the API, including its documentation, which is something we'll see later on. For less technical users, we provide this data through Google Drive and Google Sheets, again making it accessible to everyone. And then finally, there are the more casual users, but I think the community in general.
D
We also provide GUIs and dashboards, because they allow people to quite quickly get all the insights with very little time invested. We'll also talk about that later. And then something I particularly want to focus on is the ad hoc requests.
D
This is our core mission: to really empower the community through the use of data, no matter what skill levels they have. So we're always very interested in collaborating, whether that's with core units, delegates, partners or community members, to help them create their analyses or their projects that are data related.
D
So we'll go a little bit through all of these; if you have any questions, please let us know. Oh yeah, I forgot the main thing that we want to communicate: we do have a website, very simple, built on Notion, where we try to aggregate all the work that we do. So if you bookmark it, you should be able to get quick access to any data insights that you need. We also want to expand this to include the dashboards from other teams
D
and other community members, to really create a central repository for data-related stuff around Maker. You'll also find something there that may be particularly interesting to you: the public roadmap that we keep up to date, where you can get all of this information. And particularly for the API, if anyone's new to it and would like to have a quick walkthrough, we provide a kind of how-to guide there as well, including all the different changes we make over time.
D
And lastly, we don't currently have office hours, but we put at your disposal all these different contact channels where you can reach us. We are always very active on the forum and on Discord, and you can always book a meeting with us whenever you want. So now I'll hand it over to Piotr.
E
We try to cover all essential topics for the DAO and for the community, so in our repositories you'll find data sets that are related to vaults, and this is basically the most complex set of data, of tables and views and all the other cool stuff.
E
We cover bids and all the deals, so you can track down all the activity and get basically all the information that you would like to get. Other than this, we support governance, and by this I mean that we provide the whole history of voting, including polls and executive votes. So you can just go and deep dive into the whole process of voting and see
E
what the support was for any executive vote. Also, we recently added a data set that presents basically the whole history of protocol parameters, so you can get the value of any given protocol parameter at any point in time. We constantly update what we already have and we are trying to deliver it in the most usable form. So you can also get from us a snapshot of vaults at any point in time, or you can check what a vault's state was at a specific time.
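As a minimal sketch of the kind of point-in-time lookup described here, assuming a simplified, hypothetical layout of the parameter-history data set (one row per change; these column names are illustrative, not the actual schema):

```python
import pandas as pd

# Toy extract of a parameter-change history: one row per change,
# keyed by parameter name and the time it took effect.
changes = pd.DataFrame(
    {
        "parameter": ["ETH-A stability fee", "ETH-A stability fee", "ETH-A debt ceiling"],
        "effective_at": pd.to_datetime(["2021-03-01", "2021-06-15", "2021-05-20"]),
        "value": [0.055, 0.020, 2_500_000_000],
    }
)

def value_at(history: pd.DataFrame, parameter: str, when: str):
    """Return the value a parameter held at a given point in time,
    i.e. the most recent change at or before `when`."""
    ts = pd.Timestamp(when)
    relevant = history[
        (history["parameter"] == parameter) & (history["effective_at"] <= ts)
    ].sort_values("effective_at")
    return None if relevant.empty else relevant.iloc[-1]["value"]

print(value_at(changes, "ETH-A stability fee", "2021-05-01"))  # 0.055
print(value_at(changes, "ETH-A stability fee", "2021-07-01"))  # 0.02
```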
E
So this is it. We are constantly adding stuff to the data sets, so if you guys have any requests, just reach out to us and we would be happy to add more cool stuff to our data sets. We can go to the next slide for the overview. Okay, so this is just a snapshot, at a high level, of what we can deliver.
E
We recently added information about vault origins, so we can see what apps or third-party apps were used to create a vault. We can also display any particular protocol parameter on a timeline.
E
What you see on the bottom is the debt ceiling for a collateral, and on the left-hand side there's a super simple chart that shows how much was deposited and how much was withdrawn from the protocol: a simple metric, but pretty hard to get when you don't have all the data in a database.
C
Okay, so this is going over all the APIs that we have. As Tadeo mentioned earlier, we're constantly adding to and updating these. If you go on the website, where we have a ReDoc documentation page set up, you can see all of the information that has to do with how often it's updated, where to contact us if you have any requests or anything wrong, rate limits and whatnot. You can get all different kinds of information, ranging from governance to vault liquidations, to the vault details themselves, to protocol parameters, to a few of the experimental endpoints we currently have. And so, yeah, we're constantly revising and adding more to these, and more can be added per request. It's built with FastAPI, so there's also a Swagger UI to view the documentation and the specifications.
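For anyone new to the API, below is a minimal sketch of how a registered user might call it from Python. The base URL, endpoint path, parameters and bearer-token auth are placeholders and assumptions, not the documented interface, which lives on the ReDoc/Swagger pages mentioned above:

```python
import requests

# Placeholder values: the actual base URL, endpoint paths, and auth flow are
# described in the Core Unit's own documentation; these names are assumptions.
BASE_URL = "https://example-data-api.makerdao.network/v1"
TOKEN = "<token obtained after self-service registration>"

def get_json(path: str, **params):
    """Small helper around the hypothetical REST endpoints."""
    resp = requests.get(
        f"{BASE_URL}/{path.lstrip('/')}",
        params=params,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# e.g. pull recent vault events for one collateral type (illustrative parameters)
events = get_json("vaults/events", ilk="ETH-A", limit=100)
print(len(events))
```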
D
Next up is Google Sheets, again in our effort to make data as accessible as possible. We provide any of the data sets that we've been mentioning through this interface, as you can see from the screenshot. Most of these were created when servicing specific requests that we had from community members.
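As a small usage sketch, any publicly shared Google Sheet can be pulled straight into an analysis environment via its CSV export URL; the sheet ID and tab id below are placeholders, not links to the actual shared sheets:

```python
import pandas as pd

# Placeholders: substitute the ID and tab gid of the sheet you were given access to.
SHEET_ID = "<google-sheet-id>"
GID = "0"  # tab identifier
url = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv&gid={GID}"

df = pd.read_csv(url)  # requires the sheet to be shared publicly (read-only)
print(df.head())
```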
D
We put it up there and we set up scheduled refreshes so that it is always up to date. Again, if a community member is not able to access certain data, we're very much happy to create a data pipeline specifically to service and expose that. Next up are the graphical user interfaces. Here we build and maintain two main web apps, one for governance and the other one for vaults.
D
Here you get an overview, and I'll give you the example of vaults. You can look at a collateral type, at the ilk level, and see the general numbers and metrics related to it. You can drill down to see individual vaults, and then, when you drill down into a vault, you can actually get to see all the different actions or events that happened to that vault. You can track all of this through time, all throughout these web apps.
D
You have the possibility of filtering, as well as exporting to CSV exactly what you're seeing, so again making it as easy as possible for people to interact with the data.
D
We also don't shy away from using other dashboarding tools for specific programs; we actually use Tableau and Data Studio. Again, it's really just in the service of the users: whatever they're most commonly interacting with, we can definitely adapt to that. And that was kind of the talk about what we've been working on and the different tools that we have available.
D
Just on that, we recently published, I think a couple of hours ago, a review of the last period, including all the work that we've been doing. If there's anything of interest in there, please reach out to us. And now for the future: we want to keep supporting the protocol and its different endeavors. We see two main challenges ahead of us.
D
One is layer twos: onboarding those data sets and being able to bring full visibility to the community for the chains that Maker will eventually deploy to. The other is real-world assets. Here it's really about bringing what we're used to in web3, data availability at all times and at quite a granular level, and trying to onboard as much data from real-world assets as possible. As some of you may know, in traditional finance this is quite lacking.
D
Usually you end up dealing with PDF files and quarterly numbers. We're going to be working with the different real-world asset stakeholders to create a framework, and implement that framework, in order to make it possible for us to scale up and, as an example, bring monitoring tools on real-world assets for the entire community.
D
We really believe that, by creating tools, we can empower users to gain more insight and be more independent, so we're looking specifically into building analytical tools for people to be able to interact more easily with data. Something that we've been looking at is a couple of tools that allow you to do drag-and-drop, similar to Tableau, or drag-and-drop table building based on SQL, with a mixture of that and a UI, again to make it easier for everyone to interact with that data.
D
We're also reaching out to external partners to see what support or collaborations we can create with them, to again have Maker data everywhere in the ecosystem. And then there are two new data sets that we have on the roadmap that we're really excited about, and that I think will have a big impact: DAI usage and MKR usage, things that I think we see from other places and would like to contribute to. And oh, I thought I was finished.
D
I'm not. Another big point I want to mention again is the public roadmap; we're keeping this up to date.
D
So you can see what upcoming work we have coming up, and also details on the stuff that we built before. We're actually looking into switching this to a tool that will allow community members to directly give us feedback on the roadmap, and kind of create more of a living document where people can make requests on the roadmap directly. Again, I think here we're kind of creating the building blocks, and it will be interesting to see what other things we can come up with in the future. And now, yes, that was kind of the end of our presentation.
D
Thank you. I would like to remind you that we currently have our new mandate and budget up as a request for comment, so we'd love to hear feedback from you and the community. If you have any questions, we'd love to hear them.
B
Nice, I'll break the ice until people start getting more comfortable. So yeah, my first question was: for the non-technical user, what's the easiest way to access the data? Is it going to your website and seeing what data sets you have, or what would you recommend?
D
I think initially that would be the first step. I'd highly recommend, for a user interested in either governance or vault data, going into our trackers. I think they're fairly good and well explained, and they let you easily download data to work with it.
D
I would say start from that, and then again, for anything that you can see in the API documentation that you might be interested in but simply don't know how to interact with the API, we'll be happy to either help you interact with the API or simply do that extract for you through Google Sheets.
D
No, actually we're striving to make it as permissionless as possible. Currently the API is open registration; you can simply create a user yourself and start using it. With Google Drive, we're fairly sure it's fairly open; if we don't have the link on Notion, I should update that soon enough. But no, our objective is really to create public goods that can be used by anyone in the Maker community and outside of it as well.
A
Here the only difference is for getting data directly from the Snowflake database. This is for very technical users, but it enables you to build any query that you want, even very, very complex things, and even to integrate the data with your own data sets if you have any. This channel requires setting up your own Snowflake account, which, up to a certain point, is free.
A
But then, if you want to continue building very, very complex queries, you have to pay for a Snowflake subscription, which is completely outside of our reach; it's like paying directly to Snowflake for using their processing power. This is the only exception where you have to do something on your side: set up the free Snowflake account to access this data, and if you continue using it very heavily, then at one point you will possibly need to buy a paid subscription, because the free tier enables you to use only limited computing power. But all the APIs and all the GUIs are freely available.
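A minimal sketch of what querying the shared data from your own Snowflake account could look like with the standard Python connector; the credentials and the database/schema/table names are placeholders (the real share details would come from the Core Unit when access is granted):

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials and object names; substitute the shared database
# and tables you have actually been granted access to.
conn = snowflake.connector.connect(
    user="<your-user>",
    password="<your-password>",
    account="<your-account-locator>",
)

try:
    cur = conn.cursor()
    # Example of an arbitrary SQL query against a hypothetical shared vaults table.
    cur.execute(
        """
        SELECT ilk, COUNT(*) AS n_vaults
        FROM shared_db.maker.vaults
        GROUP BY ilk
        ORDER BY n_vaults DESC
        """
    )
    for ilk, n_vaults in cur.fetchall():
        print(ilk, n_vaults)
finally:
    conn.close()
```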
A
We are buying the on-chain data from external companies, so we are not running the infrastructure to get it ourselves. It's a fixed cost of getting the full history of the blockchain data, so we are not limited in any way and we can build any data sets on top of this, but this lowest technical layer is outsourced externally, so we don't manage it. It's not only a question of cost; it's a question of the effort required to maintain your own nodes. We try to avoid that and rather outsource it, so we are just getting the raw source data.
F
Yeah, hi guys, Frank Cruz, delegate, here; a couple of questions.
F
I noticed you have been posting on the forum about the possibility of working with the RWF team on doing some reporting, perhaps a data dashboard. So I was wondering, how do you envision this? Is this something where anybody can go on this dashboard and get an analysis on a monthly basis of how much the RWA asset originators are lending, how much they're collecting, how many of the loans are in default, or maybe 30 days late, 60 days late, et cetera? Is that how you're envisioning this dashboard?
D
Yeah, that's correct. We still need to build the foundations with them and design it. There's a very important aspect here about personally identifiable information and information that cannot be shared publicly.
D
We need to find a way to create a process that separates that and really exposes the data that is valuable and can be shared publicly. The ideal scenario would be sharing as much data as possible for each type of deal; this will vary between the asset type and the originators themselves, and also in how the framework actually sets this as a requirement for our originator partners, so that they are able to provide this data, and hopefully we get to aggregate it at higher levels.
D
That way we can give any point of view that you're looking for. So, let's say, from a high-level perspective, you'd have the entire real-world assets initiative all together, and if you wanted to deep dive into a specific asset type or a securitization, you should be able to, and also always be able to pull the raw data from it as well for your own analysis.
F
Got it, and I would imagine that's going to take some time to build out, and obviously to get some of those data sets into this dashboard. But I wanted to circle back to the possibility around all this data you guys are gathering, especially for governance and DAOs, which is obviously something that I think needs to be investigated, and the whole world is excited to see what exactly DAOs do as far as how to govern one and how to have success in the governance of one.
F
So with that in mind, with all this data that you guys are putting out there, like the MCD tracker, the voter tracker, which is pretty cool, I like it: is there any way we can monetize this for the benefit of MKR token holders or MakerDAO in the near future? So I'm thinking about monetization of the data you guys are putting out there; what's your thinking behind that?
A
Actually, we never discussed this. From the very beginning, we tried to create things that are a public good, something that we as a DAO can share for free. But this data has value, and monetization of it is a potentially interesting thing. To be honest, though, it was never discussed.
F
Got it, and one last question, for Leo: you work with the API of your Core Unit, right? So I'm thinking also further ahead; layer twos are coming into the picture. Have you been doing any investigation into StarkNet and StarkWare, and what it's going to take for the API there? Because obviously there's a difference in language, you go into Cairo, and I'm wondering what your thought process was with regards to capturing API data sets from layer twos like StarkNet in the near future.
C
Yeah, so we've had discussions about it and it's on our roadmap as something that needs to be investigated and completed sooner rather than later, but I have not started writing anything.
A
Let me add to this that StarkNet will probably be the first one that we will be able to add, because we have already started working on it; at this moment we are working on DAI on StarkNet. So very soon we'll be able to show the data about L2 DAI on StarkNet and, what's very important, about all the cross-chain things like DAI transfers between layer one and layer two. So we are in an R&D process here, and it looks like for StarkNet we will be ready very, very soon.
F
But is that from layer 2 dapps, or is that just from DAI moving across the DAI bridge?
A
This is from looking at the bridge and looking at layer two transactions and messages, right? So we are getting all the messages sent from layer 1 to layer 2 and from layer 2 to layer 1, and also all layer 2 transactions, and using this we are trying to recreate the whole history of DAI transfers and DAI on layer 2.
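As an illustrative sketch of the approach just described, netting the L1-to-L2 deposit messages against the L2-to-L1 withdrawal messages gives a running DAI balance on the L2; the frame below is toy data with assumed column names, not the actual bridge data set:

```python
import pandas as pd

# Toy stand-in for the decoded L1<->L2 bridge message dataset described above.
messages = pd.DataFrame(
    {
        "ts": pd.to_datetime(["2022-01-01", "2022-01-03", "2022-01-05"]),
        "direction": ["l1_to_l2", "l1_to_l2", "l2_to_l1"],  # deposit vs. withdrawal
        "dai_amount": [1_000.0, 500.0, 200.0],
    }
)

# Deposits add to L2 supply, withdrawals subtract; a cumulative sum over time
# gives the running DAI balance held on the L2.
signed = messages.assign(
    delta=messages["dai_amount"].where(
        messages["direction"] == "l1_to_l2", -messages["dai_amount"]
    )
)
l2_dai_over_time = signed.sort_values("ts").set_index("ts")["delta"].cumsum()
print(l2_dai_over_time)
```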
F
Got it. So I guess you'll be able to follow the money, so to speak. If there's a layer 2 version of Compound, you'll be able to tell what users are doing; you'll be able to provide data sets of what the unique users of a layer 2 like StarkNet are doing, correct? If I understood you correctly.
A
Yeah, absolutely. This is like our holy grail for this year. This is definitely what we will try to do: to extend everything that we do on layer one to layer two, and be able to understand the usage of DAI on layer two, and, if one day MCD is implemented on layer two, then also provide full analytics on the layer 2 MCD implementation.
B
Right, thank you, Frank, for the inquisition. Now I had a quick question regarding one of the graphics that you showed, the one about where the vaults were opened, and I saw that one of the origins was "makerdao". Was that like Oasis, or what does "makerdao" mean in that context?
B
Okay, just checking. Yeah, and if anyone has any questions, either raise your hand, write in the chat, or just jump in. But regarding what Tadeo mentioned, that you're going to be trying to digitize the data coming from real-world assets, and that that might include a lot of PDFs:
B
what's the plan over there? Is it, I don't know, either hiring some solution to scan the documents, or is it more like outsourcing everything to some third-world country? How are you planning on doing this?
D
That's part of the discussions that we need to have with the real-world asset stakeholders. I think what we'll do is pick a first originator to work with and then build that out with them, with the short-term goal of onboarding their data, but also with the long-term goal of creating this framework, and then we'll iteratively learn from the new originators. Again, I expect to see quite a wide range of, let's say, data quality: from data
D
that the originator sends that is simply incorrect, all the way to data that we're just tapping into directly in their systems and extracting ourselves. And here we will be supporting a lot the real-world asset focused core units, as they'll be the ultimate judges of the quality of that data.
B
All right, so I guess that's part of the mandate, and there are a lot of unknowns on how the data will come and how you're going to deal with that. Yep, perfect. And yeah, regarding the budget, there were a lot of concerns that were raised by the community members. I don't know if you guys want to address those or comment on that.
D
There were quite a few; we did answer them on the forum, and if there are any additional ones, we'll be happy to talk about them. Our new revised budget does increase our scope and therefore the budget itself. Something that we're trying to work with is this idea that we're doing a one-off budget proposal for an entire year, where there are a lot of unknowns.
D
And again, our work mostly responds to the needs of the protocol and the community. A lot of them are unknown at the moment, so we're trying to have that flexibility to be able to respond to them.
B
One of the comments was to potentially post a budget just to keep operations running, and do the rest separately. Is that something that you're considering?
D
Yeah, actually, the way we designed the budget kind of serves that purpose. We do have the expansion pool as a separate line, so you could consider that being purely the expansion side of things.
D
The idea here is really that what we are requesting is for specific actions that we want to take. We also take into account that, in response to the future needs of the protocol, we might need to switch: maybe, instead of hiring an additional member, we need to pay an external partner to provide something for us, as you mentioned, for example being able to process PDFs.
D
So that's why we're also keeping that line all together. And while the budget is less transparent in that sense, because of the single budget line, the idea is that we support it with a quarterly financial report where we actually show the breakdown of where, essentially, those expansion funds end up going.
B
Nice, yeah. And regarding this, I think Tomek answered already on the forum, but maybe you want to comment on it: how would things look if this budget gets voted down and things stay as they were voted four months ago?
A
Probably the biggest problem will be the expansion to layer 2 and real-world assets. This definitely requires a lot of work, and with the current budget it will be hard to really extend the scope to this data, so we will mostly have to focus on maintaining the things that we have, and by maintaining we mean keeping up to date with all the changes in the protocol
A
and answering user requests for new queries and new visualizations. But it will limit our capability of extending the scope of the data to all those new fields that we think are very important. So if we just keep the current budget, we will be able to maintain what we have, to follow all the changes in the protocol, and to answer other queries, but it will be hard to extend the scope to layer 2, real-world assets and all the new things that the future will probably bring. So yeah.
B
Yeah, makes sense. So right now it seems that a lot of the new mandate involves real-world assets and layer 2, where Maker is still not very present. So would you consider, as a team, waiting with this budget until Maker is there, or is it something that you want to start before, for any given reason?
D
Well, the work really is needed; I mean, we have already started working on these things. While Maker will deploy on StarkNet in the future, the protocol itself needs to have that data visibility beforehand. And with real-world assets, because we still need to define the framework and actually define how we're going to be doing this, there's already a lot of, let's say, time that we need to dedicate to this.
D
In the meantime, what we globally call the ad hoc requests, which you can see in the financial review, are something that we're doing on an ongoing basis with other core units. We've currently been working with some core units, not all of them, and we're hoping to also expand that to the other core units that are left, that we still haven't engaged with.
A
Yeah, so we are sure that this is very important for the future of the DAO. So for now we will fight for this, to keep it as it is and have this extension included in the new budget. If it gets voted down, then definitely it will come back: either we will ask for this money very soon, or other core units will have to increase their budgets to cover it, because without access to good data it's really hard to maintain and to manage those new areas.
C
Yeah, I'm wondering: do you have data on who's using the data, and metrics on, you know, kind of the use of it? As a non-technical person, there are so many URLs, and it's hard to know where information is or how to find it. So I'm wondering if you have visibility on the visibility of your different websites and who's using the information. I think Frank's question was really interesting to me, about, you know, how could we monetize the information in some way or productize it even further?
D
So we have a couple of metrics that we've been able to monitor. On one side, in the API, we can see when a new user is created; again, here, because we don't enforce it, we ask for an email, but it doesn't necessarily need to be a real email or a work email, let's say, so we can't be too sure where the real users are coming from.
D
In that sense, on the website we've implemented analytics metrics, like Google Analytics, just to see how people are clicking around, but we're not identifying the users behind them. From our personal experience with the people that reach out to us, mainly through Discord, we've seen a big focus on core units, also because we've invested in doing that communication to core units, and then we do get community members reaching out. Something that we want to change for this new period is doing more proactive communication to the wider community. One of the big points is that we realize not everyone reads the forum, or has time to read the forum, or sees what new data sets we have put out, so we want to use our graphical interfaces to showcase that new data, as an example the protocol parameters. We think that would be very valuable and would increase the engagement of people with our data.
B
Yeah, just to build on what Jango was saying right now: we know that some of the critical core units are using these services, for example Risk. But it's hard for the wider community to understand who's using it and how, and potentially whether it's a public good that the whole DAO should fund, or whether each core unit should reflect their costs in a true way and fund them internally. So I think that would be useful to have.
D
No, sorry, just on your comment: we did discuss internally this idea that, for whatever services we provide to specific core units, we would charge them directly. Our main concern there is that we would end up working for the core units rather than for the community at large. We like to still think that we are working for the community, even when we're engaging with core units.
D
And another point that I would be worried about with the proposed change is that new core units would be dissuaded from working with us, because, before knowing the benefits of working with us, they would need to commit to paying us or to subtracting that amount from their budgets.
D
Yeah, absolutely. We have our public channel on Discord, and you can find all the members of the Core Unit there; you can also DM us. We're also very active on the forum, so we'll see a message there. And then, mainly, I think, through the contact information on the website: if you don't want to make a public post, you can connect with us and book a meeting directly with us. So yeah, happy to engage.