A: Okay, hey everyone, welcome to Wednesday's DVC call, hosted by GovComms. Thank you all for joining us. Just a couple of friendly reminders: during the call, if you want to be called on for a question or comment, please use the chat function, or you can also use the raise-hand function. So thank you all for joining. I'll go ahead and hand it off to Rune.
B: All right, hey everyone. I'm going to start off by sharing my big Excel sheet. So, like I wrote on the forum today, I think the most important thing to talk about is how to transition, and what we expect the workforce to look like in the short run, basically even over the course of the Pregame: after the Endgame proposal MIPs are approved, and until we do the actual Endgame plan launch and launch the MetaDAOs.
B: And what do we expect to happen? I found that this way of showing the whole structure is pretty helpful, because what's shown here is basically everything. Well, there are also the ecosystem actors down here, but basically this is a good way to get an overview of everything there is, in a sense of where all the people are located.
B
You
have
the
scopes,
which
are
basically
sort
of
top-down
delineation,
right
of
like
how
you
make
your
governance
and
the
dvc's,
how
they
kind
of
interact
with
the
ecosystem.
B
And
then
you
have
the
metadows,
the
metadata
clusters
and
they're
kind
of
there's
a
one
way
of
kind
of
doing
work
for
the
protocol
and
then
they're
the
ecosystem
actors
and
that's
sort
of
another
way
of
doing
work
for
the
protocol.
Basically,
and
then
the
kind
of
like
the
structure
in
some
sense
is
that
yet
at
the
top
you
have
the
dbc's,
the
delegates
and
the
councils
or
tribunals.
B: And the point is that, based on the scope frameworks developed through the interaction between those three groups at the top, they go in and interpret those, and then they interact with the workforce, which is either people in the MetaDAO clusters or ecosystem actors.
B
Yeah,
so
I
guess
I'll
just
talk
a
little
bit
more
about
the
scopes
for
a
second
right,
so
the
idea
is
basically
that,
like
you
know
the
scopes
that
really
represent
the
total
sort
of
complete
end-to-end
possibility
of
what
maker
does
right
and
then
all
of
them
has
a
scope
framework
and
that
scope
framework
put
you
know
with
as
much
detail
as
possible
describes
how
we
do
this
particular
thing.
Right
and
that's
you
know,
that's
a
key
piece
of
it.
Part
of
the
end
game
is
that
we
want
to.
B
We
want
to
unlock
all
these
things
down
right.
We
want
to
prevent
kind
of
like
human
discretionary
decision
making
and
instead
maximize
this
kind
of
autonomous.
B
You
know
algorithmic
approach
to
how
the
dow
is
run
right
and
the
governors
actually
run
the
scopes
and
the
key
sort
of
role
that
the
governors
play
is
that
they
are.
They
are
responsible
for
following
the
scope
frameworks.
So
what
that
means
is-
and
let's
say,
there's
a
scope
framework.
You
know
the
scope
framework
for
for
rwa
right.
That
could
be
something
like
you
know:
risk
and
clean
money
scoring
or
something
like
that.
Right,
then,
that
this
is
how
they
sort
of
this
is
the
like.
B
It's
a
framework
that
determines
this
is
how
we
figure
out,
which
protectors
to
increase
the
debt
ceiling
to
right
and
then,
if
it
turns
out
that
the
core
unit
of
a
particular
governor
has
sort
of
completely
ignored
what
the
the
risk
and
clean
money
scoring
part
of
the
of
the
rwa
collateral
scope
framework
says
and
just
gone
ahead
and
proposed.
B
So
if
a
government
does
something
like
that,
then
they're
actually
like
they're,
actually
like
they
might
actually
get
penalized
for
that.
So
that
might
be
seen
as
sort
of
you
know
as
like
a
failing
like
that
sort
of
a
failure
to
provide
the
results.
They're
supposed
to
be
right
right.
One
of
the
results
that
they're
they're
guaranteeing
is
that
work
is
done
according
to
the
scope
frameworks
right
and
they
don't
sort
of
mess
around
and
make
things
up
on.
The
fly.
B
And
that's
sort
of
how
it
all
how
it
all
works
right
and
at
the
top.
You
basically
have
this.
You
have
the
four
primary
scopes,
and
this
is
like
the
extent
of
this
is
basically
what
what
maker
does
right.
So,
there's
there's
protocol
engineering
and
actually
there's
like
some
kind
of
I
mean
so
proto
engineering,
it
sort
of
has
it's
split
up
into
three
pieces
and
it
really
plays.
It
has
a
very
unique
role
initially
so.
B
Protocol
engineering
is
actually
treated
differently
than
everything
else
in
the
sort
of
the
the
early
game,
basically
early
on,
and
it's
subdivided
in
this
way
and
functions
a
bit
differently,
but
then
basically
other
than
other
protocol
engineering,
which
is
sort
of
all
the
major
developments
as
well
as
running.
B
You
know,
running
security
on
executive
votes
and
that
kind
of
stuff,
then
we
basically
have
you
know
decentralized
collateral,
which
could
be
onboarding
decentralized
collateral,
changing
the
risk
parameters
with
centralized
collateral
or
managing
d3ms
for
for
metadatas,
and
then
there's
rwa
right,
which
is
the
same
thing.
But
for
for
protectors-
and
it's
actually
much
like
decentralized
collateral
is
more
has
more
stuff
that
it
can
do
right.
It's
both
interacting
with
ethereum's
metadata.
It's
also
onboarding
and
off-putting
collateral.
B
You
know,
does
marketing
and
growth
initiatives
and
all
of
these
all
of
these
scopes
like
so
so.
A
key
feature
of
the
in-game
plan
is
actually
that
it's
supposed
to
be
possible
to
have
it
run
in
a
com
in
what
I
would
call
a
full,
fully
reactive
and
sort
of
fully
reactive
mode,
which
means
that
there's
actually
no
budget
whatsoever
sort
of
allocated
top
down
to
take
any
kind
of
initiative
top
down.
B: I'll answer these questions in just a second, but first I'll come with an example of what this reactive mode is, because I think the classic top-down approach is pretty straightforward: for decentralized collateral, it's spend some money to onboard; for RWA, it's something like spend some money to onboard a PSM; and then for growth, it's like spend some money to do a marketing campaign, something like that. But for decentralized collateral you also have...
B
You
have
this
sort
of
the
the
bottom-up.
You
know
collateral
onboarding
the
ref
share
right,
so
a
random
metadata
or
a
company
could
can
do
an
entire
collateral
onboarding
for
maker
and
have
it
all
ready
to
go
and
build,
create
a
code
for
the
executive
vote
on
everything,
and
then
they
basically.
B
They
basically
propose
that
to
the
decentralized
collateral
scope
and
then
they
determine
whether
it's
a
valid
proposal,
and
then
they
I
mean
so
they
sort
of
react
to
this
work
right.
They
don't
take
it.
Take
the
initiative
themselves
to
to
do
a
particular
level
onboarding.
They
simply
sit
there
wait
around
for
somebody
else
to
do
all
the
work
and
take
all
the
risk
and
then
determine
whether
it's
valid
and
then,
if
it
is
valid,
then
unwanted
and
set
it
up
with
a
rev
share.
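The reactive flow just described, where someone else does the upfront work and the scope only validates it and attaches a rev share, can be sketched in a few lines of Python. This is purely an illustrative sketch: the function, the framework fields, and the 25% rate are all invented for the example, not part of the actual Endgame spec.

```python
# Hypothetical sketch of the reactive, bottom-up collateral onboarding flow.
# All names, fields, and rates are invented for illustration.

def review_onboarding_proposal(proposal, framework):
    """The scope only reacts to work done by others: it validates a
    finished proposal against the scope framework, it never initiates."""
    # Check which framework requirements the proposal is missing.
    missing = [req for req in framework["required"] if req not in proposal]
    if missing:
        return {"status": "rejected", "missing": missing}
    # A valid proposal is onboarded and set up with a rev share,
    # rewarding whoever did the upfront work and took the risk.
    return {
        "status": "onboarded",
        "collateral": proposal["collateral"],
        "rev_share": framework["rev_share_rate"],
    }

framework = {
    "required": ["collateral", "risk_assessment", "executive_code"],
    "rev_share_rate": 0.25,  # hypothetical share of generated revenue
}
proposal = {"collateral": "XYZ-A", "risk_assessment": "done",
            "executive_code": "0xabc"}
print(review_onboarding_proposal(proposal, framework)["status"])  # onboarded
```

The point of the shape: the scope never carries a `start_onboarding()` step at all; the only entry point is reviewing work that already exists.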
B: You know how, if you come to the decentralized collateral scope and you follow a certain format, that's an opportunity for you to make money. There's an opportunity for you to put in place a rev share, write some code, and make money from that. And this whole dynamic of, you know, rev share, and reacting to others taking the risks and doing the upfront work...
B
That's
so
central
to
the
endeavoring
right,
because
that's
really
that's
it's
a
funnel
fundamental
way
to
deal
with
the
challenge
of
organizing
a
workforce.
Where
the
problem
is,
I
mean
we
talk,
you
know
we
talk
a
lot
about
kpis
and
so
on,
right
and
and
rev
share.
Is
you
know
it's
the
ultimate
kpi
right,
because
the
kpi
is
like
built
into
the
comp
itself.
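The "KPI built into the comp itself" idea can be stated directly in code: there is no separate measurement or bonus step, because pay is simply a function of the revenue the work generates. A minimal illustration, with the 25% share being a made-up number:

```python
# Hypothetical illustration: rev share as the "ultimate KPI".
# Compensation is computed directly from revenue generated, so no
# separate KPI measurement exists; the rate here is invented.

def rev_share_comp(revenue_generated, share=0.25):
    # The KPI *is* the comp: pay scales one-to-one with the revenue
    # the contributor's work brings to the protocol.
    return revenue_generated * share

print(rev_share_comp(2_000_000))  # 500000.0
```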
B
So
that's
basically
always
something
we
want
to
be
available
to
handle
right
and
have
that
be
seen
as
an
interesting
opportunity,
and
there
will
be
tons
of
opportunities
like
this,
because
there'll
be
the
opportunity
to
do
this
kind
of
stuff
on
every
single
l2,
for
instance,
and
there
will
be
opportunity
to
do
it
for
different
like
to
have
multiple
versions
of
the
same
collateral,
with
different
risk
parameters
and
so
on.
B: Of course, finally, there's ultimately also risk parameter adjustment, which is necessarily top-down. But even though it's top-down, in the sense that Maker is paying to get the work done and it's overseen by the Scopes and the governance and so on, it's still reactive in the sense that it looks at what collateral we currently have available...
B
What's
the
data
we
have
available
and
then
let's
change
the
parameters
a
bit
right
and
and
these
two
interact,
these
two
interact
nicely,
because
once
you
do
the
collateral
onboarding,
then
you
count
on
on.
Basically
this
the
design
of
collateral
scope
itself
to
then
begin
to
change
the
risk
parameters
and
increase
the
debt
ceiling
based
on
on
the
performance
of
this
collateral
type.
Basically,.
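The division of labor described here, onboard first and then let the scope's own rules move the debt ceiling as the collateral performs, could look something like the following sketch. The thresholds and step sizes are hypothetical, invented purely to illustrate a reactive, data-driven rule rather than any actual Maker parameter:

```python
# Hypothetical reactive debt-ceiling rule: the scope reacts to observed
# performance data instead of taking top-down initiative. All numbers
# (thresholds, step sizes) are invented for illustration.

def adjust_debt_ceiling(current_ceiling, utilization, max_step=0.25):
    """Raise the ceiling gradually when the existing ceiling is well
    utilized; shrink it when the collateral type is barely used."""
    if utilization >= 0.8:          # collateral type is performing well
        return current_ceiling * (1 + max_step)
    if utilization < 0.2:           # barely used: cut the ceiling back
        return current_ceiling * 0.75
    return current_ceiling          # otherwise leave it alone

ceiling = 10_000_000
ceiling = adjust_debt_ceiling(ceiling, utilization=0.85)
print(int(ceiling))  # 12500000
```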
B: One other thing about it: you can't do collateral onboarding with rev share for PSMs, because they don't make money. So I guess the only thing that happens in RWA collateral is that they react to facts and then change the parameters, for the Protectors primarily. And then for growth...
B
It's
actually
kind
of
interesting,
bottom-up,
rev,
share
kind
of
thing
that
has
right,
but
so
basically
that
you
know
the
the
core
business
model
of
every
metadata
is
use
of
like
front
end.
User
acquisition
for
the
front
ends
right,
but
the
rev
share
available
for
user
acquisition
is
also
available
to
ecosystem
actors
right.
So
it's
not
just
metadials
that
are
incentivized
to
try
to
to
bring
users
to
the
maker
ecosystem.
B
But
basically,
what
I
mean,
because,
because
one
of
the
things
we
want
to
avoid
is
that
people,
you
know
that
it's
just
sort
of
it's
just
literally
a
scene
as
a
way
to
kind
of
like
turn
it
into
to
like
sort
of
monetize
our
user
base.
Basically,
then,
the
idea
is
that
the
cash
like
the
revenue
share
that
you
get
provided
as
a
company
as
like
a
non-metadata
is
actually
like.
It's
not
actually
income
you're,
getting.
Rather
it's
like
a
grant
for
marketing.
B
Basically,
so
I
mean
a
good
example
would
be
something
like
oasis
right,
so
races
they
will
have.
They
have
tons
of
users
with
maker
and
they
would
be
qualified
for
like
a
massive
rough
share
or
massive
basically
cut
of
disability
fees.
But
then
the
growth
scope
would
would
I
mean
the
growth
scope
would
basically
sort
of
verify
that
that
money
isn't
like
it
isn't
just
income
that
goes
to
oasis
shareholders.
B
Basically,
rather
it
is
a
grant
that
oasis
can
do
can
use
to
to
you
know
to
to
acquire
more
uses
for
their
front
end
right,
so
we
basically
give
them.
We
basically
give
them
money,
so
they
can
go
out
and
do
marketing
campaigns
for
oasis
and
then
in
doing
so
major
benefits,
because
we
get
even
more
users
and
they
benefit,
because
this
just
benefits
their
their
core
business
model,
which
is
to
provide
premium
features
in
the
front
end
right.
B
So
basically,
something
like
oversee
a
third
party.
B
So
this
managing
the
brain
that
also
relates
heavily
to
metadatas
right
and
ensuring
that
metadows,
don't
somehow
like
cause,
spread
brain
damage
to
the
major
ecosystem
as
a
whole
and
then,
finally,
on
top
of
that,
that's
completely
there,
that's
absolutely
the
possibility
of
doing
actual
sort
of
top-down
marketing
campaigns,
but
initially
that's
we
really.
You
know
we
don't
want
that
initially
right,
we
want
to
really
we
want
to
cut
down
the
complexity
of
the
workforce
initially.
B: Anyway, the point is that all this stuff down here, these are basically supporting Scopes, so what they really do is provide the infrastructure for the primary Scopes to function. An example is that Interfaces helps with building, you know, the basic front end that decentralized collateral is using, but that growth is also using to attract users. And Stability and Liquidity supports decentralized collateral and RWA collateral in understanding ALM and its impact on the peg and so on, and provides them with the resources for that. And Infrastructure runs the sort of infrastructure necessary for something like, you know, the Oracles needed for decentralized collateral, and so on.
B
Basically,
so
that's
also
and
the
reason
why
the
supporting
scopes
have
tribunals,
but
the
primary
scopes
have
councils
and
coordinates
is
because
the
supporting
scopes
are
just
meant
to
be
a
lot
smaller,
so
they
won't
have
that
much
activity,
while
the
primary
scopes
could,
you
know,
make
or
grows
to
a
really
really
huge
size.
They
basically
will
have
both
the
council
and
then
they'll
have
multiple
core
units
right.
So
you'll
have
multiple
groups
doing
these
things
in
parallel
and
in
a
sense
of
competing
or
providing
diversity
for
how
it's
dealt
with
right.
B
So
you
might
have
two
two
different
risk
parameter
adjustment
proposals,
each
quarter
for
what
to
do
the
protectors
and
then
mkr
holders
can
actually
pick
which
one
they
prefer
or
multiple
different
growth
campaigns
happening
simultaneously
and
so
on.
B: Okay, let's just go through some questions first.
B
So
how
will
tribunal
members
be
paid
for
the
service?
One
thing
I'm
not
clear
on
is
how
governor
downs
staff,
all
these
positions,
while
still
trying
to
sustain
their
metadata
with
the
standard
tokenomics.
Yes,
so
so
basically
I
mean
so
the
way
governors
they've
worked
right,
so
so,
first
of
all
governance.
B
What
it
basically
means
is
that
all
budgets
paid
to
councils
and
tribunals
and
core
units-
that's
all
paid
by
make
a
call
right
and
then,
on
top
of
those
budgets,
the
governor
gets
an
additional
20
or
whatever
we
whatever
we,
we
arrive
at
right,
but
there's
some
there's
kind
of
like
a
multiple
like
this
there's
a
there's,
an
overhead
fee.
B
Basically,
that
doesn't
go
to
the
actual
people
doing
the
job,
but
rather
goes
to
the
governor,
that's
kind
of
ensuring
the
quality
of
the
job
right
and
then
what
happens
is
then
if,
for
instance,
a
core
unit
fails
to
to
follow
the
framework
or
a
council
just
like
proposes
a
really.
You
know
a
kind
of
a
biased.
You
know
change
to
the
framework
or
you
have
you
know,
misconduct
or
whatever.
It
could
be
right
if
tribunals
also
kind
of
like
misinterpreting
the
frameworks
and
so
on.
B
Any
of
these
cases
maker
can
then
penalize
the
governor.
So
so
it's
like
maker
doesn't
mega,
doesn't
have
to
deal
with
something
like
firing.
People
anymore,
dealing
with
with
the
performance
directly
any
longer.
Instead,
it's
just
like
any
time
maker
isn't
like
believes
that
performance
isn't
as
expected,
then,
instead
of
trying
to
sort
of
handle
it
manually,
which
is
really
difficult
right.
So
that's
the
relationship
problem
is
super
difficult
for
for
maker
governance
to
deal
with
like
individual
people.
B: The Governors kind of figure that out internally, because they take that risk and they have the job of figuring it out internally. And of course, most importantly, they try to anticipate this stuff, so they never have to actually be penalized in the first place. That's what they get the twenty percent overhead for. So yeah, Governors are pretty well off, in the sense that they actually have this very predictable, recurring income stream.
B: That will be in place right from the start: they will just be earning overhead for all of the Core Units, councils, and tribunals that they guarantee, right from the start. And then they just have to make sure that nothing goes wrong, basically, because they're providing the guarantee; they're taking the risk of that.
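As a rough numerical sketch of the compensation mechanics just described: the 20% overhead figure comes from the call itself, while the budget amounts, the function shape, and the penalty handling are hypothetical, invented only to make the arithmetic concrete.

```python
# Hypothetical sketch of Governor economics: an overhead fee earned on
# every budget the Governor guarantees, minus penalties imposed by
# Maker Core when guaranteed results are not delivered. The 20% rate
# is mentioned in the call; everything else is invented.

def governor_income(budgets, overhead_rate=0.20, penalties=0.0):
    """Net income: overhead on all guaranteed budgets, less penalties."""
    overhead = sum(budgets) * overhead_rate
    return overhead - penalties

# e.g. guaranteeing three Core Unit budgets of 1M DAI each at 20% overhead
print(governor_income([1_000_000, 1_000_000, 1_000_000]))  # 600000.0
```

The design point the sketch captures: the people doing the work are paid their budgets regardless; the overhead minus penalties is what gives the Governor the incentive to anticipate failures before they happen.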
B: And so the next question is: do Scopes ever change? It seems difficult to predict what's necessary in 10 or 15 years. And the answer is basically that they don't. I mean, in theory they could change, if there's some kind of extremely logical, natural way to reorganize them, but one that, you know, has consensus, so that there literally is no opposition and everyone just agrees that that change should be made.
B
But
in
practice
that's
that's,
that's
very
unlikely
to
happen.
So,
basically,
we
should
expect
that
they
will
never
change
and
the
reason
why
it's
not
so
difficult
to
predict
what's
necessary
is
because
it's
very
limited.
What
make
a
core
does
right.
This
is
a
part
of
locking
down
what
exactly
is
mega
core
doing
and
basically
it's
maintaining
a
stable
coin
with
realized,
collateral
and
decentralized
collateral,
and
then
it's
l2
and
doing
marketing
for
it,
and
that's
it
right.
B
And
the
the
the
kind
of
this
you
know,
the
complexity
of
what
maker
does
will
never
increase
over
time.
All
that
complexity
will
just
happen
in
the
metadatas
and
they
also
figure
it
out
on
their
own
right
and
the
maker
still
does
interact
with
with,
like
all
the
metadatas
and
all
this,
the
complexity
that
they
do
and
that's
basic.
B
That's
the
ecosystem
and
the
governance,
scopes
that
have
this
sort
of
special
status
in
the
system
of
being
a
kind
of
this
sort
of
courts
almost
and
that
often
interact
also
with
with
stuff
that
metadatas
are
doing,
and
that's
also,
why,
like
you,
can
really
like
the
good
analogy,
is
really
to
think
of
make
a
call
as
like
a
network
state.
B
And
also,
I
think,
increasingly,
that's
going
to
be
like
a
good
meme
for
us
to
use
right
that
we
can
talk
about
how
the
whole
in
the
endgame
the
whole
ecosystem
becomes.
This
like
self-regulating
thing
right,
where
maker
sort
of
it
doesn't
really
do
anything
directly.
Rather,
it
regulates
what
the
meta
metadatas
do
and
within
very
predictable
frameworks
and
so
rule
sets
that
that
basically
protect
users
and
protect
maker
itself
and
an
example
could
be
if
a
metadata
tries
to
build
a
ponzi
scheme.
B
Right
then
make
a
call
would
penalize
it
first
and
would
say
we're
gonna
keep
slapping
you
with
the
penalty
every
every
day.
You
don't
remove
the
ponzi
scheme
from
your
front
end
and
then
what
that
means
is
regular
users
when
they
interact
with
metadatas.
They
actually
know
that
there
are
certain
guarantees
around.
B
Basically,
the
quality
of
of
what
they're
interacting
with
right-
and
you
could
somewhat
compare
this
to
buying
stocks
in
a
kind
of
a
global.
You
know
like
a
globally
recognized
financial
center
versus
buying
you
know
buying
it
in
some,
you
know
buying
north
korean
stocks
or
something
right.
You
have
a
lot.
You
need
to
be
a
lot
more
careful
you're
going
to
buy
north
korean
stock.
B
Okay,
frank
asks:
does
the
growth
scope
also
include
business
development
and
community
growth,
plus
retention,
or
just
marketing
and
creator
initiatives
yeah?
It
could
really
include
all
of
that.
Basically,
it
could
potentially
also
do
none
of
it
right
and
just
only
stick
to
these
like
reactive
tasks,
right
of
managing
the
brand
and
overseeing
third-party
revenue,
share
marketing
and
sort
of
interacting
reactively
with
the
metadata,
but
it
could
also
be
the
place
where
big
budgets
are
put
made
available
for
some
big.
B
You
know
ambitious
marketing
plan
or
hackathon
thing,
or
something
like
that,
so
that
that's
I
mean
that's
all
possible
and
how
it
actually
happens
will
be
sort
of
defined
by
the
scope
framework.
B
And
then
die
foundation
the
question
about
diversity
yeah,
so
the
divination
is
sort
of
an
interesting
case
where
I
think,
basically
they
might
be
covered
by
I
mean
yeah.
I
guess
actually,
I
think
now,
like
the
probably
the
simplest
way
to
to
to
think
about
it.
Right
now
is
that
there
will
that
would
simply
be
a
coordinate
under
the
growth
scope.
That's
probably
the
most
straightforward
way
to
to
approach
that,
although
I
think
in
the
very
long
run,
we
might
want
some
kind
of
special
approach
to
the
diet
foundation.
B
Yeah
I
mean
in
fact
yeah.
Well,
it's
okay,
so
it's
not.
If
I
mean
the
problem
is,
of
course
coordinates
have
to
be
connected
to
governor
diaz
and
it
doesn't
really
make
sense
that
the
dime
foundation
be
connected
to
a
particular
goblet.
So
maybe
temporarily,
that
would
be
the
case,
but
then
later
on,
that
will
probably
we
would
change
that.
Somehow.
B
Okay,
so
peyton
asks
about
the
clash
between
maintaining
like
having
governor
dallas,
maintain
core
units
and
then
also
grow
their
own
community.
I
grow.
B
That
would
be
the
like.
You
would
have
like
in
a
governor
now
you
both
have
an
administration
team
that
handles
kind
of
the
I
mean
we
should
have
talked
a
little
bit
about
this
with
peyton
right.
That,
like
one
thing,
is
that
so
governance
tribunals
right.
B
They
are
very
unique
in
this
whole
setup,
because
they're
kind
of
like
the
supreme
court
of
the
whole
system
right
so
as
a
result,
governance
tribunals
have
to
actually
be
administration
teams
in
the
governor's
house.
B
Otherwise
you
get
this
weird
circular
authority
thing
where
you
don't
know
who's
kind
of
like
you
could
have
a
you
could
yeah
I
mean
you,
wouldn't
you
know
the
governor's
tribunal
could
get
fired
by
the
governor
or
something
randomly
which
you
don't
really
want
right
so,
but
so
I
think
actually
the
governor
does
they
would
they
would
both
have
an
administration
team
that
runs
their
their
part
of
the
governance
tribunal,
which
is
you
know
the
really
critical
piece
to
make.
B
The
whole
thing
work,
then
also
an
administration
team
that
is
so
responsible
for,
for
basically
the
meta
engineering
and
the
growth
of
the
governor's
community
and
even
like
building
products
for
the
governors
and
then,
if
they
want
to
do
that
right
and
the
reason
why
I
think
it's
important
to
combine
these
things
is
because
I
mean
basically
it's
the
insurance.
B
Component
right,
so
that's
a
really
critical
element
of
the
in-game
plan
and
how
we
approach
the
decentralized
workforce
is
maker,
doesn't
interact
directly
with
people
right
maker
interacts
with
the
governors,
and
then
it
maker
is
unhappy
with
the
work
the
governance
are
collateralized,
so
maker
can
can
penalize
them
if
they,
if
they
don't,
live
up
to
the
expectations
they
said,
and
then
the
thing
is
for
that.
B
If
that
insurance
is
meant
to
to
be
effective
and
the
governor's
you
know
need
to
be
probably
handling
that
kind
of
to
sort
of
extrapolate
and
prevent
preventing
it
from
taking
actions
that
will
that
will
trigger
penalty
or
something
like
that,
then
the
governor
needs
to
have
its
own
community.
B
That
holds
its
token
right,
because
it's
sort
of
the
token
that
ultimately
plays
that
role
and
the
other
thing
is
also
that
it
should
be
possible
for
a
governor
to
to
actually
fulfill
its
roles,
even
if
it
loses
its
its
teams.
B
So
you
would
actually
have
a
governor
that
would
lose
its
administration
team
for
whatever
reason,
but
they
just
randomly
disappear
or
for
whatever
reason,
and
then
the
fallback
is
that
the
governor
community
directly
plays
the
role
of
the
tribunals
and
the
coordinates
and
the
the
councils
right
through
their
own
governance
process
so
and
by
having
that
ability.
That's
a
highly
flexible
sort
of
amorphous
group
of
of
government
our
token
holders
and
that
that
that
provides
is
very
high
level
of
resilience
right.
B: But it only works if there's a proper community, if there's, like, a real distribution and so on. So that's basically why I think it makes sense to combine these things.
B: And so the thing is, if you already have all of these pieces as part of a Governor DAO, you may as well also do stuff like enabling your community to, you know, yield farm more of your own token through Dai vaults, for instance, especially since all of this comes for free. So initially you don't even necessarily have to grow the community in that sense; you just have to have a front end for the MetaDAO.
B
But
then
I
think
it's
very
likely
that
if
you,
if
you
go
through
kind
of
the
it's,
I
mean
it's
really
a
matter
of
economies
of
scale,
right
that
if
you've
already
done
all
the
effort
to
have
a
real
down,
that's
functional,
then
you
may
as
well
have
a
front
end.
That
you
know
can
provide
some
some
products
and
services
to
the
user
base
that
you
already
have
in
the
community
that
you
already
have.
B
This
I
mean-
and
this
is
one
of
the
really
unsolved
questions
like
unsolved
issues
right,
but
what
exactly
happens
if
a
governor
tries
to
fire
the
governance
tribunal
so
like
a
governor,
a
metadata,
the
metadata
governance
process
off
boards,
the
administration
team?
That's
it!
You
know,
let's
say
here
right:
there
might
be
so
there's
there's
an
admin
administration
team
here,
so
administration
team
means
that
it's
a
team,
that's
been
directly
elected
by
metadata
governance,
separate
from
so
so
actually
another
case.
B
The
simpler
case
is
that
you
have
something
like
a
coordinate,
doing
realized
collateral
and
then
the
administration
team
and
the
governor
could
actually
just
directly
off-board
that
core
unit
immediately,
if
they
wanted
to,
if
if
they
saw
that
okay
they're
doing
some
bad
stuff,
that
this
is
gonna
be
trouble
right.
Let's
simply,
let's
cut
them
off
immediately
right
and
that's
come
that
works
fine,
because
it's
sort
of
I
mean
it's.
It's
like
it
ultimately
is
just
a
coordinate,
that's
sort
of
doing
work
in
the
dao.
B
It's
not
it's
not
kind
of
it's
not
reaching
the
medic
governance
level
right
where,
where
somebody
that
has
power
over
governance
itself,
but
then,
once
you
get
to
the
governance
tribunal
in
the
governance
school,
then
you
run
into
this
issue
of
like
you
could
have
a
situation
where
there
could
be
two
men
of
the
house
that
somehow
are
in
like
a
conflict
or
something
like
that
right
and
then
yeah
I
mean,
I
guess,
that's,
okay,
that's
not!
B
Maybe
that's
not
a
that
example
doesn't
exactly
make
sense
you
maybe
even
you
have
something
where
the
governance
tribune
yeah
well
anyway,
so
I
can't
think
I
can't
actually
think
of
like
a
realistic
situation
where
this,
where
this
really
matters.
B
That's
sort
of
that's
just
a
matter
of
the
governor
sort
of
following
its
its
incentives
and
trying
to
make
money
by
earning
the
overhead
and
not
getting
getting
hit
with
penalties
and
but
the
governor's
tribunal.
I
mean
that's
a
different
level
of
like
that's
sort
of
the
that's
like
law
in
a
sense
right
and
that
you
need
to
have
some
level
of
like
trust
and
and
sort
of
you
know,
can
just
be
run
as
something
where
you're
trying
to
make
money
right
because
of
course,
that'll
immediately
make
the
whole
thing
fall.
Apart.
B
But
yeah
so
anyway,
all
in
all,
the
answer
is
probably
that
we
need
some
kind
of
special
like
special
case.
Maybe
even
it
could
even
be
that
it's
something
else
that,
like
we
define
some
special
type
of
team,
that's
only
available
the
governor
does
and
that
only
can
play
the
role
of
running
the
the
governor's
tribunal
and
there's
also
the
question
of
like
what
happens.
B
If
then,
the
government's
repeal
disappears,
then
you
just
have
the
government
house
I
mean,
then
you
would
assume
that
basically,
the
government
house
then
stand
the
involved
tribunals
using
the
token
voting,
but
that
might
also
be
something
that
that
is
like
it
needs
like
it's
something
it
really
needs
to
be
considered
properly
in
the
long
run,
because
this
is
like
once
once
we
set
a
system
like
that
in
place.
It's
unlikely
they
will
change,
because
it's
kind
of
niche
and
and
it's
an
edge
case,
but
then
that
also
means.
B
If
we
make
the
wrong
decision,
it
could
have
a
lot
of
you
know
we
could
be
in
a
much
much
worse
position
than
if
we
just
spent
the
time
really
sort
of
exploring
that
edge
case.
Initially,
but
either
way
like
we
don't
have
to
like
that's
all
something
that
can
like.
You
can
still
be
determined
over
the
course
of
several
years,
because
these
kind
of
sort
of
niche
questions
would
still
be
completely
sort
of
open
to
to
redesigning.
B
Even
you
know,
even
after
the
metadata
launch,
for
instance,
it's
simply
that
the
direction
always
has
to
be
towards
us
vacation
right.
So
we
don't
want
it
to
be
reopened
after
15
years
right
and
then
be
re-engineered
to
something
that
somehow
benefits
a
particular
group
a
little
bit
more.
Something
like
that.
B
I
just
want
to
talk
a
little
bit
about
what
are
kind
of
the
big
tasks
that
I
think,
like
the
whole
dial
should
should
should
realign
to
to
completing
if
the
in-game
approval
map
is
is
accepted
right
and
what
really
is
like
the
major,
the
major
work
in
in
sort
of
how
to
transition,
how
to
implement
the
early
stages
right
and
so,
first
of
all,
I
mean
in
the
long
run.
It's
not
even
like
it's
really
just
these
scopes
right.
So
these
scopes
they
really
sort
of
determine.
B
This
is
what
this
is,
what
the
maker
does
in
the
long
run
right,
but
in
the
short
run,
there's
certain
like
there's
certain
sort
of
little
bit
more
specific
kind
of
tasks
that
that
it
makes
it
like
that.
Some
which
have
a
lot
like
some,
which
are
much
more
important
early
on,
while
others
are
actually
not
important
at
all
initially
and
can
be.
B: ...deferred. Protocol engineering gets some special treatment in that. I mean, all the other Scopes mostly sort of split up, so that the oversight and the kind of management of the Scopes happens in the Governor DAOs, but the actual work happens in the MetaDAOs and with the ecosystem actors. But for protocol engineering, that's not the case.
B
The
protocol
here
just
stays
exactly
the
way
it
is
now,
and
there
could
also
be
other
exceptions
like
this,
so
it
may
not
not
be
only
protocol
engineering
but
but
protocol
engineering
is
like
the
best
example
of
something
that
is
guaranteed
to
stay
that
way,
and
we
also
we're
on,
like
I
mean
for
a
lot
of
the
other
scopes
in
general.
B: ...we want to limit anything that goes beyond that, and let all that stuff just happen in the MetaDAOs. But you can never get too many smart contract developers, and you can never get too many protocol resources, especially in this kind of critical stage that Maker will be in. Anyway, so basically there'll be sort of three major pieces internally in protocol engineering. So one is just, basically, general security of the protocol, and then executive votes.
B
Basically-
and
this
will
take
on
a
new
and
bigger
role
once
you
have
metadowns,
because
this
also
plays
the
role
of
sort
of
auditing
governance
decisions
by
the
metadatas
and
and
then
you
have
building
up
the
end
game
roadmap.
So
that's
basically
building
all
these
like
building
the
the
farms
building,
the
tokens
building
the
anti-reflectivity
mechanic
and
so
on.
It
turns
out
that
that's
actually
quite
easy
to
build.
B
So
that's
something
we
could
have
done
quite
quickly
in
terms
of
getting
the
end
gameplay
launch
going
and
then,
after
that,
it's
mostly
about
building
the
singularity
engine
and
some
other
key
features
like
mkr
the
ability
to
generate
dive
with
mpr
the
ability
to
lock
up
mkr
to
get
higher,
yield
and
better.
B
You
know
better
rates
from
generate
die
and
then
the
singularity
engine,
that's
this
sort
of
long-term
ultimate
engine
and
some
other.
I
love
these
like
key
pieces
right
that
all
the
I
mean
so
something
like
operating
the
the
node
network
to
also
do
staking
and
other
stuff
like
that,
building
your
own
roll-up
potentially
and
that
yeah.
So
that's
all
sort
of
handle.
B
I
mean
that's
less
like
an
end
game
right
and
then
there's
the
l2
road
map,
which
is
kind
of
considered
like
separate
to
the
in-game
robot,
because
the
l2,
because
the
in-game
roadmap
that
actually
disappears
in
the
end
so
that
actually
just
like,
goes
away
entirely
like
it's
built
and
then
at
a
certain
point,
there's
nothing
more
built
on
it
and
then
disappears
and
the
l2
rover
is
different
because
that's
actually
permanent
and
that's
and
basically
security
and
ops
and
then
l2
roadmap.
B
Those are the two permanent tasks of the protocol engineering scope in the long run, and the L2 roadmap could also become reactive in the long run, following the same kind of bottom-up deployment with rev share. So protocol engineering can simply sit around and wait for a MetaDAO to deploy, say, a new vault type or a new Sagittarius Engine, then verify that it's secure, and if that's verified, start using it, and then they get a rev share for having done the work.
B
Yeah
and
then
I
mean
then
there's
like
decentralized
collateral
out
of
the
vehicle
and
there's
growth
and
that's
basically
right.
These
are
like
the
call
that
has
to
keep
functioning.
B
Rwa
collateral
is
a
little
unique
because
right
now
we
are
looking
at
you
know
not
having
anyone
at
the
dow
level,
but
having
a
bunch
of
rwa
capacity
in
the
protectors,
so
that'll
be
like
that'll,
be
a
big
challenge,
but
one
that
basically
I
mean
that
we
will
that'll
just
be
a
major
focus
of
this,
like
immediate
transition
period
right
so
immediately.
Following
the
end
game
plan
approval,
one
of
the
top
tasks
will
be
to
then
try
to
to
incubate
a
completely
new
real
asset
core
unit.
B
And
that
could
also
be
done
with
help
from
the
protector
clusters
yeah
and
then
there's
growth
and
there's
like
I
mean
at
least
these
tasks,
but
then
potentially
also
top
down
marketing
campaigns
and
so
on.
But
I
don't
think
that
I
mean
like
I
said:
I
don't
think
that
that's
we
don't
have
to
do
that.
Initially
we
can
we
can.
B
And then, in the supporting scopes, there are basically three pieces that are extremely critical. One thing is to just keep infrastructure running, keep the Oracles running and so on, and also to migrate them to MetaDAOs.
B
So
you
know,
the
idea
is
that
the
infrastructure
scope
will
not
itself
be
responsible
for
actually
sort
of
running
and
maintaining
the
orbitals
in
the
same
way
that
that
it
works
today,
but
rather
that
the
metadata
will
do
that
following
a
kind
of
a
logic
similar
to
the
results
guarantee
right
where
the
meta
dials,
they
can
run
two
or
three
oracle
notes,
and
then
they
get
paid
to
basically
run
those
oracle
notes,
and
then
they
pay
a
part
of
that,
or
rather
like
the
notes
themselves,
get
paid
to
to
run
the
nodes.
B
And
then
the
metadows
get
paid
a
kind
of
overhead
to
basically
govern
and
provide
an
insurance
on
the
performance
of
that
oracle
node.
So
that
would
be
so
so
over
time
like
so
initially
the
infrastructure
scope
will,
just
you
know,
have
to
keep
keep
the
locals
running
the
same
way.
They
are
currently
running,
but
then
over
time
migrated
to
to
something
where
it's
sort
of
outsourced,
in
the
same
way
that
everything
else
is
outsourced.
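The arrangement described here is just a payment going to the node operators plus a governance-and-insurance overhead kept by the MetaDAO. As a rough sketch, under the assumption of a flat overhead fraction (the 10% figure and the function name are hypothetical, not from the call):

```python
def split_oracle_payment(total_payment: float, overhead_share: float = 0.10):
    """Split one period's oracle payment between the node operators and the
    MetaDAO that governs and insures the nodes' performance.

    overhead_share is a hypothetical fraction kept by the MetaDAO as
    governance/insurance overhead; the remainder goes to the nodes
    themselves, which get paid directly to run the nodes.
    """
    metadao_overhead = total_payment * overhead_share
    node_payment = total_payment - metadao_overhead
    return node_payment, metadao_overhead

# e.g. 1,000 DAI paid out for the period: 900 to the nodes, 100 to the MetaDAO
nodes, overhead = split_oracle_payment(1000.0)
```

In practice the overhead would be whatever governance negotiates per MetaDAO; the point is only that the node payment and the MetaDAO overhead are two separate flows from one budget.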
B
And then there's basically the interface. This is really one of the biggest tasks, or possibly the biggest task, of what we have to build from scratch to make the Endgame work, and that is the MetaDAO frontend, which it seems might actually be a bigger challenge than building the smart contracts, because the smart contracts are very, very simple. So on one hand we have the DAO Toolkit, which is the whole approach to how we actually organize all the work that the MetaDAOs do and that the scopes do, and that's going to be handled.
B
Initially, the ecosystem scope will have to bootstrap that from the ground up, and then the interface scope will have to bootstrap the other part of the MetaDAO frontend, which is the really important one: the actual MetaDAO farms and vaults, the EtherDai vaults with farming, the governance portal and delegated governance farming, and so on. And right now it seems like we will have a cluster that will actually do the DAO Toolkit.
B
We
may
also
there
might
even
be
an
additional
protector
as
well,
and
I
getting
you
know,
we're
getting
some
additional
high
quality
clusters
and
then
because
we're
having
we
have
more
high
quality
clusters.
It's
it's
worth
it
to
build
an
entirely
additional
metadata
to
to
accommodate
that.
If
we
think
the
cluster
is
good
enough,
basically
and
then
the
way
it's
kind
of
it,
it's
already
possible
that
this
could
work
from.
The
start
is
that
you
have
the
ecosystem
scope,
then,
basically
work
on
the
dial
toolkit
with
the
cluster
of
the
metadata.
B
And
that's
like
a
really
good,
I
think
that's
a
really
good
example
of
like
the
power
of
the
metadata,
basically
and
how
we
can
use
them
in
this
context,
right,
because
this
is
really
what
were
we
saying
what
we,
how
the
way
we
set
up
set.
This
up
is,
then
that
we
kind
of
create
this
completely
self-contained
right
team
that
can
they
can
they
both
run
a
metal
now
and
they
they
need
a
dow
toolkit
themselves
to
make
that
work,
and
then
they
also
will
be
developing
it
themselves.
B
But
then,
from
that,
you
know
from
that
position,
they're
perfectly
positioned
to
basically
get
paid
from
make
a
call
to
do
this
work,
but
then
also
later
on,
get
paid
from
other
metadatas
to
to
build
some
like
specific
work.
They
need
to
get
they
need
to
to
have
done
on
their
own
toolkits
right,
and
I
think
this
is
not
yet
there's
no
real
effort
in
this
yet,
but
I
think
it
it's
like.
We
should
look
into
doing
the
same
thing
for
the
the
front
end
itself.
B
So
that
you
know
we
would
also
have
a
metadata
which
is
like
a
standard
creator
and
then
what
it
builds
is.
It
builds
the
the
kind
of
the
white
label
from
it
right
like
the
core
code
base
of
the
front,
and
I
think
all
the
metadata
will
probably
be
involved
in
that
to
some
extent.
But
I
think
we
need
a
specific
team
that
really
built
it
all
from
scratch
and
really
figured
it
figures
it
out
from
scratch.
Because
there's
going
to
be
a,
I
think
there
will
be
some
unique
requirements
in
order
to.
B
So
that's
like
an
interesting
case
where
this
decentralized
front
end
thing.
There's
a
lot
of
of
projects
out
there
that
are
right
now,
thinking
about
how
can
they
look
into
building
actual
decentralized
front
end?
How
can
they
develop
some
like
principles
and
some
some?
You
know
some
code
base
that
that
enables
for
a
very
lightweight
and
very
secure
decentralized
front,
and
that's
also
has
good
user
experience.
B
But
yeah
like
in
terms
of
I
mean
basically,
this
is
the
main
thing
that's
missing
right
now,
like
so
the
metadata
front,
so
for
me
everything
else
is
covered
quite
well,
and
I
think
that
there's
like
a
good
implementation
and
sort
of
transition
plan
for
how
all
of
this
stuff
gets
done
right
like
you,
where
it
sort
of
slowly,
it
starts
off
that
you
just
sort
of
reorganize
the
coordinates
into
the
scopes,
and
then
some
of
the
work
in
the
scopes
then
starts
migrating
into
the
metadata
or
into
the
ecosystem.
Mattress.
B
And
then
we
also
sort
of
incentivize
the
creation
of,
like
you,
know,
an
inter
intermediate
economy
right
because
that's
when
the
real
magic
happens
is
when
the
metadata
start
to
sort
of
permissionlessly
collaborate
and
and
supply
stuff
to
each
other,
all
the
ecosystem
actually
start
to
do
that
right
without
it
having
anything
to
do
with
maple
core
directly.
B
Okay, a question from Frank: can you please go over how a council defines KPIs for the workforce within a certain scope, and whether there are any budget responsibilities? I of course can't just provide the full answer to that here, but I think the best example, where we already have some work to this extent, would be real-world assets.
B
We've
had
work
on
these
like
core
frameworks
in
the
past
that
basically
provide
a
kind
of
a
step-by-step
kind
of
guide
like
this
is
how
we
evaluate
real
estate
right.
So
then
we
would
make
this
kind
of
like
this
is
how
we
evaluate
which
protect
like
how
to
assign
that
ceiling
to
different
protectors.
Right.
B
And
yeah
and
then
and
then
the
second,
so
so
that's
the
point
is
that
that
gets
sort
of
figured
out
over
time
right
and
that's
exactly
by
defining
the
scope
framework
and
then
the
second
part
of
the
question
is:
is
the
council
should
be
unlucky,
scope
responsible
for
designing
the
toolkit
module
for
their
respective
scope?
And
yes,
that's
that's
exactly
what
they're
doing
and
the
thing
is.
These
two
things
are
closely
related
because
the
dial
toolkit-
that's
where
you
build
these
kpis.
Basically,
so
that's
where,
like
you,
you
sort
of
you.
B
You
know
like.
So
if,
if
a
core
unit
wants
to
propose
at
that
dead
ceiling
increase
to
a
particular
protector
right,
then
they'll,
like
the
way
that
will
work.
Is
that
there's,
like
a
button
in
the
dow
toolkit
that
it
says
click
here,
to
propose
a
death
ceiling
increase
to
a
protector
right
and
then
it
will
have
all
these
like
forms
and
and
like
you
know,
the
software
itself
would
define
what
is
all
the
data
that
we
need
right?
These
are
the
and
then
you
know
so
like
here.
B
We
need
to
know
about
legal
opinions
and
like
so
you
sort
of,
and
it
simply
doesn't,
allow
you
to
even
submit
something
that
doesn't
contain
all
that
the
required
information
right
and
that's
and
then,
if
you
still
somehow
like
manage
to
do
that
or
the
information
you
put
in
it's
just
bad,
then
that's
where
you're
held
responsible
for
like
that's
where
the
core
unit
and
the
coinage
governor
is
held
responsible
for
for
failing
to
to
provide
quality,
work
right
and
and
one
of
the
key
ways
that
you
you
figure.
B
That
out,
is
you
basically
say?
Look
I
mean
this
is
what
the
box
said.
It
said
put
in
a
legal
opinion
and
you
put
in
some
like
random
stuff
right,
that's
a
clear
violation.
That's
a
penalty
right
or
another
example
could
be.
Someone
did
something
without
even
using
the
dial
toolkit
at
all
and
that's
another
sort
of
example
of
like
this.
Is
then
how
you
like?
That's
when
you
you
you
you
be.
B
You
know
that
make
a
call
applies
a
penalty
right
and
then
there's
a
kind
of
more
difficult
case
where
the
coin
had
actually
followed
the
framework
correctly.
But
the
framework
itself
was
bad.
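The gating described above, where the Toolkit refuses to accept a proposal until every required field is filled in, can be sketched roughly like this (the field names here are illustrative assumptions, not an actual Toolkit schema):

```python
# Hypothetical sketch of the DAO Toolkit submission gate: a debt ceiling
# proposal cannot even be submitted while required fields are missing.
REQUIRED_FIELDS = ("protector", "requested_debt_ceiling",
                   "risk_assessment", "legal_opinion")  # illustrative names

def missing_fields(proposal: dict) -> list:
    """Return the required fields that are absent or empty, sorted;
    an empty list means the proposal is submittable."""
    return sorted(f for f in REQUIRED_FIELDS if not proposal.get(f))

draft = {
    "protector": "Example Protector",
    "requested_debt_ceiling": 250_000_000,
    "risk_assessment": "Collateral risk writeup ...",
    # no legal opinion attached yet, so submission stays blocked
}
blocked_on = missing_fields(draft)
```

The accountability split in the transcript maps onto this directly: submitting junk into a well-designed form is the core unit's failure, while a form that never asked for a legal opinion at all is the council's failure.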
B
You
know
so
like
they
forgot
to
even
have
a
you
know,
a
section
for
legal
opinion
or
whatever
legal
assessment
right
and
then
because
there's
no
legal
assessment
involved
then
turns
out
that
the
legal
structure
is
totally
broken
and
a
bunch
of
money
is
lost.
Because
of
that
and
then
that's
where
it's
the
council
itself,
that's
held
accountable
right
and
then
it
might
actually
be
all
of
the
governor
dallas.
B
That
would
be
like
all
of
the
like
together
sitting
on
the
council
right,
because
because
all
the
governors
make
up
the
council
basically
or
it
might
be,
that
yeah
I
mean
it's,
it
would
be.
Those
would
be
sort
of
more
complicated
cases
right.
But
but
it's
a
lot
more
sort
of
straightforward
when
it's
the
individual
coordinates
because
they're
a
lot
more
right.
They
sort
of
follow
the
framework.
And
if
the
framework
is
good,
then
you
can
basically
penalize
them
if
they
fall
outside
of
the
framework.
B
And
yeah,
it's
like,
I
mean
it's
a
tr,
it's
it's
a
transparency
and
governance
reporting
tool,
but
also
a
kind
of
jira
collaboration
tool
right.
So
that's
the
whole
idea
of
like
the
difference
between
a
dao
and
a
public
company
is
that
in
a
public
like
in
a
company,
you
have
a
kind
of
you
have
two
sets
of
languages.
You
have
a
language
that
you
use
for
your
internal
operations
and
then
you
have
a
language
that
you
use
for,
like
you
know,
providing
financial
statements
to
you
to
share
all
this
and
the
tao.
B
You
have
a
single
language
so,
like
this
system
that
you
use
to
to
gain
transparency,
that's
also
directly
the
system
that
the
work
is
done
in
and
that's
critical,
because
the
the
community
itself
needs
to
basically
assume
that
the
work
will
you
know
that
over
time
you
will
have
you
know
you
will
have
corruption
and
and
and
negligence,
and
all
that
stuff
will
occur
over
time
and
there's
no
protection
for
for
the
for
token
holders,
other
than
actually
directly
verifying
it
themselves
right,
because
there's
no
fiduciary
duty.
B
Okay, one other major thing that still needs to be figured out is the ability for MetaDAOs to access smart contract development. I've been hoping I could set up what I call a smart contract shop, or we could work with some existing external smart contract shops, companies like that that exist somewhere, because the problem is quality: it's so difficult to get people who can actually build smart contracts at the level of security and quality that we need in Maker. And of course, a key objective of the Endgame Plan is that we don't want protocol engineering to be sitting around doing all the work for all the MetaDAOs. They will be doing the final security verification on the MetaDAOs' governance actions and the spells the MetaDAOs want to do, but they shouldn't be designing the thing in the first place; we really would like to avoid that. So every time a MetaDAO wants to do collateral onboarding, it shouldn't be that protocol engineering, or someone over in the scopes, has to do it.
B
It
should
rather
be
that
let's
say
like
the
rose
you
know
whatever
the
rose
cluster
becomes
as
a
metadata.
They
they
want
to
unbolt
collateral,
so
they
go
out
and
they
pay
some
money
to
an
ecosystem
actor
to
do
the
work.
And
then
you
know,
then
they
do
the
work.
The
work
is
done
and
then
they
verify
it's
very
secure
and
it's
great
and
then,
when
sort
of
all
done,
then
they
basically
will
send
that
work
to
to
this
sort
of
the
security
scope
over
here.
B
The
security
and
operations
piece
of
the
protocol
in
your
scope,
who
then
verify
whether
it's
actually
done
fully
up
to
the
the
the
security
standards
it
needs
to
be
at,
and
if
it
is
fully
up
to
those
standards,
then
it
gets
executed
and
the
metadata
takes
some
kind
of
action.
B
And
if
it's
not
up
to
the
standard,
then
the
metadata
gets
a
penalty
for
having
you
know
been.
What
do
you
call
that
basically
neglected
their?
You
know
the
duty
right
to
not
put
a
random
insecure
code
into
the
into
a
maker
into
a
major
executive
mode.
B
But
I
think
this
is
basically
like
this
is
the
these
are
the
key
points
related
to
the
like?
What
actually
needs
to
be
done
and
how
you
would
transition
from
the
the
current
workforce
and
then,
like?
I
said,
like
I
think
that
you
know
we
want
to
reduce
budgets
of
make
a
call,
and
we
want
to
reduce
head
count
and
stop
doing
a
lot
of
the
stuff.
We're
doing
now
that
that
isn't
that
isn't
sort
of
directly
necessary
for
make
a
call
or
then
game
plan,
but
then
also
wherever
we
can.
B
A
So
like,
what's
the
in
a
way
like
what
what
comes
after
the
others,
so
first
we
define
the
scopes
and
then
the
scope
define
the
clusters
who
then
form
the
metadata.
So
how
does
the
current
workforce
transition
to
these
method
hours,
like
the
mechanics
of
that,
are
not
entirely
clear
to
me.
B
Yeah,
so
actually
most
of
the
clusters
are
already
in
place
and
yeah,
and
I
think
you
could
think
of,
like
very
kind
of
you
could
think
of
like
the
scopes
defining
the
club
like
defining
the
metadata
right,
so
you
would
have
so,
but
for
the
most
part
metadata
they
they're,
like
a
lot
of
them,
do
the
same
thing,
which
is
try
to
grow
the
overall
ecosystem
right.
B
So
you
have,
I
mean
so
basically
you
could
say
the
decentralized
collateral
scope
interacts
with.
You
have
two
creators
that
are
just
like
focused
on
on
growing
and
and
building
a
user
base
and
building
products.
B
Then
you
have
one
creator,
that's
doing
the
dow
toolkit
and
then
potentially,
if
I
can
figure
it
out,
then
we'll
have
another
creator.
That
does
the
metadata
front.
If
I
can
find
a
way
to
actually
make
that
cluster,
because
that
would
just
be
very
will
fit
very
well
and
then,
if
we,
if
this
thing
this
last
metadata
doesn't
come
into
existence,
then
instead
it
will
be
some
kind
of
like
special
project
done
sort
of
top
down
by
the
interface
scope.
B
You know, but when we really want to scale real-world assets, and I mean scale the complexity, not just the most simple real-world assets that we're currently trying to scale up, then we need to have the RWA collateral scope properly staffed, basically.
B
But
yeah,
that's
I
mean
that's
basically
it
right.
To
some
extent
I
mean,
then
you
can
say
growth.
They
interact
with
all
the
metadatas
and
then
protocol
engineering.
They
also
sort
of
they
build
some
of
the
basic
stuff
that
everything
interacts
with
and
then
the
the
secondary
scopes
they
support.
I
mean
you
have
the
specific
case
of
the
ecosystem,
scope
and
interface
scope
that
need
to
do
these,
like
major
projects,
the
other
supporting
scopes,
basically
to
slowly
ramp
up
their
specific
support
that
they
need
to
provide
the
primary
scopes.
B
Yeah
and
there's
one
more
question
from
miguel
right,
so
how
does
vault
adoption
works?
Why
would
megafall
give
away
stability
fees
to
metadata?
Is
it
just
for
me?
Risk
mitigation
reasons
yeah
so
and
that's?
The
answer
is
yes
to
mitigate
risk
right,
because
the
metadata
now
takes
first
loss
on
the
on
the
vault
and
it's
also
to
outsource
the
the
operational
work.
So
the
metadata
becomes
responsible
for
providing
oracle.
It
comes
responsible
for
updating
the
risk
parameters
and
it
becomes
responsible
for
doing
like
maintenance
or
whatever
security
related
to
it.
B
So
basically
it's
it's
it's
I
mean
it's
actually
like
a
d3
right,
so
so
maker
specifies
we
want
to
give
this
particular
creator,
a
250
million
vault
adoption
debt
ceiling,
and
then
they
can
go
and
do
whatever
they
want
with
that
250
million,
but
they
just
have
to
pay
us.
You
know
a
two
percent
return
or
something
like
right.
Under
250
million
or
whatever
hours
of
a
base
base
rate
of
return
is.
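In that arrangement, the MetaDAO's net revenue is simply the spread between the stability fee it charges its users and the base rate it owes Maker Core, applied to whatever part of the allocation is actually drawn. A minimal sketch, with all numbers hypothetical:

```python
def metadao_net_revenue(drawn_debt: float, stability_fee: float,
                        base_rate: float) -> float:
    """Annual revenue a MetaDAO keeps from a vault adoption allocation:
    it collects stability_fee from its vault users but owes base_rate to
    Maker Core on the debt that is actually drawn."""
    return drawn_debt * (stability_fee - base_rate)

# 250M ceiling of which 180M is drawn; users pay 3.5%, Maker Core gets 2%
net = metadao_net_revenue(drawn_debt=180_000_000,
                          stability_fee=0.035, base_rate=0.02)
```

This is the same shape as a D3M allocation: Maker fixes the ceiling and the base rate, and everything above the base rate is the MetaDAO's compensation for taking first loss and running the operations.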
B
Then
they
can,
they
can
put
whatever
stability,
they
want
yeah,
and
this
includes
existing
vaults.
That
I
mean,
I
think
so.
What
I'm
right
now
thinking
is
that
we
should
basically
define
that
make
a
call
can
only
have
vaults
that
earn
more
than
one
million
dollars
in
revenue
per
year,
so
any
vault-
that's
not
earning
more
than
one
million
per
year,
would
basically
be
off-boarded
to
make
a
call.
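The one-million-dollar rule mentioned here is easy to express as a filter. A hypothetical helper, with vault names and revenue figures invented purely for illustration:

```python
def vaults_to_offboard(annual_revenue_by_vault: dict,
                       threshold: float = 1_000_000) -> list:
    """Vault types that do NOT earn more than the threshold per year and
    would therefore be off-boarded from Maker Core (e.g. to a MetaDAO).
    The one-million-dollar default is the figure floated on the call."""
    return sorted(vault for vault, revenue in annual_revenue_by_vault.items()
                  if revenue <= threshold)

revenues = {"ETH-A": 25_000_000, "WBTC-A": 6_500_000,
            "LINK-A": 400_000, "YFI-A": 950_000}
offboard = vaults_to_offboard(revenues)  # the two sub-threshold vault types
```
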
B
Yeah, and then it will be done through a MIP for vault adoption. In fact, every MetaDAO will have its own vault adoption, or maybe there will be a single global thing where each MetaDAO gets its own terms, but it's basically like a D3M, except that instead of the parameters being written in code, the parameters are written in a MIP, basically.
A
Yeah, thanks. There are a couple of things that you've shared; it's not new, but I've heard it a few times. As someone that's been through countless reorganizations over my career, the messaging is really important.
A
We need to be really careful here, because if you're truly signaling a workforce reduction, it's okay to say we're going to reduce headcount, and people are going to expect that. At the same time, we could also have people in different roles today that maybe are more than competent enough to be able to assume other roles, even if it might be a different responsibility.
A
But
what
what
I'm
seeing
is
is
that
the
the
knowledge
base
that's
contained
within
the
people,
that
we
have
you're
really
going
to
need
these
people
to
be
able
to
make
this
transition,
and
in
doing
so
and
the
messaging
is
really
important
and
and
what
I'm
referring
to
is
that,
if
it
when,
when,
when
I
start
hearing
about
headcount
reductions
or
budget
reductions,
you
effectively
put
people
on
notice
and
when
that
happens,
is
that
people
get
concerned
about
their
livelihoods
and
then
they
start
to
think
about.
A
Okay,
what
are
my
other
options
and
what
I'm
seeing
is
that
maker
down
needs
the
the
the
head
count
and
the
stored
knowledge
within
the
organization
to
be
able
to
make
this
transition?
So
so,
could
you
be
clear
on
exactly
what
you're
referring
to
when
you
say
you
know?
Okay,
so
great
budgets
need
to
be
reduced.
Are
you
just
gonna
put
pick
a
number
and
we're
gonna
manage
toward
that,
and
then
also
when
you're
talking
about
headcount
reductions?
B
So
I
used
to,
I
mean
I've
always
been
in
favor
of
budget
reduction
and
headcount
reduction,
and
I
really
don't
think
that
there's
any
way
we
can
have
130
people
create
value
like
we
don't
have.
We
don't
have
the
ability
to
to
structure
130
people.
I
believe.
B
But
what's
clear
now
is
that
we
can't
take
some
kind
of
broad
approach
right.
There's
been
this
thing
like
this
talk
around:
let's
do
like
cut
all
budgets
by
10
or
something
like
that
right
and
I
don't
think
that's
gonna
work
right.
B
Instead,
it's
gonna,
be
I
mean,
like
I
talked
about
it's
basically
a
matter
of
like
looking
at
who
are
the
ones
we
basically,
who
are
the
ones
we
actually
need
and
who
do
we
not
need
right,
and
so
in
practice
I
mean
I've
already,
I'm
already
doing
a
lot
of
that
work.
B
Myself
right
I
mean,
and
one
of
the
really
key
approaches
is
to
figure
out
who
are
these
of
the
natural
clusters
for
the
metadatas
right,
because
that's
the
way
I
mean-
and
I
think
it's
also
to
your
point
like
that's
a
way
to
provide
continuity
right,
because,
if
you're,
if
you're
clustering
to
metadata,
then
there's
kind
of
a
clear,
I
mean
role
right,
there's
a
clear
there's,
a
clear
need
for
you
right
in
the
system,
there's
a
way
that
we
actually
know
that
someone
who's
in
a
metadata.
B
We
have
sort
of
a
reasonable
shot
at
getting
value
out
of
that
right,
because
the
metadatas
are
built
around
performance
right
and
sort
of
inbuilt
kpis
in
a
sense,
and
then
there
are
the
people
who
are
the
right
fits
for
for
to
be
facilitators
in
scopes
and-
and
I
think
that's
kind
of
the-
I
mean
that's
for
a
lot
of
that.
That's
for
like
the
existing.
B
You
know
we
have
a
lot
of
existing
facilitators
that
just
like
fit
fit
well
on
with
with
the
with
what
they're
already
doing
that,
where
they
can
just
sort
of
naturally
run
the
the
scope
itself
right
and
then
there's
also
some
people
that
they
don't
fit
into
running
a
scope.
B
They
don't
fit
into
running
a
metahow,
but
they
I
mean
they
don't
want
to
work
in
a
minute
now,
because
it's
a
bunch
of
tokens
and
a
bunch
of
meta
and
whatever
that's
too.
That
could
be
weird
right
and
there
are
some
people
that
actually
should
be
organized
as
ecosystem
actors
right.
So
that
should
that
that
might
be
us
basically
to
get
a
grant
to
set
up
a
company,
and
then
that
company
starts
providing
services
to
the
metadatas
into
the
scopes
and
then
there's
also
exceptions.
B
So
something
like
protocol
engineer,
that's
just
something
we
want
to
maintain
right
and
in
fact,
also
the
the
stark
net
core
unit
would
be
sort
of
merged
with
protocol
engineering
in
a
sense
right.
Actually,
I
think
technically,
we
would
merge
them,
but
it's
more
like
they
would.
B
They
would
fall
under
the
l2
roadmap
right
and
there's
also
potentially
risk
it
might
be
an
exception
where
they
just
keep.
We
just
keep
that
running
the
way
it
is
and
because
it
I
mean
that
all
depends
on
how
it
goes,
we're
sort
of
trying
to
to
find
a
right
way
to
transition
that
so
this
stuff
that
we
we
need,
but
we
can't
figure
out
how
to
transition
it
it
just
it
just
becomes
an
exception
that
stays
as
well
called
a
consolidated
core
unit
right.
B
So
basically,
it's
like
a
fat
scope
where
the
scope
also
does
the
actual
work
so
like
protocol
engineering,
but
also
for
potentially
for
other
stuff.
That
could
also
be
what
we
do
with
interface,
where
we
basically
have
a
sort
of
a
big
core
unit
sitting
in
this
interface
scope.
Doing
the
work
internally.
B
And
then
anyone
that
doesn't
fall
into
any
of
these
categories.
They,
basically
I
mean
those
are
the
ones
we
would
let
go
right
and
in
many
cases
we
would
also.
B
Merge
core
units
and
to
get
them
in
line
with
with
the
you
know,
provide
you
know
doing
the
work
related
to
a
particular
scope,
and
in
doing
that,
we
will
find
that
there's
a
lot
of
like
side
projects
and
a
lot
of
sort
of
extra
stuff
right
that
that
we
would.
We
will
also
just
cut
right
because
the
scopes
need
to
sort
of
you
know
they
need
to
they're,
not
even
supposed
to
do
anything.
B
Unless
that's
right,
I
mean
they're
supposed
to
manage
they're,
not
supposed
to
do
work,
sort
of
in
the
trenches,
and
I
mean
the
outcome
of
this
is
then
we
had
we
end
up
with
a
workforce
where
we
know
exactly
how
we're
getting
value
out
of
everyone
right
and
that's.
The
big
problem
today
is
like
we
know
that,
there's
a
ton
of
people
and
we
know
we're
paying
them
a
ton
of
money,
but
we
don't
know
where
and
how
we're
getting
value.
B
And
that's
what
we
that's.
I
mean.
That's
really
the
thing
right.
We
need
to
to
to
change
that.
Obviously,
but
yeah.
B
Like
I
said,
this
is
something
that
basically,
I'm
already
working
a
lot
on
directly
myself
right
in
sort
of
preparation
for,
if
the,
if
the
approval
maps
pass
I'll,
be
able
to
get
much
more
hands-on.
B
And
actually
I
will
even
do
things
like
a
comp
negotiation
right,
because
we
also
have
this
problem
of
like
the
comp
being
completely
screwed
up
right,
like
it's
totally
arbitrary
compensation
models.
We
have
across
the
board
and
we
also
have
the
massive
challenge
of
figuring
out
compensation
for
the
metadata
clusters,
which
you
know
they
will
have
to
deal
with
the
with
the
kind
of
the
the
unknown
of
a
completely
new
token.
That's
hard
to
value,
and
in
practice
I
mean
I
have
to
be.
B
You
know
I
have
to
be
hands-on
myself
on
this
kind
of
stuff
right,
because
this
is
like
it
is
something
that
could
play
out
in
a
decentralized
fashion
over
a
long
period
of
time,
but
it
would
be
sort
of
like
paying
your
head
against
the
wall
right
and
having
a
lot
of
really
bad
experiences
of
like
very
bad
compensation
outcomes.
That
would
set
up
a
lot
of
bad
present
ads
and
ruin
the
culture
and
all
that
stuff.
B
So
it's
very
important
that
we
have
a
kind
of
intelligent
approach
from
the
start
right
and
that's
probably
design
right,
and
that's
also
consistent,
so
that
you
don't
have
this
weird
issue.
We
have
today,
where
everybody's
being
paid
in
different
ways
and
it's
it's
unfair
right
based
on
basically
how
you
know
which
process
you
ended
up
getting
in
on
and
then
some
people
benefit
more
from
that.
A
Yeah,
so
it
so,
it
sounds
like
there's
a
lot
of
transitionary
work.
You
know
it
was
used
to
determined,
as
you
have
the
bus
and
they
have
seats
on
the
bus.
You
got
to
figure
out
where
the
people
go
on
the
bus
and
some
people
don't
get
on
the
bus
right.
They
find
another
bus
to
get
on,
and
so
how
can
you
know
so?
For
example,
tim
tim
did:
did
a
wonderful
facilitation
on
a
dbc
call
on
friday.
I
think
it
was,
and
you
know
we
were.
A
We
were
brainstorming,
like
you
know
what
what's
needed.
How
can
we
help
so
in
the
context
of
what
you
just
laid
out?
There's
a
lot
of
work
there
just
again,
just
based
upon
the
experiences
that
I've
had
in
the
past.
Just
the
compensation
discussion
is
a
huge
area
for
negotiation,
trying
to
find
market
rates
and
correct
set
of
instruction
instruction
incentive
structures
as
well.
So
what
what
do
you
need
and-
and
how
can
the
people
that
are
around
your
help.
B
I
mean
that's
a
good
question
and,
to
some
extent
I
mean
I
think
that
it's
hot
like
in
the
end,
we
can
all
kind
of
work
behind
the
scenes
right
to
to
try
to
sort
of
feel
feel
people
out
to
figure
out.
What's
kind
of
the
right,
I
mean
something
like
comp
right
that
should
ultimately
be
it
should
end
up.
It
should
be
some
kind
of
organic
process
that
ends
up
with
some
sort
of
global
model,
and
this
is
something
I
mean.
B
We've
tried
this
a
hundred
times
in
the
past
right,
it's
a
very
it's
a
very
big
challenge,
but
just
people
trying
to
kind
of
you
know
like
just
have
a
look
at
that
and
and
and
take
a
step
of
what
they
think
is
a
way
to
kind
of
like
unify
the
way
things
currently
work.
I
mean
that
that's
helpful
right,
for
instance,
but
but
I
actually
think
the
most
important
thing
is
that
everyone
should
kind
of
focus
on
getting.
I
mean
basically
getting
their
own
setups
up
and
running.
B
Well,
right
I
mean
that's,
that's
gonna,
be
the.
The
key
thing
is
that,
like
once,
we
have
some
good
examples
of
like
good
clusters
that
that
actually
makes
sense
and
core
units
that
are
set
up
with
sort
of
with
with
kind
of
with
supply
networks
right,
so
they
can
interact
with
a
cluster.
They
can
interact
with
ecosystem
actor
to
actually
get
their
get
their
job
done.
B
I
mean
that's.
What
I
think
will
be
the
most
important
thing
right,
because
once
we
can
once
we
have
some
tangible,
like
kind
of
outcomes
right
where
we
see
all
the
stuff,
that's
right
now
all
being
prepared
behind
the
scenes,
but
once
we
start
to
see
that
be
more
sort
of
real
and
in
public
right,
then
that's
gonna
help
make
the
final
pieces
fall
in
place.
I
think.
B
And yeah, I mean.
Yeah
I
mean
and
of
course
it's
clear
that
everybody
wants
to
help
and
there's
a
lot
of
people
that
are
doing
various
types
of
you
know,
volunteer
work
and
like
communication,
work
and
and
ideas
and
feedback
and
so
on,
but
often
it
can
also
just
be
difficult
to
actually
figure
out
what
to
even
do.
And
ultimately
I
think
that
it's
like
the
earliest
stage
of
this
transition
is
so
insanely
kind
of
unique
and
incomparable
to
anything.
That's
ever
happened
before
that.
B
To
a
large
extent,
I
mean
I
just
have
to
do
a
lot
of
this
stuff
myself,
because
there's
just
there's
like
no
other
way
for
it
to
get
done
right
and
that's
I
mean
like
I've,
said
before:
that's
something
I'm
just
I'm
just
going
to
to
put
in
the
time
for
all
that,
I'm
just
going
to
have
to
do
that.
I
mean
that's
what
I've
already
set
out
to
do
right.
B
As
a
peyton
asks,
how
concerned
are
you
with
the
possibility
that
we
cannot
retain
the
necessary
workforce
to
spin
up
the
metadose
on
the
top
of
my
mind?
Is
that
a
lot
of
people
at
maker
came
here
to
avoid
the
corporate
hierarchy
and
leaving
all
these
hiring
decisions
to
you?
Roon
may
not
sound
all
that
different
yeah
I
mean
so
I
think
I
mean
so.
I
think
actually
that
there,
I
think,
there's
a
lot
of
people
that
will
like
they'll
actually
leave
exactly
because
of
this
right.
B
They
will
just
be
like
no,
I
don't
want
some
room
thing.
I
want
some
whatever
thing
right,
but
I
don't
I
mean
I.
I
don't
think
we
have
any
other
choice,
because
I
mean
the
alternative
is
to
then
do
nothing.
I
guess
or
count
on
the
dow
to
figure
it
out
by
itself
or
something
like
that.
B
But
that's
not
that's
not
actually
realistic,
right
and
that's
kind
of
the
reason,
for
the
reason
why
I
started
doing
all
this
in
the
first
place
is
exactly
because
it
turns
out.
You
cannot
like
the
actual
transition,
the
actual
setting
of
the
starting
point.
Isn't
it
just?
Cannot
be
done
by
itself
like
it
cannot
just
magically
occur
through
some
primordial
soup.
B
You
know
kind
of
figuring
out
by
itself
through
some
kind
of
decentralized
process.
B
So
I
think
what
I
mean,
on
the
other
hand,
I
think
like
and
and
there's
the
the
good
thing
of
that
in
a
sense,
is
that
then,
what
we
end
up
with
is
people
are
people
who
are
basically
aligned
around
the
importance
of
having
you
know
a
strong
starting
point
right
where
things
are
actually
set
up
for
success
and
and
there's
been
this
sort
of
proper,
you
know
it's
not
that's
like
a
random
decentralized
process.
B
It's
it's
sort
of
designed
right
from
from
I
mean.
Ultimately,
it
is
of
course
like
it
is
all
you
know,
everything
is
done
through
governance
and
all
that
stuff
right.
But
basically
the
more
important
thing
is
now.
We
have
set
up
a
kind
of
a
long-term
vision
for
governance,
for
how
it
will
become
probably
decentralized
in
the
long
run,
and
that's
kind
of
I
mean
that's,
I
think
the
only
viable
path
forward
right.
We
can't
like
we
can't,
I
mean
and
also
a
simple
matter.
B
We
said
we
just
cannot
make
everyone
happy
and
we
shouldn't
we
shouldn't
be
trying
to.
We
should
be
trying
to
get
the
people
who
are
aligned
and
make
them
happy,
and
I
think
right
now
I
mean
it
seems
to
me
like
there
actually
is
what
I
would
say
basically
critical
mass
of
of
of
buy-in
right
of
support
in
like,
and
I
think
it's
especially
it
is
really
like.
What's
kind
of
interesting,
is
that.
B
I
mean
this
also
sort
of
happened
with
how
the
metadata
themselves
they
started,
clustering
right
that
what
you
really
need
first,
is
you
need
the
like
sort
of
the
people
to
do
the
actual
work
like
the
people
to
to
make
the
money
to
to
make
the
products
to
do
the
work
to
write
the
code
and
so
on,
and
then,
once
you
have
those
in
place,
then
you
can
sort
of
cluster
and,
like
you,
can
then
get
a
metadata
running
with
like
a
facilitator,
basically
right
and
and
because
nobody
wants
to
be
a
facilitator
of
some
of
amanda
now
with
with
no
real
value
right.
B
Once there is real value, it gets a lot more interesting to be a facilitator. And I think it's going to be the same thing with the scopes: the appeal of overseeing a particular scope, of running it and interacting with MetaDAOs and ecosystem actors following these frameworks to get things done, is going to come from having high-quality MetaDAO clusters and high-quality ecosystem actors to interact with.
B
So yeah, broadly this is a major concern, obviously. But right now it looks like it's not actually going to materialize, because we have quality MetaDAO clusters that will let us keep a critical mass of the workforce.
B
Okay, I think that's a good point to end on.
B
I don't fully understand the question, but basically: everyone transitions to MetaDAOs in some form, but some people transition to Creators and Protectors that do the actual work in the Maker Core context, and some people transition to governance work, running the scopes.
A
The question is basically about vaults and the people who work on them. I'm trying to imagine how the transition really works: there are vaults that are currently onboarded or getting onboarded, and then some get adopted by MetaDAOs and some are newly onboarded into MetaDAOs, and then there's the transition period.
B
Yeah, so it's really going to depend on the individual situation; it will be very flexible in many ways. In some cases, like Deco for example, they can just completely transition into a MetaDAO. In other cases you'd have a MetaDAO where some of the people that are clustering are doing work that isn't directly MetaDAO work. They would basically keep doing that work in the core unit during the transition phase, and then when they leave the core unit and go into the MetaDAO, that's when they start working only on the MetaDAO. But in many cases, once they're in the MetaDAO they'll just keep doing the same work; it will now just be done through the scopes.
B
So take the front-end example: you might have people sitting in the interface scope, or in whatever core unit is doing the work of building a front end, and they could move over to a MetaDAO. What simply happens then is that the work is just done like this: they're sitting in the MetaDAO, but they're still doing work for Maker Core and being paid to do that work.
B
But it really is very specific to each of these key things: keeping the oracles running, building the DAO toolkit, building the front end, doing the product engineering work. And like I said earlier, protocol engineering doesn't interact with MetaDAOs at all.
B
Initially, at least. Some people from the team may transition, but the point is that protocol engineering doesn't become a scope that is hollowed out the way the other scopes are when they outsource that work.
B
So this is very important, and it's also why we need to consider having a special case for the Risk core unit, because of how important that scope is. And with real-world asset collateral, we've talked about that already: we need somebody new there. The transition has in a sense already happened, in that most of the work on real-world assets is happening out in the cluster area, so what's needed now is actually somebody on the scope side, the governance side.
A
All right, thank you, Rune, and thank you all for joining us today. The recording should be out tomorrow. We hope you can join us for tomorrow's DVC call. Thank you, and have a great rest of your day.