From YouTube: Governance and Risk Meeting: Ep. 87
Description
# Agenda
## Risk Segment
- Vishesh: Model Inputs
- Marko: Jump Risks
- Primoz: Collateralization Ratios and Liquidation Discounts
## General Q&A
We'll open the floor for any questions about Scientific Governance and Risk.
Please join us and help shape the future of MakerDAO.
## Links
- [Video/Voice](https://zoom.us/j/697074715)
- [Dial-in](https://zoom.us/u/acRbIMDvK)
- [Calendar](https://calendar.google.com/calendar/embed?src=makerdao.com_3efhm2ghipksegl009ktniomdk@group.calendar.google.com&ctz=America/Los_Angeles)
B: Today we have another special presentation from the risk team. Vishesh, Primoz, and Marko are going to talk about some really awesome research that they have all collected and been working on, so I think we can go ahead and get started with that. Are there any lingering, maybe outstanding, governance items that we want to tackle first with this extra time that we have? Is there anything that you can think of from yesterday to today?
B
Given
that
the
timing
of
the
executive
vote
is
sometimes
not
it's
pretty
much
random,
it
has
the
potential
to
mess
with
a
lot
of
people's
a
lot
of
people's
gooeys
interfaces
and
whatnot
causes
significant
amount
of
complications
for
our
collateral
partners
and
our
integration
partners.
So
the
idea
is
that,
regardless
of
when
the
spell
is
cast,
it
would
then
trigger
that
true
shutdown
at
a
predetermined
time
stamp,
say
one
day
later,
something
along
those
lines
to
give
people
sufficient
sufficient
timing
sufficient
time
to
prepare
for
it.
B: A lot of Maker's integration partners and collateral partners, right. It would be a significant disruption to their business if SCD was shut down like the flip of a switch, versus, once it was confirmed to be occurring as per the executive vote, there being some wind-down time where they could gracefully handle the transition in their internal systems.
C: You know, at the last minute, and, you know, people can actually be ready for it when it happens. And then the other thing is, you don't really know if we're actually going to shut SCD down until the vote passes, right? You might put a vote in and say, okay, it's going to shut down whenever this thing passes, and it never actually passes. So you've got to pass the vote before you even really announce that you're shutting it down for sure, and you don't want to be announcing, okay, yep, that's definitely officially shutting down two days from now, before the final vote passes. That's why I argue for a longer delay: you don't want to make the announcement until you've actually cast the vote and you know it's going to shut down for sure, right? That makes sense. Thanks.
B: One second. Okay, so, general disclaimer. This communication is provided for information purposes only. The views expressed here are those of the individual Maker Foundation personnel quoted or who present said materials, and are not the views of Maker or its affiliates. This communication has been prepared based upon information, including market prices, data, and other information, from sources believed to be reliable. Maker has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation.
B
This
content
is
provided
for
informational
purposes
only
and
should
not
be
rely
upon
as
legal
business,
investment
or
tax
advice.
You
should
consult
your
own
advisers
as
to
those
matters
references
to
any
digital
assets
and
the
use
of
finance
related
terminology
or
her
illustrative
purposes
only
and
you
not
constitute
any
recommendation
for
any
action
or
an
offer
to
provide
investment
advisory
services.
This
content
is
not
directed
at
more
intended
for
use
by
the
maker
community
and
may
not,
under
any
circumstances,
be
relied
upon
or
making
a
decision
to
purchase
any
other
digital
asset
referenced
here.
B: The digital assets referenced here currently face an uncertain regulatory landscape, not only in the United States but also in many foreign jurisdictions, including but not limited to the UK, the European Union, Singapore, Korea, Japan, and China. The legal and regulatory risks inherent in referenced digital assets are not the subject of this content. For guidance regarding the possibility of said risks, one should consult with his or her own appropriate legal and/or regulatory counsel. Charts and graphs provided herein are for informational purposes solely and should not be relied upon when making any decision.
B: All right, so today we're going to do a more detailed overview of the model. We're going to talk about some of the different risk factors that are embedded in Maker risk, and then at the end we can do Q&A and maybe a demo. I think from here we're going to be sharing the presentation amongst the different risk team members, so I'm going to pass it off to Vishesh at this time and they'll take it away from here. Okay.
A: Yesterday we talked about a lot of the concepts that go into what risk is and how we would apply the model to try to help determine reasonable values for some of these output parameters. So, just at a very high level, the point of what we're doing here is to, at the end of the day, derive these outputs.
A
The
process
that
we
use
basically
takes
into
account
what
we
have
available
to
us
as
inputs
and
sort
of
best
practice
methodologies,
given
a
lot
of
the
nuances
and
constraints
of
how
crypto
assets
work.
So
yesterday
we
kind
of
talked
about
at
a
very
high
level.
What
some
of
these
methodologies
are
today
we're
sort
of
going
through
in
a
little
bit
more
detail,
what
some
of
the
the
the
mechanics
and
the
data
and
analysis
that
go
into
developing
these
methodologies.
A
We've
taken
kind
of
this
stress,
testing,
monte
carlo
approach.
So
in
doing
so,
what
you
basically
have
is
kind
of
this
feedback.
Loop
of
you
assign
some
inputs.
You
run
many
different
iterations
of
the
model
and
then
those
produce
your
outputs.
And
then
you
can
sort
of
plot
graph
determine
okay,
based
on
the
outputs.
Do
you
need
to
make
an
adjustment
to
the
inputs
and
that's
just
kind
of
a
general
methodology
for
modeling
not
really
specific
to
maker,
not
really
specific
to
finance?
Actually,
it's
just
generally
how
you
do
modeling.
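As a rough illustration of that feedback loop, here is a minimal sketch in Python. The function names and parameters (run_one_world, monte_carlo, the inputs dictionary) are placeholders invented for this example, not the risk team's actual code.

```python
import numpy as np

def run_one_world(inputs, rng):
    """Placeholder for a single simulation run; returns one loss figure."""
    # In the real model this would simulate prices, collateralization
    # ratios, liquidations and slippage for one hypothetical "world".
    return max(0.0, rng.normal(inputs["expected_loss"], inputs["loss_std"]))

def monte_carlo(inputs, n_runs=10_000, seed=0):
    """Run many iterations and collect the per-run outputs."""
    rng = np.random.default_rng(seed)
    return np.array([run_one_world(inputs, rng) for _ in range(n_runs)])

# Feedback loop: run, inspect the output distribution, adjust the inputs, repeat.
inputs = {"expected_loss": 0.01, "loss_std": 0.02}
for _ in range(3):
    losses = monte_carlo(inputs)
    print("mean loss:", losses.mean(), "99th pct:", np.percentile(losses, 99))
    # An analyst would inspect these outputs and tweak the inputs here.
```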
A: The specifics of this approach in the system are that we're modeling collateralization ratios as the prime backbone of the, quote unquote, methodology. Pretty much everything up to the point of saying "here is how prices would potentially move, and here is how collateralization ratios would move accordingly" is either historical analysis, just purely looking at data, or the basic, common-sense structure of how the Maker system is laid out.
A
The
reason
being
is
those
collateralization
ratios,
as
we
talked
about
yesterday,
primarily
determine
what
your
liquidation
amounts
are
and
basically
how
much
you
liquidate
and
when
you
liquidate
it,
that
aggregate
tells
you
how
much
you're
going
to
lose
through
the
system.
And
then
you
know,
as
you
can
probably
guess,
modeling
out
how
much
you
stand
to
lose
in
the
system
is
primarily
the
goal
and
then
you're
just
slicing
that
up
a
few
different
ways
to
determine
risk
parameters.
So
add
a
step
by
step.
A
That's
basically
using
those
inputs
to
model
how
you
expect
collateralization
ratios
to
fluctuate
over
time.
Taking
those
collateralization
ratios,
pushing
them
through
a
very
simple
calculation
to
determine
the
Galatians,
taking
those
liquidations
as
a
time
series
and
applying
through
some
historical
analysis,
but
primarily
basic
formulas,
a
amount
that
is
lost
through
slippage
as
well
an
auction
efficiency
and
then,
basically
that
final
adjusted
amount
that
is
lost
is
the
prime
goal
of
what
you're
trying
to
model
so
again
at
a
comparatively
high
level.
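To make that chain of steps concrete, here is a hedged sketch of the pipeline in Python. All function names and the toy formulas (proportional ratio moves, a flat slippage figure) are assumptions made purely for illustration, not the actual model code.

```python
import numpy as np

def collateralization_ratios(price_path, initial_cr, initial_price):
    """Toy assumption: ratios move proportionally with the collateral price."""
    return initial_cr * price_path / initial_price

def liquidated_debt(cr_path, debt, liquidation_ratio=1.5):
    """Debt flagged for liquidation on days the ratio dips below the threshold."""
    return np.where(cr_path < liquidation_ratio, debt, 0.0)

def losses_after_slippage(liquidations, cr_path, slippage=0.10, efficiency=0.9):
    """Collateral sold at a discount; compare recovered DAI to the debt."""
    collateral_value = liquidations * cr_path
    recovered = collateral_value * (1 - slippage) * efficiency
    return np.maximum(liquidations - recovered, 0.0)

price_path = np.linspace(200, 80, 365)            # one declining price path
crs = collateralization_ratios(price_path, 2.0, 200)
liqs = liquidated_debt(crs, debt=1_000_000)
print("total simulated loss:", losses_after_slippage(liqs, crs).sum())
```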
A: So again, at a comparatively high level, that's the structure of the model, and we'll go through it in a little more detail next. There are three main components to this. As I alluded to, the asset price is primarily your independent variable. The collateralization ratios, as we just talked about, are a function of that asset price but also of some user behavior. And then, at the end of the day, you take those amounts and adjust them through some historical market analysis, and you get your final losses through slippage. The asset price we're modeling using geometric Brownian motion.
A: For the purposes of this model, it's not well suited to just extrapolating from historical data, so we have to take this more stress-testing, Monte Carlo approach. It's flexible enough to allow us to incorporate certain custom nuances for how crypto works and the kinds of risks that we expect to need to be prepared for in crypto. As we've seen recently, you can have these seemingly random jump events in assets, where there is no indication from historical volatility and then suddenly there's a major price drop.
A
These
kinds
of
things
happen
in
crypto,
because
it's
a
young,
immature
market
and
a
lot
of
these
assets
are
hugely
technical.
They're,
not
just
you
know,
large
traditional
public
equity
companies,
they're
they're,
very
technical
projects,
so
you
know
very
impactful.
Things
can
happen
suddenly
and
then
it
allows
us
to
incorporate
a
time
component
to
the
model.
So
you
know,
if
you
think
about
a
stability
fees
and
annual
value,
so
being
able
to
look
at
annual
losses,
is
a
helpful
layer
for
lining
up
a
frame
of
comparison.
A
Why
we're
not
just
using
historical
sampling,
though
it
is
an
incorporated
component
next,
one
yeah,
so
I'm
not
going
to
go
through
every
one
of
these
line
items,
but
in
general
the
the
Monte
Carlo
simulation
has
a
parameter
to
it,
which
is
I
would
bucket
these
three
sets
of
parameters
like
they
are
listed
here
so
essentially
there's
stuff
about
the
model
itself
kind
of
meta
parameters.
How
are
you
running
the
simulation?
How
many
times
are
you
running
it,
etc?
A
There's
things
about
the
asset
price,
so
I'll
go
through
this
in
a
little
more
detail
later,
but
essentially,
when
you
have
these
random
walk
processes,
there
are
a
lot
of
things
that
go
into
it
like
how
long
you're
modeling
the
asset
price.
For
how
detailed
are
your
individual
time
steps
there's
two
main
parameters
when
it
comes
to
Winer
processes,
they're
called
mu
and
Sigma,
they
can
be
thought
of.
A: They can be thought of as kind of this annualized lognormal drift and a lognormal volatility, and essentially those parameters determine in what direction the price path will move and how violently it will move along the way.
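For readers who want to see what those two parameters do, here is a minimal sketch of a geometric Brownian motion price path in Python, with mu as the annualized drift and sigma as the annualized volatility. The specific numbers are illustrative assumptions, not the parameters the team actually uses.

```python
import numpy as np

def gbm_path(s0, mu, sigma, days=365, steps_per_day=1, seed=0):
    """Simulate one geometric Brownian motion price path."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / (365 * steps_per_day)                 # time step in years
    n = days * steps_per_day
    # Standard GBM increment: exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
    z = rng.standard_normal(n)
    increments = np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    return s0 * np.cumprod(increments)

path = gbm_path(s0=200.0, mu=0.0, sigma=1.0)         # flat drift, 100% annual vol
print(path[:5])
```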
And the collateralization ratios: obviously a lot of the parameters that go into determining what the collateralization ratios are, are existing system parameters like the liquidation threshold, the penalty, et cetera. These are not things that we necessarily fluctuate or move very much in the model. It's more that, for a big run of the model, you're going to try a different, think of it as a vault type, right? So currently you have ETH with 150% collateralization and a 13% penalty; that's one potential set of parameters. You could also run this to try and see what things would look like if that vault type were different, if it had a higher collateralization requirement or a lower one. So that's again a very helpful facet of the kind of model that we've set up here. And then the final point. Sorry, I think we need the next screen, yeah.
A: For the slippage component, the main thing there is deriving a function that tells you, roughly, if you're selling X amount, what amount Y are you going to be able to recover. Next one. Yeah, so now Marko will run through how we back our way into specific loss events and specific price movements: how much is the price going to move and why, and how we incorporate that historical data into our model.
E: Yeah, hello guys. I'm going to talk about jump risks: what they are, why they are important, and how we can use them. We are showing this picture once again because it is very related to jump risks. Basically, if we have small price changes over time, they will generally create positive flow for the protocol because of the liquidation penalty, while severe, large price changes will potentially create undercollateralized liquidations and generate negative flow for the protocol.
E
Next,
like
this,
here
is
basically
another
illustration
of
this.
On
the
left
picture,
we
can
see
eaters
price
up
to
2020
almost
and
basically,
even
though
the
price
decreased,
for
you
know,
like
a
large
amount
over
high
period
of
time.
Basically,
protocol
didn't
generate
any
negative
flows
and
on
the
right,
we
have
like
an
example
of
price
drop
week,
which
could
probably
activate
a
lot
of
liquidations
and
generate
negative
flow
for
the
protocol.
E: [Regulators] do not really like people to be able to send dollars around the world without any restrictions, so we can somehow estimate that there is a high chance there will be some regulatory issues with them. But if we would just consider the historical volatility of, let's say, USDC, we would basically see that the volatility is very small. If we actually want to stress test it in our model, we necessarily need to add some additional jump risks.
E: Then let's talk about triggers. When we started to research what the triggers for these jump risks are in crypto in general, we first conducted historical research of daily returns on Ether, and we tried to estimate or determine the most likely causes for the largest 30 negative returns. What we found is that almost 75% of these 30 cases were actually due to some kind of systematic factor.
E: These are basically mainly just theft of funds, and the most severe outcome in this category is if some exchange just becomes insolvent; this creates large panic, people start selling, and it is just really bad. Then another category we picked is technological vulnerabilities. In this category we basically include any kind of technical bug or vulnerability which could be exploited, either on the blockchain layer or on some application or protocol layer. Then another category is governance-related issues; in the history of crypto we only had two large such events.
E
The
first
one
was
when
basically
a
deal
in
community
couldn't
reach
consensus
on
how
to
deal
with
the
Dow
hack
and
basically
splitted
in
to
block
chains
and
also
to
and
another
was
similar
example
on
Bitcoin,
where
people
just
couldn't
reach
consensus.
How
to
you
know,
approach
infrastructure
in
the
future
and
basically
split
into
Bitcoin
cash
in
Bitcoin,
and
then
less
categories,
regulatory
issues
which
I
think
I
don't
need
to
really
talk
about
much.
E
Sorry
yeah:
can
you
go
to
the
next
slide?
Okay,
so
what
are
the
parameters
of
jump
risks?
Basically,
they
have
two
main
components,
one
being
frequency,
basically
how
many
times
person
Bank
period
this
happens
and
how
severe
this
price
changes
are.
Now
both
both
of
these
components
can
be
determined
in
many
different
ways,
for
example,
for
frequency
we
could
just
analyze
historical
events
and
say,
for
example,
we
had
10
hex
/
10/10
exchange
hex
per
year
on
average,
and
thus
we
should
just
use
this.
As
our
you
know,
frequency
per
year.
E
Other
approach
is
by
creating
scoring
framework
which,
through
a
system
of
different
questions,
tries
to
estimate
how
exposed
particular
asset
is
towards
a
category
of
risk,
and
then
another
approach
is
mainly
for
stress.
Testing
is
to
just
use
some
kind
of
constants,
and
in
this
case
you
know
we
are
just
testing.
What
would
happen
under
such
conditions.
E
Civility
is
the
same.
It
can
be
used
as
constant
for
stress
testing.
Like
example
of
this
would
be,
if
some
somebody
who
is
analyzing
this
protocol
would
say
we
want
to
be
protected
against,
for
example,
70%
price
drop
once
per
year,
and
they
could,
just
you
know,
put
these
inputs
into
the
model
and
then
of
different
behavior
of
these
factors
and
outcomes
like
here,
I
I
want
to
point
out
that
jump.
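As an illustration of that kind of stress constant, here is a small sketch of how a "70% drop, on average once per year" jump could be layered onto a simulated price path. The per-day jump probability and the way the shock is applied are assumptions made only for this example.

```python
import numpy as np

def apply_jumps(price_path, jumps_per_year=1.0, jump_severity=-0.70, seed=0):
    """Randomly splice downward jumps into an existing daily price path."""
    rng = np.random.default_rng(seed)
    daily_prob = jumps_per_year / 365.0          # Poisson-style arrival rate per day
    jumped = price_path.copy()
    for day in range(len(jumped)):
        if rng.random() < daily_prob:
            # Apply the shock to this day and every day after it.
            jumped[day:] *= (1.0 + jump_severity)
    return jumped

path = np.full(365, 200.0)                       # flat toy path at $200
print(apply_jumps(path).min())                   # worst level after any jumps
```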
E: Here I want to point out the general idea behind the jump risk scoring framework: through a series of questions, we try to determine how exposed a particular asset is towards one particular jump risk category. So, for example, for exchange hacks: an asset which has almost no supply on custodial exchanges will, even if those exchanges are hacked, see only a small severity. At the same time, some crypto assets actually have some kind of fundamental functionality in the underlying system.
E
Based
block
chains,
where
you
can,
you
know
actually
produce
blocks
with
coins
and
another
example,
is
application
layer
protocols
where
token
you
use
for
voting
or
also
some
kind
of
staking
or
something
like
this
and
basically
assets
which
has
have
such
functionalities
are
logically,
additionally
exposed
to
our
exchange
hex
yeah.
This
is
basically
logic
behind
the
schooling
framework.
Now
we
can
move
to
the
second
portion,
which
are
civilities
our
next
slide.
Please.
E
Basically,
for
civilities,
we
decide
that
we
want
to
analyze
historical
events
which
fit
into
our
four
categories,
so
this
is
example
of
like
largest
exchange
hex
in
crypto
and,
like
our
exact
methodology
here,
was
that
we
we
considered
daily
and
also
intraday
price
changes
of
the
day
when
events
happened
plus
10
days.
So
in
this
11
day
time
period,
we
picked
the
most
severe
price
change
to
determine
disability
of,
like,
for
example,
BitFenix
heck
next
slide.
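A rough sketch of that event-window measurement is shown below, using pandas: take the worst daily return in the 11-day window starting on the event date. The column layout, the sample prices, and the event date are invented for the example.

```python
import pandas as pd

def event_severity(daily_close: pd.Series, event_date: str, window_days: int = 11) -> float:
    """Worst daily return in the window starting on the event date."""
    window = daily_close.loc[event_date:].iloc[:window_days]
    returns = window.pct_change().dropna()
    return returns.min()

# Toy example with made-up prices around a hypothetical event on 2016-08-02.
prices = pd.Series(
    [12.0, 11.9, 10.8, 10.9, 11.0, 10.5, 10.7, 10.6, 10.8, 10.9, 11.0, 11.1],
    index=pd.date_range("2016-08-01", periods=12, freq="D"),
)
print(event_severity(prices, "2016-08-02"))
```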
E
Basically,
we
have
much
less
of
these
events,
which
actually
happened,
and
we
could
measure
but,
like
you
know,
this
doesn't
mean
that
they
will
not
occur
again
or
they
that
they
would
have
higher
civilities,
basically
like
the
highest
civilities
here
in
this
set
in
technological
and
regulatory
issues.
It
says:
can
you
just
jump
to
the
previous
slide?
E
Please
I
forgot
to
mention
something
that
yeah
I
just
wanted
to
say
here
that,
like,
as
you
can
see,
we
we
measure
different
kind
of
metrics,
for
example
like
like
a
set
amount
divided
by
trading
volume
or
like
market
capitalisation
at
that
time,
to
like
try
to
determine
is
there
some
kind
of
distinctive
correlation
between,
for
example,
how
many
coins
were
stolen?
And
you
know
what
is
actual
civility
of
this
event?
E
And
basically,
if
you,
if
you
look
at
the
bottom,
there
is
the
upbeat
hack,
which
recently
happens,
and
basically
this
particular
hack
was
the
largest
aetherium
eater
hack
in
terms
of
coins.
But,
as
you
can
see,
the
severity
was
like
minus
3.5,
so
it's
basically
like
any
other
day
in
crypto,
so
yeah,
it's
like
it's
like
really
really
hard
to
objectively
determine
these
events.
D: So, the liquidation discount is one of the core components of the model. The discount is closely related to slippage, in the sense that when we have an auction triggered, it tells us how high auction keepers will bid, in order to determine losses or revenues for Maker. Now there are many variables that affect the bidding of keepers and I'll go through them later, but the two main ones are the liquidated collateral and the DAI acquired, and they both produce slippage. This slippage can have two effects in terms of losses for Maker.
D
It
can
either
reduce
the
die
recovered
and
it
can
cause
even
the
liquidation
that
would
otherwise
be
profitable
to
potentially
become
a
loss
or
it
can
worsen
existing
loss
or
undercooked
volatilization,
which
of
course,
makes
loss
even
more
severe.
We
use
two
kinds
of
inputs
in
the
model
for
modeling
liquidation.
Discount
first
is
the
slippage
curve,
which
essentially
tells
us
tells
us
what
percentage
of
slippage
maker
might
expect
while
liquidating
a
specific
type
and
amount
of
collateral,
and
basically
we
get
an
answer
of
how
much
below
market
price
of
certain
collateral.
D
We
keepers
beat
during
the
simulation.
If,
if
of
course,
options
are
being
triggered,
the
second
variable
is
called
torsion
efficiency,
this
one
uses
maliki
t
related
variables
and,
of
course,
this
additional
effects,
the
the
slippage
in
option
and
next
slide.
Please,
okay,
so
here's
some
variables
women
using
for
modeling
slippage,
I'm,
going
to
describe
them
one
by
one.
So
the
first
source
of
slippage
is
based
on
collateral
quiddity.
D
Now,
if
this
particular
collateral
see
liquid
peepers
will
be
at
least
as
low
as
the
slippage
they
were
think
or
when
they
recycle
this
asset
into
a
stable
acid.
How
we
model
this
liquidity
is
that
we
used
very
conservatively
chosen
volumes
and
the
most
liquid,
centralized
exchanges
and
basically
used
only
the
stable
con
pairs,
because
most
likely
peoples
will
prefer,
to
you
know
close
the
arbitrage
cycle
through
those
through
those
pairs.
D
Additionally,
we
compared
those
warnings
with
price
changes
historically,
and
this
basically
helps
us
assess
price
elasticity,
which
tells
you
at
how
the
price
actually
impacted.
Sorry
called
the
volume
that
was
that
was
historically
made,
how
it
impacted
the
price
through
temporary
or
permanent
price
shocks,
so
basically
also
telling
you,
if
keepers
would
be
selling
this
amount,
how
much?
Additionally,
you
could
expect
price
to
drop
the
details
about
these
are
going
to
be
published
in
documentation.
D
Materials
later
for
now
important
to
here
is
that
we
want
to
stress
out
collateral
sleep
which
is
based
on
secondary
exchanges,
and
volumes
were
very
carefully
chosen
and
separately
chosen
and
because
we
know
there's
a
lot
of
fake
volume
on
many
exchanges,
so
we're
really
cautious
here.
The
second
separate
important
source
of
slippage,
as
you
can
see,
is
that
dial
equality
or
die
slippage.
D
We
know
that
markets
are
shallow
and
that's
why
we
need
to
know
at
what
price
keepers
would
be
able
to
obtain
die
needed.
So
here
we
most
relied
on
order
book
analysis
just
looking
at
mostly
diapers
to
assess
the
die
liquidity
and
at
what
price
tippers
can
buy,
and
then
these
results
the
die
slippage.
These
results
produce
died
slippage
curve,
and
this
is
combined
with
the
collateral
slippage
curve
that
I
mentioned
before
now.
D
The
third
input
equally
important,
if
not
maybe
the
most
important-
is
the
the
total
keepers
reserve,
which
is
sort
of
a
hard
cap
on
option
size,
after
which
we
may
start
seeing
zero
bits.
So
imagine
if
you
know,
we
assume
that
keepers
have
only
ten
or
twenty
million
of
available
capita.
This
means
after
we
breach
the
10
million
option
notional
value.
We
would
most
likely
see
a
zero
bits
coming
in
leading
to
future
losses,
so
something
like
on
Black
Thursday,
but
apart
from
just
you
know,
assuming
some
some
amount
of
capital
keepers
fault.
D
You
also
need
to
know
what
amount
of
debt
capital
is
denominated
in
die
rather
than
just
in
some
girls,
because
if
the
results
are
are
small,
we
would
expect
keepers
to
either
buy
die,
which
leads
us
to
sleep
which
that
I
just
described
earlier.
Ordinal
minted
by
using
news
deceive
world,
for
instance,
in
any
case,
keepers
would
have
meeting
house
or
slippage
buying
valley
and,
of
course,
the
slippage
curve
worsens
and
then
the
final.
The
fifth
input
is
the
target
profit
for
capers.
D
Of
course,
if
expected,
profit
for
keepers
desired,
this
again
worsens
the
slippage
curve.
Next
slide,
please!
Okay!
So
here
we
have
a
visual
representation
of
how
one
of
potential
slippage
curbs
we've
been
using
for
the
model
looks
you
can
see.
It
combines
both
collateral
slippage
in
this
case
hitter
and
the
die
slippage
into
one
function,
which
is
the
yellow
line
called
option
slippage.
This
is
basically
the
slippage
curve,
not
really
the
slippage
curve.
D
Is
the
green
one
going
to
explain
later
why
you
can
see
this
yellow
line
is
much
more
flat
at
the
beginning
and
because
we're
some
keepers
have
an
uptight,
capital
disposal
and
eater
sleep
which
isn't
really
present
at
those
values.
So
we
mainly
just
assume
keepers
would
be
the
slow
to
to
have
some
profit
which
you
you,
you
define
once
we've
reached
certain
point
where
peepers
idler
need
to
acquire
died
on
market
or
they
need
to
minted.
D
The
slippage
curve
become
becomes
much
more
steep
and
starts
to
primarily
behave
according
to
die,
stick
which
we
measured.
So
all
of
the
these
five
inputs,
I
mentioned,
are
used
to
calculate
the
slippage
curve.
Here,
I
want
to
make
sure
community
knows.
This
is
only
one
of
possible
ways.
Many
other
techniques
could
be,
of
course,
applied.
The
model
uses
slippage
curve
as
a
function.
Currently
this
is
the
fourth
polynomial
a
polynomial,
polynomial
or
fourth
degree.
D: That's the green line, and this is fitted to the yellow line, but basically anybody can use any function they like or think applies; it can be much more flat, it can be steeper, whatever the user wants.
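As a small illustration of that fitting step, here is a hedged sketch of fitting a fourth-degree polynomial to a measured slippage curve with NumPy. The sample points are invented; only the fitting technique mirrors what is described above.

```python
import numpy as np

# Hypothetical measured points: DAI value auctioned -> observed slippage fraction.
auction_size = np.array([0.1e6, 0.5e6, 1e6, 2e6, 5e6, 10e6, 20e6])
slippage     = np.array([0.01, 0.015, 0.02, 0.03, 0.06, 0.12, 0.30])

# Fit a fourth-degree polynomial, as described for the green line.
coeffs = np.polyfit(auction_size, slippage, deg=4)
slippage_curve = np.poly1d(coeffs)

print(slippage_curve(3e6))   # estimated slippage for a 3 million DAI auction
```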
D: Next slide, please. Okay, so the next input for the liquidation discount is called auction efficiency. This simply tries to capture all the non-liquidity-related metrics that impact potential losses from an auction.
D: It mostly addresses the issues that were experienced on Black Thursday, and these were things like network congestion, a low number of keepers, reliability of keeper software, the time-to-bid duration, and so on. This input ranges from 0 to 1, 0 being a totally inefficient auction and 1 a totally efficient one. In any case, whatever number you choose, this applies on top of the slippage curve that I showed earlier, so you could imagine that the slippage curve we plotted before would then be moved upward or downward depending on what input you choose for auction efficiency.
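Below is a minimal sketch of how those pieces might be combined when estimating recovered DAI: the fitted slippage curve from the earlier sketch, an auction-efficiency factor between 0 and 1, and the keepers' capital hard cap after which bids are assumed to go to zero. All numbers and names are assumptions made for illustration.

```python
def recovered_dai(collateral_value, slippage_curve, efficiency=0.9,
                  keeper_reserve=10e6):
    """Estimate DAI recovered from auctioning collateral worth `collateral_value`."""
    if collateral_value > keeper_reserve:
        # Beyond the keepers' capital we assume zero bids, so only the capped
        # amount recovers anything (the Black Thursday scenario).
        collateral_value = keeper_reserve
    slip = slippage_curve(collateral_value)      # fraction lost to slippage
    return collateral_value * (1.0 - slip) * efficiency

print(recovered_dai(3e6, slippage_curve))        # uses the curve fitted above
```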
D: Okay, so, exposure risk. This generally applies to the behavior function that we shall describe. What we primarily want to model is how vault user behavior impacts portfolio exposure. We know Maker vaults are quite unique: since they have no maturity and are non-recourse, the exposure is primarily a function of the collateral price, but also of vault behavior, when users either add or remove collateral or repay or issue debt. And this exposure is best measured by simply looking at the collateralization ratios of vaults.
D
Once
in
the
cultivation
ratio
starts
decreasing
to
levers
where
they
might
get
liquidated,
but
was
an
audit
of
collateral
price
rises?
They
tend
to
remove
an
estate
collateral.
Importantly,
this
means
that
portfolio
exposure
can
be
less
volatile
than
the
collateral
price
itself,
at
least
when
we
have
collateral
as
Peter.
In
other
words,
if
volts
defaults
wouldn't
be
making
any
collateral
that
prepayment
maker
with
her
future
assessing
into
18
when
each
price
fell
more
than
90
percent.
So
we
approach
this
issue.
D
We
simplified
world
behavior
and
diplomat
it
so
called
the
carer
function
where
we
model
mean
reverting
volatilization
ratios.
This
means
exactly
as
we
wrote
here,
worlds
maintain
their
target
quantization
ratio
throughout
the
simulation,
but
with
a
specific
time
line.
So
this
time
lick
is
important
and
it
needs
to
be
set
differently
for
different
tranches
of
fertilization
ratios.
So
just
one
example
is:
if
price
falls
and
pushes
the
catalyzation
ratio
distribution
down-
and
let's
say
you
have
a
large
amount
of
falls
near
150%,
you
would
see
a
faster
version
process
right.
D
Users
would
normally
act
in
a
day
or
two.
You
know
maybe
like
the
same
moment
and
try
to
protect
the
rate
from
being
liquidated,
and
this
can
avoid
liquidations
right
if
they
behave
as
such.
On
the
other
hand,
if
you
have
some
bolts,
usually
when
you
see
a
world
with
500
percent
ratio,
quantization
ratio
intended
price
drops.
This
user
might
not
really
react
right.
It
may
take
many
days
or
simply
he
would
interact
next
slide
please.
D
So
here
you
can
see
two
charts
for
is
the
quantization
ratio,
distribution
of
dessert
into
18
19
at
single
collateral
dye.
You
can
see
this
tribution
was
actually
pretty
stable,
at
least
compared
to
it
when
when
it
dropped
more
than
90%-
and
this
is
this
is
exactly
what
we
try
to
somehow
simulate
in
the
lower
graph,
which
is
already
a
simulation.
D
The
model,
what
you
put
what
you
set
as
an
input
in
the
model
is
simply
initial
fertilization,
ratio,
distribution
and
this
one
is
maintained
through
the
simulation
by
responding
to
collateral
price
impacts,
but
with
a
certain
reversion
time,
as
I
described
earlier,
so
fast,
reversion
or
less
days
needed
for
lower
collateralized,
volts
and
slower
version
time
or
more
days
for
higher
grade
last
words.
Next
slide,
please
I'll
be
here
here,
is
showing
one
example
of
the
model
inputs,
so
first
initial
fertilization,
ratio
distribution
is
is
set.
D
This
could
be
anyone
the
chooser
things
should
be
maintained,
some
sort
of
equilibrium
or
long
term
using
different
sets.
For
instance,
one
is
just
using
90-day
averaging
in
MCD
that
we
observed.
Secondly,
this
distribution
is
been
simplified.
It's
put
into
buckets
arrangers,
where
we
say:
X
amount
of
depth
resides
in
fertilization,
ratio
of
150,
200,
75
and
so
on.
D
Next
Malaysian
uses
GBM
process
to
jump
risks
and
price
is
simulated
and
collateral
utilization
ratios
are,
of
course
changed,
but
importantly,
what
the
model
does
is
then
uses
this
behavior
function
and
to
simultaneously
push
the
cauterization
ratio
back
to
some
initial
equilibrium,
with
the
reversion
time
like
I
described.
The
bullet
point
actually
has
an
example.
D
So
let's
say
you
have
20%
of
the
depth
residing
in
utilization
ratio
of
200
price
drops
10%,
the
ratio
is
decreased
to
180
and
then
volts
adjust
it
back
to
200
by
some
reversion
time,
the
one
that
user
of
course
said-
and
you
know
this
may
lead
to
liquidation,
so
either
it
doesn't
depend-
depends
about
the
electrician.
Sorry
about
the
diversion
time
is
said.
Importantly,
there
is
always
some
kind
of
reversion
to
equilibrium,
and
this
depends
on
this
reversion
times
the
coming
days.
The
commutation
should
provide
some
data
we've
been
using
for
inputs,
you
know.
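Here is a hedged sketch of that mean-reverting behavior function for a single bucket of vaults. The reversion rule (move a fixed fraction of the gap back toward the target each day, with the speed set per bucket) and the reset after liquidation are assumptions chosen to illustrate the idea, not the team's exact implementation.

```python
import numpy as np

def simulate_bucket_cr(price_path, target_cr=2.0, reversion_days=5,
                       liquidation_ratio=1.5):
    """Track one bucket's collateralization ratio with mean reversion to its target."""
    cr = target_cr
    history = []
    for i in range(1, len(price_path)):
        cr *= price_path[i] / price_path[i - 1]      # price move shifts the ratio
        cr += (target_cr - cr) / reversion_days      # users slowly push it back
        if cr < liquidation_ratio:                   # liquidation if they are too slow
            history.append(None)
            cr = target_cr                           # toy reset after liquidation
            continue
        history.append(cr)
    return history

path = np.linspace(200, 150, 30)                     # a 25% drop over 30 days
print(simulate_bucket_cr(path, target_cr=2.0, reversion_days=5)[-5:])
```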
D
Actually,
this
is
far
from
ideal
will
be
clear
modeling.
So
this
still
remains
the
further
research,
and
why
is
that?
It's
because
the
hardest
part
is
when
you
try
to
assess
behavior,
which
might
completely
change
during
the
simulation
from
the
initial
one.
Just
imagine
that,
for
instance,
collateral
enters
a
bullish
market,
a
bullish
sentiment
and
the
behavior
function
completely
changes
right.
Users
might
just
want
to
be
over
leveraged
keeping
low
utilization
ratio,
and
that's
why,
ideally,
you
would
want
to
model
behavior
more
dynamically,
somehow
more
associated
with
collateral
price.
D
Hopefully
this
is
something
that
that's
being
improved
in
the
next
iteration
or
future
extreme
scale.
First
of
all,
this
next
slide,
please,
okay,
so
this
is
the
one
last
important
part.
The
input
we've
been
briefly.
We
briefly
talked
about
yesterday,
the
maker
dilution
capacity
or
economic
capital.
This
input
is
crucial
for
estimating
solvency
of
maker
plays
important
role
when
deciding
about
debt
ceiling
or
liquidation
ratio.
D
D
Clearly,
that's
not
the
best
approach,
and
just
assuming
the
pure
market
capitalization,
because
we
know
maker
is
quite
a
liquid.
There's
also
uncertainty
when
there's
an
issuance
event
and
so
on
also
maker
is
quite
correlated
with
the
asset,
and
you
know
when,
when
you
have
liquidations
maker
maker,
capitalization
would
of
course
also
fall.
D
So
this
this
remains
one
possibility,
but
it's
hard
to
predict
what
you
know.
There
needs
to
be
some
liquidity
discount
assume
there
and
it's
not
easy
to
calculate
it.
So
the
other
heuristic
we
use
to
to
sidestep
this
liquidity
correlation
issue
of
maker
is
to
create
a
hypothetical,
very
conservative
valuation
model
for
maker
token,
so
evaluation,
which
would
have
a
sufficiently
conservative
distant,
and
this
would
normally
attract
enough
investors,
which
would
almost
surely
purchase
the
necessary
amount
in
Depok
ssin,
regardless
of
what
the
market
conditions
are.
So
please
next
slide
service.
D
Ok,
so
this
was
our
approach
for
determining
maker
dilution
capacity.
We
used
discounted
future
cash
flow
analogy
by
using
different
ranges
of
inputs,
but,
most
importantly,
we
used
the
hugeness
control.
So
here's
one
example:
we
use
some
kind
of
conservative
inputs,
but
then
we
said
discount
rate
to
a
really
high
number
like
hundred
percent.
D
Normally
this
is,
you
know,
put
it
40
at
most
50,
maybe
so
this
is
really
high.
What's
the
whole
point
of
this
is
that
investors
bank
maker
in
depth,
auction
under
under
such
valuation-
and
let's
say
they
assume
all
the
inputs
are
kind
of
tolerable
by
them
they
would
achieve
hundred
percent
yield,
and
the
idea
is
this
would
attract
a
lot
of
money
right.
This
would
attract
enough
money
to
cover
to
cover
this
valuation
that
you
just
calculated
in
this
case,
that's
28
million.
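To show what that kind of back-of-the-envelope calculation looks like, here is a hedged discounted-cash-flow sketch with a deliberately punitive 100% discount rate. The cash-flow figures are invented; only the structure (discount projected fees at a very high rate and treat the present value as dilution capacity) follows the description above.

```python
def dilution_capacity(annual_cash_flows, discount_rate=1.0):
    """Present value of projected annual fee cash flows at a very high discount rate."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(annual_cash_flows, start=1))

# Hypothetical projected annual fees (in DAI), discounted at 100% per year.
projected_fees = [10e6, 15e6, 20e6, 25e6, 30e6]
print(f"{dilution_capacity(projected_fees):,.0f}")
```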
D
Actually
we're
using
different
ranges
also
is
going
to
be
shared,
but
in
this
case
you
are
kind
of
implying
there
would
be
28
million
of
money
available
to
cover
the
shortfall,
because
investors
would
be
maybe
making
hundred
percent
annual
return-
and
you
know
there's
enough
investors
in
crypto
willing
to
do
that.
A
short
side
note
here
at
the
valuation
you
can
see
this.
This
F
approach
is
truly
just
an
analogy,
because
this
isn't
quite
standard.
We
added
additional
value
to
estimate
in
capital.
This
is
called
make
or
Garen
is
premium.
D
The
idea
behind
this
is
that
this
dilution
capacity
capacitance,
is
also
from
the
value
that
isn't
necessarily
based
on
fees.
For
instance,
even
if
maker
doesn't
collect
any
fees,
there's
still
substantial
value
in
in
make
risk
governance
token,
because
of
course
it
determines
the
overall
defi
activity
and
ecosystem.
We
don't
have
obligation
model
for
that.
Yet
so
again,
a
very
conservative
numbers
were
used
in
this
case
is
15
million,
but
we,
of
course
we
use
the
range.
A: Just to briefly recap: at the end of the day, we've got these different individual analyses. It's almost like every single step in this model could be its own substantial model, but when you roll this all together, at the end of the day you're modeling those collateralization ratios, to determine the amount of liquidation, to then produce an amount that is sold, to then recover a certain percentage of that, to get final losses.
A
There
are
a
lot
of
different
individual
analyses
that
go
into
determining
how
the
collateralization
ratio
responds
to
prices,
to
determining
the
slippage
curve,
to
determining
the
jump
risks
and
how
the
asset
price
path
is
going
to
be
modified
for
these
don'tjump
diffusion
processes.
So,
at
the
end
of
the
day,
the
model
from
a
technical
standpoint
essentially
has
to
incorporate
each
of
these
components.
A
So
what
we've
sort
of
endeavored
to
do
from
a
mechanical
standpoint
is
to
essentially
modularize
these
components
into
different
functions
and
roll
them
all
together
into
a
cohesive
technical
model.
So
again,
I'm
not
going
to
bore
everyone
with
really
individual.
Like
line-item
details
about
the
code
today,
but
just
from
a
mechanical
standpoint,
the
the
overall
structure
of
this
process,
you
could
think
of
the
entire
model
living
in
one
big
loop.
A
That
loop
is
run
say
ten
thousand
times,
that's
a
variable
number,
but
those
10,000
iterations
of
the
model
are
effectively
like
simulating
different
worlds.
There's
a
world
in
which
eath
you
know,
moves
from
$200
to
20
and
there's
a
world
in
which
it
stays
totally
flat,
but
highly
volatile,
there's
a
world
in
which
it
goes
to
the
moon,
etc
and
with
regards
to
asset
pricing.
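Structurally, that "one big loop over simulated worlds" could be sketched as below, reusing the gbm_path and apply_jumps helpers from the earlier sketches. The module names mirror the components discussed, but they are placeholders rather than the actual function names in the tool.

```python
import numpy as np

N_WORLDS = 10_000        # number of simulated "worlds"; a configurable parameter

def simulate_world(seed):
    """One iteration: price path -> collateralization ratios -> liquidations -> loss."""
    rng = np.random.default_rng(seed)
    price_path = gbm_path(200.0, mu=0.0, sigma=1.0, seed=seed)   # random walk
    price_path = apply_jumps(price_path, seed=seed)              # splice in jump events
    # ... collateralization, liquidation and slippage modules would go here ...
    return rng.uniform(0.0, 0.05)                                # placeholder annual loss

annual_losses = np.array([simulate_world(s) for s in range(N_WORLDS)])
print("mean annual loss across worlds:", annual_losses.mean())
```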
A
It
starts
to
make
a
lot
more
sense,
and
so
each
of
these
components
that
we've
kind
of
run
through
today
is
kind
of
like
its
own
little
module
inside
the
model
and
they're
adjustable
they're
removable.
You
can
swap
them
for
different
approaches,
etc,
and
you
would
theoretically
be
able
to
run
the
model.
Additionally,
the
way
the
tool
is
kind
of
being
structured.
A
All
of
these
things
are,
you
know,
sort
of
configurable
so
that
at
the
end
of
the
day,
look
the
whole
model
is
will
still
work.
So,
if
you
go
to
the
next
line,
yeah
the
the
starting
place
for
this,
as
we
mentioned
before,
was
the
asset
price.
So
this
is
gonna,
be
your
prime
independent
variable.
Now
the
asset
price
is
effectively
a
modified,
geometric,
Brownian
motion.
So
geometric
Brownian
motion,
as
I
said
before
a
pretty
textbook
thing.
A
There's
a
specific
formula
that
exists
for
a
lognormal
drift,
given
a
particular
drift
parameter
of
you
remember
a
particular
volatility
Sigma
parameter,
and
so
what
happens
is
essentially
every
one
of
those
lines.
Is
this
random
walk?
You
can
apply
onto
this.
The
jump
risks
that
marco
was
talking
about
so
essentially
there's
a
way
in
a
kind
of
an
unaltered
sense
where
you
can
take
the
vanilla,
geometric,
Brownian
motion
function
and
you
can
take
that
price
path
apply
a
particular
jump
diffusion
into
it
and
leave
the
remainder
advice
path
unaltered.
So
that's
effectively.
A
What
we've
done
is
we've
taken
this
normal
normal
is
a
statistically
charged
term.
This
textbook
geometric
Brownian
motion
price
path,
applied
these
specific
and
basically
gone
in
and
spliced
in
these
specific
jumps.
These
dramatic
price,
Depression
events
so
that
the
system
is
stress,
tested
and
is
modeling
cases,
not
just
where
there's
kind
of
blue
skies
volatility,
where
everything
is
functioning
properly,
but
also
where
it's
modeling
these
sort
of
catastrophic
cases,
because
the
the
way,
the
approach
that
we
sort
of
taking
is
what
is
the
point
of
doing
model?
A
If
it,
you
know,
doesn't
actually
stress
test
the
system.
So
that's
where
these
these
jumps
come
in
is
to
say
well,
ok,
a
normal
statistical
textbook
approach
doesn't
actually
consider
these.
You
know
very
severe
catastrophic
Black,
Thursday
type
of
events,
and
so
we
need
to
account
for
those
and
that's
where
the
jumps
come
from.
So
at
the
end
of
the
day,
asset
price
is
a
function
of
basic
geometric,
Brownian
motion
plus
John,
and
this
is
the
prime
independent
variable
next
one
we
then
apply
as
pretty
much
mentioned.
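Putting the last two pieces together, here is a hedged sketch of that "modified GBM" idea: generate a vanilla geometric Brownian motion path and splice a severe jump into it at a chosen day, leaving the rest of the path unaltered. It reuses the gbm_path helper from the earlier sketch; the jump day and size are arbitrary illustrations.

```python
import numpy as np

def splice_jump(price_path, jump_day, jump_size=-0.5):
    """Apply a one-off price shock on `jump_day`, leaving the rest of the walk intact."""
    jumped = price_path.copy()
    jumped[jump_day:] *= (1.0 + jump_size)       # everything after the jump is scaled
    return jumped

vanilla = gbm_path(s0=200.0, mu=0.0, sigma=1.0, seed=7)   # textbook GBM path
stressed = splice_jump(vanilla, jump_day=115, jump_size=-0.5)
print(vanilla[115], "->", stressed[115])
```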
A: We then apply, as pretty much mentioned, our learnings from historical user behavior analysis to produce collateralization ratios. Now, this particular graph, which was in Primoz's slide, is effectively a collection of eight to ten collateralization buckets. Essentially, what we've gone and done is said: well, consider that collateralization is a distribution, right? A particular collateral portfolio does not have just one collateralization ratio; it's a spectrum of collateralization ratios.
A
Basically,
if
you
took
a
continuous
curve
and
converted
it
to
a
bar
chart
of
the
area
under
the
curve,
you
just
create
some
discrete
buckets.
If
this
had
a
hundred
buckets
or
a
thousand
buckets,
it
would
potentially
be
a
bit
more
detailed
and
a
bit
more
nuanced,
though
that
introduces
a
lot
more
complexity.
A
You
may
be
unnecessarily
so
so
we
sort
of
gone
with
a
reasonable
amount
of
granularity
with
say
eight
to
ten
buckets
here,
but
perfect
example
is:
if
somebody
thought
that
that
was
not
enough,
they
could
go
in
and
add
more
buckets
and
rerun.
The
model
they'd
have
to
wait
a
bit
longer
because
it'd
be
a
bit
slower,
but
they
could
do
that,
and
so
essentially
you
see
here
at
a
technical
level,
this
this
is
actually
a
graph,
that's
not
just
generated.
A
This
is
generated
by
the
code,
so
this
is
an
actual
representation
of
a
simulation
of
eight
to
ten
different
collateralization
buckets
and
how
they
would
respond
to
the
asset
price
fluctuating
over
time,
and
so
you
can
see
the
the
ones
that
are
color-coded,
more
red
are
riskier
and
the
ones
that
are
color-coded
more
green
are
less
risky.
And
then
you
can
see
sort
of
this
jump
event
at
around
day,
115
or
so
that
jump
event.
A
Essentially,
if
you
take
every
individual
one
of
the
lines
from
the
previous
graph
and
you
count
up,
okay,
which
ones
went
below
150
percent
or
whatever
the
parametrized
collateralization
requirement
is
because
you
can
also
that's
an
adjustable
parameter.
If
you
wanted
to
consider
a
different
risk
profile,
you
wanted
to
consider.
A
How
does
the
system
work
with
110
percent
collateralization
then
that
number
wouldn't
be
150
would
be
110,
and
you
can
just
change
one
line
in
a
file
and
run
that
now,
this
sort
of
shows
okay
for
each
one
of
those
lines
which
one's
dipped
below
150
percent
and
when
and
then
they're
a
grenade
at
a
daily
level
for
these
little
nice
and
clean
spikes.
So
these
spikes
are
representations
of
liquidation
events,
so
you
can
see
that
you
know
day.
115
ish
there
is.
Is
that
catastrophe
simulation?
A: You add that up and basically you determine how much DAI you would recover for selling that amount of collateral. Once you know what amount of DAI you're recovering from the liquidations, it's a very simple subtraction: subtract that from the amount of debt, and that tells you basically how much you've gained or lost through that liquidation.
A: So if you took all the losses, all those spikes, added them up, and divided by the average debt supply over that time period, you get, on a percentage basis, what you expect your annualized losses to be for a given DAI supply. Now, every one of those runs spits out a different value. You plot those into a distribution, and then you take this and you can generate one of two calculations off of it.
A
You
know
some
cut
off
of
the
tail
and
look
at
the
area
under
that
tail.
That's
where
you
get
the
value
at
risk
and
expected
loss
calculations
from
and
that's
those
are.
Basically,
the
values
that
you
would
use
to
back
into.
Is
that
economic
capital
sufficient
to
cover
those
the
area
under
the
curve
for
those
tails?
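A minimal sketch of those summary statistics over the simulated loss distribution is shown below; the 99% level and the lognormal toy data are illustrative assumptions, not the model's actual outputs.

```python
import numpy as np

def expected_loss(losses):
    """Average annualized loss across all simulated worlds."""
    return losses.mean()

def value_at_risk(losses, level=0.99):
    """Loss threshold exceeded in only (1 - level) of the simulated worlds."""
    return np.quantile(losses, level)

def expected_shortfall(losses, level=0.99):
    """Average loss in the tail beyond the VaR cutoff."""
    var = value_at_risk(losses, level)
    return losses[losses >= var].mean()

losses = np.random.default_rng(0).lognormal(mean=-4.0, sigma=1.0, size=10_000)
print(expected_loss(losses), value_at_risk(losses), expected_shortfall(losses))
```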
A: So if you want to adjust what you think is an appropriate volatility parameter for ETH, or if you want to adjust the collateralization requirement, or if you want to adjust the particular slippage curve, et cetera, you can do so, and basically each one of those different parameter choices would now spit out a different value. You could graph many different runs of multiple different changes of parameters.
A
This
is
just
for
our
brains,
sake,
a
visualization
of
changing
two
parameters
at
once,
or
sorry,
three
parameters
at
once:
no
yeah
the
signal.
Sorry
I
I
tricked
myself
up
for
a
second
without
the
dimensions.
So
if
you
were
to
change
two
parameters
at
once,
collateral
cut
off
and
expected
volatility,
this
produces
a
plane
of
different
games
and
loss
values.
A
This
is
just
the
average
expected
loss
and
it
helps
to
visualize
and
understand
certain
patterns
of
okay.
These
parameters
are
more
or
less
important.
These
are
the
relative
interactions
between
these
parameters,
which
is
a
helpful
facet
of
the
approach
that
we've
taken
here
is
to
be
able
to
see
the
interplay
between
these
these
variables.
A
So
at
the
end
of
the
day,
there's
no
one
specific
right
answer
here
of
this
is
the
exact
expect
expected
loss
value.
It
is
for
a
given
combination
of
parameters.
Here
is
you
know,
sort
of
your
average
expected
losses
and
there's
a
lot
of
extensibility
and
there's
a
lot
of
flexibility
here
for
different
analysts
to
sort
of
come
in
and
say,
hey,
I
think
this
parameter
should
actually
be
this
value,
and
then
you
can
go
back
and
look
and
say
alright.
B: The methodology that we've outlined today should, in theory, work fairly nicely with most crypto assets. You can basically take the trade history, you can do a fundamental evaluation of the asset, you'd have to parameterize the jump risks a little bit, and then, all else equal, you can probably come up with a solid package of risk parameters.
B
The
lagging
the
lagging
factor
in
the
process
is
that
fundamental
evaluation.
So
for
the
community
or
our
team,
or
anyone
to
kind
of
dig
deep
into
the
asset,
learn
the
steward
due
diligence
and
then
be
able
to
kind
of
give
a
qualified
or
competent
evaluation
right.
That
would
take
a
while
once
that's
done,
everything
else
can
be
done
fairly
quickly.
So
one
potential
suggestion
for
governance
is,
if
urgency
and
speed
and
growth
and
scale
and
all
those
kind
of
business
objectives
are
are
top
of
mind.
B
Then
we
could
maybe
outsource
a
lot
of
the
fundamental
evaluations
to
community
analysts,
to
third-party
research
firms
and
so
on,
and
then
we
can
kind
of
collaborate
with
these
people
and
put
out
a
risk
construct
or
risk
package
for
for
the
community
in
terms
of
centralized
assets.
Of
course,
they
they
require
a
little
bit
of
extra
legal
work,
which
is
still
kind
of
a
gray
area,
but
from
a
quantitative
perspective,
what's
important
is
that
potentially
the
loss
distribution
is
by
stable.
B
The
key
concept
here
is
that,
when
you
think
about
correlations,
you're,
not
necessarily
thinking
about
price
correlation,
but
what
you're
actually
thinking
about
is
the
correlation
of
a
vault
from
two
different
collateral
assets
defaulting
at
the
same
time.
So
a
simple
example
of
how
this
could
trip
people
up
so
recently.
B
There's
been
a
lot
of
discussion
in
the
community
about
adding
a
tokenize
gold
from
a
company
called
Paxos
Paxos
I
believe
also
has
a
USD,
stable
coin
product
called
tax
USD
in
theory,
tax
gold
impacts,
USD,
sorry
in
theory,
the
gold
price
and
the
USD
price
should
be
fairly
uncorrelated.
But
given
that
both
of
these
tokenized
variants
are
both
handled
by
the
same
company,
there
is
a
very
strong
correlation
in
the
correlation
risk
of
the
counterparty
default.
B
So
this
is
where
we
need
to
be
careful
in
saying
that,
even
though
pacts
and
oops,
even
though
gold
and
and
a
stable
coin
are
uncorrelated
pacts
USD
in
acts
gold
to
share
a
default
correlation
and
how
to
how
to
parameterize
this
concept
is,
is
definitely
going
to
be
challenging,
but
in
the
mean
time,
I
think
it's
III.
I.
Don't
think
this
is
a
significant
concern
at
this
point,
but
definitely
something
for
discussion
in
terms
of
subjective
inputs.
B: This question refers to the fact that if Maker does not have sufficient economic capital or buffer on hand, then it may potentially need to increase the collateral requirements through the liquidation ratio, which in theory could make Maker less competitive relative to competitors. And yeah, that's it. A lot of these questions are obviously not for today's discussion, but we should try to make some forum threads out of the most important of these topics and then do some good governance through that.
B: All right, that being said, if anyone has a few quick questions, I'm happy to answer some things right now. In general, given that we're 90 minutes into this call, it might be a good idea to just cut the call off, and then we can mull these concepts over and continue the conversation offline, on the forums.