Description
Dealing with block limits - presented by @aschmahmann at IPFS þing 2022 - Data and IPFS: Models - https://2022.ipfs-thing.io
A
This talk is about block limits: why they exist and how we might be able to deal with them. All right, so what's the block limit? Before we get into what the block limit is, let's go back to the top: what is a block? When we say block, by which I mean an IPLD block, it is a bunch of bytes that you can refer to by a CID, or by the multihash component of it.
B
Does it support transferring blocks less than or equal to 2 MiB?

A
Good question. Yes.
B

A
We can talk about that if there's time afterwards. But concretely, this means that if you take the SHA-256 of, say, a 10-megabyte tar file, or you go on GitHub and see a cool repo that has published its checksum, you cannot just go and create a CID out of it by putting the magic incantation in front, for those of you not fluent in multiformats.
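To make that concrete, here is a minimal sketch of the "magic incantation," using only the Python standard library: the standard multiformats prefix bytes (0x12 for sha2-256, 0x20 for a 32-byte digest, 0x01 for CIDv1, 0x55 for the raw codec) are put in front of an existing SHA-256 digest and the result is base32-multibase encoded. Treat it as an illustration of the encoding, not a recommended tool.

```python
import base64
import hashlib

def raw_sha256_cid(digest: bytes) -> str:
    """Wrap an existing SHA-256 digest in a CIDv1 (raw codec, base32 multibase)."""
    assert len(digest) == 32
    multihash = bytes([0x12, 0x20]) + digest      # sha2-256 code + 32-byte length
    cid_bytes = bytes([0x01, 0x55]) + multihash   # CIDv1 + 'raw' codec
    b32 = base64.b32encode(cid_bytes).decode().lower().rstrip("=")
    return "b" + b32                              # 'b' = base32 multibase prefix

# A published checksum for a 10 MB tarball turns into a syntactically valid CID,
# but nodes enforcing the 2 MiB block limit still won't move the bytes behind it.
digest = hashlib.sha256(b"pretend this is a 10 MB tar file").digest()
print(raw_sha256_cid(digest))
```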
A
And even with that, you will not be able to load the data, because even though it's content addressed, you run into this block limit problem. So why does the limit exist? It sucks, but why would we have it? The short version is that it reduces the risk of DoS attacks in peer-to-peer networks. We get this cool thing out of content addressing, which is that its self-certifiable nature allows us to not really care who we get the data from. It doesn't have to be a trusted entity. But now we're interacting with these untrusted entities, and we need to put limits on how we deal with them. One of those limits is requiring incremental verifiability. In particular, say I have the hash of a 100-gigabyte block, the checksum of a 100-gigabyte ISO, and I download it from, you know, from Eric, and then it turns out it's the wrong thing. I have now used up my whole data allowance.
A
My whole data limit from my ISP is gone, I've paid all the overage charges, and I still don't have my data. That's very sad. So we use verifiability to make sure I am getting good data, the data that I want. Block limits have been sort of a thorn in the IPFS ecosystem for a while, and that has led a bunch of people to come up with other reasons why block limits exist. So we're going to go through those other reasons people think block limits exist. "With smaller blocks, I can have better parallelized downloads." That's true, but why would I enforce that at the protocol layer instead of letting users choose, right? It feels like it would be really weird, for the ethos of people who say "we want to allow ourselves to make mistakes," to hard-code in a limit that says "this will be good for your performance."
A
Also,
you
can't
really.
You
still
have
this
problem
with
downloading
duplicate
data
like
you.
Can
you
can
still
work
with
it?
There
are
multiple
possibilities
for
deduplication
with
smaller
blocks,
compared
to
larger
ones.
Again.
Why
would
you
enforce
that
right?
That
seems
like
something
a
user
would
be
able
to
choose.
So,
even
if
this
is
true
like
you
wouldn't
enforce
it
at
the
network
layer
and
similarly,
if
I
need
to
download
a
sub
component,
I
have
I
want
to
download
the
byte
range.
You
know
one
megabyte
through
two
megabytes
of
my
100
gigabyte.
A
A
So
I
mentioned
this
a
little
bit
but
like.
Why
is
having
a
block
limit
like
real,
sad
there's
all
this
like
all
these
hashes
that
already
exist
and
more
that
can
and
will
exist.
You
can
find
like
ubuntu
isos
that
have
shot
256
checksums.
I
can't
go
use
those
all
the
package
managers
that
have
hashes
to
like
lock
your
files
in
there
they're
also
using
hashes
some
of
these
things,
some
of
the
objects
they
reference
are
bigger
than
two
megs.
A
Then
you
lose
and
now
I
you
can't
be
like
it
transfers
all
go
packages
or
all
npm
packages.
You
can
be
like
it
transfers,
I
hope
most
of
them
and
that's
like
very
difficult
to
build
systems.
Out
of
there
are
other
content
address
structures
that
chose
bigger
ones.
Git
and
bittorrent
git
chose
nothing,
but
then
the
get
torn
ecosystem
imposed
like
a
the
get
ecosystem
imposed
like
effectively
100
megabytes
because
of
how
github
operates
and
bittorrent
allows
for
bigger
configurable
ones.
A
Some
people
make
new
structures
and
they
have
reasons
for
making
their
block
limits
bigger.
Our
weave
does
this.
If
you
heard
michael's
talk
yesterday,
he
referred
to
taking
large,
you
know
large
objects
and
chucking
them
in
s3
and
then
using
the
sha-256
that
comes
with
s3
urls,
and
so
this
is
a
new
thing
that
is
being
built
like
very
recently,
and
it
too
is
doing
this.
A
So it'd be nice if we could be compatible, but we can't. There are other reasons people don't like block limits, which again are sort of not the real thing; I feel like the above is the thing. "If there were fewer blocks in the universe, there would be fewer CIDs, which would ease pressure on the DHT." Or you could just advertise fewer nodes; no one is making you advertise everything. I'm sorry if the Kubo APIs kind of make it look that way, but it's not a protocol thing.

A
You don't have to do it. "Fewer round trips," as in no round trips through Bitswap because I don't have to walk multiple layers, which I guess is true, but you could instead use a protocol that lets you give it a root and then get the whole graph, instead of Bitswap. And also your time to first byte probably goes up, because you have to download the entire thing and verify it before you look at any of the bytes. And similarly, this is a good one.
A
How would block limits go away? So, a tree hash is basically a hash function that is constructed as a tree: you hash the bottom pieces and you combine those hashes into bigger and bigger ones, and there are a few constructions for how you do this. For those of you who are familiar with, you know, BitTorrent files, or with how UnixFS's balanced chunker works, it's basically the same deal, maybe a little fancier. Some examples of this include BLAKE3 and KangarooTwelve; there's a whole bunch of them. Unfortunately, the ones that you see the most these days, the ones that were blessed as SHA version X, none of those are tree hashes; they use the Merkle-Damgård construction, which we will get to.
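To show the shape of the idea, here is a toy tree hash built from SHA-256 in Python. Real designs like BLAKE3 and KangarooTwelve use their own compression functions, chunk sizes, and domain separation between leaves and interior nodes, so this is only a sketch of the structure, not a drop-in for any of them.

```python
import hashlib

CHUNK = 1024  # leaf size; real tree hashes pick their own (BLAKE3 uses 1 KiB chunks)

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tree_hash(data: bytes) -> bytes:
    """Hash the leaves, then repeatedly hash pairs of nodes until one root remains."""
    level = [h(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)] or [h(b"")]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i]
                 for i in range(0, len(level), 2)]
    return level[0]

# Any leaf plus its sibling hashes up to the root can be checked on its own,
# which is what makes piece-by-piece verification of a large object possible.
# (A real design also domain-separates leaf and interior hashes.)
print(tree_hash(b"x" * 5000).hex())
```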
C

A
With large hashes, okay. So this is the Merkle-Damgård construction. This is how SHA-2 and SHA-1 work internally: there is some compression function f and some initialization vector that is, you know, determined by the lords of the universe, and you take your blocks, you pad them to be the right size, and you basically just go: IV plus block gives an output, that output plus the next block gives the next output, and so on until you get to the end.

A
So here's a proposal: what if we try to break this up, and say I'm going to ask for the last chunk of data, C9, and the internal state at that point, IV9, and then I will compute forward from there and see if it matches the final hash.
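Here is a toy sketch of both pieces: a Merkle-Damgård-style chain, and the proposed check that the final chunk C9 plus a claimed internal state IV9 reproduce the published hash. The compression function below is just SHA-256 over (state || block), standing in for SHA-256's real internals (which hashlib does not expose), and the padding is simplified, so the values are illustrative only.

```python
import hashlib

BLOCK = 64                 # SHA-256 processes 64-byte message blocks
IV0 = b"\x00" * 32         # stand-in for the spec-defined initialization vector

def compress(state: bytes, block: bytes) -> bytes:
    """Toy compression function f(state, block); not the real SHA-256 internals."""
    return hashlib.sha256(state + block).digest()

def pad(data: bytes) -> bytes:
    # Simplified padding; real SHA-256 also appends the message length.
    return data + b"\x80" + b"\x00" * (-(len(data) + 1) % BLOCK)

def md_hash(data: bytes) -> bytes:
    """Merkle-Damgard: fold compress() over the padded blocks, left to right."""
    state = IV0
    padded = pad(data)
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state

def verify_tail(claimed_iv9: bytes, c9: bytes, published: bytes) -> bool:
    """The proposal: given the last (already padded) chunk and the chaining value
    just before it, recompute only the tail and compare with the published hash."""
    state = claimed_iv9
    for i in range(0, len(c9), BLOCK):
        state = compress(state, c9[i:i + BLOCK])
    return state == published

# A peer hands us the final block and the chaining value in front of it:
data = b"A" * 1000
padded, full = pad(data), md_hash(data)
iv9 = IV0
for i in range(0, len(padded) - BLOCK, BLOCK):
    iv9 = compress(iv9, padded[i:i + BLOCK])
print(verify_tail(iv9, padded[-BLOCK:], full))   # True
```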
A
So, is this safe? I sort of just messed around with some hash function internals and went "yeah, sounds good, I think so." So, some terminology reminders for hash functions. There is this thing called second-preimage resistance. We really want this; if you don't have second-preimage resistance, everything is very, very bad. It says that, given a certain message, I cannot find a second message such that the hashes of the two messages are the same.

A
The prototypical bad thing here is: there is an Ubuntu ISO that exists somewhere with some CID, and I can then create one that has malware in it and the same CID. That's disaster town. Then there's collision resistance, which is bad to lose, but less so: if you don't have collision resistance, I can find some m1 and m2 where hash(m1) equals hash(m2), and I get to make up whatever m1 and m2 are.
A
So
the
standard
example
of
this
is
like
I,
I
give
you
a
message
which
is
like
I,
you
know
I
give
it
in
10
bucks
and
I
get
you
to
sign
it,
and
then
I
make
another
message.
That's
like
I
give.
I
give
a
dean,
a
million
dollars
that
has
the
same
hash
and
then,
when
I
go
and
present
it
to
the
bank,
I
give
you
like
the
giveaway
in
a
million
dollars,
one
right,
that's
the
prototypical
example
of
collision
resistance,
freestyle
collision
resistance
is
basically
a
subset
of
this.
A
You
go
like
a
little
deeper.
It
says
inside
of
like
our
our
little
hash
function
over
there
I
can
choose.
If
I
can
like
make
up
one
of
the
ivs,
I
still
cannot
get
up
the
same
result.
So,
instead
of
just
saying
I
can't
get
the
same,
you
know
different
inputs
same
output,
it's
even
if
I
get
to
like
stop
in
the
middle
of
the
of
the
hash
computation
if
yeah
sort
of
iv1
block
one
does
not
give
me
cannot
give
me
the
same
thing
as
iv.
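Stated a little more formally (this is just the standard textbook phrasing of the three notions, with $H$ the full hash, $f$ the compression function, and $\mathrm{IV}$ the fixed initialization vector):

```latex
\begin{itemize}
  \item Second-preimage resistance: given $m_1$, it is infeasible to find
        $m_2 \neq m_1$ with $H(m_1) = H(m_2)$.
  \item Collision resistance: it is infeasible to find any pair $m_1 \neq m_2$
        with $H(m_1) = H(m_2)$.
  \item Free-start collision resistance: it is infeasible to find
        $(\mathrm{IV}_1, m_1) \neq (\mathrm{IV}_2, m_2)$ such that
        $f(\mathrm{IV}_1, m_1) = f(\mathrm{IV}_2, m_2)$, even though the attacker
        may choose the chaining values instead of using the fixed $\mathrm{IV}$.
\end{itemize}
```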
B

A
Losing free-start collision resistance means that the compression function is broken. That doesn't mean collision resistance is gone; it doesn't mean you've lost the prior properties, which are the more important ones. But the proofs that everything works start falling apart, and now you're relying on "I don't think anyone's found an exploit explicitly yet." And so when the free-start collisions break, everyone starts saying "we should find a way out of this hash function."

A
From what I can tell, and there are some references on this, SHA-256 is not subject to free-start collisions, which is the property we need in order to make this slice-it-up-in-the-middle thing work. Because here is what happens if you don't have free-start collision resistance.

A
I can give you a bogus C9 that still gives you the expected result, and then a bogus C8, and I can sort of keep chasing you back towards infinity, and you'll never reach the, you know, the god-given IV from the universe, because reaching it would imply that you broke collision resistance. But I could keep sending you backwards indefinitely, and that would be bad news for the whole DoS-protection business.
A
SHA-1, oh, SHA-1: its compression function is broken. You may have seen the papers; "SHAttered," "SHA-1 is a Shambles," and so on are the names of those papers. Unfortunately, some of the popular users of SHA-1 have not upgraded their hash functions. Our good friends in Git land have proposed using SHA-256, maybe, sometime, eventually, with no particular date in mind. BitTorrent has BitTorrent v2, which uses SHA-256, and adoption is slowly increasing, better than it used to be, but BitTorrent v1 is still very prevalent.

A
Maybe some of those users are among you. Let me see how much time I have. Yeah, okay, I have a little bit of time, which means I get to talk about this a little more.
A
So
something
sad
with
all
of
this
business
is
that
you
have
to
go
one
block
at
a
time
all
the
way
back,
which
is
very
sad
because
you
can't
you
sort
of
you
know
in
like
a
bitsoft
world,
I
can't
paralyze
requests.
I
have
to
sort
of
walk
back
linearly.
It's
like
that
case
bit
swap
case
right.
I
have
a
deep
linear
graph.
A
I
could
use
something
like
like
graph
sync
and
fetch
it
all
this
way.
That
would
be
better.
I
can't
parallelize
it
so
now
I
have
my
100
gigabyte
iso
and
I
have
to
download
it
from
exactly
one
pierre,
and
that
is
real
sad
and
what
happens
when
I
want
to
resume
and
all
of
this
business.
A
So
it
looks
a
lot
familiar
to.
It
looks
very
familiar
to
a
problem
that
we
already
know
about
and
kind
of
care
about,
which
is
like
people
build
deep,
linear
graphs,
and
we
want
to
download
those
things
fast
because
they
look
like
they
look
like
you
know
the
backbones
of
blockchains
or
they
look
like
you
know,
git,
repos
and
master.
That
goes
all
the
way
back
to
the
beginning,
right.
A
So
yeah,
so
this
sort
of
devolves
into
the
same
problem
as
the
deeply
linear
graph.
One
thing
that
I
guess
is
an
optimization
that
I
should
note
is
like
these
chunks
here
are
like
64,
bytes
or
something
but
like
you
can
take
groups
of
64,
bytes
and
just
group
them
all
together,
because
you're
just
computing
the
function
going
forward,
and
so
you
can
set
those
to
your
limit
like
two
megabytes
and
and
you're
sort
of
off
to
the
races
there.
A
So
we'll
probably
talk
more
about
this
tomorrow,
but
tldr
there
is
this
thing:
I've
been
calling
up,
mana
fetch
where
the
idea
is.
Basically
you
ask
someone
for
a
manifest
of
the
cids
or
the
things
that
you
will
need
in
order
to
download
the
data
and
like
some
metadata
associated
with
it.
A
So
in
our
case
in
the
sha-2
land,
it's
give
me
like
cids
of
like
the
two
megabyte
chunk
boundaries,
or
something
and
of
the
two
megabyte
chunk
boundaries,
and
also
the
like
ivs
in
the
middle,
and
I
still
have
to
linearly
download
it
right.
But
I
can
change
my
model
a
little
bit.
I
can
relax.
If
I
choose
to
relax,
my
model
from
alice
cannot
send
me
more
than
two
megabytes
of
garbage
without
me.
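A rough sketch of what such a manifest could contain, reusing the toy compression-function stand-in from before: record the chaining value at every 2 MiB boundary while hashing, and then any single chunk can be checked against its pair of chaining values on its own. The field names and layout are made up for illustration; this is not a real manifest format.

```python
import hashlib

BLOCK = 64                     # toy 64-byte message blocks, as before
CHUNK = 2 * 1024 * 1024        # 2 MiB groups, matching the block limit

def compress(state: bytes, block: bytes) -> bytes:
    # Stand-in for SHA-256's real compression function (hashlib hides the internals).
    return hashlib.sha256(state + block).digest()

def chain(state: bytes, chunk: bytes) -> bytes:
    for i in range(0, len(chunk), BLOCK):
        state = compress(state, chunk[i:i + BLOCK])
    return state

def build_manifest(padded: bytes, iv0: bytes) -> list[dict]:
    """What the provider publishes: the chaining value at every chunk boundary.
    `padded` is the already-padded message (a multiple of BLOCK bytes)."""
    manifest, state = [], iv0
    for off in range(0, len(padded), CHUNK):
        chunk = padded[off:off + CHUNK]
        entry = {"offset": off, "iv_before": state}     # illustrative field names
        state = chain(state, chunk)
        entry["iv_after"] = state
        manifest.append(entry)
    return manifest                                     # last iv_after == full hash

def verify_chunk(entry: dict, chunk: bytes) -> bool:
    """A single chunk checks on its own: a lying peer wastes at most one chunk."""
    return chain(entry["iv_before"], chunk) == entry["iv_after"]
```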
A
I can get a bunch of performance out of this. I get my big manifest, and then I start asking for blocks. I get the first block, and now I trust Alice a little bit more; I get the next block, and I trust her more, and I'm like, oh well, now I can send out two requests in parallel, and four in parallel, and eight in parallel to other nodes in the network, because I'm building trust in Alice and the manifest that she has sent me. And so, again, I still get to be protected from abuse.
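A sketch of that escalating-trust policy: fetch chunks in batches that double in size as long as everything keeps verifying against the manifest. The helpers fetch_chunk and verify_chunk are placeholders supplied by the caller, and the growth policy is exactly the kind of client-side knob being described, not a fixed protocol rule.

```python
def fetch_with_growing_trust(manifest, fetch_chunk, verify_chunk, max_parallel=64):
    """Fetch chunks in growing batches (1, 2, 4, ...) while every batch verifies.

    `manifest` is a list of per-chunk entries, `fetch_chunk(entry)` returns the
    chunk bytes (possibly from different peers), and `verify_chunk(entry, data)`
    checks a chunk against the manifest; all three are assumed, not a real API.
    """
    out, i, window = [], 0, 1
    while i < len(manifest):
        batch = manifest[i:i + window]
        datas = [fetch_chunk(e) for e in batch]          # could be issued concurrently
        for entry, data in zip(batch, datas):
            if not verify_chunk(entry, data):
                raise ValueError(f"bad chunk at offset {entry['offset']}")
        out.extend(datas)
        i += len(batch)
        window = min(window * 2, max_parallel)           # trust grows, so does parallelism
    return b"".join(out)
```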
A
I've
relaxed
the
conditions
a
little
bit,
but
this
allows
me
to
speed
things
up
and
what's
pretty
cool
here
is
that
this
is
the
amount
of
trust
you
need
like
the
x
percent.
Here
is
a
client-side
choice.
So,
if
I'm
like,
I
cannot
afford
duplicate
data.
Like
my
my
isp
is
like
you
know,
eighty
dollars,
eighty
dollars
a
byte
just
I'll,
take
the
latency
head.
I
can
do
that
and
if
I'm
like
hey,
I
found
a
free
aws
account
with
like
infinite
resources.
Then
you're
like.
A
It's
fine!
You
can
do
that
too,
and
that's
kind
of
like
the
idea
of
this
of
this
proposal.
A
Friends don't let friends use Merkle-Damgård hashes as checksums for large objects. Just use a Merkle tree. If you hate BLAKE3 and you hate KangarooTwelve, then make a new one that uses the same Merkle tree constructions but with whatever security parameters you want. But please stop doing this. It works, and we can do the manifest thing, but, god, your life is easier if it's already a Merkle tree.
A
SHA-1, though: SHA-1 is kind of broken, which is really sad, because there are big content-addressed, content-addressable systems that use it, systems we would like to be compatible with, and doing this trick for SHA-1 might not be safe.

A
I guess something to note here is that it's worth thinking a little bit about the ramifications of being wrong, right? Like, what if we did this for SHA-1 and it turns out, nope, not safe: what happens in our ecosystem? It seems like it's probably something like this: people either have to build defenses against some of these DoS mechanisms, and other sorts of reputation systems to avoid them, or they just stop using these SHA-1 hashes. But that's hard, because people might have started building applications that rely on its existence, and then they have to make this choice between a big tech swap underneath, or just saying "yeah, the security stuff's probably fine," which it almost always is, until someone wants to cause you a problem, right? So those are, you know, ramifications to think about of being wrong. And then, downloading a large block backwards has the same properties as downloading linear DAGs, which is something we also care about anyhow, and we have options there, which is good. Yeah.
A

B
I will admit a decent amount of ignorance around exactly how broken it is. SHA-1 is broken, but what are the consequences of applying this method? If they really want to get it, or maybe it's something like: turn it up to 20 and then use the SHA-1 thing? Yeah.
A
Yeah,
I
think
that's
so
for
for
those
in
the
crowd.
I'm
sorry,
there's
no
thing
here
for
you
to
hear
directly.
Hannah's
question
was
like
kind
of.
Can
we
use
the
shot?
One
thing:
does
it
matter
like?
What
does
it
matter
like?
How
does
it
impact
us
if
it's
a
little
broken?
Are
there
client-side
parameters
that
allow
well
our
users
choice
here
like
effectively
them
setting
their
own
block
limits
to
like
100
megabytes?
A
So
yes
and
no
okay?
So
let's
start
with
the
last
part.
A
We
have
this
problem
anyway,
with
various
things
with
codecs
and
yes,
we
have
some
of
these
problems
anyway,
we'll
likely
talk
about
them
both
today
and
the
people
who
do
the
webassembly
stuff
later
this
week.
We'll
probably
want
to
talk
about
that
too,
but
yeah,
that's
one
of
the
difficulties
with
the
block
limits.
Now,
if
it's
broken
right,
so
I
was
trying
to
explain
this
a
little
bit,
but
if
it's,
if
it's
broken,
I
think
what
happens
is
like
the
way.
A
A
B
A
Yeah
right
so
all
right,
so
there's
like
there
are
some
things
there,
or
maybe
we
say:
okay,
yeah
you're
downloading
backwards,
and
then
we
have
some
some
limits.
It's
it's
tricky
because
again
you
want
to
have
things
that
are
user,
I
don't
say
tunable
or
or
you
don't
want
to
necessarily
have
things
that
are
user
tunable
when
they're
going
to
break
across
people,
as
opposed
to
like
the
user
tunable
part
of
the
manifest
business
which
is
just
like.
Is
it
slower
or
faster,
as
opposed
to
like?
Is
it
a
yes
or
no?
C

D
Whatever protocol you use, if you're at the beginning you should transmit the size of the thing. So if you're claiming you have a hundred-megabyte chunk, you know, that's fine, maybe it's a fake one, but I will download 100 megabytes and I will not download more, because that's what you told me. It's funny, even this...

D
This thing goes to a CID, and it's not going to stop there, independent of the hash function, and then at the end I will hash the whole thing, because now I have all of it. So I can definitely do that, and I would also always detect that you screwed me.
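A small sketch of the defense being described: the sender declares the block's size up front, the client reads at most that many bytes (itself capped by whatever limit the client is willing to tolerate), and only hashes at the end, so a lying sender wastes at most the declared amount of bandwidth. The reader interface and limits here are assumptions for illustration, not any particular library's API.

```python
import hashlib

def fetch_declared_block(reader, declared_size: int, expected_digest: bytes,
                         size_limit: int = 100 * 1024 * 1024) -> bytes:
    """Read at most `declared_size` bytes, then verify the hash once at the end."""
    if declared_size > size_limit:
        raise ValueError("peer declared a block bigger than we are willing to fetch")
    data = reader.read(declared_size)            # never read past what was declared
    if len(data) != declared_size:
        raise ValueError("peer sent fewer bytes than it declared")
    if hashlib.sha256(data).digest() != expected_digest:
        raise ValueError("hash mismatch: the declared block was garbage")
    return data
```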
A
Yeah,
well,
you
will
always
detect
that
you
screwed
me
if
you're,
using
the
modified
sha-1
thing
that
git
uses.
Yes,
so
yeah
and
I
think
that's
possible.
So
I
don't.
I
don't
know
enough
about
the
internals
of
the
sha-1
thing
that
git
is
using
like
it
may
be
that
that's
sufficient
to
help
us
out
here,
I'm
not
sure
more
more
eyes
and
people
familiar
required.
A
A
It's
just
it's!
You
know
it's
one
of
these
things
that,
like
part
of
the
part
of
the
game
that
we
play
when
we're
building
with
the
abstractions
in
general,
is
trying
to
find
that
balance
between
when
we
have
to
when
it
is
beneficial
for
us
to
be
able
to
go
off
and
do
our
own
things
for
a
while
to
explore
the
space
and
when
it's
important,
that
we
share
share
things
as
in
order
to
prevent
users
from
being
like.
A
I
understand
I
tried
it
in
this
one
and
then
that
one,
it
doesn't-
and
I
don't
know
what's
happening
right-
and
this
is
this-
is
a
tough
one.
A
That's
always
hoping
to
avoid
it
with
with
shot
two
and
shot
three,
and
maybe
we
can
do
shawwan
we'll
see.
C

D

A
The speeding-up thing with the exponential growth is only required if you want to go faster, but you still need to do this: instead of "I have a CID, give me the block," you need to do "I have a manifest, let me walk through it." There's probably a bunch of work in places like, you know, go-bitswap, go-merkledag, etc., the things that do the plumbing.

A
For that, there is likely a bunch of work to be done to enable this. The good news is that we sort of want to do these things anyway, because of being able to operate with different protocols; there are all sorts of ways to download that you may want to interleave with each other. A fun example, or an example I think is fun at least, is: if you have a CARv2, you have a CAR file with an index in front.

A
You can basically query that CAR file as if it were Bitswap, because you can go to the index and use it as a "do you have the block": you look through the index to see if it has the block, and then you say "get block" and go to wherever the index said to get the block from, and you get the block. And so the point is to have basically a little more plumbing and more configurability in the data-fetching process.
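A sketch of that "query a CAR file as if it were Bitswap" idea: a CARv2-style index maps CIDs to byte positions, so "do you have the block" becomes an index lookup and "get block" becomes a seek and a read. Parsing the actual CARv2 format is omitted; the index dict and method names here are stand-ins, not the real go-car API or any library's interface.

```python
class IndexedCarBlockstore:
    """Answer has/get queries against a CAR file via a CID -> (offset, length) index."""

    def __init__(self, car_file, index: dict[str, tuple[int, int]]):
        # `index` stands in for a parsed CARv2 index; building it is out of scope here.
        self.car_file = car_file      # any seekable binary file object
        self.index = index

    def has(self, cid: str) -> bool:
        # "Do you have the block?" is just an index lookup, no scanning.
        return cid in self.index

    def get(self, cid: str) -> bytes:
        # "Get block" seeks to wherever the index says the block's bytes live.
        offset, length = self.index[cid]
        self.car_file.seek(offset)
        return self.car_file.read(length)
```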
A
You get more options here, and some other folks have taken sprints at this that we can likely learn from: what went well, what didn't go well, if you want to try doing this. And we have people doing this, maybe in multiple languages, which will take their own approaches to how they put it together, because the architectures will look a little different. But at the protocol level it's really simple.

A
I have a version of this that I prototyped last January by hacking together a fork of Bitswap, and it does the job; the question is just how you plumb it into the larger architecture pieces. But I think that's good work you want to do anyway.