Description
The BlockScience team takes us through an overview of their history with blockchain research and, most recently, "Content Addressable Transformers".
https://github.com/BlockScience/cats
https://medium.com/block-science/the-cats-out-of-the-bag-introducing-content-addressable-transformers-7483e61e3844
A
All right, hello, everyone joining remotely. This is our sixth session, the afternoon, APAC-friendly session, and we're very fortunate to have the BlockScience team joining us. I'm very excited because I've been reading a lot about the post they published recently, titled "The CAT's out of the bag," about Content Addressable Transformers, and I think it's a lovely blueprint that, at the very least, every other compute-over-data project can learn a lot from. So we've got David, Kelsey, and Joshua from the BlockScience team, and I'll just go ahead and hand it over to Kelsey to give us a bit of an intro, and we'll let the rest of the team take it from there.
B
Thank you. So the guys are going to talk about Content Addressable Transformers and what we've actually been doing on that front. I get to give the lovely introduction to BlockScience, but before that, we'll just flick to the next slide to show where we're headed.
B
So what is BlockScience? I guess one thing that's unique about us is that we're a multi-disciplinary research and development firm, and we're specifically interested in complex systems. That means mapping the system, asking the hard questions, and addressing some of the hard problems before the day-to-day, straightforward kind of software engineering begins.
B
A lot of the research we do is primary research as a team, and that has led to some open-source software development as well, so people might be familiar with cadCAD, the computer-aided design software that BlockScience has built. That was really about solving some of our own problems as we embarked on working on complex systems, specifically those in the Web3 space. Next slide, and I'll hand over to David to speak about some of the things we've worked on and why compute over data.
C
These are some of the projects we've worked on. One engineer who unfortunately couldn't be here for this presentation is Danilo Bernardineli; he's in Berlin right now at a conference, and the time zones just aren't working across the four of us, but he was the lead engineer on most of these projects. Then, as Kelsey mentioned, we make a lot of use of cadCAD; BlockScience made heavy use of cadCAD in the work we've done with Filecoin. I'd also like to call out that Joshua is one of the main developers responsible for the cadCAD product. We'll make these slides available; the link is at the bottom.
C
This HackMD book is something Danilo put together to provide more information about these projects for anybody who is interested, and the links on these pages go into more depth on some of these topics.
C
Given who we are and our relationships, why do we care about compute over data? Kelsey is by training an ethnographer, and I think it's important to call out that these aren't just technical problems we're trying to solve; they're social problems as well. The systems we're developing are effectively algorithmic policy-making systems that we need to make sure work transparently and reproducibly, and CATs started out with that in mind. Transparency is a decent operational definition of trustability, or at least the beginnings of it, and reproducibility is a decent operational definition of verifiability, so that's a large part of what we want to bring into these algorithmic policy-making systems. Lastly, our observation was that it's kind of hard to do this in Web3, largely due to a lack of fundamental primitives, and I think that's an observation both CATs and compute over data share.
C
So with that introduction to BlockScience and that overview, we want to introduce CATs. I'll take it from a conceptual point of view and then pass it off to Josh, who will go through the instance of CATs that he has developed. Basically we're asking: what is a transformation? Let's start with the noun.
C
Here we're showing a transforming element; we call it a node. It's divided up into pieces: there's a structural and a functional component. These can be further subdivided into plant and infrastructure on the structure side, and process and infra-function on the function side.
C
The boundaries between these things are pretty fuzzy, and largely they're divided based on temporal characteristics: infrastructure and infra-function change slowly, while plant and process can change more quickly.
C
There's a little definition at the bottom of this slide that gives a good description of infra-function, which I found useful. Basically, it provides functions that are things like protecting, providing, supporting, connecting, and containing, that sort of thing. It comes from a book on more traditional institutions.
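As a rough illustration of that decomposition (the class and field names below are our own sketch for this write-up, not classes from the CATs codebase), a node can be thought of as two pairs of components split by how quickly they change:

```python
from dataclasses import dataclass

# Illustrative only: a minimal model of the "transforming element" (node)
# described above, with the structure/function split and the slow/fast split.

@dataclass
class Structure:
    infrastructure: str   # slow-changing substrate, e.g. a cluster definition
    plant: str            # faster-changing equipment, e.g. worker containers

@dataclass
class Function:
    infrafunction: str    # slow-changing supporting interface: protects, provides, connects, contains
    process: str          # faster-changing transformation logic, the job itself

@dataclass
class Node:
    structure: Structure
    function: Function
```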
C
So, given that's a transformer, what then is a CAT? If you take a transformer and add content addressability to it, you get some interesting properties. One of them is that in a decentralized system, location addressability doesn't really work. It works fine in centralized systems, but in decentralized systems the location is frequently not known, or even if it is known, it may not be accessible or convenient; there may be better places to get the stuff you're looking for. So, given content addressability, and a big call-out to IPFS here, you get this ability to name things in a content-based fashion very nicely.
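A minimal way to see what content-based naming buys you is to name data by a digest of its bytes. The sketch below just uses a SHA-256 hex digest; real IPFS CIDs are multihash/multicodec encodings computed by the IPFS client, so treat this purely as an illustration of the principle:

```python
import hashlib

def content_address(data: bytes) -> str:
    # Name the data by a digest of its bytes rather than by where it lives.
    # (Illustration only; IPFS CIDs use multihash/multicodec encoding.)
    return hashlib.sha256(data).hexdigest()

payload = b"observations,2022-09-01,42\n"
cid_like = content_address(payload)

# Any peer holding the same bytes produces the same name, so the name can be
# resolved from whichever location is most convenient, and a changed payload
# yields a different name, which is what makes provenance checkable.
assert content_address(payload) == cid_like
```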
C
The function and the structure are there, and what's added here is a component we call a factory, which basically takes an order. In that order there are links, through content IDs, to the structure as code (the code to build the structure this thing is going to run on), to the function as code (the code to build the functional elements of the CAT), and also to the content address of the input data. The factory can take all of that information, create a processor, execute the function using the input data, and make output data, at which point the factory spits out an invoice, which is really nothing but the order that requested all of this to be done plus the content address of the output. A nice property of all of this is that it has provenance built in: as you put transformers together, the orders and the invoices keep track of where things came from, how things were created, and where things go.
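A compact sketch of that order/factory/invoice flow might look like the following. The names and signatures are illustrative only, not the actual CATs APIs, and the fetch/run/store callables stand in for whatever content-addressed storage and execution backend is used:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Order:
    structure_cid: str   # content ID of the code that builds the structure (the runtime)
    function_cid: str    # content ID of the code that builds the function (the transformation)
    input_cid: str       # content ID of the input data

@dataclass
class Invoice:
    order: Order         # the original request, kept for provenance
    output_cid: str      # content ID of the produced output data

def factory(order: Order,
            fetch: Callable[[str], bytes],
            run: Callable[[bytes, bytes, bytes], bytes],
            store: Callable[[bytes], str]) -> Invoice:
    # Resolve everything the order names by content, execute, and content-address the result.
    structure = fetch(order.structure_cid)
    function = fetch(order.function_cid)
    input_data = fetch(order.input_cid)
    output_data = run(structure, function, input_data)
    return Invoice(order=order, output_cid=store(output_data))
```

Chaining CATs then amounts to feeding one invoice's output CID into the next order, which is where the built-in provenance comes from.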
C
I love quotes, so I'm always going to have a slide with quotes on it. Why are we doing this? One of the patrons of data-driven systems development, W. Edwards Deming, has a quote: data are not taken for museum purposes; they are taken as a basis for doing something.
C
That's sort of what we're doing: we're not collecting data to stick it in a database, and we're not collecting it to stick it in a data lake either. We're collecting it to make some product that's useful to people, and that idea was probably pushed furthest by Zhamak Dehghani.
C
Excuse me if I totally butchered her name; I've never actually heard it spoken. She developed the idea of a data mesh, and central to it is that the purpose of functions within a data mesh is to produce data products, and that domain-level responsibility for those data products is a key thing.
C
I find it compatible both with what I've read in the compute-over-data literature and with CATs, where it's certainly central. So if people aren't familiar with it, I would certainly recommend you take a look at some of her ideas; we've got a reference here. Given all that, how do CATs and compute over data fit together? How do we put all this together?
C
You need a data layer; the data has to be somewhere.
C
There are technologies around that provide that: for file storage, web3.storage provides it, and for streaming, Ceramic Network provides it. So we've got our data layer, and then an action layer: you need to be able to process the data, or it's just sitting there for museum purposes, and we don't want that.
C
I think compute over data is producing tools for that processing, for being able to process data in a decentralized system. And then we see CATs as working in the domain layer, providing the business logic that defines what the products coming out of these transformations need to be. So that's our breakdown, and a lot of what we want to get out of the interaction with the compute-over-data working group is validation.
C
So without further ado, that's the conceptual basis of what we're doing. Josh, do you want to take us through what you've done?
D
Okay, so what is a CAT and why are they useful? CATs is a unified data processing framework for a decentralized mesh network, built to empower collaboration on products, implemented with multi-cloud services that enable data provenance by content-addressing the means of data transport between services. That's a wordy explanation of the sentence below the title.
D
Right, all within this unified processing framework, the platform will empower collaboration on products across domains, between cross-functional and multi-disciplinary teams and organizations, by reducing the operational overhead of adding new data sources via decentralizing and distributing responsibility to those collaborating within bounded domains, to support continuous change and scalability. In other words, if I use buzzwords to describe this kind of value, it's a multi-disciplinary team working on an aspect of a product as a micro-product, deployed as micro software-as-a-service.
D
Okay, so how are they useful? They enable data-process verification and transport between services with IPFS CIDs, which are the content addresses. Those enable maintenance and reporting of data and process provenance, and data and process lineage as a result, because if you have data provenance, you get data lineage as a byproduct, which is a business-intelligence view of data processes on the mesh.
D
These record entries are the input and output necessary to re-execute a deterministic process in a pipeline of CATs. It's wordy, but I had to prove provenance this way, with the I/O itself, because the input and output are used to re-execute a CAT. That's a very important concept for the next slide.
D
The CIDs are generated by the CAT, by IPFS clients on the worker nodes of a software-as-a-service, in this case Spark, which will also transform the input data CID, that provenance record entry, to produce an output, which is a combination of another record entry and the result data. And, as you can see, the cloud service model is encapsulated in a CAT, for which a multi-disciplinary team collaborates on a single CAT.
D
This is an example of David's slide above, collapsed between the CAT processor and the node. The processor is software to be deployed on a single node; I thought that was important to mention. So you have structure, which is a user-specified computation or data transformation framework such as Spark, Dask, etc.
D
This structure exists such that a function is executed upon it, and the function is a data transformation API (I have an example on the next slide) or some arbitrary computational process performed by the software-as-a-service, in this case Spark. That computational process would be a transformation, in this case a Spark DataFrame transformation, and the API would be Spark SQL; that's the process. The CAT interface that isolates that transformation from the user experience is called the infra-function.
D
This is an example of that isolation in the user experience, where one is using Spark SQL to transform a data set abstracted by a Spark DataFrame. I decided to show a combination of ANSI SQL and the plain Spark API, just to show you that it's both. Okay, cue the next slide.
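The slide itself isn't reproduced here, but a transformation of the kind Josh describes, mixing ANSI SQL through spark.sql with the DataFrame API, looks roughly like the following sketch; the paths, table, and column names are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cat-example").getOrCreate()

# Input data set abstracted as a Spark DataFrame (read from CSV here for illustration).
df = spark.read.csv("s3a://example-bucket/input.csv", header=True, inferSchema=True)
df.createOrReplaceTempView("events")

# ANSI SQL part of the transformation...
aggregated = spark.sql("""
    SELECT user_id, COUNT(*) AS event_count
    FROM events
    GROUP BY user_id
""")

# ...combined with the DataFrame API.
result = aggregated.withColumn("is_active", F.col("event_count") > F.lit(10))

result.write.mode("overwrite").parquet("s3a://example-bucket/output/")
```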
D
All right, can I take over here? Can I share my screen here? Sure. Okay.
D
Okay, so I just want to explain the demo before I run it, just to tie it all together. I'm using kubectl to monitor execution. The control plane, the service, the secrets, and the identity and access management are in Terraform. Cluster role binding, for example, is handled under the hood by a PySpark API that I'm using to build a Docker image, which I've modified to include an IPFS client, and that image will be built from Docker to be the worker nodes of a Spark cluster.
D
Okay, here's an example. In this example there are two workers. Each of these Spark workers has an IPFS client that's being used as a storage hack, because we don't have web3.storage integrated yet, but there's a cache. So once the data is CID'd by the first CAT, that cluster is kept non-ephemeral, purely because we don't have storage yet, and the second CAT gets those CIDs of the data set and the other URIs necessary to process it.
D
I just needed to note that. And this is an example of that record entry I referred to earlier; we're calling it a bill of materials, which is a supply-chain term. So you have the input, the transformation, the invoice, and the data output, all CID'd. The data output is partitioned in the case of distributed processing, and all of the partitions are CID'd. So this is an example BOM, the provenance record entry that is retrievable and re-executable from the IPFS network.
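To make that concrete, a record of the shape Josh describes might look like the following. The field names and CID strings are invented placeholders for illustration, not the actual CATs BOM schema:

```python
# Hypothetical provenance record entry ("BOM"); all values are placeholders.
bom = {
    "input": {
        "cid": "bafy-example-input",            # CID of the input data set (or the previous CAT's output)
        "bom_cid": None,                         # first CAT in the pipeline, so no upstream input BOM
    },
    "transformation": {
        "function_cid": "bafy-example-function", # CID of the transformation code
        "structure_uri": "s3://example-bucket/infra.zip",  # infrastructure-as-code for the cluster
    },
    "invoice_cid": "bafy-example-invoice",
    "output": {
        # Distributed output is partitioned; every partition is CID'd individually.
        "partition_cids": ["bafy-example-part0", "bafy-example-part1"],
    },
}

# Because each entry is content-addressed, fetching these CIDs from IPFS is enough
# to retrieve and re-execute the step that produced them.
```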
D
So that's an input. The first CAT's purpose is to content-identify the data set, because it's not on the network and we don't have storage yet. This is an example of everything critical in the provenance record entry, or the BOM, necessary to execute a CAT; in this case it's creating the input for the second one. These are the same, so CAT 0 produces the input for the subsequent CAT. So this is the input for the subsequent CAT, and Spark transforms it. The IPFS clients CID everything necessary for re-executing, both the input and the output, and then you have an output BOM. So the execution of the second CAT produces an input BOM and an output BOM, everything necessary to re-execute the entire pipeline. On to the demo.
D
Maybe a side by side. Okay, there. In this example I've already executed CAT 0, because I want to show you that the BOM CID is the same; I just want that to be seen. Hold on, let me check the Spark cluster. So this is the CAT 0 Spark cluster; I made it non-ephemeral because we're hacking storage.
D
This is the invoice, which contains the content addresses of the data in S3. This is a data set of a single partition, so this partition is on IPFS.
D
There's no input BOM here, because this CAT is creating the input: the output of this CAT is the input for the subsequent CAT, so there's no input BOM CID. This log contains IPFS URIs, which change, and that's why they're not in the BOM; otherwise the CID would change. This is just the infrastructure as code.
D
Okay, right now the CAT 0 application and its dependencies are being packaged into a virtual environment; the dependencies are being installed into that virtual environment as well. So the virtual environment is created, the application is installed into it, and the virtual environment is zipped.
D
Okay, that's Terraform. Spark is like driving an elephant: it's annoying and it takes time. So, okay, hold on.
D
Hold on, I have to monitor resources. Okay. Now, PySpark has an interface for creating a container; I've taken the Dockerfile for it and modified it to include the IPFS client. This container is being built now, okay.
D
Okay, so CATs in this example is a proof of concept. The reason why the CAT is non-ephemeral is just the storage hack; CATs hasn't been deployed, right, it hasn't been deployed. The only reason it sticks around is because I don't have storage, I don't have content-addressed storage. So what you're describing is a CAT on a mesh network that has been deployed,
D
That is, one that requests are sent to, right.
D
Okay, that's a future vision we've considered. Okay, hold on, wait, I kind of want to answer this. That's a future vision! Okay, actually, you know what, let me just continue. I will remember this and we will continue this conversation. That was good.
D
In the next one, because the input and output are in the second CAT. Okay, yeah, this is an example of things changing: the IPFS client has a different URI, so I just removed it from the BOM and put it in S3. Okay, so let's see.
D
Because it's the underpinning of multiple kinds of data processing, like batch and streaming, and it can be used for IoT, which kind of lends itself to this node concept.
D
So the processor: I guess this is the node, right. The processor is the software that gets deployed there; it's an abstraction of the node, like a quantum of the node. Right now I have to say it's a VM, because it's difficult to put Docker inside Docker, but that's a story for another time.
A
Well, fair enough! Well, I want to say I really appreciate you guys taking us through a demo, because we're all in various stages of building, so we're all building together, and just seeing what you guys are putting together, seeing some of the raw inside pieces of it, is super helpful. So I very much appreciate that.
A
Just one last question for you, in terms of workloads. Separate from the compute-over-data working group, one of the other projects I work on is Bacalhau, and we're trying to serve deterministic and non-deterministic workloads, which is tough. Do you guys make any distinction, whether it's deterministic and it's always going to return the same output, or do you in this case primarily focus on non-deterministic workloads, where whatever the data says at the time, the result is going to be unique and different each run?
D
From a strategic implementation standpoint, in terms of time, I didn't care about whether or not it was deterministic or non-deterministic. That's up to the user, but there is a way to enforce that, which is on the roadmap.
D
Okay, yeah, and there is a way to enforce that. We know how to enforce it, and we would have to talk again about that.
D
So if something changes, I mean, that's up to the user. For the PoC, the example is deterministic, but there is a way to ensure that.
C
For the purposes of being able to track provenance, immutable data and idempotent processes are the goal that we'd like to achieve to the extent possible. What do you do with a process that has a random number generator in it? Well, you can get around that by passing the random number in as an input parameter, something along those lines. The question is to what extent you can make a process behave that way.
C
I prefer "idempotent" to "deterministic," I guess: same input, same output.
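A quick sketch of the trick mentioned above, passing the randomness in as an explicit input so a step becomes same-input, same-output (illustrative code, not taken from CATs):

```python
import random

def sample_transform(values: list[float], seed: int) -> list[float]:
    # The random number generator is seeded from an input parameter, so the
    # (values, seed) pair fully determines the output and can be recorded in
    # the provenance entry alongside the data CIDs.
    rng = random.Random(seed)
    return [v for v in values if rng.random() < 0.5]

run_a = sample_transform([1.0, 2.0, 3.0, 4.0], seed=42)
run_b = sample_transform([1.0, 2.0, 3.0, 4.0], seed=42)
assert run_a == run_b  # re-execution reproduces the same result
```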
D
That's how you do it, but yeah. Well, I need leverage to talk to you again, so.
D
Okay, if the process is non-deterministic, on the network that's like a fork, but that can't be managed, because, I don't know, there will be cases in which people want non-deterministic processes.
A
Especially if you look at a lot of blockchain processing, you know, popular Ethereum-type processing: it's deterministic, it's the same output every time you run the smart contract. But then in our world of broader use cases it's entirely open-ended, so we're always trying to figure out use cases, but we haven't found a strongly deterministic use case either. It's helpful to hear your perspective.
D
I think we use too many resources on a local machine, but the idea is that Kubernetes was used because it's just the backbone of services deployed in, you know, a cloud service provider; I wanted to go as bare-metal as possible.
D
If I continue down this route, I will get rid of S3; I'm tired of it. I don't like Spark. Okay, hold on, I mean: do we have any time constraints?
A
We probably should. I'll go ahead and wrap up the recording here in just a second, but we can keep going ourselves as well. So I'll just go ahead and hit pause right now and we'll be good, and then...