Like you're used to, it's telling us that if we run Firefly we're going to get interoperability with this new open digital asset protocol, which is exactly what we want. So, are we streaming? We are streaming. What is this "fuzzy firefly", yeah?
Oh? Why were we calling it fuzzy firefly? That was to remind me of something. We're calling it fuzzy firefly because what we're doing is we're looking at the... we're looking at... Actually, we should show this to Twitter, so we're gonna.
Look at Firefly as if it's the corpus of data: the blockchain on Firefly is providing the corpus of data to our fuzzer, and our fuzzer is Alice. She is the brain of our fuzzer. We are building a fuzzer, but we're also building a reverse fuzzer, because we're going to be building code, or Alice is going to be building code, and so she'll interact via the... we're going to put... we're going to put...
The... oh, by the way, before I turn this on, somebody tell me if this is a bad idea to turn it on like this, because if it turns out that that's bad, then this will be the last thing you see of me. All right, okay. So basically it looks like, and I mean, this draft of this thing... this thing looks good.
This looks like something that we care about, this ODAP, and here it is. And look, they have an RFC, which is, remember, the scope that we decided is applicable for this problem space, right? So that tells us we must be in the right place, right. So somebody has something which expires in a few days, so that means they'll probably be updating this document.
Open asset... open digital asset protocol, okay: it operates between two gateway devices, so this is exactly, yeah, okay. So basically they're extending BGP to be a construct for the blockchains, for the DLT, so the various resources that are outside of a distributed ledger technology system and are now part of the operations of the DLT system. So this is us. This is where we fit in.
So let's understand more. So we want to be part of that on-ramp, off-ramp. And so the deal: this Open Digital Asset Protocol is the file format that we're going to speak, and the protocol is just like our DHCP spec, right? This is that level of scope. This is what we're looking for. So we've confirmed that Firefly seems to be the state-of-the-art solution.
We've confirmed this Open DAP, okay, we just saw; we're in the process of confirming that Firefly is the correct state-of-the-art solution to be building off of: the edge of the train of thought, the bleeding edge, the state of the art. It's the farthest.
What did we call it? Yeah. So the state of the art is this: you know, all your feature branches, they'd be tested against each other, right? So basically, you know, this is the main branch, right? Obviously this is the project, right? But if you're looking at your feature branches as your projects within the theoretical space, then this is effectively that state of the art: feature branches A/B tested against each other, this type of thing. So we're going to map it into that concept.
When we're looking at... we're going to map between those concepts of forks and feature branches and A/B testing: the feature branches and the ecosystem as a whole. That A/B testing of the feature branches is a microcosm of the open source ecosystem, and we're gonna draw a lot of parallels there in the way that we're gonna make Alice work with the code, and across code bases specifically, because that's a critical piece there. So what do we got here?
It looks like they've got a content type, they've got their own content type, and they've got their own URL, and it looks like they have this company, maybe called Quant; sounds like there's something called Quant going on here. That looks like an instance of something. So, EU supply chain, okay, so we're talking supply chains here. So that's where we want to be. So let's see: "specific to the gateway behind which the target DLT operates; this field is local to the DLT gateway and used to direct ODAP interactions to the correct underlying DLT". Okay.
So basically this is saying that these gateways are, you know, where we sit. So DFFML needs to do that thing. So we want to implement an ODAP gateway for the Kubernetes API server. That's part of our goal, and if we can do that, that means that we can then transform.
Then we want to take the data that's stored on chain and, as part of our gateway, execute some of that data, I mean, depending on our context, right? And when we're at the KCP gateway, we're going to be converting it into the Kubernetes API job constructs for these CRDs that we're going to create that are basically these jobs.
So, within KCP, which is the thing that's going to abstract the CRDs and the API server away from the container interface, because we don't necessarily care about containers, right? We care more about the interface, right? We're leveraging blockchain, we're leveraging ODAP as the standard. So what do we have, right? We have three things, remember: data, compute, ML. And so DFFML is going to provide the ML to us. Data is going to be transferred using ODAP.
Compute is going to be orchestrated using KCP, right? So we're going to create our gateway, a DLT gateway, which translates our Firefly ledger into our KCP API.
API server calls. And then obviously we're gonna run machine learning on the... and the reason for the "fuzzy", right, is because we're gonna run ML on all of the data that goes in and out of these executions over ODAP, and then, you know, have Alice change what executions happen based on that data, that telemetry. So, discovery of digital assets, interior resources, so resource discovery. So we care about DLT gateways, all right. So where do we implement a gateway?
"Unlock true value", okay, yeah, so once again they're doing something similar to us. All right, so we can leverage the fact that they have already done the spec, right? So now, so we talked about the RFC for the open architecture, right? So the data portion is covered in that, right? So the open... application data. So now, I think, so far, that portion is covered. So basically all our input objects... so for the Open Architecture RFC: data, compute, ML, okay, which is...
Okay, so let's recap: so data, so inputs.
So it's a good thing we searched Twitter, because searching Twitter for Hyperledger Firefly, it wasn't able to tag it, it doesn't have an account, but the last tweet before us, and it's linked in the thread, is this guy who was talking about this open digital asset protocol. So we are, I mean, we are real lucky right now. We are riding on...
We're gonna take this luck and we're gonna ride it as far as it goes. And so it looks like... my guess is they're probably going to publish something new, because it expires. So maybe... oh no, the request for comment expires, so we have until the 11th if we want to comment on this. So they put out their RFC, they say hello world, this is what we're up to, and then we can comment until May 11th. So we are within the range.
So let's go test this stuff out and see how it works, and then we might provide them with feedback, right, and then they can provide us with feedback. Maybe, let's see, we'll see if that's applicable; it may be not applicable.
Right, probably not in this case; we're just building off them, so those response codes will probably... But I'm describing the general process, because, you know, if you encounter something else... obviously, unless you're going to go out and repeat the same thing for a different sub-area here that we're looking at, then we're not going to run into this problem. So, but this is just, you know, for all.
Communicate with other nodes through, like, the models that are built over the strategic plans, to sort of... these are like subtleties to communicate, subtleties to communicate via subtleties. And this is why I'm talking about this, because there's a knock here, and, okay, anyways: lock evidence verification.
Flow. Oh, hey, hey, hey, okay. Anyways, implement the lock.
Okay, so remember that DFFML provides a concurrent execution environment with managed locking. And why is this important? So, you know, we provide managed locking because locking is hard. And so basically, if you define your data types as needing to be locked, then the goal is that you would lock the tree of descendant data. And we probably need to flesh that out more; right now it just actually locks anything.
So if there is any data that is derived from a parent that requires a lock, it's just going to lock the whole tree of anything that's dispatched, so not really working ideally. Right now it basically just locks everything, so it's not the most performant, you know, it's not the performance that we want out of it, but we could be more intelligent about it.
So I'm curious about what this lock evidence verification flow is, because perhaps we could leverage this... we could perhaps leverage this for distributed locking. So, okay, so.
I think, basically, let's just go back to Twitter and stop reading this RFC, okay. So let's retweet this. Greg, Greg, you rock, thank you, you saved us. So, just from... well, okay, so you confirmed.
Okay, so API gateways, right, that's what we're plugging into: DLT relay through the internet. Okay, so, great.
This is where we've sort of confirmed that Firefly is the way to go; we're going to be interoperable with all this stuff, right. And so what are we going to put in there? We're going to put all our data from our data flows, which means we are going to have, as we refactor into our second and third party plugins...
...that's gonna be all of our SBOM data, right. So that is going to be: what are the dependencies. As we're going to... and if you look into the second party ADR, you'll see where we talk about this: A/B feature testing of developer branches against each other. And then we've obviously extended that concept into Alice, to have her distributed agents effectively do the same thing, right. Okay, so let's go spin up Firefly.
So remember our plan. Our plan is: we are going to load an example, dump the blockchain, confirm the format maps over to our DIDs, generate the same format using our lower-level libraries so that we can be sure that we're going to be compatible for import, and then... so, okay. So what are they talking about with these API gateways? Well, they're going to make it really easy for us.
You know, when our application is deployed, to go and talk to their thing through their little API gateway, right, and these plugins, and, right, we can run the supernode and then we can talk via the API gateway. But we also want to basically support this lower level where we're interacting directly within the blockchain ecosystem, which means that, you know, we can talk this protocol, DFFML can talk this protocol. And why do we want to do that?
Well, because we want to save and load from cold storage, right. Usually cold storage would mean tape storage, but I'm going to say cold storage because, you know what I mean, it's not running, and that's going to be very important here. Or, well, yeah, it doesn't... it could be running, it doesn't really matter. So, okay, so we're gonna run Firefly. Let's just go run it. Do you think they'll reinstate that show if we all run this enough?
Start our blockchain... so we're going to start Firefly. So they have this CLI situation: so, firefly init.
And then firefly start, okay. So this will run Firefly on Docker Compose. So, immediate things right off the bat: anytime somebody gives you a docker-compose file, you're pretty much about to have a bad time, unfortunately. So the problem is that Docker Compose is great for these little demos, but then translating it into Kubernetes deployments...
This is great. This is great because this is going to mean that Alice, when she finds a repo, she's just going to be able to deploy any repo, right, because people include compose files for a lot of things. So if she runs across a random repo and she wants to run it, then she can just basically run this. So that's great.
Okay, KCP. What was it called? The gateway... ODAP gateway.
So they're saying that, basically, if we use this... so they're saying, don't bother doing what you're doing right now, which is encoding to peer DIDs directly, because you can run Firefly. Now, I have a problem with that. I don't want to run Firefly, right? I don't want to have to run Firefly, because remember, we need all of this to work. Remember we said we want everything to... okay, so why don't we want to run Firefly? Watch this. This is why.
You know, just run everything like we normally run it, at the speed that we're normally running it, without anything else, remember, no dependencies, and have interoperability, right. So if we can understand the format, the protocol that this thing talks, then we can have interoperability with it, and we don't have to run, you know, multiple containers' worth of things just to talk to the blockchain, right, or just to save our stuff in a way that could talk to the blockchain.
So, so let's just do this: firefly... okay, so firefly net.
I think this is making it worse. That's not gonna fly there, and so also your five-dollar DigitalOcean droplet things won't fly there either, and we want, you know, ubiquitous, right. Okay, so we're spinning up four... Let me just go fix this, I'll be right back. I'm not gonna stop the stream.
An example is shouldi... so in shouldi there are tests; check out that. So shouldi needs a lot of... shouldi needs a lot of stuff, because it runs a bunch of these static analysis tools.
All right, so, and this is... so basically what we're doing here, well, we're just saying, we're just defining.
And remember, we're going to unify... we're going to unify across the operation interface, the CLI interface, and, you know, everything else, and the testing interface; we're going to unify all of that, right. Everything is going to be a function. So, all right, so check this out. So what did we do here? Well...
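The "everything is going to be a function" idea, where one plain function backs the operation interface, the CLI interface, and the testing interface, can be sketched from the function's own signature. This is a hedged illustration only; `as_cli` and `as_operation` are made-up names, not DFFML's real decorators:

```python
import inspect

def add(x: int, y: int) -> int:
    """A plain function: the single source of truth."""
    return x + y

def as_cli(func, argv):
    # Derive CLI argument parsing from the function's annotations
    sig = inspect.signature(func)
    args = [param.annotation(raw)
            for raw, param in zip(argv, sig.parameters.values())]
    return func(*args)

def as_operation(func):
    # Derive an "operation" description from the same signature
    sig = inspect.signature(func)
    return {
        "name": func.__name__,
        "inputs": {name: param.annotation.__name__
                   for name, param in sig.parameters.items()},
        "outputs": {"result": sig.return_annotation.__name__},
    }

print(as_cli(add, ["1", "2"]))      # 3
print(as_operation(add)["inputs"])  # {'x': 'int', 'y': 'int'}
```

The design point is that the CLI and operation views are derived, so changing `add` changes every interface at once.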
We ran cache download. So what are we going to do here? Well, we're going to run cache download, and so something... something's going on here. So what do we see? So we're about to download this thing. So, what... we don't want to keep files in the git repo; obviously we don't throw dependencies in there.
We do want to understand where we're getting the file and what version it is, and to do that, eventually, you know, we'll basically have this generic installation flow, right, which says, you know, install me this thing from GitHub, right. You point it at... you know, alice install blank, you point it at a link, and she'll go and she'll install the system package management version.
If it's applicable, you know, if there's a package manager with your system, or maybe she goes and traverses and installs via the PPA that's documented in their docs, or maybe she just goes and she grabs the binary and she installs it on your system. And she's going to keep track of what she did, because every data flow that she runs, every system context that she executes, she's going to add it to the blockchain, and you're going to have a history, so you're going to have a full history of everything.
That's happening on your machine, any machine, right. So, and she'll tell you, you know, what kind of features you have, what kind of features you're missing. It's basically build your own distro on the fly, all the time, sort of thing. So, and then take that and put it wherever you want, redeploy it wherever you want. So what kind of information are we missing here? So I'm seeing... I've seen... so.
Basically, if we were to look at that use case, right, alice install firefly, what would you do? So we would say, okay, Google for, you know, Google for Firefly, right; try to understand which one we're talking about within the context of our problem space. Okay, so our previous trains of thought, our previous system context, our development workflow, our Chrome browsers, all of that stuff.
It says we're looking at Firefly, that's a blockchain thing, right. So she makes sure that she picks the same one that we're seeing in our tabs, because she's context aware, right; she's always context aware. And where she's running doesn't matter, because we're communicating with her through the blockchain. So maybe we're just saying, you know... maybe there's... there's other threads that go out and run. So maybe we're just working in our browser, in a VS Code environment.
That's entirely in our browser, and we're publishing the events, you know, of the other tabs in our browser, you know, to maybe some service, right, that's this cloud-based service where we're running Alice, or we're running it, you know, in the browser locally, right, and then we're querying out to other services as appropriate. And this way, really, you know, ideally what we're striving for is running Alice locally, or supplementing with our own compute, right, or trusted compute, so that we can always maintain this.
You are able to expose and put trust boundaries around, and, you know, manage the data, your data, of any execution of anything that you're doing, right, per your strategic principles that we talked about, right, and maybe these ad hoc ones, so your privacy policy, right. And so, to abide by your own privacy policy may require that you invest more of your assets into making that happen, right.
This functionality where it'll return the correct release, right; then they know that we went and we wanted to install the Firefly CLI. Now, if we chose to... if they had this operation that says, here's the tar file for whatever repo you name, you know, for your system, if you pass it your system, right, then they return this link, right. So you say: GitHub, tell me what the latest Hyperledger firefly-cli release for Linux is, and it returns this link.
We could also just as easily write this function, and we've written it a million times before; it's, like, littered in different places. But where's the... I think, if you look at the shim, it goes and does this, but with PyPI. I think there's some... there's a bunch of other code that does this. But you just grab the GitHub releases page, you grab the latest one, and then you grab your architecture as it maps to uname, and so basically you can just create an operation for that, right.
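The operation described here, grab the GitHub releases page, take the latest release, and pick the asset matching your uname, mostly reduces to a pure asset-selection function, which can be sketched and tested without touching the network. The payload shape follows the GitHub REST API's `releases/latest` response; the asset names and URLs below are made up for illustration:

```python
import platform

def select_asset(release, system=None, machine=None):
    """Pick the download URL for this OS/arch from a GitHub
    releases API payload (shape assumed from the REST API)."""
    system = (system or platform.system()).lower()
    machine = (machine or platform.machine()).lower()
    # Common alternate spellings release assets use for an arch
    arches = {"x86_64": ("x86_64", "amd64"), "arm64": ("arm64", "aarch64")}
    wanted = arches.get(machine, (machine,))
    for asset in release["assets"]:
        name = asset["name"].lower()
        if system in name and any(a in name for a in wanted):
            return asset["browser_download_url"]
    raise LookupError(f"no asset for {system}/{machine}")

# A fake payload standing in for the response from
# GET /repos/hyperledger/firefly-cli/releases/latest
release = {"assets": [
    {"name": "firefly-cli_1.0.0_Darwin_arm64.tar.gz",
     "browser_download_url": "https://example.com/darwin"},
    {"name": "firefly-cli_1.0.0_Linux_x86_64.tar.gz",
     "browser_download_url": "https://example.com/linux"},
]}
print(select_asset(release, system="linux", machine="x86_64"))
# https://example.com/linux
```

Keeping the selection logic pure means the network fetch and the "which asset is mine" decision can be tested separately.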
So you could then choose to run the operation, and you could say, run the operation locally, on my compute that I have access to. And what compute can run that, right? So, is that operation instantiable, right, within the web of input, output, or operation implementation networks that I have? And if it is not, right... so what are the implications? What are the implications of running this "just give me the download link" from a third party versus from myself, right?
Well, the implications are outlined in what strategic plans you have overlaid, right. So, if you're at all times overlaying strategic plans that are looking at privacy, then you would identify that you've introduced a third party here, right, by outsourcing this to GitHub to tell you, right. And you can say: here is the input data, right.
So what is the data that you're sending them, right, and then what, potentially, would they want to do with that data? And, well, you can tell potentially what they want to do with that data by reading the ODAP... all of the other information in all of the other chains, to come up with models that help you map the activities of other entities, to essentially... you're doing, you know, machine learning enabled reconnaissance, right. You're trying to understand: what are the strategic plans?
What are the things that the other entities are out there doing? What are the metrics that they're driving in their activities, right, that show you what their operating model is, right? You know, okay, so, you know, if we see GitHub publishing download statistics of Firefly, we might be like, okay, great, you know, we want the Firefly community to have that information. Well, we could also just publish it ourselves, but we could say, okay, so that's one of the things that they're doing with the information, right.
We see those DIDs, those peer DIDs, being created when we run these operations from them, right, and, you know... or we maybe see... you know, the point is, basically, you're doing this inference on what these other entities are doing with your data, based on what kind of data they want to consume. And you can infer that based on looking at models; you're training models across all of the data in these Web3 networks, and you're saying, okay...
If you could see, maybe, what operations they're triggering, right, to see what input data they're passing to those, right. So you can tell, you know, generally, based on data transformations, right, because we're focused on the data types of these things, where somebody might have got a value, you know, from another node. And, I don't know, yeah, can you do this? I'm not sure. I think you're gonna be able to do this. Basically you're going to infer, you know, what are people doing, and why are they doing it?
Okay, so if, if, if, for example, we have strategic plans that say, you know... if every time we ask what the Firefly CLI is, you know, GitHub...
Like, if we had some strategic plan overlaid that said, okay, here's one: rate limiting. So we have a strategic plan overlaid that says, do not hit the GitHub API more than five thousand times in an hour, right. So if you wanted to, you would take that into account within the gatekeeper, right.
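A strategic-plan overlay like this rate limit boils down to a predicate the gatekeeper can evaluate before dispatch. A minimal sketch, assuming the gatekeeper has access to timestamps of past calls; `gatekeeper_allows` is a hypothetical name, not DFFML API:

```python
import time

def gatekeeper_allows(call_times, limit=5000, window=3600, now=None):
    """Return True if dispatching one more GitHub API call would keep
    us under `limit` calls within the past `window` seconds."""
    now = time.time() if now is None else now
    # Count only calls that fall inside the sliding window
    recent = [t for t in call_times if now - t < window]
    return len(recent) < limit

# 4999 calls in the last hour: one more dispatch is allowed
times = [1000.0] * 4999
print(gatekeeper_allows(times, now=1500.0))   # True
# 5000 calls already made: the gatekeeper blocks the dispatch
times = [1000.0] * 5000
print(gatekeeper_allows(times, now=1500.0))   # False
```

A real overlay would also need to decide what happens on denial, for example falling back to a different operation implementation, which is exactly the web-scrape fallback discussed a bit further on.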
So there's the prioritizer, which runs the gate... which runs all the strategic plans and then runs the prioritizer, or, and then runs the gatekeeper after that, right, and then prioritizes to see what things are actually getting executed, or do get executed, in terms of what's suggested as system contexts by strategic plans.
Now, every time one of those operations gets executed, right, it'll release a little piece of structured metric data saying this was dispatched, right. And when it does that, we can say... we can also have it log when it uses the API internally within that, right. Or, well, it's probably a wrapper around some API call, right, or maybe it's wrapped around multiple API calls. The point is, somewhere we end up with something that logs, you know, ticks up a number.
Every time we do an API call, or maybe it says, you know, an API call was made, right. Maybe it's just a little... maybe it's a peer DID itself, and it says API call was made. It's just an input, a bool value: api call made, timestamp, right. And so, and then, you know, you probably want to correlate with system-local resource management to see if NTP is alive on that system, and if that system is in sync time-wise.
So, but the point of the matter is: then, basically, you can query the input network and you can see, you know, how many nodes, how many input nodes were created, you know, of this definition that says "I did an API call" within this time range, by, you know, selecting... maybe, maybe we can select, yeah, you know, we would load it up and grab the data, right, and we'd say, you know, does this thing, you know...
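Querying the input network for "how many API-call inputs were created in this time range" can be sketched as a filter over timestamped input nodes. The `Input` shape below is a stand-in invented for this sketch, not DFFML's actual input class:

```python
from dataclasses import dataclass

@dataclass
class Input:
    """Minimal stand-in for an input-network node: a definition
    name, a value, and the time it was created."""
    definition: str
    value: bool
    timestamp: float

def count_in_range(inputs, definition, start, end):
    # Query: how many nodes of this definition fall in [start, end)?
    return sum(1 for i in inputs
               if i.definition == definition and start <= i.timestamp < end)

network = [
    Input("api_call_made", True, 100.0),
    Input("api_call_made", True, 200.0),
    Input("api_call_made", True, 4000.0),   # outside the hour window
    Input("file_downloaded", True, 150.0),  # different definition
]
print(count_in_range(network, "api_call_made", 0.0, 3600.0))  # 2
```

The count this query returns is exactly what a rate-limiting strategic plan would compare against its threshold before allowing another dispatch.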
Does this thing... and this is where, you know, we'll store some data on chain, some data off chain, right.
So if you're looking at, you know... we'll probably have some thresholds, and basically we'll decide, hey, you know, if we can encode a system context that basically is just one static value within these peer DIDs, then, you know, maybe we don't encode a whole system context that we then go execute, right. So we're going to either have this just static data, or execute to grab data from off chain, right. And we're going to figure this out in the shim layer, and, you know, as we analyze these Firefly dumps here, so, yeah.
So that's a way that you could... then, you know, that's something that you might be interested in doing, in terms of understanding, you know, should an operation be called, you know, using one operation implementation network or another, right. So, you know, maybe: should I just go scrape the web page and grab it, because I don't have, you know, a 5001st API request this hour.
Yeah, we could instantiate the subflows on operation implementation instantiation, and that way, when the data flow loads and the operation implementation is instantiated, then the subflow will also instantiate. So basically, as soon as the data flow loads, we'll load whatever the correct ones for all the deployment environments are, and then we can also trace back.
And then, so, what's missing here, right? Obviously we're hard coding for Linux, for x86, for Intel 64-bit systems. Okay, so: firefly linux download.
Expand to use the generic download-from-GitHub-repo flow; see recording for details.
An async context manager is a very, very, very good friend of ours. So what are contexts? What is a context within Python, and what are context managers? Well, the long and short of it is, basically, you have generators, right. So, def gen.
Okay, so what does this do? So this says: for every line in standard input, you know, yield... yield the line, converting it from a string to an integer. And so basically what that's going to do is, if you said, you know, a equals gen, it would make a this dynamic thing that doesn't get called, you know, until... basically, it's not going to yield the next result.
B
It's
not
going
to
read
it's
not
even
going
to
trigger
the
line
read
until
you
say
4
I
in
a
and
then
if
you
printed
here,
you
would
get
an
integer,
so
you
could
you
could,
then
you
know
convert
that
to
a
list,
but
by
default
it's
just
this
dynamic
thing
that
that
you,
I
don't
think
you
can
reiterate
over
so
so,
what's
a
context
manager
well,
a
context
manager
is
a
simple
case
of
that,
where
you
can
use
it
in
a
with
statement
right,
so
you
would
say
with
gen
as.
And then this would say one, because we yielded one. So a context manager is a special case of a generator where we only yield one value, and what it allows us to do is really simplify our cleanup operations. And so, you know.
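The generator and context-manager behavior just described can be written down concretely. This sketch reads from a list rather than standard input so it is self-contained; the cleanup-on-exception behavior of the yield-once generator is the part that matters:

```python
import contextlib

def gen(lines):
    # A generator: nothing runs until you iterate over it
    for line in lines:
        yield int(line)

a = gen(["1", "2", "3"])   # no conversion has happened yet
print(list(a))             # [1, 2, 3]

@contextlib.contextmanager
def connection():
    # A context manager is a generator that yields exactly once;
    # everything after the yield is the cleanup path.
    conn = {"open": True}
    try:
        yield conn
    finally:
        conn["open"] = False   # runs even if the body raises

with connection() as conn:
    print(conn["open"])        # True
print(conn["open"])            # False, cleanup ran on exit
```

If the body of the `with` block raised, the `finally` clause would still close the connection before the exception propagates, which is exactly the "cleanup gets called before we exit this block" behavior described next.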
So maybe, if we had a connection here, we would close it like this, you know. So if there was an exception that occurred here, so, raise, then, you know, conn close would get called before we exit this block on line 99. So, so what are we going to do here? So basically, we're going to use this... we're going to use this functionality of a context manager, as well as temporary files, to go through and.
Just... we're going to download Firefly to... or, we're not going to use tempdir, so basically we're going to download Firefly and we're going to put it here, in downloads. So, okay.
I've started doing join path. I don't really like using overloaded operators. Generally, I would say, generally avoid operator overloading; it makes it hard to refactor. I was not following that advice at the point when this stuff we're copy-pasting from was initially written. So, and why is this defined separately on line 84? Well, because if you look at the other one, with the test binaries, we can split this stuff out, and then we can... basically, we can split this stuff out. We can split out.
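For reference, the overloaded-operator and explicit-method spellings of path joining are equivalent in `pathlib`; the explicit call is just easier to grep for when refactoring, which is the point being made here:

```python
import pathlib

# The overloaded "/" operator and the explicit method call build the
# same path; the method form is easier to search for and refactor.
overloaded = pathlib.PurePosixPath("downloads") / "firefly-cli"
explicit = pathlib.PurePosixPath("downloads").joinpath("firefly-cli")
print(overloaded == explicit)  # True
print(str(explicit))           # downloads/firefly-cli
```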
You know, that binaries file as its own file. So then, basically, we can just kind of, like, use the constructs of the caching mechanisms that are available to us via GitHub Actions. If we track the changes, if we say, blow up the cache on modifications to files, including binaries, or, like, this file that declares the binaries and their SHA values, then, whenever one of these changes, it'll blow up the cache, right. And so the level of granularity, ideally, would be a single file.
If we're looking at the way GitHub Actions treats this, the level of granularity that you would ideally use is mapping a single file to a single entry in the cache, and then you would blow up that entry in the cache, you know, basically, when that file changed. So then we would import, you know, cache firefly cli, and then this would be the only value in it, and then, you know, that way...
As far as GitHub is concerned, the scope there is file scope, and so, you know, that way, we could most granularly track that, I believe, and only blow up the cache for specific files. And so, what's that even gonna do? I mean, that's just gonna make downloading these resources slightly... you know, maybe slightly faster, maybe not even slightly faster, right. Whether it's coming from GitHub itself or it's coming from somewhere else, it's probably going to be faster to have downloaded and cached it.
So, within imp_enter, we can define context managers, which we want to be... which we want.
So this would be, you know, like, if you had a question mark or a hash value or something, you know, at the end of our... or, you know, something at the end of our URL.
B
That
might
have
a
bunch
more
data
right
and
we
just
want
the
file
name.
So
this
is
going
to
blow
up
in
that
case,
I'm
pretty
sure
so
and
if
we
expand
this
later,
we'll
want
to
make
sure
that
we've
captured
that
so
firefly
cli.
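The concern above, that naively splitting the raw URL blows up when there is a query string or fragment on the end, can be handled by parsing the URL first so only the path contributes to the file name. A small sketch using only the standard library; the token in the example URL is made up:

```python
from urllib.parse import urlparse
import posixpath

def url_filename(url):
    """Take just the final path segment, ignoring any query string
    or fragment, instead of splitting the raw URL on '/'."""
    return posixpath.basename(urlparse(url).path)

url = ("https://github.com/hyperledger/firefly-cli/releases/download/"
       "v1.0.0/firefly-cli_1.0.0_Linux_x86_64.tar.gz?token=abc#frag")
print(url_filename(url))  # firefly-cli_1.0.0_Linux_x86_64.tar.gz
```

`urlparse` strips the `?token=abc` and `#frag` parts into separate fields, so the basename never contains them.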
...it, and so basically then we can just return firefly-cli, right. So now, when we run this... so now we can say.
So what are we doing here? Well, we are creating a config.
So, and we're just putting this here right now, right, so we'll probably put this... And it runs, okay, so let's switch those boxes.
So we can define operations at many different levels. We can define operations as functions. So, so, so remember, everything follows the context... the double context entry pattern, right. Which means that everything follows this pattern where there's a class; upon instantiation of the class, we enter that class as context, effectively enter, right, and then, upon usage.
B
You
know,
maybe,
like
a
transaction
in
a
database,
we
enter,
we
create
a
context
of
the
class
right,
so
there's
the
basically
the
parent
object
and
then
the
context
object
that
gets
checked
out
for
each.
You
know
around
each
running
right
or
set
of
runs
of
the
the
whatever
the
operation
is,
and
the
operation
run
method
right.
The
operations
have
just
the
run
method,
whereas
interfaces
have
more
than
one
method
right.
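The double context entry pattern, a parent object entered once and a per-use context checked out from it, can be sketched with async context managers. The `Database` / `DatabaseContext` names follow the transaction analogy above and are purely illustrative, not DFFML's real classes:

```python
import asyncio

class Database:
    """Parent object: owns shared, long-lived state."""

    async def __aenter__(self):
        self.pool = ["conn-1", "conn-2"]   # stand-in for real setup
        return self

    async def __aexit__(self, *exc):
        self.pool = None

    def __call__(self):
        # Check out a per-use context, like a transaction
        return DatabaseContext(self)

class DatabaseContext:
    """Child context: checked out around each run / set of runs."""

    def __init__(self, parent):
        self.parent = parent

    async def __aenter__(self):
        self.conn = self.parent.pool[0]
        return self

    async def __aexit__(self, *exc):
        self.conn = None

    async def run(self, query):
        # The single run method the pattern centers on
        return f"{self.conn}: {query}"

async def main():
    async with Database() as db:       # enter the parent once
        async with db() as ctx:        # check out a context per use
            return await ctx.run("SELECT 1")

print(asyncio.run(main()))  # conn-1: SELECT 1
```

The payoff is that setup/teardown lives in exactly two places: long-lived state in the parent's enter/exit, per-use state in the context's enter/exit.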
Well, you know, you can actually just pass your plugin as a shared config object to different operations, and then have these operations that basically wrap your method calls. And so then, effectively, you know, what you end up with is just... you're ending up with several layers of call indirection, but we're gonna be able to get rid of that. We're going to be able to get rid of that, because we can do the analysis on the network.
B
We can do the analysis with the benefit of using the data flows, and whatever this open architecture format is, because our goal here is to create this format that allows us to describe an architecture, written in whatever domain specific pieces are necessary. And so that's where this open data asset protocol...
B
...is the gateway to the domain specific representations of the functional components within the open architecture. And so those are sometimes assets, and they're sometimes descriptive of, you know, method calls and things like that. At least that's my understanding of the way this is shaping up, but you know, we don't know until we find out. So this is just, you know, sort of what it looks like at the moment. So.
B
So if you look at run dataflow... you know, okay, I'm doing a lot of explaining here. Okay, so if you look at run dataflow.
B
Yeah, I don't love this. I think what we're seeing here is that we'll have a generic operation which, you know, downloads a Linux tar file, or we'll have these generic flows. We talked about these canned data flows. Well, what are the canned data flows now?
B
Well, we realize now that a canned data flow is really just a system context, an instance of Alice, an entity. A canned data flow is, you know, whatever models were trained within the strategic plan. Okay, wait a minute, wait a minute. We lost something. There was something we were talking about: the cache directory, okay, the cache directory.
B
This operation is instantiated. So we want to take this thing and run it on another computer. So what are we going to do? We save and load, and remember, so we can take any... so if we get all the data on the blockchain, then we're going to be able to use that as, like, our open... so ONNX, the Open Neural Network Exchange format. This is a format that you can use to exchange between different...
B
...you know, frameworks and libraries that allow you to build and train neural networks. So what we're talking about here is, yeah, like the Open Neural Network Exchange: this is the open architecture format. So this is basically like, you know, I want to run some code, here's all my function prototypes and how I want them knit together, please go run it for me.
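By analogy with ONNX, the idea above is a serializable description of function prototypes and how they are knit together, which another machine can load and execute. A toy sketch of what saving and loading such a description could look like; the field names are invented for illustration, not a ratified format:

```python
import json

# Invented example "dataflow": operations plus how outputs feed inputs.
dataflow = {
    "operations": {
        "download_file": {"inputs": {"url": "str"}, "outputs": {"path": "str"}},
        "extract_tar": {"inputs": {"path": "str"}, "outputs": {"dir": "str"}},
    },
    # Wire download_file's output path into extract_tar's input path.
    "flow": {"extract_tar.path": "download_file.path"},
}

def save(flow: dict) -> str:
    # Serialize so the description can travel to another machine.
    return json.dumps(flow, sort_keys=True)

def load(serialized: str) -> dict:
    return json.loads(serialized)

# Round-trips without loss, so the receiving side can execute it.
assert load(save(dataflow)) == dataflow
```

The save/load round trip is the exchange-format property: the description, not the implementation, is what crosses the wire.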
B
You know, and allowing the execution environment to do the sandboxing that's appropriate... and maybe sandboxing is not the right word here, but really, you know, put up the walls where the trust boundaries are, for that specific context. And that's just why everything has to be context aware. So if we were to package this thing up and send it over, we might see caster, right? And so take...
B
Take the case where we've refactored this into... I can't remember... tireless clock? I can't remember; I'm remembering your handle right now, but I'm blanking on your name, I'm sorry. So tireless clock brought this up a couple weeks ago in a weekly sync meeting.
B
You know, these mechanisms that we've been talking about. And along with that, we talked about the system local resource management, and allowing that to help us; that came out of this discussion around locality. And so as we're having this discussion around locality, you know: if you send out Alice and you say, Alice, go make this happen, she might need to... you know, you might be operating directly on a computer in front of you.
B
The context is such that you want her to do this on the computer in front of you; you don't want to do this on a remote server. So if you're like, hey Alice, download this file, and she goes, great, I downloaded the file, and it turns out she did it on, you know, the server that she's connected to on the web, that doesn't help you at all, because you want it on your desktop in front of you. So the context for each execution...
B
So each seed... remember, our seed values are the values that... well, okay, so it's not really a seed in this case. So basically, yeah, I guess it's sort of the seed. So basically Alice is listening; we're sitting here standing in front of the computer, and we say: go download firefly-cli to the desktop, and then show me the desktop.
B
Then we should be presented with the desktop with firefly-cli on it. But that could be interpreted... so "go download the firefly-cli, show me the desktop" could be interpreted as download the firefly-cli, or... yeah, I guess we said download, so we would say download to the desktop. So maybe Alice is also connected to an RDP server, and she's...
B
...you know, on some Windows box somewhere, and she's connected to your mesh, because she's yours, and so she's got all your assets. And so if you say download to the desktop, she may not know which machine to do it on. So you're using this context awareness to say: Alice was triggered, and you were looking at this screen, not the other screen, so that probably means I want to download to this computer here. And so, say we wanted to cache the state of this.
B
The flow should probably go like this: initiate download; check content length; verify via traversing the input network... or, no, yeah, check content length. It should be... let's write it. What is the right granularity? So: content length, check content length.
B
And I think what you'll find when you write this out, and you expose the reality of what should be happening, is that this is not how we do things: everything is just too disconnected. Because look at checking the content length: the level of configurability required to check the content length between the time you receive headers and the time you write to disk, so as to uniformly apply policies such as, you know, restrictions on download size... yeah, I'm guessing you're not going to find that kind of level of configurability.
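The granularity being argued for above can be sketched as a policy check that sits between receiving headers and writing any bytes to disk. The limit value and function name are illustrative assumptions, not an existing API:

```python
MAX_DOWNLOAD_SIZE = 50 * 1024 * 1024  # illustrative policy: 50 MiB cap

def check_content_length(headers: dict, limit: int = MAX_DOWNLOAD_SIZE) -> int:
    # Runs after headers are received and before anything hits disk,
    # so one policy applies uniformly to every download in the flow.
    length = headers.get("Content-Length")
    if length is None:
        raise ValueError("server did not report Content-Length")
    length = int(length)
    if length > limit:
        raise ValueError(f"{length} bytes exceeds policy limit of {limit}")
    return length

print(check_content_length({"Content-Length": "1024"}))
```

Breaking the download into these named steps is exactly what makes such a policy a pluggable operation rather than something buried inside one monolithic download call.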
B
E
B
What does this mean, and what is a data structure that could be associated with this sentence right now, or these two sentences? So: officially approved standards, as well as privately defined design architectures. These are both examples of domain specific representations. So we're saying that there should be an open architecture; there should be an RFC which, you know, tells us how to convert in and out of this open architecture, just like the open data asset protocol allows us to get data in and out.
B
We want to get software architecture, and hardware architecture, and just architecture in general. We want a generic way to describe that, and then we want to be able to, you know, put it within the open data asset protocol, because the architecture itself is an asset, and then we'll execute according to the orchestration environment that we've defined and been using for years now. And of course, you know, the thing about this is... I mean, it's just...
B
This is just an intermediate representation that goes across programming languages. And so I think there was some work... somebody sent me some work from Facebook; they presented this thing as, like, SPARTA or something. We've got to look into that too, because it might... right now we have the data flows, and the data flows are working. But we do need to make a note, so check out: how do you... gateway, cold storage, save/load via...
B
...operation, to be the on-ramp and off-ramp to ODAP as the data highway. So this is, you know... so this is the infrastructure.
B
So ODAP is the data highway; see, it's the infrastructure and it's the commodity, because it's the infrastructure which we're using to allow the data to transfer, and then the data itself is our commodity. So we were talking about how, as we move into, you know, whatever this next space is: data as a commodity. I mean, data is already a commodity, but we're really going to see the commoditization of data.
B
At least that's the hypothesis with this whole Web3 space, and ODAP... that's great to hear that this has evolved to that state. So, okay! So what are we going to do? What are we going to do?
B
That is the system architecture, which is the open architecture, because we're saying that this is an open spec: propose a format which can be used to describe a system architecture. And so what is this going to let us do? This is going to let us, you know, analyze architectures across different code bases, and, you know, our first project is we're going to write some threat models. So, open architecture... let's see, did we have a nice little link somewhere? This was nice.
B
Right, so this is the core problem, this is sort of one of these core problems: you can't do static analysis... you know, static analysis has to be domain specific. So if you can create this format that allows you to understand when you need to proxy into your domain specific static analysis, then you can appropriately analyze across. You can also, you know, appropriately reconfigure, because you have this intermediate representation.
B
That's decoupled your implementation from, you know, the overall desires of your thing. And we're capturing intent with strategic plans... so yeah, we use strategic plans to capture intent, and then we use the open architecture to capture, you know, functional knitting, and then we map our intent...
B
You know, we combine the intent with the functionality to create equivalent architectures, or to create, you know, functionality based on a description of intent. Propose the format which should be used to describe a system architecture.
B
Universal blueprint, the data flow... well, we've called it something else: the universal blueprint, the data flow, the system context.
B
The open architecture describes assets using the open data asset protocol. So please... yeah, you know, I think I need to reach out to Jeremy again; anybody else who wants to be involved in this, please reach out, obviously putting it all out there. So: the open architecture describes assets using the open data asset protocol. It acts... so, to-do: check out, look in more detail at.
F
B
I think Jeremy said... thank you, Jeremy pointed me on to that. I only got the chance to, like, barely skim it, but it looks like there's something there; maybe, possibly, an avenue for reuse or collaboration.
B
So the open architecture describes assets using the open data asset protocol. It acts... we'll just say "proxy"; what it acts as... so, via directed acyclic...
B
F
E
B
We'll say "business process", because "process" is very generic, and people will understand more if that's, like, some kind of, you know, you-first-go-talk-to-this-person thing; they'll understand more if you put "business process". Okay, so we still haven't started FireFly, and I'm going to take a break, but we did do some very important work, which is define: what are we doing here? Why are we here? What are we doing?
B
Well, you know, the purpose... what are we going to do? We're going to propose... what is the purpose of our work? What is the purpose of our work on Alice, you know, the overall... what is the purpose of our work? Well, it's not... I mean, this is part of the point: this is a building block. So what are we going to do? What are some, you know, immediate next steps, community-wise, because this is not technical-wise.
B
This is community-wise, right. So we need to go RFC this thing, and so I'm going to talk to Jeremy again about that; anybody else who wants to be involved in the RFC... but we're going to write up one of these documents, unless we find that there is something.
E
B
So this is interesting. This actually looks like something that we might be interested in... curious, for the execution environment. So this actually looks like something we might be interested in, for, you know, leveraging some of the concepts here.
B
Maybe there's something we can learn from that, but I don't think... as far as I can tell, and tell me if I'm wrong, I don't think that anybody's RFC'd something like this yet, so we'll do it; anybody who wants to do it. But basically, you know, the goal is really, you know, to have this standard description of architecture for your threat model: to basically take your SBOM and tell me, well, what is this?
B
But you know, I think we're sort of going to assume... you know, maybe we assume... so we offer an option. So the open architecture describes assets using that. So, one option, one option: the open architecture allows...
B
...component domain specific architectures, i.e. hardware, software, physical, any combination thereof. Okay, so basically, you know, we're going to offer several options. One of them will be, you know, this... you know, so let's... this is maybe, you know, "plugin", so: schema.
B
So, schema, you know.
E
B
So this is the open architecture schema, and within it, it says, you know, maybe...
E
B
All right, so maybe all it is right now is, you know: plugin, dataflow, config. Okay, so yeah, you have some kind of top-level thing that says: how do you interpret this? So basically, how do I interpret this? Well, it's an open architecture, right, okay. So, well, what is my top-level definition within the open architecture? Well, it's going to be defined using a data flow, and then here's the data flow. So this is one option. So, example.
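A toy sketch of such a top-level document: a schema marker, a key saying how to interpret the definition, and the definition itself. Every field name here ($schema, plugin, dataflow, config) and the URL are guesses at what the schema might contain, not a ratified spec:

```python
# Hypothetical top-level open architecture document.
open_architecture = {
    "$schema": "https://example.com/open-architecture/0.0.0/schema.json",
    "plugin": "dataflow",            # how to interpret the top-level definition
    "dataflow": {"operations": {}},  # the domain specific representation itself
    "config": {},
}

def interpreter_for(document: dict) -> str:
    # Dispatch on the top-level "plugin" key to pick the right
    # domain specific interpreter for the rest of the document.
    return document["plugin"]

print(interpreter_for(open_architecture))
```

The dispatch-on-"plugin" step is the "how do I interpret this?" question above made concrete: the top level names the format, and everything below it is domain specific.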
F
B
We'll put a link to the manifest stuff here.
B
...be a single domain specific representation, and then... hi, I was just about to jump off here.
B
In this case, a data flow. So that's all we're saying; all we're saying is that there's a need for a format which describes the architecture of a system in a generic way, such that we can describe any system's architecture, and then we can...
B
...you know, leverage any domain specific representations, to knit together different architectures to give a complete overview, or a complete description, of the architecture of any system context. Because that is what's really going to allow us to harness the power of our machine learning: as we put our hooks into any architecture, we're going to be able to optimize that architecture. All right, well, thanks, and I'll talk to you all later.