A: It is, and so I'm trying to overlay this image so that it'll be visible when we're in the terminal, because we're doing a lot of stuff that's not on the terminal right now, but you know, we'd like to get to the terminal. So.
A: To put this opacity application via the color filter... and white is only one number, because 0xFFFF... if it overflows, then it's white. So that means that OBS... I think OBS maybe has an integer overflow there. Either that, or...
A: It's intended behavior, but I don't know. Or maybe I'm wrong altogether. So, all right.
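The suspected behavior can be sketched with shell arithmetic. This is purely an illustration of the guess above (wraparound versus saturation on an 8-bit color channel), not a confirmed description of what OBS does; the channel values are made up.

```shell
# If an 8-bit channel wraps on overflow, pushing a near-max value past
# 0xFF lands back at a small value; saturating math would instead pin
# it at 0xFF (all channels at 0xFF reads as white).
wrapped=$(( (0xF0 + 0x20) & 0xFF ))                      # wraps: 0x110 & 0xFF = 0x10
clamped=$(( 0xF0 + 0x20 > 0xFF ? 0xFF : 0xF0 + 0x20 ))   # saturates at 0xFF
printf 'wrapped: 0x%02X  clamped: 0x%02X\n' "$wrapped" "$clamped"
```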
I don't know how to do this, and this is why we're on video: in case anybody else wants to know how to do this, because it's hard to find these little clips about these things with, you know, weird editing software. All right. So, because we all want to make, you know, some streams here, some videos, it'd be good to have this information out there.
A: So I'll move this over here.
A: Okay, so where were we when we left off? Oh, and I saw something... well, I was on...
A: We can maybe leverage it for some visualizations. So basically, we're getting everything in and out of the network. You know, it's too dark in here without this. Okay, okay, I know it creates a giant light behind me. You probably can't see anything.
B: But it doesn't really matter, because what matters is on screen. So.
A: Okay, I think we looked last night at the key situation. So where were we? I think we were in Documents/python. Let's...
A: I think we pretty much determined that, you know, we want to go this route. So let's just try to run some code, because we want to just try to get something in this format. So, where are we at?
A: So I found out the other day that you can actually do a stash with a message. So we're gonna see... what does it look like we were doing here? So it looks like... oh yeah, experimenting.
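The stash-with-a-message step can be sketched like this; the throwaway repo just makes the commands runnable here, and the stash message is illustrative.

```shell
set -e
# Throwaway repo so the stash commands below have something to operate on.
repo=$(mktemp -d) && cd "$repo" && git init -q .
git config user.email alice@example.com && git config user.name alice
git commit -q --allow-empty -m "initial commit"
echo change > notes.txt && git add notes.txt

# Stash with a message so it is easy to find later in `git stash list`,
# instead of the default "WIP on <branch>" label.
git stash push -m "experimenting: peer DID scaffolding"
git stash list     # each entry now shows its descriptive message

# Restore the stashed work and drop the entry.
git stash pop
```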
A: Here's where we added our add in history. This might get rewritten somewhat because of more peer DID files in there. But generally, you can rely on the fact that this commit name will be unique. So, peer DID Python. Okay, so, all right, okay... where's the docs?
A: So let's show how we would figure out how to do this. So basically, we know we want to implement peer DID support in DFFML, right? So we're going to go and, you know, look at the existing documentation, which is really zoomed in now... really zoomed out... and we're going to, you know, go figure out: what's the latest version of it, right? And we're using this as an example. I mean, we're doing this for our own project, but this is a general template you can follow.
A: For how you, you know, go find out what you need to do and where you need to do it. So you, you know, identify the problem space, and then you get up to date on...
A: What the unfilled piece of that problem space is. And then you, you know, attempt to identify connection points to the most reasonable areas for expansion that are already widely deployed. So we determined that was Web3 DIDs, and, you know, we have our own ML built in, right? And then we're going to mesh all of that to be the ML definition on top of that. And so now, you know, we pick:
A: Where do we want to begin first, right? Because we are, you know, going to be implementing the control feedback loop within DFFML, and obviously, this is our project, right? So this is our place where we do things, so we're going to implement it here, and then, once we have it ironed out, right...
A: You know, once we know how it works thoroughly, then what we'd do is we'd say: okay, KCP, you know... maybe, you know, what if this stuff actually makes sense? Then we would go out and we would talk to other KCP users, and we would look and we'd say: hey, you know, are we doing some of the same things? And then we'd come together and we would write, you know, some specs around, like, what the ideal... well, after doing an analysis on our shared...
A: You know, things that are running on top of there, and the modifications that have been made thereof. Then you sort of figure out, as a community, which of these things most need to be incorporated into the main feature set, and you don't want that feature set to grow too much, right? So that's where, you know, you need to figure out:
A: Where are the roles and responsibilities of each project as it exists within the open source ecosystem that we have? And then, you know, you attempt to put your changes in the place that makes the most logical sense, and you try to distribute the changes across the projects logically, so that, you know... basically, it's kind of like what we talked about with the option to... so.
A: Yeah, okay, so let's just get going.
A: And that will allow us to basically take this data... we're gonna take data, and we're gonna act as... like, this open architecture which we're proposing acts as the highway upon which the data travels, right? It's the infrastructure and it's the commodity. It's a hybrid, what we're doing here, because it's self-descriptive in the way that it's executed as well, right?
A: So it's like an instruction manual. It's like if you're selling things on the blockchain... and you're selling... you are assigned things that are on the blockchain, because they're like little robots and stuff, right? And I mean, they're not just little robots that are moving around; there's all these devices, right? And so they're all on the blockchain. And so, how do we interact with them?
A: We interact with them through this uniform way, where we've defined this open system, which, you know, allows for sandboxing appropriately, right? This is this data flow format... I mean, the data flow format is one of the execution formats within that. But this is this universal blueprint, the Open Architecture... you know, the system context, right, from the discussion thread. And so, yeah.
A: So basically, you know, then... the interpretation, the interpreter... and so, the manifest is how we're going to encode the system context, and then we put the manifest on the chain, and we're going to put it on the chain using the peer DIDs.
A: So what we're trying to figure out right now is... we identified... you know, we did an analysis of what code bases we know are going to be involved for our effort here, right? We've found the technologies, in terms of, like, the protocols and the formats and stuff, that we want to build our stack on top of, right. And then we figure out, you know...
A: Our first target project: whether we're contributing to an existing project, or we're starting a new project and then proposing that for contribution within another project, or, basically, a hybrid thereof, where we have an existing project because it's part of the overall solution that we're looking at. And so we're just going to prototype, very similar to one of the... well. So we're just going to prototype.
A: It's like having a fork of a fork... or it's really just like having a fork, basically, right? So basically, imagine that we're... like, we're not actually using any of the code of these projects, of, you know, the peer DID or KCP, right? But we are, you know, creating things that conform to and leverage their interfaces, right. And so we can, because they provide these extensible...
A: You know, the way that the ecosystem is centered around these formats and interfaces... because when you center around formats and interfaces, you allow for multiple implementations to evolve, and yeah, that helps you do this process by which, you know, you basically have all these forks. And so you're just sort of seeing this natural... like... so you're thinking of the different repos, and the different feature branches within them,
A: As these trains of thought, right? And so you have this natural expansion, because the software engineers just sort of go... you try to tell them to do things, and then they just end up on Google, right? Because that's all of us, right? Like, I mean, how many times have I said it, probably within the last five minutes: "I'm just gonna go google this." So the thing is that we have to...
A: We want to, like... it's like preemptively doing that, right? So by... okay, no, we're getting into a whole thing here. Okay. So what we're doing when we leverage this DID... let's see.
A: Okay. So, no, we were talking about... it's all sort of the same thing. So, when we look at a fork... so there's a repo, right? So there's KCP. So say we were to fork KCP, right, and then we start doing our KCP-related changes directly in our fork. All right, well, that's all, you know, fine, well and good, right? But sometimes we may just want to, like...
A: So if you look at, like, how a distro works, a Linux distro: they have build scripts that basically take the upstream, apply patches on top of the upstream, and then that's the version that they're using, right? And so that in itself is sort of like a one-stage data flow, or maybe, you know, an n-stage data flow. But, you know, it's defined.
A: The execution environment is defined in the plug-ins, effectively... the plugins for the operations... there aren't plug-ins for the operations; they're the ones that are statically coded into the package management system, or package build system, right? You know, for which pieces of the package you should build.
A: So what we're saying is, it's the same thing as we're saying with the data flows, where, if you... instead of focusing on feature data and sort of models as separate things, you focus on the interface between them, it would allow you to, you know, mix and match these things, right? And then you have this decoupling, and then you can do your AutoML on your hyperparameters, and then you can also do your AutoML on your feature engineering, because you have the ability to re-wire what features are in your data set, right? And so, when we do the AutoML on feature engineering, it's the same as what's happening when a project, an open source project, is forking, and then you keep seeing these forks and forks and forks, and you get this sort of ecosystem, right?
A: You can collect all that data on the open source ecosystem, and you can analyze it using this model, right? And this model itself... the open source ecosystem is a byproduct of the communication: the speed of communication as it propagates through the open source community network, as well as the speed of validation of these ideas, which is directly related to the number of users adopting it, and, like, sort of how battle-tested the software is, right? And once you get more battle-tested software, then it allows it to grow market share, and more and more people can try their edge cases and use cases on top of it. And when you find those edge cases, then you generate your new trains of thought, which is where you're having, you know, your forks, right? And ecosystems sort of form around the formats, and then the projects are having their forks off of them, and each fork sort of represents...
A: Maybe, like, a feature branch. And the feature branches represent a train of thought, and a train of thought is, you know, a direction that we're working in. It moves one sys... we watch the system context, like, tick, tock, tick, tock, tick, tock, from one to the next, even though it may be done in a massively parallelized setup. But, effectively...
A: You know, if you were to just look at them flattened over time, you're watching it tick-tock, tick-tock from state to state, right? Because you're always moving: you're always either thinking of a new state, executing the state, or... you know... yeah, thinking of the state, or executing the state, or it's executed.
A: So there may be a few more, but basically, that maps to each commit, right? And so then the validation of each commit maps to the CI/CD for that commit. And then what we need to do is we need to sort of preemptively validate the changes of each developer, in each permutation of each commit in each developer's feature branches, against each other, and that will give us sort of the best... the edge of the field, right? So this isn't upstream; this is bleeding edge, basically, right?
A: So this is: what is the latest auto-validated version of all of the feature branches, tested in all of the permutations against each other, right? So this basically generates your dev build from all your developers, with all your feature flags, essentially, right? And, you know, your compatibility matrices, so that you can release different subsets to different users. And you are using these strategic plans that are overlaid on top to do the analysis, pre-execution, of each of these system...
A: Contexts, to say... to say, hey... basically: is this within my risk tolerance, right? You're proposing... so, strategic plans... strategic plans both propose and...
A: I do not think I did... all right, so, yeah. So strategic plans: they propose and they vet things. Okay. So this maps to, basically, the risk tolerances of A/B testing these feature branches of the developers against each other, because you have certain times when you need to test things in live environments, based on resource constraints. So say maybe you have staging, dev, prod, right? And, you know... maybe you want to dogfood... you know, user Bob, with...
A: Which means you have to cherry-pick. And this is what I meant when I said, like, "inclusive": it means that if you're going to cherry-pick all the permutations, then you have to apply the ones that come before that. And so you do that for all the permutations, and then you run all of the tests, right? And so... so here's the thing, right? Not always...
A: Are you going to catch... you're not going to catch everything based on that, right? Because some people may not have written tests yet. Now, how do we mitigate that? Well, we're going to, you know, obviously try to keep our... you know, the locality of our focus, right, to these function-scoped objects that focus on the data and the transforming of the data, right? And so we're going to try to wrap up the docs and the tests and the execution into these single files.
A: You know, like with our ADRs, which are reStructuredText files right now, or our, you know, our Python files. I think, ideally, we'll end up with a situation with our Python files where we fix the syntax highlighting in Vim and VS Code to support, like, this nested syntax highlighting when editing, so that, you know, basically, you can edit your...
A: You know, your Python file as your unit of reuse, and then, you know, your function within there can be plugged and played with anything else. And, yeah. And then, obviously, all of the wrappers around existing specs and protocols and things, which can be implemented themselves as just little functions. So the goal is, like... completely... just, like, as modular as possible of a design, right? And compatibility with existing interfaces.
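One common way to get docs, tests, and execution into a single Python file is doctests, where the docstring examples double as the test suite. This throwaway sketch writes such a file and runs its embedded tests; the module and function are illustrative, not DFFML code.

```shell
set -e
# A single file carrying its own documentation, tests, and runnable code.
cat > single_unit.py <<'EOF'
"""A function whose documentation is also its test.

>>> shout("hello")
'HELLO'
"""


def shout(text: str) -> str:
    """Upper-case the input.

    >>> shout("hi")
    'HI'
    """
    return text.upper()


if __name__ == "__main__":
    import doctest

    doctest.testmod()
EOF

# Run the embedded doctests; exits non-zero if any example fails.
python3 -m doctest single_unit.py && echo "doctests passed"
```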
A: Operations... yeah, there's some bad inconsistencies with the use of plurality, or not, in here. So then, now we see here, and now we're going to do... oh yeah, I was going to pull up... that's how I got distracted. I was going to pull up the...
A: Okay, yeah, so here it is. Basically, if you go here, this is what we did. It's basically, you know: if you want to contribute it to the repo, which is what we're doing (we're putting it upstream right now), then, you know, you better put it in this directory. So we're gonna say peer_id.
A: So now, what we're doing here is we're going to check out our branch, right? So: dffml operations peer_id, and we'll preface it with alice.
B: Well, everything is going to be... basically, everything is basically... all right, check out the peer DID.
A: Okay, and now we jump on in there. We're going to give it the old git status. Oh, actually... in here, when you create an operation, you need to remove the .git directory. So we need to update the GitHub issue.
A: Scale common, or... wait, no. So: docs... docs, tutorials, models, package... add note about git.
A: Upstreaming. Okay, and here's the body. And this was introduced in docs... okay. So our notes: introduced in... so, basically, when we did...
A: When we did this service dev... okay. So when we did this... another "add note about git deletion". So... git. So what we did here is we did git log -p -- ../../dffml/service/dev, okay? So this is going to tell us, you know... what are the last patches (-p)... which are commits which modify this file within our git directory, right?
A: I forgot I did this. It keeps it out of my hand. Okay, so...
A: So I think we will use the strategic plans to, yeah, do sort of clustering models, anomaly detection, whatever. And I think we can say, like, you know... oh yeah, okay: we were gonna snapshot the entire environment when we run our tests, and when we run a system context, right? And so you can...
A: Basically, the goal here is to allow for configurability in the amount of data collection, as well as the amount of, you know, configurability, or, like, you know, hooks into things, right? So we can basically run the orchestrator... whatever orchestrator we run, we can set it up so that it...
A: It somehow allows us to reach our hooks in there and save that state. Because if we can save that state, we can inspect that state, sort of like we're using the strategic plans... it's like the generic brain of the fuzzer, right? And then we can try new... or we can see which things moved strategic plan outputs. And if you're moving the strategic plan output, then you're...
A: If you're moving the strategic plan output, then that means that there's some kind of... it might mean that there's some kind of anomaly, right? And so, basically, if we had something that went and detected all the files that were created within a test case run, and we saw that this .git directory is now created, right, we could say: hey, anomaly! There's a whole bunch of files that were created, right? And so this is sort of like our adaptive sandboxing technique.
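The "detect files created within a test case run" check can be sketched as a before/after snapshot of the file list. The misbehaving "test run" here is a stand-in that creates an unexpected .git directory, mirroring the example above.

```shell
set -e
# Snapshot the file list before and after a run; anything new is flagged.
work=$(mktemp -d) && cd "$work"
before=$(mktemp) && after=$(mktemp)
find . | sort > "$before"

mkdir -p .git && echo junk > .git/config    # stand-in for the misbehaving run

find . | sort > "$after"
# Paths only present afterwards are files the run created.
created=$(comm -13 "$before" "$after")
echo "$created"
echo "$created" | grep -q '\.git' && echo "anomaly: unexpected files created"
```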
A: So we sort of build these... yeah, we're basically building these profiles of the trains of thought, as if they're, like, workloads. And then we're sort of applying this adaptive... the adaptive sandboxing is this anomaly detection, which, you know, decides basically what to do: if there's an anomaly, it detects the anomalies, and it decides how to respond to them on a case-by-case basis, by feeding into other strategic plans, which take the outputs of the previous strategic plans as inputs. And so, sort of, each one...
A: If you look at it, it forms, like, a layer... I think it forms basically a layer in a neural network, if you look at each of them. And so, yeah... but I'm not sure about that.
A: Into the neural network weights... is it in there? I'm not sure if it's worded in this way, which is why I'm struggling with this. I think it's in too many bits and pieces all over there, so I'm trying to word it in one cohesive thing, or pull it out of here. So you can... you can do this encoder... and so you can create an encoder by doing essentially this, with these multi-output models, where the outputs... you do all these permutations... oh yeah, this is what it was.
A: So you do all these permutations of multi-output models for the feature engineering, and with the hyperparameter tuning. So: AutoML on these strategic plans.
A: Is you basically just permute over everything, building models where applicable: build every applicable model that shows correlation between all I/O, looking at all I/O as in all the way down through every system context, where all the permutations are defined by the calculated risk and risk tolerances, with applied overlays for any specifics within the top-level system context. So, basically, like: don't spend too much money on...
A: You know, CSP time or something, but go experiment, right? So, yeah. And how did we get on this? Basically: "introduced in blank". So we were attempting to explain the A/B testing, and how we would, you know, cross-validate feature branches, combined with A/B testing to users, to understand what individual commits introduced bugs. So, basically, to go... you know, sort of, like... we're trying to understand... basically, we do the risk...
A: We do our risk assessment and allocate to users appropriately, where users... use cases... are data flows, are executed system contexts within the chain, which have low risk, or are within an acceptable risk tolerance, right? So this is your, you know: okay, I'm gonna try, right? I'm, like... okay, so... okay, so...
A: So, yeah, what are you gonna do? So, basically, you're gonna go try. So you wanna test this... this 10-gigabyte, you know, 20-gigabyte, whatever thing, right? Okay... or, no, what was it? We're on a 100-megabyte disk, right? And so then, if, you know, the limit was 200 gigabytes, it's not going to work, right? So we somehow generated a system context, right? So we generated a system... we thought a thought, right? We think, we thunk a thought.
A: A new system context in the hypothesis state, right? Without, like, the executed values. Or maybe there's, like, an idea, a hypothesis, where you're proposing some outcome, right, and that's the running through all the strategic plans to get the output results. And then there's, you know, the executed value: the ground truth, as declared by whoever signed that DID, right, or is associated with that DID. And this is what allows you to pick and choose what data you want to work with in this environment, as if you are, you know, for example, an AI that is sort of just floating out there through all of these devices.
A: You can basically say, you know: this is the data that is applicable to my strategic plans, and therefore this is the data that I need to consume, and these are the DID peers that I need to traverse, right, to perform whatever task I'm trying to perform at whatever particular time, in guidance with my strategic plans, which have this signed provenance information included via the verifiable claims. So, this, you know... so we were talking about, like, the 20-gigabyte limit.
A: So, basically, you've got this mesh of devices out there, and you say: hey, somebody... somebody go try this experiment, right? Well, it's like: well, what's the risk? What's the cost? Okay, what's the analysis? Basically, like: I need something with, you know, a 100-megabyte drive that can try to create this file. So whoever can go, you know, execute me that flow: I would be really happy to hear the results. Thank you very much.
A: You know... so you send it out there to the DID chain, wherever you want to do that, and we have this generic proxy infrastructure on top of that, so you could send it wherever you wanted, right? But you're just using the DID, and maybe, hopefully... this, optionally, you know, encoded with CBOR, to transmit these things around over different protocols and transport mechanisms, so as to create this, like, mesh network across, yeah, protocols and transport mechanisms.
A: So, whatever... same thing. So then you get into, like, this issuing of reward, and, like, incentivizing certain behaviors, because you want to basically, like, influence the agents within the network to, you know, produce metrics that are beneficial to some of your strategic plans.
A: So this is, like, you know, when you're detecting alignment, and when you're, you know, killing two birds with one stone, sort of thing. So if two edge agents are executing trains of thought which are, you know, similar by some measure of similarity... there's some measure of similarity: basically an encoder model... an encoder-decoder model. I'm pretty sure that's going to encode the system context to some sort of, like, you know, stream, right?
A: So we're going to be able to represent these... we're going to need to represent these system contexts as some stream. Some sort of thing that... maybe it's, like, some sort of, kind of like, DNA thing, where we can sort of, you know... I think maybe it's... it's something where we can, basically, like, you know, really understand...
A: You know, what makes these things similar to each other, right? And I think what makes them similar to each other is really just... oh, I guess you're probably just going to run strategic plans over them that say what is similar within this context, just the same way as you're going to say: what do I want to do for this deployment environment, for this top-level system context, right? So I'm just, like, running from the command line or whatever.
A: No, the tests do... the test run... I am not sure. Yeah, I think the tests didn't clean up that sticky thing. I think the tests...
A: All right, so what are we looking at? What's the damage? So: a couple of files here. So...
A: So, what do we got? So we have our plugin, and let's open up... so... we should safely... okay. So let's look at the README; it's probably just garbage. Yep, okay, there's nothing in here. So, usually, you're probably used to seeing setup.py files.
A: So now there's... there's something going on where we're shifting the packaging ecosystem. And Python has been changing over the past few years, and they're redoing things, and there's this sort of general acknowledgement in the community that, hey, you know, we really shouldn't be executing code when we go and build new builds from random dependencies that we grab off the internet, right? So: let's stop using Python files, and let's switch to, you know, maybe something a little more static, like, basically, a TOML file.
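The static-metadata shift being described looks roughly like this: package metadata lives in a declarative pyproject.toml instead of an executable setup.py. The project name, version, and field values below are illustrative, not the real project's file.

```shell
set -e
# Declarative package metadata: inert data that tools can read without
# running any project code (unlike an executable setup.py).
cat > pyproject.toml <<'EOF'
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "dffml-operations-peer-did"
version = "0.0.1"
description = "Illustrative metadata only"
requires-python = ">=3.7"
EOF

# No code execution needed to inspect it.
grep '^name' pyproject.toml
```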
A: Here's, yeah, you know, all our random little package data. So, you know, I think that we're actually going to move...
A: I think it almost makes sense to move even farther away from this, because we want to... I think we're going to try to keep this as lightweight as possible, which basically means we're gonna kill all the packaging stuff, and we're basically gonna assume that Alice is gonna package this for us down the road. So we're just gonna get rid of all this. We're actually not gonna use any of this at all.
A: We're not using these. So, the only problem with that is: then you're tied into this build tool. Alice is a build tool, okay? So you can't basically just directly point it at anything, because you're stuck using the build tool... even though we love the build tool.
A: So, in that case, it doesn't really matter, because what we'll do is: we will have the stuff basically just get auto-generated, and you can commit it to git, right? There's no reason you can't just commit it to git, and we can do, you know, deltas. But we're just gonna delete it for now. So we're basically just gonna wipe out all this stuff. We actually don't want any of this.
A: Peer ID... so, yeah. Eventually, we would have something that... let's see... eventually, we would have something that just, like, automatically wraps... basically, eventually, Alice will be able to automatically wrap any library for us by doing introspection on the AST. So that would be, you know, the next step here, right? Basically, you know: determine what to build... build the data flow, which is, you know: analyze source code...
A: You know: understand APIs... exposed APIs; understand unexposed APIs; and, if you could, rewrite it programmatically and refactor it programmatically to expose those unexposed APIs, and then, you know, map those to your data flow construct. And then you can do synthesis or dynamic execution...
A: With either of those. So, great. So...
A: All right. Okay, okay, okay... all right, all right. So much information. Okay, okay, okay. I'm just trying to get everything out here, because I know the notes are a little bit hard to... I'm sure this is not that much more coherent, unfortunately; this is just live editing, basically... thoughts. Okay, so let's go ahead and install.
A: We need to be able to just throw a giant base64-encoded blob in there, and then we'll be happy, because, at a minimum, we throw a full data flow in there.
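The "giant base64-encoded blob" idea can be sketched like this: serialize a document, base64-encode it so any field that carries an opaque string can carry it, then decode it back out. The JSON here is a toy placeholder, not a real DFFML dataflow.

```shell
set -e
# Toy stand-in for a serialized dataflow document.
cat > dataflow.json <<'EOF'
{"operations": {"example": {"inputs": {}, "outputs": {}}}}
EOF

# Encode for embedding (strip newlines so it is a single opaque token).
blob=$(base64 < dataflow.json | tr -d '\n')
echo "embedded blob: $blob"

# Round-trip: decode it back out wherever it ends up.
# (`base64 -d` is the GNU coreutils flag.)
echo "$blob" | base64 -d > roundtrip.json
cmp dataflow.json roundtrip.json && echo "round-trip ok"
```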
A: Yeah, so my understanding is that, you know, there's things that interpret this, and we might be one of the things that would interpret this. Well, maybe that's how it works: we're interpreting this, and then we're going to go to that endpoint... I'm not exactly sure... serviceEndpoint, yeah. I think that does mean... I think this is where we're going to say: hey, you know, here's...
A: All of these become inputs... all of these become inputs, and then... all right. I got stuff now.