Wow, this is complicated. Okay.
So, where are we? We're going to start with a little bit of context today. Well, this whole thing is supposed to be context, so we're going to copy this image, because this image has some context.
Where should we put this? Okay, so I went and restructured some of these work-in-progress tutorials, and I also had a thought: as we look across these strategic plans and build these neural networks and things...
Okay, that's right, I'm supposed to use the other desktop! All right, here we go. Let's go ahead and paste this in here. Background... maybe this should go here: preface, introduction, context. Here we go.
Yeah, so this is the lay of the land. Can we see the screen? Yes? Okay, perfect. What we did here is basically draw up, at a high level, how we're going to connect things. Once again, I apologize for this monitor, but I'm going to need all this screen real estate.
So what are we doing? The first order of business is to understand what the code is and what good code looks like; the second order of business is how to run it; and throughout, how to secure it. You basically have this web of intertangled strategic efforts, or projects, and we're going to tie them all together as we split out into second- and third-party plug-ins to facilitate that ecosystem.
InnerSource, yes, the InnerSource use case. So this is where we can... nope, this is the polyrepo pull model, dev tooling. Okay, so there's another one. Search for InnerSource: it's there. We'll have to fix that later.
But basically, this is our CI across projects: double-checking that the plug-ins are adhering to these development practices, which we'd like to see of at least the second-party ones, and then reporting out on the health of the third-party ones. Because as a community, for end users, we want to make sure we're providing...
...the configurability around these things: the ability for them to create third-party plug-ins. And when they do that, we need to be able to give them the tools to point it at their own stuff, to gamify their increasing of good properties. So this is like your OpenSSF Security Scorecard. And so, OpenSSF...
A
So,
let's
go
actually
we
can
go
right
to
dfml
and
we
can
look
at
the
open,
ssf
best
practices,
so
this
used
to
be
called
the
core
infrastructure
initiative,
core
infrastructure
initiative,
badging
program-
and
it's
since
been
adopted
by
the
open
ssf,
which
is
the
open
source
security
foundation.
So basically... let's also go to the InnerSource Patterns. Okay, so there's this set of docs, the InnerSource Patterns. This is really good stuff for understanding InnerSource. And there's a little InnerSource demo on the...
There's a little InnerSource demo here, where we run this... IBM, just IBM? Wait a minute, this is a different one. Okay, we've done this one multiple times! Oh yeah, this is because we're on the master branch docs, which are now the main branch docs, which are now here under examples. Okay, so SAP put together this portal. All right. So once again, let me bring the map back up. Let's have the map; we're going to get lost without the map.
Okay, so what are we doing? We're building Alice. Alice is the AI software architect, so she's going to need to understand the code, and understand why the code is written the way it is, not just functionally how to execute the code. And to do that...
...we basically need three things. We effectively need AI/ML, edge, and web3. If you think about it conceptually, at a high level: what are the things that come together to make this thing we're going to make, and how are we going to knit them together? That's what we're talking about right now.
The edge portion of this we're going to break up into two... well, three things really: AI/ML, edge, and... goddammit, I keep forgetting this every single time I say it. AI/ML, edge, and...
What is the last one? I just forgot it again. I wrote it down; whatever, it doesn't matter. So we're basically just going to assess that: we're going to look at the code, we need to understand how to run the code, and then how to reorganize it. Okay, so: web3. Web3, edge, and AI/ML.
Edge, yeah. I keep coming up with sub-things of it when I try to remember it. But those are, I think, the appropriate segments, because it's this cross between IoT and servers. Basically, what we want to do is create this distributed compute system with ML built in, which is also this giant data feedback loop, so that we can analyze anything that we do and then do it better in this distributed setting.
So basically that gets us back to: we need to go understand the quality of the code that we're going to run, which is where the OpenSSF initiatives feed in, and InnerSource feeds in. We need to know how to run it, so debugging...
So one of the first things we're going to do is try to teach her how to basically run Unix commands, and then, after that, the goal with the next thing we'll do is plug her into the command line, to basically do this...
...fully connected dev model thing. Basically, we're sending the commands we run and their output to her, and she's running locally, and she can kind of tell us, based on the things she sees externally, where we're going and whether we're going in the right direction or not. I think I wrote something on here yesterday, which was: say that you and another developer are working on a project, and the database goes down, and you're querying the database.
You're writing some code that queries the database, you're changing some things, and now you're not sure whether what you're changing isn't working because the database is down or because you wrote the code wrong. But if both your machine and your buddy's machine were sort of talking side by side, without you doing anything, then they would know instantly, and you would never be wondering: is it my fault, or is it something else...
...that's happening right now? We're basically going to extend that concept to the point where we sort of look at all of the history of anything that's ever been tried, and then kind of say: yeah, that's already been tried.
You might want to work in another direction; or, if you do want to keep going, here's the specific vein that hasn't been tried. That way, you're always exploring new paths, and you're sharing knowledge as fast as possible. So why are we working on this DID stuff first?
The reason why is that we want to build this memory. The first thing we're going to do is train her on all these terminal commands, and then, basically, we're going to feed in as much data as possible, and we want to make sure that we're not constantly moving that data format around from place to place, because this is one of the original things we had to deal with.
It's like: okay, well, you're doing these machine learning applications, and first you're prototyping here and you wanted JSON, and then you wanted CSV, and then you've got to put it in MySQL. So let's just not deal with that. I mean, we have the code to deal with it, but let's just not do that; let's just start with it already on this web3 stuff, and then we're just not going to have to deal with this.
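As a toy illustration of the format churn being described (the records and field names here are made up), even a trivial JSON-to-CSV hop silently changes the data: CSV has no types, so an integer comes back as a string. This is the kind of conversion code the speaker wants to stop maintaining.

```python
import csv
import io
import json

# Hypothetical records standing in for training data.
records = [
    {"command": "ls -la", "exit_code": 0},
    {"command": "git status", "exit_code": 0},
]

# Prototype phase: serialize as JSON.
as_json = json.dumps(records)

# Next week someone wants CSV...
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["command", "exit_code"])
writer.writeheader()
writer.writerows(records)
as_csv = buf.getvalue()

# ...and round-tripping back loses type information:
# exit_code is now the string "0", not the integer 0.
back = list(csv.DictReader(io.StringIO(as_csv)))
print(type(back[0]["exit_code"]))  # <class 'str'>
```

Starting with one canonical storage layer, as proposed here, avoids carrying this lossy glue code around.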
That's the whole point. So basically, what we're going to do is extend the concept of data flows and really think about it as a manifest. There are some docs in here on the manifest format, but it's basically a data flow with some docs. How you package it up, whatever you call it, doesn't really matter. The point is...
...you've got to put everything in one place, and this is the system context. This is what we're going to encode into this sort of DNA, and the reason we're encoding it as DNA is because it's contextual. So what we're going to do is figure out...
First of all, whether these DID and distributed web node concepts are things that we could use here; I'm pretty sure they are. And if they are, then we're going to go with the blockchain stuff, and I'm pretty sure this is some kind of blockchain stuff. Basically, you don't want to put all the data on the chain, because this is a distributed network; let's not make other people in the network pass along a bunch of data that doesn't need to be passed along.
So what we want to do is use this like a giant graph database, or like a linked list. These decentralized identifiers say: this thing is this here, and over here it's something else. You basically build these giant graphs, and then they propagate through the network via the distributed web nodes. At least, that's what I've gathered so far, and that's what we're going to go...
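A minimal sketch of the graph-of-references idea just described, with an entirely made-up identifier scheme (a real DID has a method, a resolver, and more structure): nodes are content-addressed, and a node embeds only the identifiers of the nodes it points to, so the network passes small references rather than the data itself.

```python
import hashlib
import json

# Toy in-memory "network": identifier -> content.
store = {}

def put(content: dict) -> str:
    """Store content under a content-derived identifier and return it."""
    blob = json.dumps(content, sort_keys=True).encode()
    # Illustrative only; not a real DID method.
    identifier = "did:example:" + hashlib.sha256(blob).hexdigest()[:16]
    store[identifier] = content
    return identifier

# A leaf node holding (a reference to) some data.
leaf = put({"data": "terminal command history"})

# A root node that references the leaf by identifier, linked-list style.
root = put({"manifest": "system context", "references": [leaf]})

# Resolving the root yields identifiers you can follow, graph-style.
resolved = store[root]
print(resolved["references"][0] == leaf)  # True
```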
...try to run some code today to find out. Now, what we're going to put in there is the manifest, the system context. And what we found out is that if we maintain this concept of the system context sort of all the way down, like turtles all the way down, then everything becomes trivial, and it's great. It's the same...
...as when you use async all the way down: then you don't have to deal with stuff, which is why we're here. We had to write this because there was nothing that used async all the way down at the time. So here we are. Okay, so basically we're going to explore the DID stuff, and we're going to put the manifest in the content of the thing that we're distributing.
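The "async all the way down" point can be sketched with nested async context managers; the layer names here are made up, but the shape is the one being described: when every layer speaks the same async protocol, composing layers is mechanical, with no sync/async boundary to bridge.

```python
import asyncio
import contextlib

@contextlib.asynccontextmanager
async def network():
    # Stand-in for an async resource at the bottom of the stack.
    yield "network-ready"

@contextlib.asynccontextmanager
async def orchestrator(net):
    # A layer above, built on the layer below; same entry/exit protocol.
    yield f"orchestrator-on-{net}"

async def main():
    # Composition is just nesting; cleanup unwinds in reverse order.
    async with network() as net:
        async with orchestrator(net) as orch:
            return orch

print(asyncio.run(main()))  # orchestrator-on-network-ready
```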
However, via these distributed web nodes, with the identifier being the DID. Then, at that point, we're going to work on the data-flow-as-class pull request a little more. And what that data-flow-as-class pull request is doing... let's bring it up.
Yes, I'm sorry, I have not had time to review. I've got to pitch this stuff to my manager by next week, so this is why we have to do a bunch of research. Obviously this is slightly different stuff, but I want to make sure that this stuff is there as a foundation. Yeah, this is pretty much it.
All right, okay: data flow as class. So here's the ADR. What I was doing with this is: there's a bunch of code in here, but it's not rendering, and I opened an issue with GitHub about it, basically because of the consoletest stuff, and I tagged it with test. So what I was doing is writing the ADR and the test all in one doc like this. So the code is right here.
So the code is right here, just inline in a code block, and Vim will do the highlighting for you, and I think other editors will too, which is nice. So basically, yeah, I wrote the ADR and the thing all in one. And this is where we want to go: this is like notebooks, like the stuff that we had done recently.
If we do this, then the unit of reuse is a file. And we know that we can easily work with a file: humans are able to easily consume one file, in one context, at a time, especially if you don't have a 4K monitor, which makes it really hard to consume more than one file at a time. Okay, so what is this, basically?
So, the data-flow-as-class stuff. Quick recap: what are we doing? We're going to put all the data on the blockchain, and then after we do that...
Actually, we're not going to put all the data on the blockchain; we're going to put the references, the metadata, on the blockchain, and then reference the data via the data flows, which we're also going to put on the blockchain. And then you can basically instantiate whatever methods you want to go get the data, but this thing is going to tell you where the data is, how you should go get it, and how you should pass the credentials that you have, or may need, to the things, so that you can go get it, along with any other options. And what we'll do is probably store all of our data in... what are we going to...?
I think we're going to just do it all in memory right now. We just need to know that this works. We can do it all in memory, but we need to know that it works, and then we can pickle it; we'll just pickle it. Simplest approach first. And after we do that, the next thing we're going to need to do, and why I was saying pickle it, is...
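The simplest-thing-first approach just described can be sketched in a few lines (the store layout here is invented): keep everything in an in-memory structure while proving the design works, then snapshot the whole thing with pickle.

```python
import pickle

# Stand-in for the in-memory store; field names are illustrative.
memory_store = {
    "inputs": [{"cmd": "ls"}],
    "contexts": {"ctx0": "running"},
}

# Snapshot the entire store in one call...
snapshot = pickle.dumps(memory_store)

# ...and restore it later, byte-for-byte equivalent.
restored = pickle.loads(snapshot)
print(restored == memory_store)  # True
```

Note that pickle is only safe for data you produced yourself; it is a convenient snapshot format here precisely because everything stays inside one trusted process.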
...that we want to be able to take these input networks, take these data flows, and snapshot them at any point in time, because this is what lets us understand what one thought is, one moment in thought. If you're following the thread, you'd see that there are some analogies between the system context and a chemistry equation.
So basically, you can think of it as us making everything like functional programming, where we're constantly tick-tocking from one state to the next. Each system context defines a state, and it defines the inputs that are within a state: basically all of the data, as if you were inspecting something with fuzzy instrumentation at runtime. At one instruction...
...what is all the data? Basically, traverse all the structs, traverse all the fields, tell me their type information. You could be that detailed if you wanted to. Okay, so basically we're going to put that on, and then we're going to run it. Okay. So what do we want to do? The data flow API is not ergonomic.
Many have commented on this, including myself. It's just no fun. So what we want to do instead is make something that's more familiar; say, for instance, a regular class, for god's sake. So that's what we're going to do. Basically, you'll create the class and you'll... okay, "class instantiation", that was a really... "double context entry on orchestrator": that's not possible, it has to be async.
Okay, so you'll create the class, you'll enter the context of the class, and then you can call the methods. Because we need context entry everywhere, this class instantiation will become the context entry. Maybe that's what I meant to write; it's just weird that I wrote it that way. Brain fart. Okay, so basically we're going to make it so that when you enter the class, it starts running the data flow.
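A hypothetical sketch of the data-flow-as-class ergonomics being proposed (none of these names are the real DFFML API): entering the async context stands in for starting the flow, and ordinary method calls stand in for invoking operations on the running flow.

```python
import asyncio

class MyFlow:
    """Illustrative flow-as-class: context entry starts the flow."""

    async def __aenter__(self):
        # Stand-in for kicking off autostart operations.
        self.running = True
        return self

    async def __aexit__(self, *exc):
        # Stand-in for shutting the flow down cleanly.
        self.running = False

    async def get_user_input(self):
        # Methods are only valid while the flow is running.
        assert self.running
        return "hello from the flow"

async def main():
    async with MyFlow() as flow:
        return await flow.get_user_input()

print(asyncio.run(main()))  # hello from the flow
```

The design point is that `async with` gives every caller the context entry the orchestrator requires, without them having to know about it.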
Any no-input operations start running. So if we are looking at, for example...
Okay, so if we're looking at these... get user input, this is perfect. Get user input is an operation that is an autostart operation, so as soon as you start the flow, as soon as you add that first set of inputs, which is usually what kicks off the flow, at least the way the flow gets kicked off right now through all the high-level APIs...
...you need to add some inputs to the context, basically, to kick off the data flow, because there has to be an active context. So I think we need to go patch it so that... I'm not sure.
Yeah, I'm not sure if you can add a context... yeah, you should be able to. You can add an input set with only an input set context, and I think it will add the seed inputs, and if there are no seed inputs, it will autostart. It's just that you'd have to instantiate the class, so it's not immediately obvious. Or you could do the dict with a key and... okay. So basically you can do it.
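The autostart semantics just described can be modeled in miniature (this is a toy model, not the DFFML scheduler): operations with no declared inputs are eligible to run the moment a context is added, even when the input set carries no seed inputs at all.

```python
# Toy operation registry; names and shapes are illustrative.
operations = {
    "get_user_input": {"inputs": []},            # autostart candidate
    "process_input": {"inputs": ["user_input"]},  # waits for data
}

def started_on_context_add(seed_inputs):
    """Return the operations that can run once a context is added."""
    return [
        name
        for name, op in operations.items()
        if not op["inputs"] or set(op["inputs"]) <= set(seed_inputs)
    ]

# Adding a context with no seed inputs still kicks off the
# zero-input (autostart) operation.
print(started_on_context_add([]))  # ['get_user_input']
```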
It's just, once again, not very ergonomic. So: get user input. Okay, so basically we're going to say, essentially, when we do this method thing...
We have that. Okay, I hadn't thought about this stuff yet; now I'm thinking about it, which is why it's taking... I mean, it always takes a long time. But we have this issue for generic save and load. We basically want to take any file, any stream, anything, and be able to save and load something, which I'm now thinking...
So in that case, and this has to do with this web3 thing as well, because you're basically pointing it, and this has to do with the turtles all the way down: if you say, I'm looking at a DID, or basically, here's a DID, I want you to go run the data flow that gets me the thing referenced in it.
All right, you guys can't see the brightness; that's not how that works for you. I'm turning up the brightness on my monitor.
It's been, you know, not quite two months. Okay, so this is what I was talking about. Look at this; this is fantastic, isn't it? I love it. (Oh hey, how's it going? Are you making dinner? Cool. All right, cool, thanks.) Okay.
Okay, yeah, I can't decide. I think it would be nice... the other thing we've talked about is, I guess we could just go patch Vim, but I don't want to patch Vim's code block handling here. This is my example.
This just needs to be fixed and then it's fine. And then for, probably, VS Code too, or Emacs, or whatever, that would be fine. As long as it's context-aware and it can jump from format to format, that would be ideal. So: Python in reStructuredText is probably not... Python in RST is not good. If we could have RST in Python...
For some reason, nothing else seems to work, I don't know why; at least not for me. This is the only thing that works for me, and it works really well, because watch this: every time you save the file, boom, rerun; boom, rerun.
I got about halfway through that; it's been like eight hours on it. I stayed up very late and did not finish it. It needs a lot of refactoring. The problem with it was: it depends on DFFML, and then DFFML depends on consoletest. But I think if we can get this data-flow-as-class stuff going... the consoletest stuff has heavy plug-in use that had to be refactored, and I was like, okay, I don't want to have a... we're...
This is where those strategic plans and things come in. Right now, we have one set of output operations per data flow; with the data flow as class, we're going to have a different set of output operations for each method, and each method is going to be treated as, like, a deployment environment or something. This is where you may be calling from the CLI, or from the HTTP interface, or from, like, a chat bot.
So you see it's being extended: this package contents output is being... we're hijacking the output of this, making it a value within a key-value pair, and then we're going to interpret that key-value pair as a repo object, where the directory is the key, and then we're going to run this operation, which accepts a repo object. So we do a transformation.
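The transformation just described can be sketched as follows; the `Repo` class and the `package_contents` mapping are stand-ins for illustration, not the real objects. One operation's output is treated as a key-value mapping, and each pair is reinterpreted as a repo-like object with the key as its directory.

```python
import dataclasses

@dataclasses.dataclass
class Repo:
    """Illustrative repo object: keyed by directory."""
    directory: str
    contents: dict

# Hypothetical hijacked output of a "package contents" operation.
package_contents = {
    "/src/plugin-a": {"files": 12},
    "/src/plugin-b": {"files": 7},
}

# The transformation: key-value pairs become Repo objects that the
# downstream operation (which accepts a Repo) can consume.
repos = [Repo(directory=k, contents=v) for k, v in package_contents.items()]
print(repos[0].directory)  # /src/plugin-a
```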
By focusing on the data, we can do a transformation of the data to get it from one interface to another interface. As far as the data model is concerned, once you have it in the interface of the thing that you want to receive the new data, then you're good. So basically we're doing a type conversion here, and we're going to have introspection on all of the data types via the Python...
...typing system. And we finally figured out how to make that work: it's an extension of the stuff we're going to do for locality, where we basically just add the type as a parent input, and then you don't need to have definitions as a separate object; you're just adding them as a type. And what are all the definitions of these things? Well, let me just traverse the parents, or I can maintain a reverse map, for the sake of the input network.
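A small sketch of the type-as-parent-input idea, with invented names: each input carries its definition as a parent rather than as a separate object, and a reverse map answers "which inputs have this definition?" without re-traversing everything.

```python
# Each input carries its type/definition as a parent; names are illustrative.
inputs = [
    {"value": "ls -la", "parents": ["ShellCommand"]},
    {"value": 0, "parents": ["ExitCode"]},
    {"value": "git status", "parents": ["ShellCommand"]},
]

# Reverse map maintained for the input network: definition -> values.
reverse_map = {}
for inp in inputs:
    for parent in inp["parents"]:
        reverse_map.setdefault(parent, []).append(inp["value"])

print(reverse_map["ShellCommand"])  # ['ls -la', 'git status']
```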
So, oh, where were we? Okay, we roll it back. So: the data model. Okay, so yeah, see, this is pretty sweet. Basically, what you can do is take one... so now think of it as if this is the upstream package on the right, and now I made a fork.
I made a fork, and I want to add my fun little new functionality to the fork. Well, this is what we've been doing for a while now, and it's obviously kind of slow. I don't want to deal with merge conflicts, you don't want to deal with merge conflicts, nobody wants to deal with merge conflicts. So you put your overlay, your pull request, your patch set, and you translate your upstream and your downstream into these...
And then you have the orchestration environment, which is basically the orchestrator: how do I run this thing? How do I run the top-level context, if I'm going to kick this thing off? That thing, the top-level context, is always known, because there's always a caller, and the caller defines the top-level context. So if I am the HTTP service and I'm going to call a data flow, the caller is the HTTP service, and it needs to overlay...
There needs to be a communication mechanism where we declare what inputs a data flow needs, as if the data flow were an operation. And we're going to do this by making a manifest for everything, because the manifest will declare the inputs and the outputs, as well as objects created sort of as side effects, or potential edge cases, or fault-handler-type cases, where, if you cancel in the middle, then maybe there's this whole other data flow...
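A minimal sketch of such a manifest and its validity check, with invented field names: the manifest declares the inputs a flow needs, and a context is invalid when a required input has neither a supplied value nor a default, matching the rule described below about use-default-value.

```python
# Hypothetical manifest for one data flow; field names are illustrative.
manifest = {
    "inputs": {
        "repo_url": {"default": None},        # required: no default
        "token": {"default": "anonymous"},    # optional: has a default
    },
    "outputs": ["scan_report"],
}

def context_valid(supplied: dict) -> bool:
    """A system context is valid only if every input is satisfied."""
    for name, spec in manifest["inputs"].items():
        if name not in supplied and spec["default"] is None:
            return False
    return True

print(context_valid({"repo_url": "https://example.com/x.git"}))  # True
print(context_valid({}))  # False: repo_url missing, no default
```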
...that's going to run. Or you basically infer that this data flow is going to run, because you see that, say, a database drop is going to be executed in the event of a cancel. Whatever this is, it's bad; you don't want this to happen. This is why we want to understand this. So basically, the top-level context is going to overlay...
Your base flow is a part of the system context. The caller is the top-level system context, and it introspects something within the system context: it looks at the manifest, and the manifest says, here are my inputs. So these are probably definitions or something...
...statically defined as a value, or the value set to use-default-value. And if there is no value for an input or config of a given operation, then the system context is invalid. And there is essentially this allowlist of what things can come from what origins. Now, remember the origins.
When we came up with the origins, we basically said everything is from the seed origin, but we wanted to have this mechanism by which the origin could be changed, so that you could understand: what trust property should I instill on this thing? Because the origin is something that is not going to be modifiable. The origin is basically, think of it like your... okay.
We can have some sort of mode where you have provenance checking, to tell you whether the environment that generated the input was allowed to modify the origin, or whether there was a caller which is maybe attested, and then we know that the origin wasn't modified. But the point is, this allows you to understand: was my input validation done? Where are all my untrusted sources? Okay, so what are we going to do? Where were we?
There was something good happening there. So: the caller and the manifest. The manifest defines the inputs, and the inputs are essentially whatever has been put on the allowlist, and that means we can map it to the thing where we had a requirement for our manifest. Okay.
So this is the manifest schema, and there's also this shim; some of you may remember the shim. The shim is the answer.
And you'll remember that we wrote this for our second-party and third-party CI jobs. It took us a long time to knit this one together; we've talked about this second-party thing for a long time, and I'm obviously being a little bit pedantic about the way that it gets done. But who wants to do it twice? Not me, not any of the rest of us. So why are we here?
Okay, the shim: namespaces, namespaces on format names. So this manifest shim and the manifests work together to define, essentially, an extensible file format. This is this universal blueprint, this thing that can allow us to proxy to arbitrary things, and it's not one thing.
If we can come to an agreement on a set of characteristics that any given blob of data should have, or any given blob every once in a while in a stream of data should have, then we can have a way of understanding any data stream, basically just by looking at a few bytes of it. Well, maybe more than a few bytes.
We know that the people who wrote JSON Schema did a lot of good work, and we should learn from them; so this is the section where we talked about that. They use this construct of the dollar-sign schema ($schema), and we're missing namespaces here. Oh no, we sort of have this: we can treat dot as a namespace separator.
I think that is what was meant by that; I think that's why we had that initially. Okay, it's been a few months. Great, so basically: we're going to see the stuff, we're going to throw the shim at it, and the shim is going to say, here's what the next phase parser is. So: here's the namespace, here's the format name within that namespace, here's the schema for that. Okay, now I'm going to go validate that document. So basically, load in the data...
Maybe it's stored as JSON, maybe it's stored as YAML, maybe it's a zip file. Given some kind of discovery mechanism, some kind of fingerprinting mechanism, go and load the next phase parser, and that next phase parser is going to be like: okay, I know what to do; I'm a single-purpose thing that only knows how to parse this format at this version. That way, your parsers don't have to get complicated.
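The shim's two-phase parse can be sketched like this; the registry keys and the document fields (`namespace`, `format`, `version`) are invented for illustration, echoing the JSON Schema `$schema` idea mentioned above. The shim peeks at a few well-known fields, then hands the document to a single-purpose parser registered for exactly that format and version.

```python
import json

# Registry of single-purpose next-phase parsers, keyed by
# (namespace, format name, version). Entries here are illustrative.
PARSERS = {
    ("example.ci", "testplan", "0.0.1"): lambda doc: doc["jobs"],
}

def shim(raw: bytes):
    """Phase one: identify the format. Phase two: dispatch to its parser."""
    doc = json.loads(raw)  # a real shim might also sniff YAML, zip, etc.
    key = (doc["namespace"], doc["format"], doc["version"])
    return PARSERS[key](doc)

raw = json.dumps({
    "namespace": "example.ci",
    "format": "testplan",
    "version": "0.0.1",
    "jobs": ["lint", "unit"],
}).encode()

print(shim(raw))  # ['lint', 'unit']
```

Because each parser handles exactly one (format, version) pair, adding a new version means registering a new parser, not complicating an old one.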
Basically, think about maintaining a parser in a branch of code, or in a repo: you're obviously going to branch for every release. You're not going to want to go back, and you're not going to want one parser to handle the format of all the older versions of things. No, you'd end up modifying that file and adding a bunch of... don't do that.
Don't do that. So, all right, okay: the manifest schema. That's great. I was struggling with that one; I'm glad we figured that out. Oh god, this is so clean. I love it. Okay.
Okay, so now that we figured that out, and we talked about the overlays...
...we're talking about data flow as class. So when we execute: basically, we'll point at this web3 thing, we'll point at this DID (hopefully; once again, that's what we need to figure out next, let's figure this out). We'll point at the DID, we'll look at the contents within it, we'll throw it at the shim, the shim will say what the next phase parser is, and the next phase parser will begin to load...
...and decode that information, the rest of the information. And maybe the shim should be an operation, and it should take something from the top-level context if you're executing, say, via Python.
So maybe you start with a JSON, and the JSON is well-defined, and it says: I'm being called from the CLI. The CLI then calls into it and says: okay, you're looking at the contents of a DID. Then it calls into the next one, and it adds an input that says: you're looking at the content, so here's the ID, here's your format; you're looking at the contents of a DID.
Here's your next phase parser: go load me the data flow within this. Then you call back up, and you execute the data flow, and you do it by understanding the allowed list of inputs, and then defining the data flow as an operation. Then you check whether your system context is valid: do you have all the inputs? Maybe it says you need to provide your credentials for this. And then, obviously, we should also have something that says: this is a credential, this is a credential, these are sensitive.
Hey, this thing wants access to credentials in general; that's sensitive. That way we can properly trace any sensitive credentials all the way through everything. Okay.
And then this data-flow-as-class thing: we'll define the data flow, and the data flow will have all of these different things that are like, hey, if I'm on the CLI, do this; if I'm deploying to... or if you're calling the data flow... think of the data flow like maybe a Makefile.
So you call the data flow and you say, you know, make all. So then I'm going to say: run method, all. Or maybe: add input, where the definition is top-level, or deployment. "Deployment" is not the right word... maybe it is: deployment environment, yeah, deployment environment, for lack of a better term.
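The Makefile analogy can be sketched in miniature (deployment environment names and output operations here are invented): each method on the flow-as-class acts like a make target, selecting the set of output operations for the environment it's called from.

```python
# Hypothetical mapping: deployment environment -> output operations,
# per the one-set-of-output-operations-per-method idea above.
targets = {
    "cli": ["render_text"],
    "http": ["render_json"],
    "chatbot": ["render_message"],
}

def run_method(deployment: str) -> dict:
    """Like invoking a make target: pick the ops for this environment."""
    return {"deployment": deployment, "output_ops": targets[deployment]}

print(run_method("cli"))  # {'deployment': 'cli', 'output_ops': ['render_text']}
```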
Okay, we're going to do that next. We're going to stop now and come back, now that we've covered what we're doing here and where we're all at. After we have that, we're going to add the stuff to the web3... we're going to figure out the DID: figure out just how to make some basic file formats run the code that they've got. Then, after that, we're going to go and do the data flows as class, and then we're going to figure out how to access the information. All right.