Good morning. It is Friday, I think. Yes, it's Friday, and we're rocking and rolling. Okay.
So where were we? We committed the initial set without edits. So let's take a look here again and see.
DiscussionComment: Discussion, DiscussionComment, edit, reply. User content edits: UserContentEdit, UserContentEditConnection, userContentEdits.
We define fragments, which are, like... we're saying what we want from a given type. Okay, like a select.
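The "fragment as a select over a type" idea can be sketched as a GraphQL query string. This is illustrative, not the stream's actual query; the fragment and query names are made up here, though `UserContentEdit`, `DiscussionComment`, and `userContentEdits` do exist in the GitHub GraphQL API.

```python
# A GraphQL fragment is like a select: it names the fields we want
# from a given type. Here we select edit metadata from UserContentEdit.
EDIT_FRAGMENT = """
fragment EditFields on UserContentEdit {
  createdAt
  editedAt
  diff
}
"""

# The fragment is then spread (with ...) wherever that type appears.
QUERY = EDIT_FRAGMENT + """
query CommentEdits($id: ID!) {
  node(id: $id) {
    ... on DiscussionComment {
      userContentEdits(first: 100) {
        nodes {
          ...EditFields
        }
      }
    }
  }
}
"""
```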
Thank god that somebody made a GraphQL API. I don't think we would have been able to get the markdown otherwise; I totally forgot about that. The scraping approach that I had gone with at first, it returns the elements unless they come after... supposedly, I'm sure. Well, we would have been able to get it eventually, but this is probably easier.
It will spread, let's...
Pagination code... there should be some in the GitHub, in GitHub operations, somewhere.
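The pagination code being looked for usually shapes up, for a cursor-based GraphQL API, as a loop over `pageInfo`. This is a sketch under assumptions, not the helper from the actual codebase; `fetch_page` is an assumed callable so the loop stays independent of any HTTP client.

```python
from typing import Any, Callable, Dict, Iterator, Optional

def paginate(
    fetch_page: Callable[[Optional[str]], Dict[str, Any]],
) -> Iterator[Any]:
    """Yield every node of a cursor-paginated GraphQL connection.

    fetch_page(cursor) is assumed to return a dict shaped like
      {"nodes": [...], "pageInfo": {"hasNextPage": bool, "endCursor": str}}
    which matches the pageInfo convention GitHub's GraphQL API uses.
    """
    cursor: Optional[str] = None
    while True:
        page = fetch_page(cursor)
        yield from page["nodes"]
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            break
        cursor = info["endCursor"]
```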
All right, where is... where's that nice thing there that we were using? Oh, overlay.
So if we... so where's our diffs? All right, so here are some diffs. Let's see... diff, diff. Those are not diffs.
So the last... the end of the list of nodes with the diffs. So basically, this is the oldest version, it looks like. So we'll grab editedAt, we'll add... okay, so createdAt will be... or editedAt, let's see. So what about this one? So createdAt, blank. Okay, so that is the first editedAt time.
Okay, so if there are edits, ensure they are committed in the correct order, sorted by date, with the latest being the...
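The ordering rule just stated could be sketched like this. The `editedAt` / `createdAt` field names follow the GitHub GraphQL API; the blank-`editedAt`-means-first-edit behavior is the one observed above, and the rest is an assumption.

```python
from datetime import datetime

def edits_in_commit_order(edits):
    """Sort content edits oldest-first, so that commits land in the
    correct order and the latest edit becomes the last commit.

    Each edit is assumed to be a dict with ISO-8601 "editedAt" and
    "createdAt" timestamps; a blank editedAt marks the first edit,
    so we fall back to createdAt for it.
    """
    def key(edit):
        stamp = edit.get("editedAt") or edit["createdAt"]
        # fromisoformat() does not accept a trailing "Z" on older
        # Pythons, so normalize it to an explicit UTC offset.
        return datetime.fromisoformat(stamp.replace("Z", "+00:00"))

    return sorted(edits, key=key)
```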
Given a git repo... "Given a git repository, set the contents of the given file path to be the given contents."
All right, we're going to do a lot of rebasing here. We're going to do some automated rebasing, so this is going to be really interesting, actually. git set file contents... update. This is going to be really interesting for some later stuff where we do automated refactoring. So: give it a git repository.
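The "set the contents of the given file path" operation being described might look roughly like this. A sketch, assuming a plain checked-out working tree; the function names and the default commit message are made up here, and the real operation would also report the resulting commit.

```python
import subprocess
from pathlib import Path

def plan_commit_commands(file_path: str, message: str):
    """Return the git argv lists needed to stage and commit one file."""
    return [
        ["git", "add", "--", file_path],
        ["git", "commit", "-m", message],
    ]

def set_file_contents(repo_dir: str, file_path: str, contents: str,
                      message: str = "Update file contents") -> None:
    """Given a git repository checked out at repo_dir, set the contents
    of the given file path to be the given contents, then commit.

    The caller is expected to hold whatever lock protects repo_dir
    (see the locking discussion below in the original sense: functions
    themselves stay lockless).
    """
    target = Path(repo_dir) / file_path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(contents)
    for argv in plan_commit_commands(file_path, message):
        subprocess.run(argv, cwd=repo_dir, check=True)
```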
All right, so git repo checked out... I think we can just... Can we just take a git repo? Should we take a git repo checked out, or should we take a git repo? An interesting question. Well, I think that if we took a git repository checked out spec as the data type, I believe that the commit would serve us well there, because we'd know what... would we care about that? I think we might return a git repository checked out spec. Yes, we return a git repository checked out spec. This is our return type, because we know what commit we're on when we're done with this set file.
Because the critical piece here is that, you know, we want the managed locking for this type of thing, because that git repo is not going to... we don't want to have multiple git commands run on it in parallel. So: typing union, so engines can now be written.
Okay, locking type hints.
We want the type checker to see this thing and go: oh, it's just a git repo spec, you know. So that we can annotate appropriately, right.
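One way to spell that "locking type hint" idea in Python is `typing.Annotated`: the type checker sees only the underlying spec type, while runtime introspection still sees the lock marker in the metadata. The names here (`GitRepoSpec`, `READ_WRITE_LOCK`, `LockedGitRepo`) are illustrative assumptions, not the project's actual identifiers.

```python
from dataclasses import dataclass
from typing import Annotated, get_args, get_type_hints

@dataclass
class GitRepoSpec:
    directory: str

# Marker an orchestrator can look for via introspection.
READ_WRITE_LOCK = "read_write_lock"

# To a static type checker this is just GitRepoSpec, so callers can
# annotate and pass it around normally; introspection still sees the
# lock marker alongside the type.
LockedGitRepo = Annotated[GitRepoSpec, READ_WRITE_LOCK]

def needs_lock(func, argument: str) -> bool:
    """Check whether func's annotation for argument carries the marker."""
    hints = get_type_hints(func, include_extras=True)
    hint = hints.get(argument)
    return hint is not None and READ_WRITE_LOCK in get_args(hint)

def checkout(repo: LockedGitRepo) -> None:
    ...
```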
So I'm telling you, basically, that I'm converting this thing that's a git repository, I'm taking a lock on it, and I'm producing this other thing, which is derived from it, which also requires the lock, right. So now the orchestration knows. And we can probably, you know, leverage NFTs for this; a really nice way to do distributed locking there. I've been looking for a solution to that problem for a long time. So, basically, yeah: we take the read/write lock on the git repo when it comes in. Aka, the caller should lock that object before calling this function, right. And so why do we do it this way? Because functions should not deal with locks, right? No, no, don't do that! That is the point. That's a little bit strong, but if you always sort of, like...
If you always leave locking to the caller, then each little operation that you write is lockless, and if you need to rearrange things, right... And the main reason why...
Why do we care about this locality, right? So, if we need to rearrange things and redeploy things in a different configuration, or across different hardware/software trust boundaries, right, or locality boundaries?
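The "leave locking to the caller" rule can be sketched concretely: the operation itself is lockless, and whoever calls it holds the lock around however many operations need to be atomic together. Names here are illustrative, and the in-memory `repo` dict just stands in for real shared state.

```python
import threading

def set_contents_lockless(repo: dict, path: str, contents: str) -> None:
    """Lockless operation: mutates shared repo state and assumes the
    caller holds whatever lock protects it. Keeping the lock out of
    the function lets it move across execution environments freely."""
    repo["files"][path] = contents

def caller(repo: dict, lock: threading.Lock) -> None:
    # The caller (or an orchestration layer) takes the read/write lock
    # around the whole sequence, so both writes happen atomically with
    # respect to other threads using the same lock.
    with lock:
        set_contents_lockless(repo, "README.md", "hello")
        set_contents_lockless(repo, "notes.md", "world")
```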
If you tried to manage the locking internally, that would be impossible for us. Well, okay, it wouldn't be impossible. We could do it, but it would just be really annoying, and this is going to be much less annoying.
So we're just going to sort of declare, and then, if you wanted to do a different execution environment, or, like, you know, just hard code it yourself, you would just know what you take. You know, this is just help for you to know what to do with your locks, right.
But if you want to use an orchestration environment that deals with locking for you, then it will use the type hints to do so, right, because it's going to generate these into the definitions, which are this language-agnostic mechanism for communicating the fact that locks need to be taken in some way, right.
So, in our web3 world, this says: to/from web3, we'd see the repo object and the output of this function be treated as NFTs, because...
By leveraging NFTs, right... so they're distributed semaphores as well, if you have, like, a finite number of them; if you have a finite, non-binary, non-zero/one number of them.
All of the things that require you taking a lock in the data flow, and your general ability to transform, right. So we're doing this cross-domain conceptual mapping, right, where we predict the different states in that little, you know, a-over-b-equals-x-over-y equation, right. So we're doing that mapping.
So if we have two bottom unknowns, right, we have to use a model here that can predict to this, and/or a model here that can predict to this, or a model here that can predict to this and then predict this, or a model here that predicts from this to this, right. We just use our models. We have two, right. Two is enough when we have models, right.
Actually, one is enough, as long as we have models that we could have theorized about... or hypotheses, right. So we hypothesize, not theorize. We hypothesize, right: the hypothesis. So there's remembrance, right: we're calling stored data sort of the safe mode. Hypothesis: you know, potentially not safe, some inference happening there. And then execution, right, which is: we're going and trying. Doing the scientific process.
So, so I think, you know... and I'm trying, I'm trying to figure out if we can use this with the quantum simulation in any way. And I think that there's something to be had here with, you know, the zero or one as being "did I take the lock or not take the lock", and the skew being the likelihood thereof, or something, and then encoding the matrix of all of the locks that you might need to take into that, and then somehow it telling you: hey, do...
A
I
think
I
think,
do
you
think
you
can
make
this
state
transition
happen?
Do
you
think
you
can
reach
equilibrium
of
the
new
state,
given
the
old
state?
You
know
based
on
these
underlying
resources
that
require
locking
right.
So
the
real
things
that
that
are.
Are
you
know
the
sort
of
looking
at
a
lock
as
like
a
I
don't
know?
Maybe
it's
not
looking
at
a
lock.
I
don't
know
something.
I
think
I
think
there's
something
we
can.
We can try to predict whether we can, you know, do some sort of fast path prediction on the inference of being able to. So there's, like... okay, first of all, there's: okay, ask Alice to do something, right. Okay, so she starts thinking about it, and she may start working towards it, right, but this is maybe something where we can say immediately... Some sort of optimization around that muscle memory, best-guess-ness of her. I don't know; playing with it.
Okay. We also signal to the caller that if they use the returned object, it should be... we recommend that they lock it. If they don't want to...
That's up to them, but we recommend you lock it, because there are some properties of it that may or may not... but we recommend you lock it.
As the producing operation, we understand that what we are returning inherently has state, and this state we also know to be not conducive... no: we also know to not natively support...
Well, let's... I mean, let's hope that they're locking... let's hope that they're locking things appropriately, right, and that we're not just gonna, you know, create this giant mess of parallel access, right. But we always assume that something may be called in parallel, and we be helpful to those who are calling it, and let them know if they're gonna need to manage the locking, or if the object will manage it, or anything that they would do with it would be managed already, right.
Okay, okay, so then, yeah. Because, so, if we say read/write lock on git repository here, it'll just show up as a union, right. But for our analysis purposes (like our static analysis that we're going to do on this type, in our introspection) it'll show up as read/write lock, but your type checker will just say: it's a union; it's a git repository. Yes, okay. I mean, hopefully. If it doesn't, I'm going to be pissed. So, okay, what else does this thing take?
Date? Look at that. Oh, oh, look at that! That's fantastic! Okay! Oh wow! Oh wow, wow! That's perfect! Okay, this is perfect. This is perfect, so excited. Okay... provided the static type checkers pick this type of stuff up, so...
All right, so we're thinking with the data. Actually, we figured out... I realized we need this later, so it would be nice: we should grab a little inspect helper, to just say, you know, function dot... like, something like data_type, and then it would go and grab the data type for the argument. So, like, if we said... this would be nice, basically.
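The little inspect helper being wished for might look like this. The `data_type` name is the transcript's own placeholder; the implementation is an assumption built on the stdlib `inspect` module, reading the annotation off the function's signature.

```python
import inspect

def data_type(func, argument: str):
    """Grab the annotated data type for a given argument of func.

    A sketch of the "function dot data_type"-style helper: it just
    reads the signature, so it works on any annotated callable.
    """
    parameter = inspect.signature(func).parameters[argument]
    if parameter.annotation is inspect.Parameter.empty:
        raise TypeError(
            f"{func.__name__}() has no annotation for {argument!r}"
        )
    return parameter.annotation

def example(repo_url: str, depth: int = 1) -> None:
    ...
```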