From YouTube: Incremental Delivery Working Group - 2023-05-22
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
B
Yeah, so we didn't meet last week. The week before that, we were talking about Matt's concerns with the proposal that we had.
B
Oh yeah, also some updates: I know that Yaakov has still been doing a lot of work on his implementation, and he also has a PR with spec edits that match his implementation.
B
Unfortunately, I've just been really busy and haven't been able to look much at either one. But that's really exciting and I want to take a look at those.
B
Matt brought up last time that his main concern was a lot of list traversal on the client to figure out how the pieces of the response fit together, and I just made a few examples to try to show what that looks like. We talked a bunch about putting some data inline in the actual data object here, and that has some concerns: how do you distinguish it from real data? That's kind of a new precedent in GraphQL that I'm a little hesitant to take on.
B
So this was what he originally said, but he said there was a mistake: the path should be represented as a tree so that it could be traversed together with the data. And in this response, the IDs that we had change from specifically representing a defer instance, a combination of this defer at this path with this label, to instead representing one piece of the incremental data.
B
And then the new thing was that an incremental piece can point to another piece of data, so you could follow them through to get all of them. My concern was, if the IDs now refer to the individual incrementals:
B
How do we know when a whole deferred fragment has been delivered? That might be solvable here, but I'm not sure if it is in this form. Yaakov had a proposal where there are now two sets of IDs: one for the incremental data and another for the deferred fragments.
B
Pending now mirrors data, but uses the minus sign or some other character to represent where the insertion points of the defers are. My concern with this one is that it reduces the readability of the response: GraphQL has a very readable response now, but this is a bit more abstract.
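As a rough illustration of that mirrored-pending idea: the exact marker character was not settled in the meeting, so the "-" prefix, the field names, and the payload shape below are all hypothetical.

```typescript
// Hypothetical payload for the "pending mirrors data" proposal.
// Keys starting with "-" are assumed to mark defer insertion points;
// everything else mirrors real response fields.
const payload = {
  data: { user: { name: "Ada", friends: [{ name: "Alan" }] } },
  pending: { user: { "-0": { id: "d0", label: "profile" } } },
};

// A client walking `pending` alongside `data`: collect the defer IDs
// found at the specially marked keys.
function deferIdsAt(pending: Record<string, unknown>): string[] {
  const ids: string[] = [];
  for (const [key, value] of Object.entries(pending)) {
    if (key.startsWith("-")) {
      ids.push((value as { id: string }).id);
    } else if (value && typeof value === "object") {
      ids.push(...deferIdsAt(value as Record<string, unknown>));
    }
  }
  return ids;
}
```

The readability cost B mentions is visible here: the marker keys sit next to field names and only a naming convention separates them.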
B
This one would be almost exactly the same as what we had in the gist proposal, but the pending is a tree of arrays. Honestly, I'm not super happy with this either, I think it's a little confusing, but this would let you traverse both at the same time and find where the defers are. It's kind of like the first element in the array is the name of the key, and then the other elements would be children inside of it.
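A minimal sketch of that array encoding, assuming the shape just described: the first array element is the key name, nested arrays are children, and plain objects mark defer insertion points. The IDs and field names are made up for illustration.

```typescript
// "Pending as a tree of arrays": ["keyName", ...children], where a
// child is either another such array or an object marking a defer.
type PendingNode = [string, ...(PendingNode | { id: string })[]];

const pending: PendingNode = [
  "data",
  ["user", { id: "d0" }, ["friends", { id: "d1" }]],
];

// Walk the array tree and collect (path, deferId) pairs, which is the
// parallel traversal of data and pending the proposal enables.
function collectDefers(
  node: PendingNode,
  prefix: string[] = [],
): [string[], string][] {
  const [key, ...children] = node;
  const path = [...prefix, key];
  const out: [string[], string][] = [];
  for (const child of children) {
    if (Array.isArray(child)) out.push(...collectDefers(child, path));
    else out.push([path, child.id]);
  }
  return out;
}
```

Because defers are objects and keys are strings, no special characters are needed inside the key names themselves, which is the advantage B describes next.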
B
So with this one, you don't need any special strings or characters inside an object to know that these keys are special. I think it also affects the readability a bit, though. And just to show some other examples: you would also have pending inside of incremental when you come across another defer inside the original one, so you can get the IDs for additional pendings.
C
I'm not keen on requiring clients to traverse trees when they don't really need to. I'd rather have a top-level path that says, you know, you need to traverse now to do a thing, but if there is no path, then you don't need to do anything. A bit like errors: you don't have to walk through the entire tree looking for an error location, you just get a list of errors at the end and they tell you where they belong.
C
Yeah, I mean, that's one solution for it, but even with Max's proposals, Matt said we could still achieve the same thing as our inline elements by having paths marked up instead; they just need to be marked up in the right way. I'm not keen to require traversal: many clients speak native JSON, so they will take JSON and turn it into an object structure, and if there is no incremental, then they can just use that object verbatim.
C
They don't need to walk through it and scan it to see if there's anything missing, and I think with incremental it should be the same. Isn't it as simple as just looking at a top-level property? If it's there, then I have to do a little bit more work, but I'm being told explicitly what to do with these, you know, three placeholders or whatever, rather than having to traverse the entire tree to see if there's any specially named property on any of those objects.
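A sketch of that errors-style check, assuming a top-level pending list of explicit paths. The field names (`pending`, `id`, `path`) are placeholders in the spirit of the discussion, not settled spec.

```typescript
// A result shape where deferred work is announced out-of-band, like
// `errors`: a top-level list of explicit paths. Hypothetical fields.
interface ExecutionResult {
  data: Record<string, unknown>;
  pending?: { id: string; path: (string | number)[] }[];
}

// One top-level check decides whether `data` can be used verbatim;
// no traversal of `data` itself is needed.
function isComplete(result: ExecutionResult): boolean {
  return !result.pending || result.pending.length === 0;
}

// When work is pending, the client navigates directly to the given
// location instead of scanning every object for special markers.
function valueAtPath(data: unknown, path: (string | number)[]): unknown {
  return path.reduce((node: any, key) => node?.[key], data);
}
```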
C
We just say: here are the paths at which things are coming later, and then if you need to traverse at that point, that's fine, because you've been given the explicit path. You don't have to traverse over every single property of every single object in every single list in the entire response; you just attach things at the given locations. But we also open ourselves up to the custom scalar problem that Ivan's mentioned a few times, which is: if we are putting things inline and traversing to find them...
C
Then the client won't know whether that thing is just part of, say, a custom JSON scalar, or whether it is something significant to stream and defer.
E
Sorry, I didn't test my setup. Okay, right, so this can be solved. I think Matt at some point had this idea, or I don't remember who actually came up with it, but you can embed stuff inside data if you explicitly validate it with the reference. Basically, if you're referencing something, then inside the stuff that you reference you will have a path back to the place where you reference it.
E
I would say, both for clients and for people reading it, it would be easier if it's not a list but a map of stuff. So, for example, we use, I don't know, something like "%1" as a reference, and then you go to the list, or wherever you store the references, and you follow the reference.
E
And since GraphQL is a tree, GraphQL responses are a tree, you shouldn't be able to use it in multiple places; or even if you can use it in multiple places, you can have the paths as an array, so you can validate it: by having a path in the substituted reference, you can validate the places where you reference it.
D
For what it's worth, I'd have to look at it more closely just to make sure, but I think Rob's proposal achieves basically the same thing as my special-character proposal. I agree that it does have some readability concerns, but it looks more readable than mine, so just looking at it from the get-go I think I would prefer it. I just have to make sure that it's, you know, equivalent in terms of everything, but it looks...
D
You know, they're basically different ways of packaging the tree, but it looks like a similar tree-like structure.
D
Also, like I was saying in the chat, I'm not sure about organizing all these items in the list. If I understand correctly, I guess the problem is whether we're using IDs instead of paths, and in the proposal I had, I guess I still retained the ability to use IDs. So there should be...
D
But maybe we should take a step back. Are we all convinced, to some extent, by Matt's main feedback that we should be providing a way to simply build up the non-deduplicated payload? Are we all finding that convincing, so that the question is just how to do it?
C
Rob, what would your proposal look like with an array in here, like a list? Sorry.
C
I think it would probably still work the same. Effectively you're looking at the entries: if they're a string, then, you know, walk on; if they're an object, then that's the defer thing; and then I guess if they're an array, you'd walk on again, it's just that this time it's an array rather than a string.
B
Right, yeah. Following the elements in that array should really give you the same as what the path is, and we have numbers in the path for that, so I guess that could work.
E
I posted in the chat what I was talking about, because you mentioned that you can't validate the magic syntax inside the data. But I like Rob's example, where you can...
E
You can reference individual fields, you can reference grouped fields. It's not about my particular syntax, I just came up with some syntax in a minute, as a matter of fact. The general idea is that if you reference somewhere from inside data, you need to distinguish two cases: one case is a legit reference; the other case is random data inside custom scalars. To differentiate these two cases...
E
You
need
to
follow
a
reference
and
check
path
inside
the
reference,
if
powerful
inside
the
reference
matches
place
where
you
use
a
magic,
syntax,
magic
syntax
as
well.
If
you
for
reference
and
when
you
follow
it-
and
there
is
like
nothing
there
or
like
path
different
or
like
reference
is
not
existing,
it's
mean
like
which
is
like
random
data
inside
Customs
covers
that
intentionally
or
unintentionally.
Look
like
a
magic
syntax.
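A sketch of that validation rule, under the assumption that each referenced entry carries the path of the location where it may be substituted. The "%1" marker and the `refs` map are hypothetical, chosen only to illustrate the idea.

```typescript
// Hypothetical reference table: each entry records where in `data`
// the marker may legitimately appear.
const refs: Record<string, { path: (string | number)[] }> = {
  "%1": { path: ["user", "bio"] },
};

// A marker found at `location` is a real reference only if the
// referenced entry points back at that exact location; otherwise it
// is just custom scalar data that happens to look like the syntax.
function isRealReference(
  marker: string,
  location: (string | number)[],
): boolean {
  const entry = refs[marker];
  if (!entry) return false; // no such reference: plain data
  return JSON.stringify(entry.path) === JSON.stringify(location);
}
```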
E
Sorry for making it convoluted, there are a couple of threads inside this discussion, but Rob said that to make the syntax more readable it would be ideal to reference things inside the data itself, and that's blocked by the fact that you can use the same syntax inside custom scalars. With this mechanism, you would be able to use the magic...
E
...syntax. I just wanted to make that clear so people can figure it out. And yeah, as Benja said, you can reference, like, companion keys; it can be whatever variation in general. I think it's a good solution for figuring out the magic-syntax problem, and it will make the format readable. A person who reads it manually...
E
...doesn't need to do magic things in their head. And, I don't know, clients also don't need to constantly traverse and check paths; only when they see the magic syntax do they need to look somewhere, check that things coincide, and track the path to validate the reference.
B
In my opinion, the magic syntax is a strike against readability too, because right now the data is always pretty much a mirror of what the request is, and having extra stuff in there, I think, is a pretty big compromise to make in GraphQL. I would rather have it in a parallel tree.
A
I'm not sure yet. While I agree with that in general, Rob, from a debuggability standpoint it's much easier, right? I'm actually also thinking about the same thing: until now, data was always a one-to-one representation, but debuggability-wise it looks really nice; you understand how this patch tree works, where it points to, and how it all works internally.
E
I want to mention that I think both sides are valid. For example, if you look at YAML: YAML has references, and not many people use them, but they're part of the language.
E
I think it's a new concept, because the previous mental model was that you only have pure JSON inside the data, and special stuff inside changes the mental model. It's not necessarily bad, it's just surprising because it's new. And because GraphQL is an established technology, the question is: is it worth it to change the mental model?
D
In the proposal that you're suggesting, would you still have, let me see if I can look at the screen, would you still use the shorthand of the ID? I see you don't have IDs in the... oh no, you do, you do have IDs in the pending. So would you still use that shorthand to shorten the path in some of the responses, or do you think that... well.
B
I was thinking that once you know what the ID is, you probably want to quickly scan the incremental and find all the ones that are matching it, so it would be beneficial to have all the IDs tagged on there, which then means that you can shorten the path. I guess you could, if you figure out a common ancestor of them, but for simplicity I did not put that there.
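For what that common-ancestor shortening might look like, a small sketch: given the paths of several incremental items sharing an ID, the longest shared prefix could serve as a shortened base path. This is purely illustrative; nothing like it is in the proposal as presented.

```typescript
// Longest common prefix of a set of response paths, usable as a
// shortened base path for a group of incremental items.
function commonAncestor(paths: (string | number)[][]): (string | number)[] {
  if (paths.length === 0) return [];
  let prefix = paths[0];
  for (const path of paths.slice(1)) {
    let i = 0;
    while (i < prefix.length && i < path.length && prefix[i] === path[i]) i++;
    prefix = prefix.slice(0, i);
  }
  return prefix;
}
```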
D
I'm just looking at yours. I think that in mine I also had, in the incremental, the same path-like structure with special characters; I mean, you could do it with arrays. And I also retained the ability to shorten the path with the IDs. I think that combination in particular makes it very unreadable, meaning I wonder if my tree example, or your tree example, would still be kind of readable, but it would still be more readable if we didn't have both of those.
D
Both of those, I think, are helpful to the client, but I think that combination in particular might be more efficient yet really not so readable.
D
Right, but I think Matt's concern, and I don't want to speak for him considering he's not here, but I can vaguely remember him saying that you're going to have to do all these incremental path tree walks that are sort of unnecessary. So I think that applies to the incremental section maybe even more than the pending, because there could be even more incremental items per payload than there are pending items. So I think it would be...
D
You
know
from
his
perspective
or
from
from
that
points
perspective
it
would
be
beneficial
to
have
a
tree-like
structure
within
incremental
as
well.
D
That's within the pending section, but I'm saying even within the... meaning, when pendings show up in incremental, when you have additional pendings, you're doing it; but I think even when you have multiple incremental payloads, we're saying the client is expected to update the tree multiple times, and they're going to need to do it at different locations. So why should they have to walk the tree each time from the root? We should bunch those, or group them together.
B
I was thinking that if their client is already walking through all the data, then...
B
Then, wherever you're walking the data, you would also walk through the pending in parallel, in the same way, to see where they are. So maybe I'm not understanding what you're saying, but I think, as you walk through foo, you're also walking through pending here, and you find this defer ID; and now, when you're walking through this incremental to process it, you would also... I think I did this wrong.
B
There
should
be
another
layer
in
here
with
bar,
but
you
would
go
into
bar
into
baz
and
then
find
no.
This
actually
should
just
be
barred,
not
bad
as
that's
the
error
here
and
that's
where
you
would
find
the
no
that's
still
wrong.
This
example
isn't
right.
D
I guess what I'm saying, from the gist comment that I have, I have a few examples there, and again, I agree that the format I have is not readable, with those special characters and the ID shortening the path. But basically, whenever you have an individual payload with more than one item in the incremental, more than one incremental data item being delivered...
D
If you can group them together by path, it would be beneficial to do so. What I see in your example is the case where you have one incremental that spawns maybe multiple pendings, but not the case where you have more than one incremental fulfilling at the same time. But maybe I'm not looking in the right place.
D
So if you take a look at the last example from the gist, and then the comment that I have there, and I'm not even looking at the whole thing, but in that last example the last payload sends two incremental items: it sends slow field a, and it also sends f. And they both, you know, are grouped together at the path of r1, shortened to the path of r1, which, if we look up further to where it was said to be pending, we know is a.b; that's the path of r1.
D
So even if you don't shorten it, and if you use arrays instead of my special characters, I think, according to the philosophy that we want to reduce tree walks, we would want to group those payloads i0 and i1 together in some way, in some tree-like structure, so that the client knows: okay, we can start from r1, or maybe even start from the root, go down to a.b, and then insert from there.
D
You
know
further
from
the
you
know,
depending
on
whatever
additional
paths
might
be
there
in
i0
and
i1,
or
in
this
case
it's
just.
There
is
no
additional
path.
They
just
insert
two
things
at
I,
zero,
I
one
so
I
mean
I.
Just
think
that
tree-like
structure
is
good.
You
know
when,
if
it's
good
for
pending,
it's
also
good
for
incremental.
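A sketch of the grouping idea: multiple incremental items in one payload, bucketed by path so the client can apply them in a single walk from the root instead of re-walking per item. The item shape is an assumption for illustration, not settled spec.

```typescript
// Hypothetical incremental item: a path plus the data to insert there.
interface IncrementalItem {
  path: (string | number)[];
  data: Record<string, unknown>;
}

// Bucket items by their path so all insertions at one location can be
// applied during a single traversal from the root.
function groupByPath(items: IncrementalItem[]): Map<string, IncrementalItem[]> {
  const groups = new Map<string, IncrementalItem[]>();
  for (const item of items) {
    const key = JSON.stringify(item.path);
    const bucket = groups.get(key) ?? [];
    bucket.push(item);
    groups.set(key, bucket);
  }
  return groups;
}
```

A tree keyed by path segments would go one step further, letting shared prefixes (like a.b above) be walked once for all items beneath them.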
D
I don't know, I'm not saying it's worth it. I'm just saying that whatever benefit it gives us, we have to weigh against whatever cost there is, but it's the same or a similar trade-off issue overall. I think those are just different ways of packaging...
D
The
same
information
so
taking
a
step
back
like
I,
think
we're
in
a
overall
good
place
and
Matt
I,
I
guess
I
I'm,
you
know,
leaning
more
toward
Matt's
suggestion
of
making
it
giving
additional
information
so
that
it's
really
easy
for
clients
to
put
together
a
non-deduplicated
payload
like
if
it
you
know
a
an
immutable
object.
D
Like
I
sort
of
see
that
point
to
me
you
know
and
I,
hopefully
we
can
get
to,
but
you
know
again,
I
guess
we
have
to
come
up
with
some
structure
and
compare
it
so
I
guess
the
details
are
kind
of
important,
but.
C
Matt's not here, but one of the things I'd love to know is whether he only cares about stitching together the various fragments that were deferred at that level, or whether he wants, effectively, the final object value at that level, respecting all of the fragments, not necessarily just the ones that the component itself has specified, because those are two quite different problems. If it's only the ones that the component itself cares about, then it's, you know, a fragment spread at that level.
C
You
care
about
fragment,
spread
at
a
higher
level
closer
to
the
root
of
the
query.
Those
are
kind
of
irrelevant,
and
part
of
this
is
the
is
the
issue
of
like
those
root
level
queries
needing
to
be
merged.
C
They would each need to be traversed separately if you are rendering something that's at a.b.c. But if the component only cares about fragments spread within its own a.b.c, and doesn't care about a.b.c's that come from elsewhere, then the problem gets a bit simpler, because we already know that we're going to be giving each of those deferred fragments their own ID. We just need to track those and give them to the component, effectively.
C
Basically, another way of expressing that: does Matt want an algorithm where you give it the list of incremental payloads and a path, a.b.c, and you get the final object; or does he want an algorithm where you give it the list of incrementals and the fragment name and path, so that you only worry about resolving that one particular fragment at that position?
D
I imagine it would be the latter, if I understand correctly, because he only wants to answer what the component wants answered, meaning not just an arbitrary question on the graph.
C
I certainly think that's probably what Relay cares about.
B
Yeah, I mean, it is of course hard to theorize about what Matt wants when he's not here, so I can reach out to him and see if he can make it to the next meeting.
E
I'm not totally sure that I remember correctly, and my idea of what Matt says can be different from, say, Yaakov's or Benja's. So if Matt cannot join, I think, as the person who has the strongest opinion about the last proposal we presented at the last working group, it's in a sense his obligation to write it down. And I think written form is even better than asking him to join, because he will join one particular meeting, but at the next meeting...
E
We
will
also
have
discussion
about
those
points.
So
I
don't
know
what
would
be
better.
Maybe
it's
probably
up
to
him
to
decide
if
it's
separate
document
or
he
need
to
add,
like
a
additional
requirement
to
restriction.
I
forget
how
it's
called
in
a
champion
document
that
Rob
created
but
yeah
I'm
I'm
waiting
for
making
it
written.
B
I wonder: have you been able to try to implement some client work on this original proposal? Did you run into any of the issues that Matt was mentioning?
D
If they're able to take a look at those specification edits, it might help expedite, I guess, things in terms of review of the implementation, and I'm happy to walk anybody through the code changes or the spec edits. They're kind of extensive, but the spec edits in particular are one particular approach, meaning it would be simpler if we presented the spec as not delaying execution of the defers until the initial result is done; what these spec edits do instead is follow the algorithm where we allow defers to be deferred, but...
D
But they execute semi-concurrently as much as possible, as in Node, and so it shows how that can be managed, and where the state, the state of the publisher and those different executions, is all held and how it's modified. So I tried to isolate that to a particular subset of the algorithms.
B
All right, so if that's it, I'll see you all next week.