From YouTube: Incremental Delivery Working Group - 2023-05-01
A: I knew that, and I completely forgot, because, yeah, our entire router team is based in Europe, so they're all gone today, which of course is the day that I have 10 million router questions. So of course: oh, why are you calling in? You should be enjoying the day off.
A: Where are you located, Sabrina?
A: Yeah, East Coast time, same as me, so you're fine for time of day, exactly.
A: All right, so it sounds like we have some fun-filled things to talk through. I missed last week's call, but where do we want to kick things off?
E: Hey, yeah. So I updated the gist with an example of stream, and I think it's mostly similar to what was previously decided before we changed the response shape, but I want to see if everyone agrees with what I was thinking.
C: Yeah, sorry if my answers on that, Yaakov, were a bit ad hoc; I was doing it from mobile, on and off.
E: Yeah, anyway. So I added this stream example; there's only one here for now. In this example there's both a defer and a stream, and we have an initial count of two. So the initial response gives you two list items, plus pending; this is the pending for the defer.
E: We also have the pending for the stream, which has its own ID. There's going to be one pending object per stream, not one for every individual item. The path points to the field, and we have the label there.
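The initial response being described might look roughly like the following sketch. This is illustrative only: the field names, ids, and labels are invented here, not taken from the actual gist under discussion.

```javascript
// Sketch of an initial response for a query containing one @defer and one
// @stream with initialCount: 2. All names, ids, and values are hypothetical.
const initialResponse = {
  data: {
    viewer: {
      name: "Alice",
      friends: [{ name: "Bob" }, { name: "Carol" }], // first two streamed items
    },
  },
  // One pending entry for the deferred fragment, and one pending entry per
  // stream (not one per streamed item). Each has its own id.
  pending: [
    { id: "0", path: ["viewer"], label: "profileDefer" },
    { id: "1", path: ["viewer", "friends"], label: "friendsStream" },
  ],
  hasNext: true,
};

// A single pending object covers the whole stream:
const streamPendings = initialResponse.pending.filter(
  (p) => p.path.length === 2 && p.path[1] === "friends"
);
console.log(streamPendings.length); // 1
```

However many items the stream eventually delivers, the pending list does not grow: the one entry's id is what later incremental payloads refer back to.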
E: So in this example I made it so that the defer resolves first. So that's here... no, never mind, the defer resolves last. So here's the next list item, and that is the last one. But we're going to say that this is backed by an asynchronous iterable or something, where you don't know synchronously whether the streamed list has ended yet, so we don't have the completed for it yet, and that can come in its own payload, telling you the ID.
E: Then the incremental object for stream: we had discussed this going back last year, and I'm keeping it as items, an array, so that there's no subpath, because it's always going to be the same ID and same path as what was in pending. We had discussed previously that we would have a path with an index in it, but I think that's not necessary anymore.
E: I think the case that led us to that decision was that defer branching could lead you to have the same stream running independently, and the ID without the index could be ambiguous. But I think it makes sense: we wrap the list item in an array, and we can return more than one at once.
E: If the server can do that, it just means appending them onto the previous list items. And we had also discussed that if you have an array with a null inside, that means the list item itself is null; but sending items with the whole thing being null, not a null inside an array, means you encountered some kind of error bubbling, where the list item was non-null and the null should have bubbled up higher.
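The distinction being drawn can be sketched as two payload shapes (values invented for illustration, not normative):

```javascript
// Case 1: a nullable list item resolved to null. The null sits inside the
// items array and is simply appended to the previously delivered items.
const itemWasNull = { id: "1", items: [null] };

// Case 2: items itself is null (no array at all). A non-null list item
// errored and the null had to bubble above the item, so the stream stops.
const streamBlewUp = { id: "1", items: null };

console.log(Array.isArray(itemWasNull.items)); // true: item-level null
console.log(streamBlewUp.items === null);      // true: error bubbled past items
```

So a client can distinguish "this item is null" from "this stream failed" purely by whether items is an array.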
E: Yeah, so I think the only thing that's actually changing here, besides the ID and path moving around, is the fact that we're not sending the index that points to where we are in the list. I think that's okay, for the reasons I said, but I want to hear if anyone disagrees or has thoughts on that.
F: Just on that point about the index: I think one of the things Lee may have mentioned is that he likes the index because, let's say, a payload for some reason is dropped or something like that. Putting aside whether that would have been a concern previously, because we're now deduplicating, I think we'd have much bigger problems.
F: Actually, if a payload is dropped we have much bigger problems, because we are assuming that all payloads have been sent. So I don't think that concern directly applies to sending the ID anymore either, although there may be another issue there.
C: But yeah, good point. I think Lee also preferred the explicitness of having an ID; he said more information will help you catch it if things aren't behaving correctly.
C: I have a separate piece of potential thought around this. I actually like this, and I think this is how we should go, but nonetheless, just to fight the other side for a moment: if you've got a stream of data, it's a little bit like a subscription, right? You've got a subscription with a bunch of events; when you get each event, you resolve it, and in the meantime more events are still coming in. In graphql at the moment,
C: we resolve that event and we send it through, and then we start resolving the next one. But in a stream and defer context, we could effectively receive and process these items individually, and it might be that item six is ready before item three is, for example.
C: So if we did have the index, we could deliver them out of order. You can imagine, I don't know, insurance comparison sites, where they talk to all these different vendors; they know they've got this list of, let's say, 60 vendors, and they've put them in an order, but different ones are coming in at different times as they're ready. That could be a thing. That said, I think we should stick with what you've proposed.
E: Yeah, I was thinking that I don't really want to specify out-of-order item delivery as part of stream for this proposal, and it sounds like something that you're going to want to opt into if you do want it. That's the point where opting in gives you an index, some additional metadata for where things are in the list order.
F: So it wouldn't work for non-object fields. But having said that, I still agree that this seems the way to go, and if we want to think of a new format later, we could even think of a different directive, to reduce confusion.
B: If I'm thinking about building some form of client parser or client store to hold this information, it gets a little non-obvious how to use the pending list. If we're not passing the path back with the incremental response, then your parser needs to maintain some form of state for what the actual pending list is, and have some form of lookup or cache that isn't standard with graphql.
B: That's in order to be able to identify where this incremental payload needs to go, since it's not with the incremental info anymore. And also there's a subtlety: if I'm building a client-side store for the graphql data, you now kind of need a "pending" field state to surface to the user, because there's a subtlety between having a null field value and having a missing field value, which, with the pending list, we now kind of have to have a notion of on the client, because null isn't necessarily correct.
B: Passing back null to the client is technically incorrect, right, because it's not a nullable field. You'd have to basically parse the pending paths as you're going to read the fields, and then supply some separate state. That's the only thing; it's just slightly weird, there's a subtlety to it. Basically it lets the user know: this is still being deferred, versus this is a null value, versus this errored.
C: Yeah, definitely, I think that's a very good point. The way I've been thinking about it is that the incremental payloads on their own are not something the user is going to see. So until you get a completed, you don't apply that to your main store, and the completed and pending are effectively part of that: they're this transient thing that tracks the build-up of changes, and every now and again it flushes into your main object store.
C: It does also give you the ability, especially with the labels, to know that a specific fragment, or whatever it is, is in this pending state, so you can then correlate that; obviously we've had Relay in mind for doing that kind of thing. But yeah, all this stuff is quite interesting.
B: Are we doing some form of "is fulfilled" field per fragment label? Because I know there were previous cases where you're deferring the same data, even if the label is different, but you consider it as one object in your global store, right, if it corresponds to the same ID.
B: Would you mind repeating? There may be subtleties between the implementation I'm used to and this one. Is the ID separate from the defer label, or are they one-to-one here?
E: They're not one-to-one, because you could have a fragment that has a deferred fragment inside of it, with the label, and then that fragment could be used in multiple places. The label isn't going to be unique in that case; the combination of the path and the label should be unique, and that's assuming you are passing unique labels to everything, but yeah.
E: So the ID is really, I guess, an ID for the combination of the path and the label, for defer.
F: Right now the proposal is that if there is no label, we split out separate IDs, and that's something I'm going to raise as a potential issue, or I would raise it even now. You know, we don't have to do that; it's definitely a choice that was made, and I think some of the members here are in favor of it, but the other thing we suggested was merging payloads with the same label.
F: We have a validation rule that the label should be unique per path, but I'm not sure if it covers the case where no label is used. So we could extend the validation rule to make sure that we don't have that situation anyway, and then we don't really have to worry about it; there's a little bit of a subtlety there. But, honestly, in general the ID is supposed to work that way.
E: Makes sense. Yeah, I think the idea was that if you don't use labels, you don't have to use them, and they don't affect the execution at all. We do have a validation rule that says that if you are providing labels, they have to be unique.
C: You also asked about a dunder "fulfilled" field, or something like that: fulfilled, failed.
C: So I assume you talked with Matt, right, Matt Mahoney. One of the key things that we've been trying to build into any of the solutions we've been discussing is making sure that fragment isolation is still in place. Or fragment consistency, sorry.
C: So if you were to do that with the dunder __typename field, like `fragmentName: __typename` or something like that, exactly as in example A, that will only come through when everything for that fragment comes through. That's obviously not the deferred stuff, but anything that's not deferred would all come through, possibly across multiple entries in the same incremental list, but it would all be in the same payload. So when you then flush it through to the main store, it will all be there.
B: And the idea is that you basically know that when you don't have a full completed, you won't flush any of that information, even if you have it from the parser. If you don't have a completed block and it's coming in different payloads, it's not going to go back to whatever product store.
C: Every payload that you receive from the server, so either the initial one with data or any one with incremental and then a list in it, should be processed as an atomic whole. Every time you receive one of those, it is safe for you to flush it through to your main store, but only all of the changes together; you can't do it halfway through the incremental list. It has to be the whole list, processed at once and then flushed.
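A client following that rule might apply payloads along these lines. This is a sketch with a hypothetical, simplified store shape keyed by id; a real client would merge at the recorded pending paths.

```javascript
// Apply one server payload atomically: stage every entry of the incremental
// list plus every completed id on a copy, then swap the copy in. No partial
// application halfway through the incremental list.
function applyPayload(store, payload) {
  const staged = {
    data: structuredClone(store.data),
    pendingIds: new Set(store.pendingIds),
  };
  for (const inc of payload.incremental ?? []) {
    // Simplified merge keyed by id; stream items append, defer data merges.
    if (inc.items) {
      staged.data[inc.id] = (staged.data[inc.id] ?? []).concat(inc.items);
    } else if (inc.data) {
      staged.data[inc.id] = { ...staged.data[inc.id], ...inc.data };
    }
  }
  for (const done of payload.completed ?? []) staged.pendingIds.delete(done.id);
  return staged; // all of the payload's changes become visible at once
}

const store = { data: { "1": ["a", "b"] }, pendingIds: new Set(["1"]) };
const next = applyPayload(store, {
  incremental: [{ id: "1", items: ["c"] }, { id: "1", items: ["d"] }],
  completed: [{ id: "1" }],
});
console.log(next.data["1"].join(",")); // a,b,c,d
console.log(next.pendingIds.has("1")); // false
```

The original store object is never mutated mid-payload, which is what makes each payload all-or-nothing from the reader's point of view.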
E: Yeah, like in this example, J is the field that's deferred from the parent, and if it takes a really long time to resolve, we're not sending the `myFragment: __typename` alias ahead of time, because it can't be used until all the fields for the fragment are there.
E: Yeah, I think it's interesting, the way you said that the incremental has to be an atomic piece that can be used. With defer, that means that, I think, in every one of these examples there's always a completed in every response, because we wouldn't be sending deferred data unless it was completing a fragment. That's not the case for stream, though, where you just want the list items when they're ready; so, definitely, a response that doesn't have any completed is a valid payload.
C: This way it's much simpler. Okay, there is a little caveat, which isn't actually in the example, and it's what we're about to discuss, I think: where Rob said that what we previously agreed is that, if a null bubbles, then we'll send through items itself as null. That I'm not in agreement with, but we can discuss it as part of the next bit. Okay, yeah, cool.
E: Oh, why don't we just get right into it? Yeah, do you want to start? Why are you not in agreement with that?
C: Great question. I haven't fully solidified my thoughts around this, but at the moment, ignoring stream and defer, if you're in graphql and you get an error as part of your response... Jakob, did you want to speak now, or are you okay waiting for a moment? No? Okay.
C: So in graphql at the moment, without stream and defer, if an error is raised, the error will generally have a path, and that path will be such that, at some point as you browse through the final data object, it will reference a field that is null; after that point it can still reference further bits, and you know that the null has bubbled from that point to here. With stream and defer, or at least with this current proposal that Rob and I are putting forward, that's different.
C: That's not necessarily the case. If a null bubbles into something that would have been nullable but has already been delivered as non-null, then effectively the whole thing blows up, and now the error that would have caused it would have a path that doesn't have a null in it. So I actually gave Rob an example just now; let me see if I can put it into the chat. Did I? No, I think I gave the actual example, yeah.
C: So, in a thread in the Discord conversation, let me see if I can actually link you to the right message: copy message link... there, I've sent you a link to the Discord message. But effectively I've given a small graphql query that has an undeferred query to a nullable field with an `a` inside of it, and then a defer with the nullable field again, with `alwaysThrows` inside of it. So in this case `nullable` is nullable, but it's already been delivered with `a` inside of it.
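A reconstruction of the kind of query and outcome being described. The names `nullable`, `a`, and `alwaysThrows` come from the discussion; the query text and payloads are invented here for illustration, not taken from the actual Discord message.

```javascript
// Roughly the query under discussion:
const query = `
  {
    nullable {        # nullable object field
      a               # delivered in the initial payload
      ... @defer {
        alwaysThrows  # non-nullable field whose resolver throws
      }
    }
  }
`;

// Initial payload: nullable has already been delivered as a non-null object.
const reconciled = { nullable: { a: "A" } };

// When alwaysThrows errors, the null can no longer bubble to nullable, so the
// error's path points at a field that never appears in the reconciled result:
const error = { message: "boom", path: ["nullable", "alwaysThrows"] };

console.log(reconciled.nullable !== null);          // true: not nulled
console.log("alwaysThrows" in reconciled.nullable); // false: the path dangles
```

In non-incremental graphql, that same error would have nulled `nullable`, so the error path would land on a null field; here it cannot.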
C: So if the `alwaysThrows` throw happens, we want to raise that to `nullable`. We can't, because we've already set it to something non-null; so now the path of that error wants to be nullable, alwaysThrows, but `nullable` itself isn't null, and `alwaysThrows` doesn't exist, like literally does not exist, in that final reconciled object. And that's actually a break, or a divergence, from the current graphql behavior.
C: So I tend to think about this incremental delivery as if we've effectively got two things. We've got this final object that we're building, and we've got these transmission messages, this metadata, where we're saying: oh, here we've got a bit more data for this, and here's the thing that's going to be pending here, and so on and so forth, which we track over time. And I think that these errors actually belong to the latter of those.
C: They're kind of metadata to do with the transport or the resolution of this, but they're not part of the final reconciled object, which to me should still follow the same rules that regular graphql data and errors would. That was quite a long explanation, and I'm not sure if it was clear, so feel free to ask any questions.
C: I think it might be an issue for the way that some applications handle errors. It could be that when they're looking at errors, they look at the field it relates to, and generally they would say: oh, this field is null, check for an error here, and if there is an error here, then take the relevant action, display the relevant thing. But this won't be null.
C: So it won't check for an error there; it will go on to the next layer, and it will never see `alwaysThrows`, because it doesn't exist. So it would never see it. I don't think it's necessarily a big issue.
C: We could certainly deliver them and just tell people we're doing that now. I mean, they'd have to opt into this by using stream and defer anyway, so we can absolutely say there's a change in behavior here and the way that you handle errors has to be different. I'm just saying it.
C: It is different, and we don't have to deliver it in the way that we previously discussed. Actually, delivering it as part of the completed record, which is already an object with an ID, to allow a space for this additional information, would be reasonable: saying, like, this entire fragment has failed because of this error, rather than putting it into the actual incrementals. The way I see it, the incrementals get applied to the final object, and this error kind of exists outside of that, because the fragment actually never applies.
F: I would just say that I completely agree; I think that's what I was hinting at above in the Discord. I think we should have an errors field with that meaning: there should be a completed for regularly completed deferred fragments and regularly completed streams, and then the incremental always includes data that is successfully merged.
F: And then you have an errors array, I would think, or "errored" or something, a separate array for all these things that have errored, and we can talk a little bit about the format. I guess you don't really need the null then; you can just sort of have the error, the path and the ID (in the implementation that already has the IDs working), and then the error or errors.
C: That's what I would do personally, yes: I would drop that incremental and stick the errors directly in the completed. But, to stress this, this is only when the null bubbling branches past the defer boundary. Regular errors that happen just on fields, that are then caught as part of that and are still part of valid data, would still go in the incremental. So it's only this exceptional case that we're changing.
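Under that alternative, the failing payload might look something like this sketch (shape and values illustrative, not normative):

```javascript
// The failed fragment's incremental data is dropped entirely; the error rides
// on the completed record for the deferred fragment instead. Values invented.
const payload = {
  incremental: [], // nothing from the failed fragment gets merged
  completed: [
    {
      id: "0",
      errors: [
        {
          message: "Cannot return null for non-nullable field.",
          path: ["nullable", "alwaysThrows"],
        },
      ],
    },
  ],
  hasNext: false,
};

// A client can read errors on a completed record as "this whole deferred
// fragment failed", without hunting for nulls in merged data.
const failed = payload.completed.filter((c) => (c.errors ?? []).length > 0);
console.log(failed.map((c) => c.id).join(",")); // 0
```

This keeps the final reconciled object clean: the error lives with the delivery metadata, and the fragment simply never applies.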
F: I do want to talk a little bit, just to make sure, later, after we think about this and decide it, about those boundaries in a little more detail. I have an example that's a little different from, I think it was, example H, and I want to make sure I understand that, but I think it's sort of separate from this.
E: Yeah, I think I'm okay with that, with dropping this incremental data and moving the errors onto the completed object. It's possible the same error could cause multiple fragments to error out, so you would potentially be sending the error more than once.
C: I was thinking about that, Rob, in the case where they're shared, right, where they've got an overlap. So we've got A and B and they're overlapping; we'd effectively give you three pendings, right? A pending that represents the shared bit of A and B, a pending for A, and a pending for B. The pending for A and B, the first one, that would be...
F: I mean, you could always have, like, an array of IDs that an individual error has. I think we can maybe try to work on the format of an array of IDs for whatever the individual error has caused to fail merging; we might have to workshop that.
C: I think for these kinds of situations we don't necessarily have to worry about optimizing this as much as it can be, because this isn't the happy path, right? So performance here isn't as critical as it would be in the happy path, at least in my opinion. And also, if it's the same error multiple times, it's just going to compress down with gzip to be ridiculously tiny anyway.
E: Yeah, I think it makes sense. If you did have the same error, it is, I think, a little weird, because the error is not really corresponding to this data if it was something lower than the top level, and it could be more than one ID. So I think it makes sense to move it. I'll update this example: I'll remove this entirely, I'll put errors here, and I'll make another example for the same thing with stream.
C: Yeah, yes; I think that the stream case warrants further discussion, though. The deferred case is relatively straightforward, because effectively you deliver nothing, right? But the stream case is problematic because you've already delivered some stuff and then you stop, and that stopping is one of those things we can't apply retroactively. If we were to resolve that in graphql, then that entire list field would become null.
C: But now it's not null; it's a list with, like, eight items in it, for example, that should have had 20, but item nine onwards never arrived. So that is potentially something that we need to flag very clearly to consumers that are using these graphql APIs with a stream on a field: to make them aware that, just because it has nine items on it, you should check for an error to know whether nine is complete or whether there could be more. I think it's a little bit subtle.
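That caveat could be surfaced on the client with a check along these lines. The helper and the response shape are hypothetical, sketched only to make the point that length alone does not tell you a streamed list is complete.

```javascript
// A streamed list that stopped early is not nulled retroactively, so its
// length says nothing about completeness. Instead, look for an error whose
// path points at the list field itself.
function listTerminatedEarly(response, listPath) {
  const key = JSON.stringify(listPath);
  return (response.errors ?? []).some((e) => JSON.stringify(e.path) === key);
}

// Eight of an expected twenty vendors arrived before the stream errored:
const response = {
  data: { vendors: ["v1", "v2", "v3", "v4", "v5", "v6", "v7", "v8"] },
  errors: [{ message: "upstream connection closed", path: ["vendors"] }],
};

console.log(listTerminatedEarly(response, ["vendors"])); // true
```

Without such a check, a consumer would happily render eight vendors with no hint that the other twelve were lost.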
E: So you think it's just something that needs to be clearly called out in the spec? Not that we need... I'm not sure what other behavior we could do.
C: Another option, just mooting this as an alternative, is that we could say that putting stream on a field automatically makes the fields within it nullable; effectively they're their own independent little null boundaries. So then you could get nulls in it: item nine would be null, and then 10 through 20 would all be fine again.
C: I don't think we should do that, but it is an option. Other than that, I'm not sure what other options we have. Maybe we could actually deliver an item with that field set to null and then just have the client overwrite it; but, again, we don't like this idea of giving the user data and then taking it out from under their feet, so I don't think we should do that either.
E: Yeah, I'm almost positive we discussed that option when we were discussing the stream payload with items being in an array, and it was the group decision then to return items as null. But with the new response shape, I think it's the same to just put the errors inside completed.
F: We also have the case where an async iterable throws, meaning, in terms of declaring a field null... you know, we do have another case where the async iterable throws, and actually I have to double-check how that's dealt with in the non-stream context, but we haven't released that yet. I think we... do we send an index of the path that didn't arrive, and just call that out as an error?
C: Yeah. So if, for example, you're querying from a database and the database gets shut down whilst you're streaming the results from it, then that would be a situation where this could occur. I definitely don't think it should use, like, the next index, because we don't know that the next index is definitely going to exist; it would imply the existence of something that may not actually exist. So I think it's much better to just use the field itself and just give an error there.
C: There's some noise in the background, sorry. So, yeah, I think we should just put an error directly there, and it would go as part of this completed again. So a null bubble and an async iterator termination: both of those would cause the completed with errors.
F: So we have an example in the list test file where we do exactly that. I guess we actually add a null to the list that is returned, even though we're not sure whether that item exists or not; we just put in, like, a pseudo null value and an error with that.
F: So that's currently how we deal with it. That might take some thinking, because basically what we're saying is that, with async iterables, we now have a situation where, even without stream and defer, we have an error that isn't really located at a path per se.
F: We haven't released that behavior, I don't think. Rob, do you know?
E: No, it's not; it's still in alphas or betas, whatever we're on in graphql.js.
F: I mean, they both seem like sort of reasonable choices, but I do agree that what we're saying now is probably more correct, although it gives you less partial data. But I'm not sure how we could represent it in the old format; I mean, we have a lot of choices now with the new format, but with the non-streaming format I'm not sure what choices we have.
F: No, no, I mean, for the non-streaming case, meaning not with incremental delivery: we have an async generator that may throw. So we're saying that in the non-streaming case, currently, graphql.js just tacks on a null; but we're saying now, and I think that's right, that that's not quite correct.
F: Thank you so much. So I just want to make sure: there was one case that I have in the tests for the implementation that I'm working up that I'm not a hundred percent sure aligns. Should I show it, do you think, in the Discord chat, or in the test files, or should I just paste it into the chat here?
F: Okay, so the question is this query, right, that we see over here, and I'm not talking about a slow field. We're basically talking about non-nullable fields that return null, and null bubbling, and the question is how exactly the null bubbling will work. So we're looking at a query with two different defers, with two different paths, but they overlap; the problem field is going to be the non-null error field, and they overlap.
F: They overlap at C, and C is nullable. So normally, without stream, without defer, everything would bubble and C would just be null. The question is, in this case, let's say the first defer resolves first: my algorithm initially gave this as the response. We were talking about how everything is going to be delivered.
F: The non-overlapping portions are sent separately. So A is sent... well, A is actually not deferred at all, so A is sent first, and then at the path of A we have someField, sent separately; it's only part of one of the defers. Then we have B and C, which is also only part of one of the defers, but it overlaps between the two fragments, so it's sent as a separate one, because it belongs to both; the overlapping fragments are sent separately. And then: boom.
F: We had our null. So my question is whether this is actually correct. Oh, and then, what happens to D? D can't be sent, because at this point C has nulled, and so D is just suppressed, or filtered, and I want to make sure... right now it says completed, that both of them are completed, etc. So is this what we envisioned?
F: Because in this example we do have some overwriting. I mean, we talked a little bit about how one of the constraints that we wanted is only simple merges, and so each of these incremental data records, or incremental payloads...
F: Each of these items in the incremental array can just be written in, and normally, when there's no null bubbling, it shouldn't ever actually overwrite something, meaning that thing at A, B, C should really have been given a value for the non-null error field; but because it was null, it's going to potentially overwrite what we have at C. Now, we just said that we're actually going to take this payload entirely out, this third payload over here, and so this payload is no longer going to appear, right?
F: So I think one of the options, then, is that everything would be similar to this, but we just send the first two incremental payloads. But I'm not sure, again, if that's really what we want, because then C didn't send the non-null error field; sorry, that deferred fragment is not actually complete. Let's say you have someone using a label on it or something, and they're testing to see if everything is real: they would expect it to be actually complete, and it's actually missing the non-null error field.
F: It wasn't able to successfully deliver everything. So I imagine... I'm just not sure if, when we talk about nulls, they bubble beyond the fragment. Yeah, so maybe you have the answer already.
E: Yeah, this is not exactly what I was expecting. I would expect that you get D sent in this scenario.
E: So, basically, you have someField being executed for the first defer, you have B and C being executed for both defers, then you have D being executed for the second defer and the non-null error field for the first defer, and you're holding on to the references for all of those. When the non-null error field nulls, that stops you from sending any of the payloads that go with the first defer, and you send completed there, and the errors for it.
E: But it shouldn't impact the fields that are in the second defer; those should still get delivered completely, right?
F
We're
going
to
put
this
in
a
separate
error
scale,
but
you're
saying
that
deferred,
fragment
shouldn't,
send
anything
and
I
definitely
agree
with
that,
and
so
that
I'm
100,
okay
with
and
actually
makes
makes
my
you
know,
because
I
makes
that
concern
go
away.
F: I don't think we should partially deliver a fragment. But now you're saying that if you actually don't send that whole fragment, you can send D, even though you sort of know that there's a problem with D, because the null bubbled up to C, and C only comes down from D. So, yeah, I think it's just worth discussing; I see what you're saying, but I wonder what the group thinks.
C: I actually spent some time on Thursday and Friday rewriting my spec edits. They're not ready to present yet, but I know how this would work in my algorithm. So, yeah: what you originally proposed, or what you originally described, breaks the fragment consistency that I think is one of our governing principles, so we both agree that that shouldn't be the case, right?
C: That is the combination of both of them, so that is the B and C field; those would get executed first, and then, once that's executed, A and B would execute in parallel. They'd race, and whichever one of those completes first would go along with the shared A-B payload, if it doesn't break the fragment boundary. Now, this is where things get really interesting: in this case, which one was the one that threw? The non-null error field, yeah.
C: So if you have the first defer resolve first, what would actually happen is: B and C would have been evaluated, and C will be an object. Then A will execute; someField will be fine, and the non-null error field will throw, and that would try to null out C. But C is effectively already delivered; it's not actually delivered yet, but it's seen by the algorithm as already delivered, because it's shared between both A and B. So that would then cause the entire A fragment to have a non-null exception.
C: Then we'd go on to B. B would resolve the D field, which presumably resolves without error, because we generally prefer the happy path; in which case the A-B shared bit, which is the B-C bit (so many letters; I should have chosen X, Y and Z), would come through, and then the D would come through, and that would be fine. Then we'd say the second defer completed, and previously we'd have completed the first one with an error, which was effectively that C couldn't be made null.
C: There are slightly odd behaviors around this, but I think they are the ones that we want, because they enable more of the happy paths to happen. I.e., if C resolves to something that is not null, then, if we've got a whole bunch of defers off of that, one of those going null shouldn't stop all the rest of them from running. We want to prefer the happy path.
E: Yeah. I think I wrote at the top of that gist that deferred fragment data will be returned as null when the null bubbles to a field that was previously delivered, but that's not totally accurate, because the null can happen before another defer that's happening in parallel. So I think that needs to be reworded. But yeah, I agree with the way Benji described it; I think we describe the same thing, but you described it better.
F: No, I think both of you... yeah, I mean, I think that's fine, but just to make sure I understand: B and C are still delivered separately from D, even though there's only one fragment, because if it wouldn't have nulled... right, we don't change the fragment shapes, right?
E: Yeah, because those shapes of the incremental payloads are figured out basically before you start executing things; once you do start executing them, you would then have to try to reassemble them back into a different shape before you delivered them, if that were the case. With this algorithm, when you do collect fields, you pretty much know what those incremental payloads are. And I don't think we want the order of execution of the fields themselves to affect that.
E: Yeah. So, we have the primary working group on Thursday; I have a PR to add this to the agenda.
E: I think my plan is that I'll probably show a few of the slides that Benji made, or something similar, as the whole background of how we got into this very big rabbit hole, about response amplification, stuff like that. Then I'll show this proposal, I'll go through these examples, and we'll see if we have feedback from the bigger group, and I'll update the doc to include the null bubbling stuff that we talked about today.