From YouTube: Incremental Delivery Working Group - 2023-03-13
B
Should we get started? Yeah. So, since the last meeting, a couple of developments: Benji has a new proposal.
C
Yes — sorry, lost my unmute button. Yes, I can absolutely do that.
C
So, as you know, a couple of weeks ago I proposed that I thought it would be reasonable to follow the existing field merging algorithm in GraphQL, which turned out to be more challenging than I first expected, and I presented a really early version of a PR to the spec to do that. But it was fuzzy in a lot of regions and wasn't even really a first draft or anything; it was quite raw.
C
I spoke with Lee later that week to unpick what it was that I was struggling with there — what it was that I was trying to say — and Lee helped me with some whiteboarding to figure out exactly what we were trying to say. What I hadn't realized is that it effectively comes down to seeing all of the defers at one layer as a boundary, rather than looking at each individual selection set as a boundary.
C
This field is a regular field which is going to be used right now — which is the state for everything in GraphQL right now: we always use every field unless there's a skip or include directive. The other state — or another state — is: this field is to be streamed. And I believe the validations that we have in place at the moment say that if a field is to be streamed — if it has the stream directive — then every instance of that field must also have the stream directive.
C
According to the validation rules — I think that's right; is that right, Rob? Yeah? Cool. And then, of course, the third one is: this field is to be deferred. Now, the interesting thing with defer is that, unlike stream, it doesn't have to match. So you can have, say, username in one fragment that's used immediately and username in another fragment that's deferred, and that's fine, effectively. Well — actually, that's what some of the discussion is around:
C
Should we deliver it in both cases, or should we only deliver it in the up-front case, kind of thing? So what Lee and I figured out was, effectively: if we follow that field merging strategy, and then assign each of these fields to one of these three things — do it now, stream it, or defer it — then, when we look at each of the fields with the same response name (effectively the same path, like username, for example), we can say: well, this one says to use it now, this one's deferred, so the "use it now" wins, and we just put it into the "use it now" bucket and forget about the deferred bit. We still use the selection set from the defer later, but that's a separate thing. So effectively we walk through the entire thing, we pull out everything that's to be done now, and we execute that now; and then for the rest — the things that weren't executed —
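The "use it now wins over defer" rule can be sketched as follows. This is an illustrative reduction, not the proposal's spec text; the type and function names are assumptions. Stream is left out here because, per the validation rule mentioned above, every occurrence of a streamed field must carry the stream directive, so it never conflicts.

```typescript
// Each occurrence of a field with a given response name is either to be
// executed now or deferred (stream must match across occurrences, so it is
// omitted from this sketch).
type Disposition = "now" | "defer";

interface FieldOccurrence {
  responseName: string;
  disposition: Disposition;
}

// For each response name, "now" wins: if any occurrence is non-deferred, the
// whole group goes into the "use it now" bucket; otherwise it stays deferred.
function assignBuckets(
  occurrences: FieldOccurrence[]
): Map<string, Disposition> {
  const buckets = new Map<string, Disposition>();
  for (const occ of occurrences) {
    const current = buckets.get(occ.responseName);
    if (current === undefined || current === "defer") {
      // "now" overrides an earlier "defer" for the same response name;
      // "defer" never overrides "now".
      buckets.set(occ.responseName, occ.disposition);
    }
  }
  return buckets;
}
```

The deferred selection set is still used later for the remaining fields; the bucket only decides which phase first executes a given response name.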
C
Can you see the RFC alternative proposal, 1018? Yep? Cool — yep, perfect. So the idea is this: if you've got a query like this one, we look through it. We see A and B are right there — they're not inside a defer. The first C here is inside a defer, so we'll ignore it for now. D is not in a defer, and neither is E; the same goes for G. And all of this is deferred. So we pull those non-deferred bits out — the A, D, B, E and G — and they effectively are executed.
C
Then that's executed as the first thing, and that's sent as your first response — and, you know, here's an example of that; we'll go back to that in a moment. Then we look at the next layer of the stuff that was deferred. So here this is a C field which has got one defer; we've got an F here that's got one defer; and we've got the A, C and H here that have got one defer. We ignore the C2 here, because it's effectively deferred and then deferred again in another selection set, so this will be ignored. And it means that at the root path we're going to run the H here. The A was already executed up here, so we don't need to run it again, but the H does need to be run — and that's the only thing at this layer that needs to be run in this second phase.
C
Now, importantly, we don't want to break anything to do with fragment consistency, and so what is absolutely critical here is that, even though these are three different things — three different selections, that kind of thing — they must be delivered together. So they're separate objects, but all of them must always be in the same incremental payload. And that means we effectively say: you must apply all of these — oh, that should be a closing square bracket —
C
You must apply all of these all at once, and then, once that's done, you can, you know, do a render or write to the cache or whatever it is that your client needs to do next. That helps us ensure consistency, so we don't break things like Matt's __typename "this fragment has been loaded" hack, or any other future things like that that we want to add.
C
It's just part of the specification — it's written out in ExecuteSelectionSets, effectively. All of this is following the spec text that I have written in this PR, and that's essentially it. On top of that, I've added — so the initial payload comes with its data, and then I've added the pending, which is much like what we discussed a few months ago, to say there is data that is pending at the root-level path.
C
The only thing that is now pending is the A.C path, so we say: at the A.C path there is a thing that is pending. Then we'll deliver that when we go and process it, and we'll say that's complete, and then finally we'll say there's nothing else to come. This has hasNext: true here; it should technically be hasNext: false.
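The payload sequence being described can be written out as data. The exact key names here are illustrative (taken from the discussion of `pending`, `incremental`, `completed`, and `hasNext`), not the proposal's normative wire format:

```typescript
// Illustrative shapes for the response stream: an initial payload that
// announces pending deferred work, then incremental payloads that deliver it.
interface InitialPayload {
  data: Record<string, unknown>;
  pending: { path: (string | number)[] }[];
  hasNext: boolean;
}

interface SubsequentPayload {
  incremental?: { path: (string | number)[]; data: Record<string, unknown> }[];
  completed?: { path: (string | number)[] }[];
  hasNext: boolean;
}

const responseStream: [InitialPayload, SubsequentPayload] = [
  // Initial result: a.b resolved now, a.c announced as pending.
  { data: { a: { b: 1 } }, pending: [{ path: ["a", "c"] }], hasNext: true },
  // The deferred layer arrives, is marked completed, and the stream ends.
  {
    incremental: [{ path: ["a", "c"], data: { c1: 2 } }],
    completed: [{ path: ["a", "c"] }],
    hasNext: false,
  },
];
```

As noted above, the final payload is the one that should carry `hasNext: false`.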
C
It's not a big deal — we can have it either way — but at the moment this is one of the issues in the spec text: it doesn't have the mechanism to figure out at this point that it can be hasNext: false, so we'll just do it in that final follow-up pull request. But that's essentially it: it's done based on these defer layers. The great thing about this — this is a pull request that is created based on the current state of the GraphQL spec.
C
So the latest draft — it's not based on Rob's previous work, which means it doesn't have any of the validation rules that Rob's already prepared, or any of the other text that we absolutely need, just to be clear. That's just because I wanted to make the diff as small as possible, to show what the difference between the algorithms for this and the algorithms for the current draft is, without factoring in any of the previous decisions. So the changes for this are relatively straightforward.
C
One of the main things is that ExecuteSelectionSet, rather than just returning data like it used to, now effectively returns a three-tuple — or an object, or whatever you want it to be. It returns the data, the things that are to be deferred, and the things that are to be streamed.
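The changed return shape can be sketched like this. All names are illustrative stand-ins, not the spec's exact records; the point is only that unexecuted defer/stream work travels alongside the data:

```typescript
// A record of work that was collected but deliberately *not* executed yet.
interface PendingWork {
  path: (string | number)[];
  // In the real algorithm, the selection set and the parent object value
  // would be carried along here for later execution.
}

// ExecuteSelectionSet now returns data plus the deferred and streamed work,
// instead of data alone.
interface SelectionSetResult {
  data: Record<string, unknown>;
  deferred: PendingWork[];
  streamed: PendingWork[];
}

// If there are no defers or streams, the server can return the data and
// errors immediately, exactly as before; otherwise it starts the
// incremental event stream.
function hasIncrementalWork(result: SelectionSetResult): boolean {
  return result.deferred.length > 0 || result.streamed.length > 0;
}
```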
C
It then executes — so the data is the result of execution, and all of the defers and streams are not executed; they're not even started at this point, according to the spec. Now, keep in mind an optimized server can choose to start executing those early if it wants to; that's unimportant to the spec. The spec just says: what order do they need to be delivered in, and what does it need to look like is happening? So if there is only data and there aren't any defers or streams, then we just return everything like we normally would have done — return the data and errors — and we're done. Otherwise, we create our incremental event stream, and this incremental event stream is what goes through the machinations of dealing with the pendings, the next layer of defers, and all of this. So this is almost like our run loop that handles it here — and it is quite a lot of text, but it's all just for handling stream and defer. It also has things like this FlushStream.
C
I don't even know how this should be, and I have asked Lee as well, but he hasn't managed to get back to me on that yet. The FlushStream is what's important for — like you were asking before, Rob — how do we know when we can send all these things together?
C
The FlushStream basically says: send whatever the remaining incrementals are at this point, and all the pendings and all the completeds at this point in time. And, importantly, other than this very final one here, the other ones are all optional. So if you want to optimize your server and say "I only want to do this every 100 milliseconds" or something like that, you can do so. You have to do it at a point where FlushStream is allowed, but then you can bundle everything up together as one massive list of incremental changes and send that all through as one payload, and that will be more efficient for the client in most cases. But other than that, the main differences to the actual existing spec text are quite small; they're all relatively small changes that basically follow the existing flow. In many cases it's things like: where we used to have a field —
C
We now have an object that represents both the field and some metadata about it, like whether it should be deferred or not. This, I guess, is a big change — oh yeah, this is checking to see if every entry is deferred: then we'll skip it and do it later; otherwise we'll do it now, effectively. But yeah, it leverages very much the existing field merging mechanics, and what that means is that each field is only ever executed once. When we queue something to be executed later, like a field that is deferred, we store the result of the resolver — effectively the object value — along with it, into that next phase. Then that will get consumed, and then it can be garbage collected, or whatever your language does to deal with that.
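The queueing described above — carry the already-resolved parent value into the next phase, consume it once, then let it be reclaimed — can be sketched as a simple FIFO. The shapes are assumptions for illustration:

```typescript
// A deferred unit of work: the path it belongs at plus the already-resolved
// parent value the deferred selection set will run against. The resolver is
// never re-invoked for it.
interface DeferredField {
  path: (string | number)[];
  objectValue: unknown;
}

const deferredQueue: DeferredField[] = [];

function enqueueDeferred(
  path: (string | number)[],
  objectValue: unknown
): void {
  deferredQueue.push({ path, objectValue });
}

// Consuming removes the queue's reference to the stored value, so once the
// next phase has used it the runtime can garbage-collect it.
function consumeNextDeferred(): DeferredField | undefined {
  return deferredQueue.shift();
}
```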
B
The one thing that I'm concerned about is the fact that the paths don't line up with where the defers are, which I know you said is an explicit non-goal. Sorry.
E
You know, it's a bit harder at the moment to wrap my head around this. I'm not completely sure whether it's going to be easy for us to use. The main thing for me, on Apollo, is to know whether one particular fragment is completed or not, and I'm not completely sure for now whether that will be easy with this version.
C
Yeah, that shouldn't be an issue. Knowing that the fragments are completed should work the same as if you use Matt's __typename trick: you basically put something in the fragment that is only in that one fragment, and then you can just check for that. If it's present, then you know it's completed by the time you've processed all of the incrementals in one individual payload.
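The __typename trick can be illustrated concretely. The query and the `deferredMarker` alias are hypothetical examples, not anything mandated by the proposal — the point is simply that a field which appears only in the deferred fragment doubles as a completion signal:

```typescript
// A fragment-only sentinel: the aliased __typename exists nowhere else in
// the query, so its presence in the merged result means the deferred
// fragment has been delivered.
const exampleQuery = `
  query {
    user {
      name
      ... on User @defer {
        bio
        deferredMarker: __typename
      }
    }
  }
`;

function fragmentCompleted(user: Record<string, unknown>): boolean {
  // Only true once the deferred fragment's data has been merged in.
  return "deferredMarker" in user;
}
```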
D
I have a couple of questions. Does this basically mean that, even if we defer separate fragments, we cannot do parallel execution of some of them in any way? So basically, if you want some stuff to be faster, you need to wrap everything else in, you know, like a dummy defer?
C
Right — yes, that's correct. I haven't written it into the spec, and I'm not exactly sure — I was thinking about it again this morning, and I'm not sure exactly how we would do it — but there is the possibility of having, say, defer again in the same selection set. At the moment that all just gets collapsed down to one thing: basically, a field is either deferred or not deferred at each point. But we could add, you know, levels of defer or something like that, and split —
C
— those up. More thought would be needed for that, if we deem that it is something that is essential, but there's some flexibility to look at doing things like that afterwards — not afterwards as in later; I mean that if this fixes many of the issues, then we can explore some of these if we feel they are important.
D
Right, okay. And I have another question, or potential issue. I didn't read the whole thing — it's a pretty large set of changes — but can you open the example query in your proposal? Yeah. So you have C twice, right — and they both get collected. But imagine you had, like, C outside of the defer, with some fields.
D
The question is: during execution, the resolver needs to understand what directives are applied, so a resolver needs to get all the nodes — the AST nodes — where you can look into the directives; in graphql-js I can get the AST nodes and look into the directives. So if we, for example, added a custom directive — say here, and inside one of these here — yeah, that's a super useful feature.
D
So during this execution, does the resolver get both nodes or not?
D
Is the collection of fields happening or not before calling, say, the resolver of C? How directives get delivered is, yeah, an implementation detail, but the reference implementation — the spec algorithms — need to have the collection of fields.
D
Especially now, since we don't have any duplication: are all the fields collected in some set, at least, or something else?
C
Just the same as they are currently, with the difference that at the moment you have a map that goes from the response key — response name, whatever it's called — to a set of fields. So you did have a bunch of selections — fragment spreads, fields — and you reduce that down to just a list of fields keyed by the response name, and validation at the moment basically says that everything in that set is roughly equivalent.
C
Now, what we have is: instead of having the field, we have this object that defines both the field and some additional metadata about it. That is all part of the CollectFields algorithm.
C
Could you clear the annotation? Okay, thanks. Sorry, it's been a moment since I read this. So yeah, the FieldDetail is the critical thing here.
D
So — I'm trying to understand — if you have, for example (it's not in your example, but say the initial response includes C, with some C3 field), then, to answer my previous question: will the path for the second part shift — will the path change to A and C, or not?
C
That's exactly what would happen, yeah: this would become A.C, and then this would just become the selection set C1. So one of the really critical things here that Lee was discussing with me is, effectively, that we want to make sure it's always a minimal merge. We don't want to do deep object merging, because it's expensive; so every time you get an incremental response in our system, you can just do a simple shallow object merge. So here, at path zero, you just merge this in — you're effectively overwriting.
D
I also thought about this problem. There is the deduplication of leaves and the deduplication of keys, and you implemented the deduplication of keys. Any solution that implements the deduplication of keys will push items — it will push, like, the first items of arrays, of lists — and by doing so we actually end up in a situation —
D
This is a move in the right direction, but the big problem with it is, for example: you wrote C — C is a list — you wrote a query, and everything works well, until somebody else changes another fragment so that it adds C to the initial response. Previously your stuff got deferred as one array, in a compact representation, and everything worked; now, because somebody else's fragment mentioned C, your defer has shifted, and you get an entry for every item — which is not a problem for a client; it should be handled —
D
The problem of duplication should be handled by the client, but the server can decide — it's like a denial-of-service attack if every item in a big array creates an entry in incremental. So it can unwind, as we discussed previously, as a protection measure against huge incrementals. And the strange thing is: in your fragment everything works; then somebody else changes another fragment, and suddenly your deferred data gets unwound instead of being deferred — which is weird. That's what I wanted to mention.
D
I think that, if we go with any solution that does the deduplication of keys, we need to figure out how to do batching of items in the list. Otherwise it's not feasible — it's a problem.
C
So, if I understand you correctly: imagine we've got a field — let's say a Z — inside of which there is a list. You're effectively saying you're going to end up with, you know, foo one — whatever, it doesn't really matter — yeah.
C
— one of these per item, rather than having the list come through as an individual thing. Which — yeah, I think you're not wrong; that is exactly what would happen in the current proposal.
C
There are potential options that we could consider for dealing with this, like I was saying before — like batching — but at the moment I'm okay with this, and I don't think this would really be a denial-of-service concern from the server's point of view. I mean, arguably it could be, but really all we're doing is putting this amount of extra data in instead of a comma.
C
It's just — you know, it's not a huge amount of data in addition.
B
They are batched, because they would have to be inside the same response, in the same incremental array, right?
C
Yeah, that's already guaranteed, so they will be like this: they will have to come through in one single incremental. Again, this should have been a square bracket — an incremental list like this — so it's already batched, and that's actually coded into the spec itself, with the "everything needs to be done at once" requirement.
D
Yeah — remember when I initially proposed that you can comment out a field, and we had a big discussion about saying you should not unwind everything?
D
If you want to deliver stuff in the first batch, you need to put it in incremental; you cannot unwind it into data. And your argument — everybody else's argument — against it was that we create entries inside the incremental, and that's a problem.
C
With the way that it was previously specified, yes — because there was a lot of duplication in that, and you could multiply it: my example back then, I think, was that you could take a response from about 1.8 megabytes up to 48 megabytes, just in that initial payload. But that wouldn't be the case anymore with what I'm proposing here because of, as you say, the deduplication — not duplication — of the keys. Each key is only handled once.
C
There is none of that repeating of those leaves.
C
Yeah, so the maximum problem that you're going to have is the depth of your query — which often will have a limit on it, right — times the length of the possible key, which is technically unlimited in GraphQL, but I think it would be very reasonable for people to add validations that say, you know, if your key is longer than 100 characters… I'm pushing it a bit, but the longest path is going to be in the region of, let's say, 500 bytes. So yeah, you can definitely inflate it.
C
You know, let's say a thousand items: you've got 500 bytes times a thousand, so it's like 500 kilobytes. It's not the 48 megabytes that we had before; it's a much, much lower attack vector than what we were previously discussing. And even the numbers I just gave are still large — deferring inside of a thousand items is already something that arguably you should be handling with pagination limits and such.
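The back-of-envelope bound above, written out as arithmetic. The figures are the speaker's illustrative numbers (a depth-limited query with long response keys giving a ~500-byte path, repeated across a 1,000-item list), not measurements:

```typescript
// Worst-case per-payload path overhead ≈ (bytes per path) × (entries).
const maxPathBytes = 500; // deep query × long response keys (illustrative)
const entries = 1_000; // e.g. deferring inside a 1,000-item list

const overheadBytes = maxPathBytes * entries;
const overheadKilobytes = overheadBytes / 1_000;

// ≈ 500 KB of overhead — far below the ~48 MB inflation the earlier
// duplicating design allowed for a ~1.8 MB response.
```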
C
So you have to be extremely careful with this approach. The rules that I figured out for this are, effectively: each time you see a defer or a stream directive, you may, as the server, ignore it and pretend it's not there — but that's it. You can't do it on a per-field basis, because otherwise it will break fragment modularity; it has to be done based on the actual directive itself — effectively at a fragment boundary, because defer is applied on a fragment. For stream —
C
Obviously it's a field, but whatever. So you can ignore the defer; but if you do ignore the defer, then it's ignored for every instance of it. It's not like, oh, we defer the first hundred but then after that we inline the others, or anything like that. You either throw it out entirely or you use it for everything.
C
— or is that what you mean? I think inside of a list it would technically be okay,
C
according to various rules that we come up with — but yeah, it must be done based on the directive itself. And yeah, Rob, I think you're right that, because the path makes it so — as long as the path is unambiguous, it should be okay. You just don't want to break fragments.
B
If you have two defers under the same object, with different selection sets in each one — I know before we had said that, because they would have the same path, you would have to either ignore both of them or keep both of them. But would that be the case here? Could the server decide to inline the first defer but honor the second?
C
And I think it's actually really important that we don't link the client to the idea that a payload relates to one specific defer or anything like that, because that's going to rule out a huge amount of potential future — I don't know — optimizations, or other flexible things that we want to do. The client shouldn't specifically think of this defer being one thing and this defer being a different thing. All a defer is saying to the server is: I'd like it —
C
— if you were to send this later. Like, you don't have to — it's up to you — but I'd really like it if you didn't worry about sending this with the first payload. It's a request as much as anything; it's not actually a concrete thing. It doesn't have a name; we've got rid of labels. So yeah, I don't think we should tie them together.
B
Yeah — again, my concern, which maybe is fine, is specifically about clients that have fragments tied to components, and how the client is going to know when to render. So with Relay, when you're rendering a component, that component has its fragment with it, and it's the parent component that applies the defer to the spread for that fragment — which is nice, because you could have two components in two different places, in two different parts of your data, and the parent could decide to defer one but not the other. So it's just clients needing to understand that they need to render the loading indicator until this defer is done. So maybe that means a compiler adding the __typename hack, or you have to do it manually — but yeah, that's just what I want to get input on from the client developers.
C
Yeah, I agree. I did actually reach out to Matt to ask what he thought of this, but he's been super busy on other things, so he hasn't been able to think about it.
C
But he's hoping to at some point soon. Well, at the moment, Matt's __typename hack is basically the best solution that we have for this — and for this problem in general. Especially if you're using, say, the skip and include directives, this is already a problem that you have in the GraphQL space: knowing whether a fragment is complete or not.
C
But if we were to later have, say, labeled fragments or things like that, that should work in stream and defer just the same as it would in regular GraphQL. So it should continue to honor all of these things.
B
Okay. You had also mentioned in Discord that you can tell it to independently run two potentially slow sibling fields, with aliases on the parents. Yeah, I'm a little surprised by that — I thought that it would work. Maybe you can explain it? Sure.
C
Okay, so let me just try and scribble something together for this.
C
Let me trim a bunch of this away. So effectively, what we're saying is: thing one and thing two, right?
C
So at the moment you might want to do something like this, where you have an alias, another alias, and then you have the third thing inside that is expensive. I'm going to get rid of this directive as well, because that's going to be confusing — in fact, let's just change this to slow field one and slow field two.
C
Even like this: this slow field two is inside of one defer, and this slow field one is inside of one level of defer, so at the moment they belong in the same thing. There's technically nothing that strictly enforces — I mean, the way that I've written the spec at the moment puts them in the same thing — that we need to have them like that, because they are independent. They can be split apart from each other, so long as they are entirely independent.
C
But if you had a field A — oh gosh, sorry about that, I must have pressed a key — if you have a field A, it makes it much more clear why that's the case.
C
To understand — and then this would have the frag two, for example. Now this A must only be run once, right, and that's the same A as this. And what if I used — let's use Z, just to split that up — so the A.Z here and the A… oh no, that's not true, because this is aliased, right? So technically we could split that up as well, yeah.
C
There could be something that links them together. So if I go back to what we had before, like this — sorry, I'm doing a lot of on-the-fly thinking here.
C
I think this is — I mean, it's trying to say, effectively: the foo in both of these would need to only be executed once, so there could be a thing that ties them together, even though slow field one is inside this path and slow field two is inside this path. Technically, you know, the foos join them. We could potentially make it so that — imagine these don't have defers, and instead we just add a defer here.
C
There's nothing that would stop us, I think, from making these two things independent. In the current algorithm they are treated as the same, but we could make them independent if we can figure out that there exists no, you know, foo that joins them together as the same thing. So long as they are completely independent paths, and the same paths don't appear in the same fragments, we could split them up. There just isn't an accommodation for that in the spec text that I've written currently.
B
I do feel like that's pretty important. I mean, I guess I don't understand why they're linked together if their paths are completely different.
C
You know, the fragments need to still be consistent — I don't know. I think it would be challenging, but I think it's possible — it's definitely possible. I just don't know how I'd write it.
B
Okay, I think I need to spend some more time with it. But when you execute this, foo is getting merged together — that's the same field. First — I mean, the first pass is collecting fields at the root level, and that's going to return foo, thing one, thing two, right? Yeah. Now you start executing: you execute foo, then thing one — actually, you start executing thing one, you go into that selection set, and now you should only be getting, I guess, frag one, Z and slow field one, right, with slow field one being deferred — yeah — and you haven't even started looking at thing two yet, right? But even though now thing one is completely resolved for the initial payload — everything except for slow field one — we still don't start executing slow field one until we finish going through thing two. Is that the way it works?
D
Why can it not work like hasNext does? Why can't we just use some marker to signal that a level is finished — why do we need to physically ship everything in one payload?
D
Because, as we discussed previously, once the client expects everything batched up, we cannot change it later.
D
But it would be a strong argument against any future feature that ships stuff in parallel — like, independently, at the same level, shipping some stuff earlier. The strong argument against any such feature would be that clients expect stuff to be batched.
C
The spec text guarantees the order of the payloads that will be received by the client — roughly, obviously: streams and parallel defers that are on different list items can come in any order, according to the spec. But within regular object selection sets, we specify exactly what order things come in, and it means that the parent object must always be present before we send the next layer. That's not something that we explicitly state; it's just a property of the algorithm.
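The ordering property just described — a child layer's payload never arrives before its parent path exists — can be expressed as a small checker. This is an illustrative invariant test, not spec text:

```typescript
// Check that, in a sequence of delivered payload paths, every payload's
// parent path has already been delivered (the root counts as delivered
// from the start, via the initial data payload).
function parentsArriveFirst(paths: (string | number)[][]): boolean {
  const delivered = new Set<string>([""]); // "" = the root
  for (const path of paths) {
    const parent = path.slice(0, -1).join(".");
    if (!delivered.has(parent)) {
      return false; // a child layer arrived before its parent existed
    }
    delivered.add(path.join("."));
  }
  return true;
}
```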
D
One positive thing I will say: this gives us a mental model — not a bunch of rules, but a mental model. Back when we had the "whole shape" idea — the query gives you the space of what's available, and you get all the fields and the defer in the same batch — that kind of mental model made sense. It had a bunch of problems itself, and then we shifted through a bunch of half measures which I wouldn't call mental models.
D
It was just a set of rules, and now it finally looks like a mental model. I still think it has, like, a problem, but at least —
D
If, in your brain, you understand the defers as layers, and you understand how GraphQL merging works, then you understand how this thing works, even without reading your algorithm.
D
The thing is logical; it's not rule-based. It's kind of one principle: everything under this defer executes as one thing, and it's recursive — you have an initial response, then you have defers, each with its own initial response and subsequent defers inside, and for those, initial responses and subsequent defers, and so on.
D
So finally it's like a mental model, and on that basis it's progress. I think we can figure out the edge cases and problems, but I think some of them are important — like the ability, at least, for someone to say how, in the future, we would add things like delivery priority.
D
Because in the current state — what you discussed with Rob — it's like people would want it, and maybe we'll add it in a new version, but I don't see how.
C
Well — thank you. One of the other interesting things about this is that, if we want to, we could effectively ship this as is, and then, when we want to do that splitting up — like the priority of the defers — we could do that as a separate change, and it wouldn't change anything from the client's perspective. The payloads they would receive would be different, but because we don't link the individual defers to something that we concretely give the user —
C
We just tell them: there's more stuff coming at these paths, and here's the data, and we ensure that the fragments that we give you are consistent and delivered together. If we have these two independent fragments that have no overlap whatsoever, then in a later version of the spec we could deliver them separately, in either order, without breaking any existing clients — which —
D
— I think is really nice. A problem is that clients want to detect whether fragments got shipped. Yes, they can do it with the trick from Matt, but at the same time the aliasing trick basically forces you to modify the query, and it makes it bigger. So currently, if the server works as you wrote in the spec, you can basically track delivery of fragments based on the paths that were shipped.
B
Hey, sorry, I need to jump in because I have to leave for another meeting. But there's another working group on Thursday, I think — maybe we should talk through this with a bigger group then and get more opinions from there. What do you think?
C
I think it's probably quite a large thing to present at the normal working group, because you have to give people enough background. I actually have — I didn't show them, but I actually have a bunch of slides that explain the underlying details of a bunch of the stuff that we've talked about in this group previously.
C
So I could use that as a starting point, but I think it would be, like, a 45-minute session.
B
Yeah, yeah — okay, maybe.
C
I think that would be the right approach: see if anyone on this working group has some major pushback first, and then, as you say, maybe take it to the next primary working group — we'll add it there in April.
B
Okay, I have to jump off. You guys can keep talking if you want, but thanks a lot.