From YouTube: Incremental Delivery Working Group - 2023-02-20
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
B
Yeah, so, Benji, I saw you just sent me your spec edits. Do you want to walk us through it a little bit?
C
Yeah, sure. Apologies again, I still have a sore throat from last week, so apologies for all of that. I will share my screen.
C
All right, hopefully you can see something titled "absolutely terrible, not even quite a first draft". So after last week I did try to make some changes, and then I got sicker, completely lost track of what I was doing, and gave up halfway through the week and just went to bed.
C
Unfortunately, Rob reminded me a couple of hours ago that I did say I would do this, so I've tried to scrape something together based on whatever ramblings I'd done last week, and it has evolved quite a bit very recently; some of this has been written in the last ten minutes or so, FYI, to explain the state of it.
C
So the concept is this: I've introduced the concept of phases, and I will stress that these are not proper spec edits. It does things like mutating values, which we definitely don't want in the spec, because that would break error handling and a whole bunch of other stuff.
C
Now, when you execute a selection set, you group all of the selections by their field name, and then normally we just look at the first selection in each of those groups, because in normal GraphQL as it exists today they're all equivalent: they've already got the same directives, the same arguments, all of that. The only thing that's different for each of them is potentially their selection sets and things like that.
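The grouping step described here can be sketched roughly as below. This is a minimal illustration of standard field collection, not the draft spec edits; the `Selection` shape and the `groupByResponseKey` name are assumptions for illustration.

```typescript
// Illustrative sketch of the standard field-collection grouping step.
// In today's GraphQL, all selections grouped under one response key are
// equivalent (same arguments, same directives), so execution only needs
// to look at the first entry of each group.
interface Selection {
  fieldName: string;
  alias?: string;
  // Each selection may carry its own sub-selection set.
  selectionSet?: Selection[];
}

function groupByResponseKey(selections: Selection[]): Map<string, Selection[]> {
  const groups = new Map<string, Selection[]>();
  for (const sel of selections) {
    // The response key is the alias if present, else the field name.
    const key = sel.alias ?? sel.fieldName;
    const group = groups.get(key);
    if (group) {
      group.push(sel);
    } else {
      groups.set(key, [sel]);
    }
  }
  return groups;
}
```

The point of the sketch is only that multiple selections can share one response key; what differs between them (their sub-selection sets, and under this proposal their phase) is what the rest of the discussion is about.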
C
That is a little bit different now with stream and defer. So what I've done is effectively attach the phase information to the selections, so we know which phase each of those selections comes through, and then, when it comes down to actually executing the selection set, we can look at the...
C
Let me turn on the ignoring of whitespace changes, if I can find out how to do that, and then...
C
Yeah, here we go, so.
C
If none of the selections for the current field name are in the current phase, then we basically skip over it, note that we're going to be executing it later, and effectively store the object value for later, so we don't have to execute its parent field again or anything like that. Actually, it's where it should be enqueued, but I haven't written the correct text for that. Otherwise we do the normal thing: we execute it, but we pass in the phase information.
C
Then there's a bit of weirdness about: once you decide that you're executing this field in the current phase, we effectively pull all of those different selections, some of which are for a different phase...
C
We pull them effectively into this phase, the idea being that when we come to look at the remainder later, we can see that they're not in our current phase anymore, because they've already been dealt with, or something like that. I added a little "should defer" helper to help the system decide whether it wants to not defer for performance reasons, but that's neither here nor there, really; that's kind of a separate concern.
C
So then, when we start actually executing a query, we build a new phase, which is the root phase. Effectively everything starts in the root phase. We then execute the selection set using that root phase. If no other phases are created, then we just do what we always used to do and return an unordered map containing the data and errors. Otherwise, if the current phase, the root phase, has children, then we'll need to do streaming, so we create the stream.
C
We set a list of pendings, which will be all of the children from the current phase, send that out onto the stream along with the data and errors, and then we go about executing the phases. The siblings we execute in parallel, and for those it's actually quite similar to what we normally do. The interesting thing here is effectively...
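The top-level flow just described might look something like this sketch. It assumes a particular shape for the `pending`, `incremental`, and `completed` payload entries, which is not settled in the proposal; all names here are illustrative.

```typescript
// Illustrative sketch of the top-level flow: execute the root phase; if
// no child phases were created, return a single result as GraphQL does
// today; otherwise open a stream whose initial payload announces the
// pending child phases, then deliver and complete each child phase.
interface Phase {
  id: number;
  children: Phase[];
}

type Payload =
  | { data: Record<string, unknown>; errors: unknown[]; pending?: { id: number }[] }
  | { incremental: { id: number; data: Record<string, unknown> }[] }
  | { completed: { id: number }[] };

function executeQuery(
  rootData: Record<string, unknown>,
  rootPhase: Phase,
): Payload[] {
  // No other phases created: behave exactly as GraphQL does today.
  if (rootPhase.children.length === 0) {
    return [{ data: rootData, errors: [] }];
  }
  // Otherwise the initial payload carries the data plus a list of
  // pendings, one per child phase.
  const payloads: Payload[] = [
    { data: rootData, errors: [], pending: rootPhase.children.map((c) => ({ id: c.id })) },
  ];
  // Child (sibling) phases are executed, conceptually in parallel, and
  // each one eventually announces its completion on the stream.
  for (const child of rootPhase.children) {
    payloads.push({ incremental: [{ id: child.id, data: {} }] });
    payloads.push({ completed: [{ id: child.id }] });
  }
  return payloads;
}
```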
C
The parent field has already been executed, and what I'm proposing with this is that we don't...
C
What I was trying to say is that previously I was talking about it being sort of equivalent to compiling the defers into separate queries, but I think what we can actually do instead is just send through the resulting selections, and there can be multiple of them that come out of the same defer, as separate payloads. Then, once you've done that, you would add the fact that it's completed to the stream. So I guess, if I open the Zoom chat, I can type something in there again.
D
...execution state that we can track across the parallel threads that we have for the parts of the execution that we have?
C
Yes, so yeah, we effectively cache the object value and the path so that we know we can execute those later without having to re-execute the dependent fields. And an important thing about what this does: it doesn't do the maximal deduplication that we've been talking about; it just does best-effort deduplication, like one layer effectively.
C
Effectively, what we do is: the selection, which is like an instance of a field, but you can have multiple instances of the same field, multiple selections of the same field, right? Each of those effectively knows which defer it belongs to, or which phase, sorry, that it belongs to, and then we can use that information when we collapse those down, which I think is around here. So it's inside of the execute selection set.
C
We look at the grouped field sets for the given response key, so we pull those fields out, those individual selections, and then based on that, we can then look...
C
We can look at those to see whether all of those selections are not in the current phase, or whether they are, or whatever. The logic here might not be perfect, but effectively we can use the knowledge of which phase each of those fields is in to know whether or not it needs to be executed, or whether it's already been executed.
C
That's effectively what a selection, or a field selection, is: it's the field name, the parent type, the selection set, and the args that it will be passed, and I think directives as well. And we'd be adding to that also which phase it belongs to.
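Written out as a type, that description of a field selection might look like the sketch below. The field names are assumptions for illustration; the one addition the proposal makes is the phase tag.

```typescript
// Illustrative sketch of what a field selection carries: field name,
// parent type, selection set, arguments, directives, and (the addition
// under this proposal) which phase it belongs to.
interface FieldSelection {
  fieldName: string;
  parentType: string;
  args: Record<string, unknown>;
  directives: { name: string; args: Record<string, unknown> }[];
  selectionSet?: FieldSelection[];
  // New in this proposal: the phase this particular selection belongs to.
  phase: number;
}

const example: FieldSelection = {
  fieldName: "name",
  parentType: "User",
  args: {},
  directives: [],
  phase: 1, // this instance of the field was selected inside a defer
};
```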
B
Is the idea that we are just doing a traversal through the whole tree, but grabbing the values and sending them to the right payloads? Is that what's happening? No?
C
No, no, no: we don't do any additional traversals beyond what we would do with current GraphQL, other than obviously handling the deferred execution of anything that was deferred.
C
The thing that will be... so I know we've talked about this as well, in terms of: can we start executing the deferred things before the main selection set is complete? This proposal doesn't state that that is what happens. That doesn't mean you can't do it; it just means, observably, it would need to be as if it hadn't been the case. So yeah.
C
Jakob says in chat that he sees this as similar to his approach, except that we formalize defer depth as phase, and it's better because we're not relying on memoization but instead using proper types.
F
I mean, you also have the ability to complete the phase as it comes and then mark it as completed; I think that's also another good feature. Basically, I think it's great. It's going to require more, you know, finessing and implementing, and the question, I guess, is cost versus benefits, time and development, but I think it's ultimately better. I don't know.
C
And I should stress as well: this hasn't looked at stream. I haven't factored this into mutations or subscriptions, though it would work basically the same; I just haven't done the edits there. So this is incredibly raw. I need to turn this into more of an actual first draft, but I've been a little under the weather, so I haven't had much time.
D
But just on the deduplication that we are doing here: how does that deduplication make sure that we don't... is it that you want to essentially store each of the selections that you have executed on the phase, or...?
C
What I want to do and what I've actually done are two slightly different things, but to answer your question: effectively, the deduplication happens during the execute selection set.
C
So if we're not executing it now, then we add it as a thing to be executed later. Otherwise we execute it now and we send it. So it will only be one of those two things for each field: it either gets executed or it doesn't. Now, that won't be true...
C
Once you effectively fork execution through a defer, then it might be that it gets evaluated in two different great-grandchildren, but for siblings, and certainly for a parent and child, it will be deduplicated; parent and child phases, that is.
B
Yeah,
so
just
to
talk
through
like
when
we
say
like
static
deduplication,
whereas
our
original
originally
we
wanted
to
like
model
the
same
behavior
where,
if
you
were
to
rewrite
a
query
or
you,
the
execution
would
act
in
the
same
way.
B
So my assumption was that this would mean that, if you're looking at this query, it would just get rewritten to remove this defer entirely, and you're only ever going to get one payload from that. Yes?
C
That's still the case with my proposal; well, it will still be the case. It's not currently, because effectively a new phase gets created, but that phase ends up having nothing in it, and there isn't currently a step in which I clear that away. But yeah, ultimately it should, hopefully.
B
Okay, and now the second example is this one: there's no overlap, so this should always yield two payloads.
C
Because effectively the defer gets pushed inside of the nested object, though not really; you wouldn't represent it like that in syntax, but the nested object field is pulled directly into the parent, and it's only the selection for name that is still actually deferred.
B
So the path of that payload would not point to the same path as where the defer is, correct?
F
Okay. Is that desirable, or just something that is the current state?
C
Desirable or undesirable, I don't think it particularly matters. I think the main thing that we need to get out of defer is to know that when a deferred thing is completed, it is completed, and we've got that with the pendings and the "completed" with this ID or whatever.
C
So long as the information gets there, I don't particularly mind if it comes over multiple payloads or anything like that, split up from what was there before. This obviously has different trade-offs to what we've discussed for some of the other solutions; this is an alternative solution, so it may be that different people feel different ways there.
F
Well, I mean, it's very similar, I guess, to what Ivan was suggesting previously, and I think we've talked about an implementation being sort of a test case, and yeah.
C
I think one of the people that might have an issue with what I'm proposing is Matt Mahoney; I mention that because he's not here. He was saying before that he would rather the client only receive payloads when they are actionable or something like that, so they'd need to be a complete state.
C
So in what I'm proposing, you might end up getting different payloads for different layers and then a notification that a particular defer is now complete, and I think Matt would rather that we didn't receive that. But one of the advantages of the approach that I'm proposing is we don't have to do quite so much nested object merging, because we tell you, deeper, what the path is; we don't have to repeat parent objects multiple times, for example. So it's, you know, trade-offs.
B
In your case, the thing that I was struggling with when I was implementing was: if this nested object returns null, there wouldn't be any way to know, by going deep down inside, whether they actually do have the same or different children fields, and whether a static algorithm would have deduplicated or not. But I guess that's not the case here, because you wouldn't ever be sending this object a second time, so yeah.
C
We'd never even get as far as looking at the name field, because it effectively follows the same algorithm that GraphQL has currently, with some very slight modifications. So because the nested object field would be null, we'd never execute its sub-selections, so they would never be added to the queue to be executed later. As for being added to the queue: if that name field were to be added to the queue, it would be added along with the object value of the nested object field.
C
Yeah, I think it's really up to the server. You'd do it, I assume, in a similar way to DataLoader: you'd effectively wait a tick, see what's ready, and then send that all together, but I'm not sure that we need to specify that too tightly.
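The DataLoader-style "wait a tick, flush what's ready" idea can be sketched as below. This is a generic batching helper invented for illustration, not anything from the proposal; it only shows the scheduling trick.

```typescript
// Illustrative sketch of tick-based batching: items handed to the
// batcher during one tick of the event loop are flushed together as a
// single batch, the same scheduling trick DataLoader uses.
function makeBatcher<T>(flush: (items: T[]) => void): (item: T) => void {
  let buffer: T[] = [];
  let scheduled = false;
  return (item: T) => {
    buffer.push(item);
    if (!scheduled) {
      scheduled = true;
      // queueMicrotask defers the flush to the end of the current tick,
      // so everything that became ready "at the same time" goes out in
      // one payload rather than one payload per item.
      queueMicrotask(() => {
        scheduled = false;
        const batch = buffer;
        buffer = [];
        flush(batch);
      });
    }
  };
}
```

A server could use the same trick to coalesce several completed deferred results into one stream payload, without the spec having to mandate it.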
C
There are some interesting things: if, for example, you do a defer, and that has a deep selection set that's all inside of that defer, and maybe has lists and things like that inside of it, that will still all come through as just a single payload.
C
It's only when you've got lots of branching going on through lots of different defers that things get a little bit more interesting, I think.
C
Yes, the algorithm requires that. In terms of specifying it in the spec: you as a server can choose to not do that, so long as you still deliver it in the right order, because otherwise you might start delivering things that should not have been delivered, because an error occurred at a later stage or whatever.
C
So if you were to start executing that stuff in advance, you'd have to make sure you cached it and then ran it through the algorithm later, effectively. But yeah, from an algorithmic point of view, excluding any fancy optimizations you might choose to do, we assume, or we state, that you will start executing anything deferred after the non-deferred stuff is complete.
B
But you would have to do it that way, right? Because you couldn't get to this level and then have one thread that's executing this independently and another one that's executing this independently, because you need to know the values that were returned by the resolvers in the initial payload before you could start.
C
So those are what you would then execute next. Now, technically the spec would state that you wouldn't even look at executing name until after everything's been sent, but there's nothing to stop you from starting to execute name. You already know the value of the nested object at that point, because it's there in the parent selection set; it's already been evaluated in the initial non-deferred payload.
C
Effectively, they'll be regrouped together by field collection.
F
Meaning, because payloads are sent partially and then they're notified that it's ready, but the client couldn't... We might be able to add that on later, I guess, if that was something that was desirable, but it would be a little tricky. The model is definitely a patch model.
C
Sorry, I didn't fully understand that. I know we've talked about this patching and caching and all this stuff before, but I don't remember the specifics. But I think all of the incremental delivery stuff is all about delivering stuff later, which is adding stuff to things you already have, yeah. So in that way, this would also be like that.
C
Yeah, and that would be guaranteed by the fact that it comes just out of the standard field collection algorithm that we have in GraphQL. And it would also get rid of all of, was it Yaakov's issue? I forget who it was, where the question was around what happens if you get the list evaluated, and when you evaluate it in one thread there's four things, and then in the other thread there's three, and they're different things, but you select different fields from each.
C
Do you know what? No, that would still be an issue, or if the list is pulled down twice through sibling defers, or cousins, I guess, or something like that, some complicated tree, because we don't do that 100% full deduplication.
C
There could be some interesting patching there. Oh my God.
C
Yeah, I'm going to have to write that up, actually, because that's something I apparently hadn't thought through well enough, and I'd need to think about that. And that actually may well tie into what Yaakov was saying before about the normalized cache, because effectively, if you've got these nested defers and these branches of nested defers, then there could be a situation, I think, where one side of the tree is dealing with a list...
F
Yeah, actually, when I was talking about a cache I was referencing a comment that I made. I think it would be very interesting to enable a situation where a client didn't have to keep a global cache for the entire response, and I don't think we need to use "cache"; we could use the word...
F
You know, "patch" the original response and just keep that currently available. It would be nice if different components were interested in different things, and some of them could be deferred, if we could say that they each can be supplied with only the data that they require. A simple client might not necessarily need to patch the overall response; you just need to patch...
F
...the components. Right now we're combining them at the same level, so it would supply and contain all the data for those deferred components and they would be supplied with them, and there wouldn't have to be a global response object. There are two blockers for that. One is the deduplication itself, which we have talked about; as you know, it would be nice to not...
F
...well, to some extent to deduplicate where possible. And the other blocker is that we're inlining things. So in general, I think, once we have the possibility that things might be inlined, and once things are inlined you're definitely working with the original response, it becomes difficult to sort everything out.
F
So I was hoping that once we got to a response shape that we liked, it might be possible to separate the initial response, meaning to inline not by sticking everything within the original response, but to inline by, you know, telling the client, sending the data separately, but in this upper tree, at the same time. Now again, inlining is good for deduplication. So these things are like a cross-cutting concern: is this use case real?
F
Is what I'm talking about really so beneficial that it's worth giving up some of these things? It was just something I was thinking about: it might be better for even a naive client to just be able to notify the components and have the components know that all of their data is available with the data that is here right now, that it just received.
F
So again, it brings us sort of further afield, but I'm just pointing out, I guess, that this model brings us a little further from that, meaning once we're extending individual fields, it brings us a little further from that. It's not necessarily a bad thing; I don't think that was the direction we were going in to begin with.
C
That's an interesting point. To try and apply that to a concrete use case that I have, that I would like incremental delivery to deal with, and it is a little bit left field from what we would normally do with GraphQL: for example, building a streamed query over a very large data set, like gigabytes, and then having a client that might receive that and write it out to a CSV file, or something like that.
C
So if we have to have this global object that represents everything about this query, then we would have to have enough RAM to store the full result, multiple gigabytes of it. But I don't think that's the case, even with what I'm proposing, because what we're effectively saying is, you know, that there are deferred things at this path, or this path is now complete.
C
Etc. I think what you can do is effectively just keep a cache of the object that you're currently dealing with, or those objects in that list, and yes, you would be patching those; you'd be adding extra fields into those, maybe into child selections and so on. But once that particular entry in the stream, for example, is complete, then you don't need it anymore.
C
You write it out as a row in your CSV file and then you can carry on processing the rest of the stream. So I think you can still do that, but I would definitely have to think about it more. I think what you're saying is, compared to the situation where we would have no deduplication, whereby, if you defer a fragment, when you later receive it, you've received a fragment: this is the entire data I need for this one row in my CSV; I can write it and then I can throw it away.
B
The way that works on the server is basically by flushing out fields without waiting, I mean, by readjusting the spec so that we're able to send fields with paths that don't line up exactly with where the defers are. That's how you get around the server having to buffer data for the shared parent objects, because they're flushed out and you've just moved on to executing the children fields, right?
A
I can provide a concrete example: if you have a query and you put everything in a defer at the top level, and there is a single slow field, the slowest field in the entire query, it means you need to store away the entire response until you get that single top-level slowest field. So effectively, I agree with Jacob.
A
Yeah, yeah, and you said about your streams, multi-gigabyte streams: imagine you have everything at the top level, all the fields, and one of them is this slowest field, so you're stuck with the entire response in your cache, or a pretty big chunk of it, until you get it; there's no way around it. So basically, if you defer it and it's not finished quickly, you're stuck on it.
C
You wouldn't receive any of the deferred stuff until after the root selection... I'm still, sorry, Ivan, I don't quite follow what you're saying. It sounded like you were saying: if you have a root-level defer that has a big complex query in it, and also a root-level, non-deferred slow field? No?
A
So, like you basically said: instead of keeping the entire query in memory, you can keep only the stuff that gets deferred. But what if the entire query, or the biggest part of the query, is deferred, and the slow field is right under that top level? It means you'd need to store the entire result until you can get that slow field.
C
The proposal that I've put together basically works very much like the current execution algorithm anyway, right? So what would actually happen is all of those root-level fields inside that defer would be executed in parallel, and then they would all be sent through, but they would be sent through as if it was just one single deferred thing; it wouldn't be built up in patches. If you've got one big complex defer, it would just be evaluated as one big complex defer.
A
Yeah, so, to clarify your proposal, because maybe I'm confused: which constraint are you breaking? Are you breaking the constraint of sending multiple payloads, or the constraint of changing the path of where stuff is delivered, or...?
C
So if you've got... I think Rob wrote a query that was like, you know, A with a selection in it of B, with a selection in it of C, and then another root-level defer that had effectively A and B and then maybe D or something else, so they were very, very similar to each other, except for one great-grandchild field of difference. Thank you, Rob.
A
So, basically, when you write your fragment and you defer the entire fragment, whatever, with a bunch of fragments merged together, the client cannot trust that. Say you had a defer on top in this example: you would get, like, a pending or something; you would get the ID with different stuff.
C
No, yeah, you were right originally, actually, thinking about it. The path for this defer would have been the root selection set, because that's where it is. So, yes, you would have to cache that entire object in order to patch it.
B
So, for this example, you're basically going to send first the initial payload with these fields; you're going to say that there's a pending with the path at the root. Yep. Then you're going to send data with just J, with a path of F2, C, F. Yes. Then there's going to be a third payload that says the one at the root is completed. Yes.
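The payload sequence being walked through might look roughly like this. The exact field names (f2, c, f, j) come from a shared example document that is not visible in the transcript, so every shape and name here is an assumption for illustration only.

```typescript
// Illustrative reconstruction of the three payloads in the example:
// initial data with a pending defer at the root, one incremental payload
// delivering j at a deeper path than the defer itself, and a final
// payload marking the root defer as completed.
const payloads = [
  // 1. Initial payload: non-deferred fields plus the pending entry.
  { data: { f1: "..." }, pending: [{ id: "0", path: [] as string[] }] },
  // 2. Incremental payload: just j, at path f2 / c / f (deeper than the
  //    defer's own path, which is the trade-off under discussion).
  { incremental: [{ id: "0", subPath: ["f2", "c", "f"], data: { j: "..." } }] },
  // 3. Completion notice for the defer at the root.
  { completed: [{ id: "0" }] },
];
```

The point of contention later in the discussion is exactly this: the data payload's path need not match the defer's declaration path, and one defer may produce several such payloads before its completion notice.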
C
Just as Rob described just then, but imagine after you get that J, you also get another patch that comes through with a C2 path under F2, and then you get that the defer is completed.
A
It seems... and if I copy-paste that with the defer, and we make a copy of the third part and shift it one level inside, so I wrap C into a defer, yeah, then I will basically get J twice.
A
Yeah, just like the one that Rob selected.
A
Yeah, yeah, correct. That's what I mean: we will have two instances of J, which is okay, because we're not fully deduplicating. But, yes, you have two instances of J with the path of J, so we're also breaking the constraint of multiple payloads per path.
A
So, based on... initially we had, like, three constraints, and every solution broke one constraint; Jacob proposed another one. Here you're basically breaking two things, I think. The path is still predictable, like the client can calculate it, but with more complicated algorithms. So it's basically...
A
Statically predictable, but it's not the same as where it was written, so it's a different path, right: the path of the payload and the path of the defer. So it's like two new entities, where one defer can be split into multiple payloads.
A
It's the second constraint that changed, right? And the fourth constraint, the change is that we will have multiple payloads with the same path. But yeah, now I'm starting to get it, because it's a new idea of, like, a defer path and a payload path. So basically, instead of payloads we need to say "patches"; it will make more sense. So we will have multiple patches on the same, yeah, points to keep track of it.
A
I think, like, in reality, clients will just have one JSON object, and they will just patch it, because otherwise it's too much hassle to maintain, and you will have more deduplication on the client. Because, like Jacob shared what he is working on, I also want to share what I'm working on; nothing to show yet. One thing I figured out, and the direction I'm looking into, is: can we statically rewrite the query, meaning no execution is changed?
A
Forget about parallelization; just merge everything into one response, and if people want parallelization, stuff executed independently, they will use aliases. So focus on having an execution algorithm that rewrites one query into a normal form of a query: basically, only do collect fields and all the steps there. So effectively, currently fields get merged, and I'm trying to find a set of simple rules for all the stuff that moves inside and outside.
A
...before, you know, the combination, and make it predictable and static. And if people want to say that something is in parallel, they need to use, like, aliases or labels or some other mechanism we discussed. I think it's a similar thing to inlining: when we agreed that we need to inline into the initial response to prevent the denial-of-service attack.
A
If we say that, on the same level, people cannot have defers executed independently and they need to use aliases on the same level, the price is already paid; we need to add mechanisms to do that. So why not create a rule that squashes everything into one query, where stuff gets deferred not independently but subsequently, and if people want parallelization, they need to use aliases? I think it's a direction I want to explore; I just wanted to announce it. Hopefully it will become a written proposal.
B
Yeah, we're at time.
B
Yeah, I mean, I think it's good to keep going through all these different ideas and make sure that we're confident in whichever direction we go in. So I would wait to see some examples of what you're thinking of, and the same for Benji's proposal.
A
Basically, it's like last time on a working group, my working group: we discussed it, and the reaction was, the problems are real, solutions have trade-offs, let's explore possible solutions.
A
So that's what we're doing right now, so I think everything is good. One thing: maybe at some point we need to have, like, a document of the trade-offs, or the constraints that we break, and some of these ideas too, because we've already started branching out ideas, branching out of ideas, and for us it kind of makes sense, but for people outside of this group it's too big a jump to understand what we're currently discussing.
A
It will introduce new concepts, like the defer declaration path and the path of the patch.
C
So I need to think about that a bit more, but yeah. It's been a good discussion. Thanks, everyone. Yeah.