From YouTube: Incremental Delivery Working Group - 2023-06-26
B
Hey everyone, let's get started.
B
So
so
last
week
was
a
pretty
small
meeting
and
we
were
talking
mostly
about
early
execution
versus
deferred
execution
and
I.
Guess:
I!
Guess,
besides
that
I
had,
since
we
decided
on
the
on
the
the
payload
format,
I
updated
it
I
put
it
in
I,
moved
it
out
of
that
distance
into
the
GitHub
discussions
to
see.
B
If
anyone
else
had
any
thoughts
haven't
heard
any
feedback
from
there
yet
and
I
went
through
a
bunch
of
the
all
discussions
that
we
had
and
marked
a
lot
of
them
resolved
most
of
the
ones
around
duplication
response
explosion,
all
the
stuff
that
the
new
payload
format
addresses
yeah.
And
then
we
talked
about
this
early
execution,
University,
deferred
execution
and
I
kind
of
want
to
continue
this
discussion
a
bit
and
I
think
that
I
will
probably
add
it
to
the
agenda
of
the
next
working
group
meeting
the
primary
one.
B
That's on, what is that, next week, next Thursday, July 6th, so we can get broader input, but.
B
Early execution means that when you have fields that are deferred, we start executing them right away, or with some amount of delay, like waiting a run-loop tick or something similar. Basically, the spec would be written from the perspective that things are going to be executed as they're encountered, with implementations allowed to have a delay of any amount before deferred stuff starts getting executed. Whereas deferred execution means that you don't even start calling the resolvers of deferred fields until all of the non-deferred fields in the same level have been completed, so their resolvers called, complete value called, so the whole tree is done there. And there were some reasons brought up for it, like how you can have queues in your schema implementation.
B
I've been thinking about it over the past week, and I think that I'm still not convinced by deferred execution. I agree that all the things that we raised are potential pitfalls or issues, but I'm not convinced that a globally enforced deferred execution is the right way to solve them, because it really feels like we're putting a penalty on every schema implementer, every resolver, in case they do have these issues. I feel like we could provide tools that would solve these issues. I think that we could do it with implementation-specific things, like putting stuff in the data that we send to resolvers.
C
An implementation can do whatever it wants, right? It can wait. For a lot of situations, like Federation, it makes sense to do early execution because the whole environment is controlled. For each implementation, the implementation can decide. I'm just worried about the first experience.
C
Somebody writes their own server, or they migrate to a new version of graphql.js specifically, and they discover surprising behavior. Especially since, in the worst-case scenario, defer is useful, but at the same time it's weird to have a quite useless feature and force people to work around it. Can we produce, for example, a warning in case stuff is awaited? That would be ideal. So some warning, like a console warning, saying something.
B
But it really feels like, yeah, I mean, I totally understand that that's an issue that you can run into; the first-use experience could be non-optimal. But the expense of that is that you're hindering the users who are able to solve that, right? Yeah.
B
Yeah, that's like the inverse. It's kind of like setting a flag; basically the inverse of that was what I had proposed, where you put something on resolve info. Benji said it could be like a promise, parentDefer, that you could then await inside your resolver. That's kind of taking the opposite approach: it's early execution by default, but then, where you do run into these issues, you can do this.
B
I'm imagining you have some code that you use to make your database queries, and then you could make that smart enough that you could pass the resolve info to it, and it could make separate queues for whether something's deferred or not, or it could do the awaiting on its own. So it's not necessarily that you have to flag something on every single resolver.
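The opt-out being discussed might look roughly like this (a sketch only: `parentDefer` on resolve info is a proposal from this discussion, not an existing graphql.js API, and `makeInfo` is a made-up harness):

```javascript
// Hypothetical: resolve info carries a promise that settles once all
// non-deferred siblings at this level have completed.
function makeInfo(isDeferred, nonDeferredDone) {
  return {
    // Resolves immediately for a non-deferred field.
    parentDefer: isDeferred ? nonDeferredDone : Promise.resolve(),
  };
}

// A resolver that opts out of early execution for its expensive work.
async function expensiveResolver(source, args, context, info) {
  await info.parentDefer; // hold back until the non-deferred tree is done
  return "expensive-result";
}
```

The point is that the default stays early, and only resolvers that actually suffer from queueing need to await.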
C
The remaining problem is that there is no clear migration path from "before, it's not deferred, nothing" to that point. There is no way.
B
It makes the response larger, there's a bit more work for the server to do, and my clients should understand that. I don't know, I feel like, is there a parallel here to how a naive GraphQL implementation could have the N plus one problem, but we recommend DataLoader as a way to solve that?
C
Thinking about that, there are two solutions. One solution is: if you're willing to invest time and potentially redesign some underlying code, you can use Yaacov's approach.
C
That's why I'm wondering whether that bunch of proposals is good enough without it.
C
I would say a strong no. After this change, there is no blocker for early execution there.
C
There are downsides, but not a blocker, and I think on the main working group you can discuss it and get the temperature of what people want, especially what the default behavior should be, early execution or deferred execution, because having both should be quite possible.
B
Just reading Benji's message in the chat now.
E
Benji, I think that's a summary that I definitely agree with in terms of the first part, the pass criteria, the failure criteria. I'm not sure about the second part; you know, this is just a summary.
A
Yeah, I think giving users the way to opt into early execution would be the best, which is why I've explicitly said that we should allow early execution; it just isn't what the default should be. And I think broadly this comes down to, like Ivan was saying, that first expectation: people are going to upgrade to a new version of GraphQL.
A
That's
going
to
have
stream
interfer
people
are
going
to
say:
oh
okay,
I
understand
what
stream
interfer
is
I
want
this
data
sooner
and
this
other
data
later
so
I'm
going
to
add
defer
to
the
stuff
that
I
don't
care
about
at
first
and
if
that
doesn't
actually
speed
up
their
page
load
times.
They'll
just
think.
Oh,
this
feature
doesn't
really
work
and
that'll,
be
it
and
I.
A
And I think, because of what you were saying, Ivan, with the third-party modules: because we support arbitrary business logic, and GraphQL is just a layer over arbitrary business logic implemented by any number of services, either locally or remotely, we cannot know if there are any queues there, either operating-system queues or queues that are in our own code.
A
Cues
that
are
in
remote
Services,
we
don't
know
and
so
having
the
default
behavior
for
this
be
essentially
the
early
execution
and
potentially
give
this
bad
first
impression,
I'm,
not
in
favor
of
that
said,
I
do
agree
with
Rob
and
Michael
and
various
others
that
making
it
so
that
every
graphql
request
that
uses
to
pair
and
stream
is
takes
longer.
Just
purely
because
we
defer
every
execution
is
not
completely
desirable
in
itself.
A
So
I
think
for
people
that
understand
their
systems
being
able
to
opt
into
the
early
execution
is
wise,
whether
that
be
in
a
global
across
the
whole
schema
with
a
flag.
Within
this,
this
opt
out
that
we've
discussed,
like
the
the
resolve
info,
dot
parent
to
fur
promise,
or
something
like
that
or
whether
it
be
on
a
per
resolver
basis,
which
I
think
is
less
desirable.
A
Yeah,
there's,
definitely
I,
think
there's
a
middle
ground,
but
as
much
as
anything
part
of
the
question
here
is
what
do
we
actually
specify,
because
the
specification
of
early
execution
becomes
quite
complex
comparatively
because
it
separates
execution
from
delivery
in
a
way
that
the
graphql
spec
doesn't
really
do
currently.
E
I think that, you know, for the reasons that you and Ivan mentioned, having early execution be the default,
E
You
know,
I
think
without
a
broader
understanding
of
its
you
know,
of
the
likelihood
of
impact
you
know
does
seem
sort
of
unnecessary.
You
know
power
users
who
want
to
enable
it
as
long
as
they
have
the
option.
It
would
seem
fine,
but
and
I
think
so.
I
think
broadly
there's
a
lot
of
agreement
that
that
it
would
be
good
to
provide
it
that
servers.
E
You
know
the
options
to
users
I
had
a
I
posted,
a
comment
in
I
think
on
I
think
it
was
on
the
the
issue
of
the
discussion
that
you
raised
Benji,
where
I
think
that
we,
you
know
if
it's
just
a
question
of
whether
to
specify
it
or
not,
and
not
a
question
of
whether
to
to
have
the
option
to
provide
it.
E
I,
don't
really
see
the
downside
of
of
specifying
it
in
terms
of
complexity,
I
mean
I,
think
the
fact
that
it's
possible
you
know
and
that
we've
proven
it's
possible
with
an
implementation
and
with
spec
edits
of
to
whatever
extent
they've
been.
You
know,
you
call
you
know
the
algorithm
and
the
invitation
of
proof
you
know,
but
to
whatever
you
know
the
fact
that
we've
shown
that
it's
possible
sort
of
sort
of
as
a
boon.
You
know
and
and
sure
it
adds
complexity,
but
you
know
isn't
that
the
you
know
something.
E
that the algorithms we include in the specification help people with? They help manage that complexity. So I feel like, considering that we have algorithms, and we have spec text, and we have a reference implementation, if it's just a question of one being simpler and one being more complex, I'm not really sure what advantage we have in not sharing the complex solutions we've discovered with those who want to implement them.
E
I
do
think
there
might
be
a
value
of
giving
a
more
simpler
algorithm
as
well,
but
you
know
I
I
think
it
might
be,
might
be
wise
to
even
include
both
Alternatives,
maybe
one
in
an
appendix
and
one
and
one
in
the
main
text,
but
I'm
not
meaning.
We
seem
to
have
basic
agreement
that
it
should
be
possible
so
that
it's
enough
that
it's
an
option,
it's
an
optimization
that
many
might
feel
useful,
so
I
feel
like
in
terms
of
specifying
it.
E
The
downside
of
the
additional
complexity
you
know
is
actually
is
sort
of
an
upside.
You
know
that
that
we
we
can
help
people
with
that.
C
Actually, I have an idea, because we can go back and forth; what can help us is to put more input into the loop. The first idea: since Facebook implemented defer and stream years ago, a question is what they use. Facebook is special, because they have a huge monolith and they put a lot of investment into it.
B
Things may have changed, but that's what I think they're doing; we should hear it from them, yeah.
C
Yeah, awesome; maybe they have some background, for example how they approached writing the execution. It would be useful to know. And a second idea: remember we had a couple of occasions when we did a poll on Twitter. I know for sure we did a poll when we discussed introducing the DateTime scalar, and we discussed the name conflict. The champion for that, I think Lee, or Angie,
C
Somebody
did
like
go
on
Twitter
ask
like
how
do
you
want
this
cover
to
be
called
because
proposal
back
then
was
to
call
that
offset
daytime
something,
and
most
people
said
no.
We
actually
like
wanted
to
be
called
daytime.
So
maybe
it
was
what
actually
potential
clients
one.
Oh
people
who
use
this
each
other
want
like,
and
to
make
this
question
unbiased.
C
Wait!
Don't
put
any
numbers
just
ask
like
what
you
expect
initial
response
return
as
early
as
possible
and
all
you
wanted
return
as
early
as
possible,
but
at
the
same
time
try
to
minimize
or
minimize
like
the
executions,
the
whole
query
potentially
making
it
like
initial
response.
Little
bit
to
it.
I,
don't
know
how
to
describe
it,
but
yeah
I
think
we
can
do
a
poll.
B
Yeah, well, the thing with a poll is, I don't know, you may just get lots of responses from people who don't fully understand the context. When it's just for naming something, it's just kind of what you prefer, but here you may not fully understand what the trade-offs are.
B
But even then, there could be other ways: you could have them both be as fast as possible, and deferred execution isn't the only way to do that. But anyway, yeah, we could think about the right way to ask the question. But before that, I would like to get feedback from the bigger working group meeting.
C
One quick argument for the main group: when we present it, present the per-field switch as a solution, a switch option you use as you think it should be for a particular field, or something, because
C
say you did some synchronous calculation. You shouldn't put a big JavaScript synchronous calculation in the main thread, but at the same time a resolver can do some calculation, and you never want early execution to be constantly calling such resolvers, right? There's no reason to do early execution for synchronous resolvers.
C
It's something in between, something that previously people were okay with not moving to a web worker,
C
but now it's basically making the usual response slower, yeah. It's not an argument against early execution; it's an example of why per-field control might make sense, because only the schema author knows what their resolvers do internally: is it synchronous, is it asynchronous, is the synchronous part quite big or not? So I think if we give control to a user, it should be per field.
B
Going back to the spec: if we write the whole spec with the assumption of deferred execution, and then we wanted to add a mode for early execution, that seems like it would be a lot of changes to go in that direction. But if you want it the other way,
C
I would add early execution from day one in an environment where you control all the code and where you know what you're doing.
B
With early execution, when your server is implementing it, there are a few more cases you have to handle for error bubbling; that's basically the difference. But if you did write all that out in the spec, and you wanted to have an option for deferred execution, it seems like it would just be a very small change, because for early execution I think we want to say the server can have some amount of delay, and in deferred execution mode you would just wait as long as needed: you would basically just do that await in the spec, as part of executing, before you call the resolver.
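A minimal sketch of that framing (hypothetical names, not spec text): if the spec models a deferred field as "start after an implementation-chosen wait", then deferred execution mode is just one particular wait policy.

```javascript
// One hook decides how long to hold a deferred resolver before invoking it.
async function runDeferredField(resolver, waitPolicy, siblingsDone) {
  await waitPolicy(siblingsDone); // the only difference between the two modes
  return resolver();
}

// Early execution: no delay at all (an implementation could also insert a tick).
const earlyPolicy = async () => {};

// Deferred execution: wait for all non-deferred fields at this level.
const deferredPolicy = async (siblingsDone) => { await siblingsDone; };
```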
E
So, Rob, I think here I kind of agree with Benji. The overall changes between early execution and delayed execution start off with how to handle errors and null bubbling, but they also sort of impact how we actually do field collection: do we have sort of leftover fields? There could be a whole host of things. I've worked up some spec edits for early execution, and I
E
Think
probably
those
could
be
improved
in
many
many
ways,
but
if
we
were
using
delayed
execution,
I
think
it
overall
could
be.
It
could
be
much
simplified
and
I
think
what
Benji
is
getting
at.
Is
that
there's
a
you
know
a
there's,
a
certain
beauty
or
even
advantage
in
the
Simplicity
first
of
all
making
sure,
let's
say
just
making
sure
there
are
fewer
errors,
but
you
know
maybe
just
making
sure
the
spec
is
more
extensible
later
on
I
mean
I
can
see
the
advantages
of
Simplicity
but
I.
E
Think
if
we
go,
you
know
I
mean
if
we
start
with
early
execution.
Sorry,
if
we
start
with
delayed
execution
and
then
we
want
to
make
a
change,
it
wouldn't
have
any.
E
It
wouldn't
be
a
breaking
change,
but
there
would
be
a
large
number
of
spec
edits
and
if
we
we
had
early
execution
in
the
spec
as
I
have
now
in
in
my
proposal
you
know
we
could
you
know
we
could
support
delayed
execution
in
in
the
spec,
but
it
might
be
beneficial
to
those
implementers
who
are
only
using
delayed
execution
to
actually
have
those
simpler
algorithms
as
well.
So
that's
why
I
think?
E
I don't think you're supposed to rely on the resolvers having side effects, except with mutations. Some people might, but we can even add a note that that's still not a good idea. But that's not to say, I mean, I come out on the side that we definitely should specify early execution. And I
E
Think
it's
I
think
would
be
important
or
what
I've
gotten
from
these
meetings
is
a
little
bit
of
confusion
as
to
what
topic
we're
discussed
or
not
confusion,
but
just
realization
that
there
are
two
different
topics.
One
is
which
what
what
should
an
implementation
pursue-
and
you
know,
should
there
be
flags
and
and
so
forth,
and
then
the
other
question
being.
E
What
should
the
specification,
specify
and
I
think
there's
basic
agreement
that
there
should
be
options
and
implementations
and
a
very
likely
implementations
will
find
it
useful
to
enable
early
execution
and
I
think
there's
broad
agreement
there,
and
the
only
question
is:
is
how
exactly
to
have
that
reflected
in
the
specification
text
and
it
seems,
like
opinions
run
the
gamut
from.
We
should
specify
the
late
execution
and
just
provide
a
simple
note.
You
know
that
you
can
do
early
execution
and
good
luck.
E
You
know-
and
you
know,
good
luck
with
the
algorithms
or
or
you
know,
or
what
I'm
suggesting
good
luck
with
the
algorithms
and
maybe
even
checking
appendix
or
you
know
or
versus
having
the
or
early
execution
algorithm
and
the
main
text
or
the
appendix
for
the
delayed
versus
you
know
only
having
the
early
execution
in
the
main
text,
I
mean
I,
I,
don't
think
I,
don't
think.
There's
anyone
here
that
thinks
that
some
implementations
might
want
to
go
with
one
or
the
other
I
think
we
all
realize
that
there
are
trade-offs.
E
So
the
question
is
what
to
specify
and
in
that
sense
like
having
a
poll
as
to
what
to
specify
you
know,
would
be
a
very
different
poll
as
to
whether
you
want
to
do.
You
know,
have
early
or
a
very
different
discussion
in
the
main
working
group,
so
I
think
we
have
to
when
we
present
it
to
the
main
working
group.
Maybe
we
want
to
either
clearly
present
both
questions
or,
or
you
know,
or
focus
on
one
of
them.
C
The graphql.js code should be roughly equal to the spec, so to continue with that train of thought: actually, I would really want us to specify the way the execution actually works. It might make things simpler for a person who is just trying to understand how GraphQL works to go into the spec and read the simpler version, but we should be truthful to what we are describing.
C
Maybe have some caution explaining that third-party code, or even OS-level limits like connections or something else, can create a problem, so you should choose the default wisely. But yeah, if the idea is that most implementations will do early execution, it should be early execution in the spec.
B
And do you know, maybe we can hear from inside Apollo, because they are already shipping defer, and so I think, if that's using graphql.js, that would be the currently implemented early execution, right?
C
Yeah
but
but
I
think
if
it's
if
it's
in
Federation
is
probably
written
in
Rust,
so
I
will
ask
in
in
a
group
but
yeah
yeah.
It's
good
idea
in
general
to
ask
about
what
currently
Improvement
that.
B
All right, I'm not going to be able to make next week's meeting, so I'm going to cancel it.
B
Then the Thursday after that is the primary working group, so I'm going to add this to the agenda, and I'll try to come up with a good way to present all the info that we discussed.
E
Just one quick word on the loader utility: that was a suggestion of Ivan's from last week, to provide sort of a parallel to DataLoader for people to start dealing with this right off the bat.
E
What
that
does
is
it?
It
makes
sure
that
to
disentangle
data
loaders
by
sending
defers
sending
deferred
a
requests
separately
than
the
non-deferred
using
the
information
that's
exposed
in
the
resolver,
it
doesn't
solve
a
situation
in
which
the
whole
deferred
batch
is
sent
to
your
endpoint
earlier
than
than
your
non-deferred
batch
and
causes
your
endpoint
to
queue.
So
it
doesn't
solve
that
problem,
but
it
does
solve
the
entangling
one
for
the
queuing
problem.
You
know
that's
something
you're
not
in
control
of
you
would
have
to
wait
on.
E
You
know
disable
early
execution
for
that,
for
that
resolver
I
think
there
can
be
some.
You
know
again,
I
think
I,
think
of
early
extension
disabled
by
default.
E
I
think
we
can
do
some
some
work
on
query
planning,
even
in
graphql.js
or
you
know,
probably
as
a
at
least
initially
as
a
separate
library
that
can
actually
create
more
complex
and
more
accurate
ways
of
optimizing
this
this
further
and
making
sure
that
that
we
don't
we,
you
don't
have
to
completely
wait
for
the
entire
initial
result
to
to
finish
just
because
the
initial
result
may
include
you
know
something
that
that
that
might
get
blocked
by
your
later
result.
You
know
for
a
query:
that's
Operation!
E
That's
like
stored
in
the
reserver
like
a
like
a
store
or
like
a
stored,
Opera,
a
persisted
operation.
We
could
get
something
much
smarter
in
place
and
I
think
once
we
get
early
execution
in
and
like
under
a
flag,
I
could
stop
start
working
on
that.
E
This
wraps
around
a
data
loader,
it's
not
a
it's,
not
a
fork,
and
so
one
aspect
of
that
is
it
uses
the
same
data
loader
and
the
same
cache
and
also
the
same
the
same
connection.
E
So
there
might
be
a
different.
You
know
you
know.
Basically,
if
you
want
to
use
different
connections
as
a
separate
tool,
then
you're
basically
talking
up
maybe
about
two
different
data
loaders.
So
there's,
maybe
some
some
some
some
options,
maybe
that
to
handle
this
somewhat
differently.
But
this
is
just
like
a
first
proof
of
concept.
E
Yeah
exactly
to
use
the
same
cache,
and
that
means
that,
because
they're
not
queued
together,
they
won't
be
present
in
the
cache
at
the
same
time,
so
that
you
may
get
the
same
item
sent
in
two
different
batches.
You
know
simultaneously
because
they
don't
you
know,
neither
one
will
be
in
the
cache
until
they
both
both
come
back
but
they're,
never
entangled,
which
is
the
which
was
the
main
problem
that
this
was
trying
to
solve.
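A rough sketch of that disentangling idea (not the actual utility; all names and the `isDeferred` flag are hypothetical): deferred and non-deferred loads go into separate batches while sharing one cache of settled values, so a key requested by both in the same tick is fetched twice, which is exactly the trade-off just described.

```javascript
// Wrap a DataLoader-style batcher so deferred loads batch separately.
function makeDeferAwareLoader(batchFn) {
  const cache = new Map(); // key -> settled value, shared by both queues
  const queues = { normal: [], deferred: [] };

  function flush(kind) {
    const batch = queues[kind];
    queues[kind] = [];
    batchFn(batch.map((item) => item.key)).then((results) => {
      batch.forEach((item, i) => {
        cache.set(item.key, results[i]);
        item.resolve(results[i]);
      });
    });
  }

  return function load(key, { isDeferred = false } = {}) {
    if (cache.has(key)) return Promise.resolve(cache.get(key));
    const kind = isDeferred ? "deferred" : "normal";
    return new Promise((resolve) => {
      queues[kind].push({ key, resolve });
      if (queues[kind].length === 1) process.nextTick(() => flush(kind));
    });
  };
}
```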
C
Yeah, think about working on that; it would give us arguments about whether it's fine, how hard it is to implement, and how the whole story would look. Since we're now in the discussion phase, the solutions we figure out shouldn't have to be super polished, but when we decide, if we decide to go with early execution, it should be something of that order: right when we release it, we provide a coherent story, a tutorial or something.
E
Yeah
I
mean
so
yeah
I
took
that
suggestion
and
then
I
bet
got
some
feedback
from
Benji
in
terms
of
data,
loaders
sort
of
being
feature
complete
at
present
and
I.
Guess
it
totally
is
for
what
it's
worth
so
I
mean
I.
Guess
it's!
You
know
the
question
of
whether
the
vision
for
the
library,
whether
it
should
be
in
data
loader
or
alongside
it-
maybe
that's
for
Lee
to
tell.
C
Yeah, one person may have a server, and Ivan and Benji might be really deep into the DataLoader story, because we've used DataLoader for years, worked on it, and customized it, and right now it's in the early majority. Previously it was early adopters, some enterprise companies, and some companies are just now switching to it. So I think we need to also bear in mind that for some people DataLoader will be a new thing.
B
The takeaway here is that DataLoader has this problem and we're able to solve it; we don't have to do deferred execution to solve this problem with DataLoader.
B
All right, yeah, so.
B
Like
I
said,
I'm
Gonna,
Cancel
next
week's
meeting,
so
I'll
see
you
all,
maybe
at
the
working
group
on
the
Thursday
after
that,
or
maybe
the
Monday
after
that
yeah.
Thank
you.
Everyone.