From YouTube: Incremental Delivery Working Group - 2023-06-19
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
A
Oh yeah, it is the U.S. holiday today, so we might be a little late.
A
Okay, so the first thing that I wanted to do: I think the discussions that I have in the repo are a bit outdated now that we've decided on the new response format. So I'm going to post a new discussion that's basically a copy-paste that explains the response format we've been discussing, and say that this is the direction we're going in. I'll leave it open if anyone wants to suggest tweaks or anything.
A
It's the gist that's at the top, just not any of the comments. We talked it through with Matt and I think his concerns have been addressed now. So the plan is to go with what we've been discussing for the past few weeks. I'll post a new discussion topic that has all this info, so anyone who's following that repo can see it, and if anyone has concerns or suggestions, we can work through them.
A
I'll also go through any of the unresolved discussions that are open that may have been resolved by this. There are a few from Benji about deduplication and all that stuff, so I'll leave a comment that they're addressed and then mark them as resolved.
B
One good question: is anybody working on a threat analysis of the current proposal? Because we discussed a lot of denial-of-service attacks, like increases in payload size and everything. Is anybody analyzing whether the current proposal can be attacked, and what a server should check? What would be reasonable limits to apply, stuff like that? I think somebody needs to go through it and outline what the denial-of-service surface is.
A
Yeah, maybe I can make a comparison between this new format and the old one and show how the deduplication helps. With the old one, you could basically amplify the response payload size exponentially, maybe more than that, but with this one I think it grows at a much more constant rate, because it's really only sending longer paths multiple times. So I can write up an example that demonstrates that.
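As a rough illustration of the amplification concern Rob describes, the sketch below uses made-up byte counts, not the actual wire formats: the old incremental format re-sent the data of enclosing deferred fragments, while the new format only repeats short response paths.

```javascript
// Illustrative only: compare how response bytes grow with N nested
// deferred fragments, each adding one field of `fieldSize` bytes.

// Old format: payload at depth d re-sends the data of fields 1..d,
// so the total duplicated data grows super-linearly with depth.
function oldFormatBytes(n, fieldSize) {
  let total = 0;
  for (let d = 1; d <= n; d++) total += d * fieldSize;
  return total;
}

// New format: payload at depth d sends only its own field plus a
// response path of d small segments, so field data is never repeated.
function newFormatBytes(n, fieldSize, pathSegmentSize) {
  let total = 0;
  for (let d = 1; d <= n; d++) total += fieldSize + d * pathSegmentSize;
  return total;
}
```

With, say, 10 nested defers, 100-byte fields, and 8-byte path segments, the old shape sends 5500 bytes of field data while the new shape sends 1440, and the gap widens as fields get larger relative to path segments.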
A
Yeah, and I'm not really sure if we could concretely say a server shouldn't allow more than, say, X defers or something, but I think that we should have some language that says there should be some kind of limit, and leave it up to the implementations to decide.
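A minimal sketch of the kind of implementation-specific limit being discussed; this is an invented guard, not a spec rule. A real server would walk the parsed AST (for example with graphql-js's visitor), but scanning the raw query text is enough to show the idea.

```javascript
// Hypothetical guard: reject operations that use more than `max`
// @defer/@stream directives. Real code should count directives in
// the parsed document, not via a regex over the source text.
function exceedsDeferLimit(query, max) {
  const count = (query.match(/@(defer|stream)\b/g) || []).length;
  return count > max;
}

const query = 'query { a ... @defer { b ... @defer { c } } }';
```

A server could run this during validation and return a request error, the same way many servers already cap query depth or complexity.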
B
My suggestion was that, without analyzing the proposal too much, it's a question of what you should limit. Should you limit the number of defer groups? Should you limit the number of fields in a defer? Should you limit defers at different levels, or at the same level?
B
My point here is that if we want people to implement it, or evaluate it in a reasonable fashion, then part of the package deal is at least a suggestion of what should be limited.
A
Yeah, what I'm most worried about is not the number of fields that are being deferred or the number of levels, but specifically the total number of payloads, which could grow a lot if you are deferring inside of lists, or lists of lists, or something like that. So we should make note of that.
A
Yeah, and Benji did this really good write-up about the topic of early execution and made some cases explaining it. I don't know if everyone had a chance to read through it.
A
Yeah, so my thoughts on this: in my server, basically we only load data through API requests and we don't have any kind of queues, so it's not like we would see this issue. And so I am concerned.
A
If I was forced to extend the total time of the request, so that everything that was deferred was requested later, there's not much benefit for me in doing that. So I'm hoping, and my suggestion was, that we come up with some kind of way that lets an implementer choose, if they wanted to: give them a way to delay the resolvers, instead of directly forcing the execution.
A
But yeah, I want to hear everyone's thoughts.
D
Just to challenge that, Rob: are you sure that there's no queue in your HTTP fetching? It's very common for systems to put in place limits on the number of concurrent fetches that are allowed, the number of concurrent network requests, things like that. It may not be as explicit as the example that I've laid out, but it's very common. Certainly web browsers, for example, in HTTP/1 were limited to something like four connections per domain, or something small like that.
A
Yeah, totally, it's definitely possible, but I do know that we have a lot of very large queries that make lots and lots of requests, and we have wonderful charts for them. So if there is that type of limit, I think it's pretty high.
B
One question I wanted to ask, which I asked in my comments. Basically, if there is a resolution stage, a completion stage, and a delivery stage, we can say an implementation at minimum should defer delivery, but everything else is up to the implementation. The question here is: is it visible to a client or not? Can a client figure out whether just delivery is deferred, or whether completion is deferred too? Not performance-wise, but otherwise, without a timer: is there a change in semantics, a change in the perceivable result for the client?
B
So are we discussing basically a standard algorithm and standard implementation, or are we discussing a requirement, a promise we make? Because for requirements we need to make things rigid, and we need an explanation of why it should be quite rigid and required.
A
I don't think anyone was suggesting that the spec make the requirement one way or another. What I was thinking is that the spec should be written as if it's doing early execution, but have a note that an implementer can delay as long as it wants: that could be by one tick, it could be some amount of time, it could be all the way up to what Benji proposed as the maximal option. And I think in Benji's original post...
A
Here he said that even if we spec the fully deferred execution, it doesn't mean that early execution isn't allowed, because you could still write something with early execution and still get the same results, and that's allowed by the GraphQL spec.
B
Null bubbling is changing, because, you know, imagine we call a resolver: the resolver can immediately throw, but it can also return a promise that gets rejected. In both cases, if we delay execution, do I even call the resolver? And if the resolver throws or rejects, will we have the same result?
A
Yeah. Here's the example: if you have this query, this is a non-nullable field that's going to error, and this is just a regular field. If there's no defer here, then when this gets executed, the value completion for foo is waiting for both to finish. It's going to see that this one errors, that error is going to bubble up, and you get a result of foo: null. If you're doing deferred execution, then you're going to execute this field...
A
This field's not even going to start executing until this one's done. You're going to see that this nulls, that bubbles up, bar never gets executed, and foo gets returned as null.
A
If you have early execution, you start executing this field, and you start your deferred payload executing bar; that's in whatever queue you have of future payloads that need to be delivered. This one errors, it bubbles up, you return foo: null in the initial payload. Now you need to make sure that the deferred payload that has bar doesn't get sent to the client, because it's going to be pointing to a location that doesn't exist.
A
Because, by the nature of defer, you have this pointing to its own new payload, you have to make sure that you're not sending things that don't have a path to place them in.
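The bookkeeping described above can be sketched in a few lines; the data shapes here are invented for illustration and are not graphql-js internals. When error bubbling nulls out an ancestor, any pending deferred payload whose path falls under the nulled subtree has to be discarded before delivery.

```javascript
// Each pending deferred payload records the response path it would
// be merged into once delivered.
const pending = [
  { path: ['foo', 'bar'], data: { bar: 1 } }, // deferred under foo
  { path: ['other'], data: { other: 2 } },    // unrelated defer
];

// True when `path` sits at or below `nulledPath` in the response tree.
function isUnder(path, nulledPath) {
  return nulledPath.every((seg, i) => path[i] === seg);
}

// A non-nullable field under `foo` errored, so `foo` bubbled to null;
// drop every payload that would point into the now-missing subtree.
function dropOrphanedPayloads(pending, nulledPath) {
  return pending.filter((p) => !isUnder(p.path, nulledPath));
}

const deliverable = dropOrphanedPayloads(pending, ['foo']);
```

Note that in the fully deferred model this filtering never comes up, because bar would not have started executing in the first place.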
D
There are also bigger problems with this as well. For example, imagine the same situation, but this was instead a mutation operation and someone was implementing the pattern of nested mutations, which people shouldn't do and isn't something that we encourage, but which I don't think we strongly discourage in the spec, though I think we should.
D
So if this non-nullable field that errors and bar are both actually fields that have side effects, i.e. mutations, then in the deferred execution model bar will never execute, because the other one threw; but in the early execution model bar will execute anyway, even if the non-nullable field throws. That means the side effects that happen from this would change depending on whether we do early execution or deferred execution.
A
Doesn't this happen today with nested mutations? If this is a mutation, only the top-level fields get executed serially. Yeah, yes.
D
So it does happen today with how they're done, but if you wrap the thing in defer, and the spec says that defer means it won't be executed until afterwards, or the user who writes it expects that, then it's a concrete change in behavior between early execution and deferred execution.
D
We state that whatever the spec says, people should adhere to it, and you can do another algorithm so long as the result can't be determined by the client to have been different. But though the payloads that you'd receive would be the same, the actual side effects that would have happened would be different between the early execution and the deferred execution cases.
B
What I think personally is that with resolvers, any type of optimization can happen, and if you rely on order, or on something outside of a mutation, it's your own issue inside your particular implementation. About errors: I actually thought a bit about it, and I think error boundaries should be predictable, meaning even if we do early execution, and the synchronous part throws as part of early execution. So in this example...
B
Let's say the non-nullable field is not a problem, but bar is throwing, and it can throw in two ways: it can throw in the synchronous part, or it can reject the promise. Depending on which, in the early execution model the result is changing. So I'm for describing it like this: basically, stuff that you defer doesn't blow up the rest. With the caveat that if the error unwinds the result, everything works as usual; and if it doesn't unwind, then even if it's executed earlier or later, the result is predictable.
B
Stuff in the initial part gets shipped and doesn't get blown up by deferred parts by default, even if the resolver started synchronously.
A
I think that's handled. We're not going to see a difference in results if a deferred field blows up synchronously or asynchronously; that's all handled, you'll see the same result, and we have examples of that. I mean, there was definitely a time where that wasn't the case, but I thought that was just a bug that we had fixed. We have examples of null bubbling inside defers and how we're handling that in the gist.
B
Okay, so to summarize: we're just discussing the behavior of calling the resolver; we're not discussing user-visible stuff. We're discussing whether, if something blows up, the resolver of the deferred part gets called or not.
A
And yeah, so basically I think there are two ways we could write this spec. You could write the spec that says there's early execution and implementers can delay it. Or you could write the spec that says there's deferred execution and implementers can execute things sooner, but they have to make sure that the results are exactly the same.
A
Now, I was arguing that we should do the former, but I think if you do the deferred execution spec algorithm, things are simpler, because you don't have to handle cases like this null bubbling; it just works automatically, because the deferred stuff won't ever get executed until after. If you do early execution, then you do have to handle those cases, and so the spec is a little bit more complex.
A
Personally, that's what I would want from my server, and I think that you should just have all the details in the spec of what needs to be done to do that.
B
Wait, you say people would want that? Because I see pretty different kinds of performance: performance of the whole query, performance of the server, and performance of the initial part. For performance of the initial part, deferred execution is preferable.
B
With deferred execution, the initial part doesn't wait on anything, and even the synchronous part of the deferred stuff is not executed. So you get the initial stuff sooner, and server performance is not affected in any way; throughput is the same.
B
So what's your argument for early execution? Is it the thing with the number of connections, or do you want users to see the whole query finish faster?

A
Yeah, it's both of those.
A
Yeah, I think the risk of delaying the initial part is certainly there, but I think that it's negligible, at least in my implementation.
F
Well, first of all, hi, and second of all, I think this is a really interesting issue, because Benji, I think it was, is really pointing out something tricky: we have no way of knowing how these things will queue downstream.
F
If you're potentially sending an API call to a particular server with some fields for the initial result and then some deferred fields, that API will be hit by both of those things at the same time, and it could get the deferred fields first, potentially, because even though it waits a tick, things in the initial result further down the tree could be started after a deferred field with early execution.
F
So even though we think we're calling out to other APIs, if we're calling out to the same API at any point, we could hit Benji's issue. So it's a tricky one to say for sure it doesn't exist, no matter the graph.
A
Yeah, I definitely agree, we don't know how things queue downstream. But when you get to that point, couldn't you also have an issue where your deferred calls are stretched out because the deferred part, being sent later, is also being sent concurrently with other requests that may not have defer in them? I don't know, I think it's really hard to model.
F
I think that's a really good point. I mean, how different would this be from a very busy server? If you have a very busy server hitting the same really lengthy calls, and it's able to handle that pretty well...
B
DataLoaders are kind of broken by early execution, right? If you use DataLoader, the deferred resolvers fire first, and what they load gets batched together with initial-result stuff in the same DataLoader batch.
A
You're breaking up a lot; I'm not sure if it's on my end. I don't think I could make out fully what you're saying.
A
Anyway, what I'm thinking now is that I would like to favor early execution but provide a way for resolvers to get delayed execution. I feel like that would be implementation-specific, but I was thinking maybe there's more data we could put on GraphQLResolveInfo that lets you know if a field is being deferred.
A
That would mean more work for the resolvers that do need the deferred execution, but it would at least give you the choice. There's no way to execute something sooner from inside the resolver, but we could provide a way to delay things.
D
It's very easy for someone who finds that they have this problem to work around it by just putting this at the top of every single one of their resolvers, and they could even opt into this, or even pass it as a plugin to GraphQL and have graphql-js, for example, just do this automatically for them. It is definitely a choice that the schema designer is making at this point; I'm just not sure that the default should be early execution.
B
Yeah, I also support that; that was my question in the chat.
A
What do you mean by batching from the initial part to the deferred part?
B
So imagine we have aliased fields: it's the same field, just with aliases, and some of the aliases are in the initial part, some of them in the deferred part. They all call the loader. If you do early execution, they're basically in the same tick, so what the loader does is put them in the same array and send them to the service as one batch. So if we use DataLoader naively, we're tying initial-result performance to defer performance.
B
To speak only about simple fields: say you have two fields, one in the initial result, another deferred, and both of them call the loader. The loader puts the stuff in the same array and ships it to the backing implementation as one batch.
B
So now you're in a situation where DataLoader forces fields from the initial result to wait on deferred stuff. Even though the initial chunk ships separately, performance-wise the deferred part is tied to the initial result, and I think that would be common in implementations that try to do batching.
B
Or in the graphql-js model, where DataLoader is separate from graphql-js: the loader doesn't know that these calls should be batched separately from the other calls.
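The coupling just described can be shown with a tiny stand-in for DataLoader's batching (the real library batches per microtask tick; here the flush is manual so the example stays synchronous). The loader has no idea which loads belong to the initial result and which belong to a deferred payload, so with early execution they share one batch and one round-trip.

```javascript
// Minimal batching stand-in: collect keys, then flush them as one
// batch, recording each batch so we can inspect what got grouped.
const batches = [];
function makeLoader() {
  let keys = [];
  return {
    load(key) { keys.push(key); },
    flush() { batches.push(keys); keys = []; },
  };
}

const loader = makeLoader();
loader.load('initial:user.name');  // needed for the initial result
loader.load('deferred:user.bio');  // started early for an @defer payload
loader.flush();                    // both keys land in the same batch,
                                   // so the initial result waits on the
                                   // deferred field's backend call too
```

With deferred execution the second `load` would not happen until after the initial result shipped, so the two loads would naturally fall into separate batches.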
F
So, hi, sorry, I dropped. I added into our working implementation something that may help with that: exposing to resolvers the priority that they're running at, so resolvers could group their data loaders accordingly.
F
Basically, I implemented both suggestions as far as I understood them from that issue. One is exposing to resolvers the priority of the currently deferred field, as well as exposing to them whether the parent has completed and, if it hasn't, exposing a promise that resolves when it does. So they can choose, on a per-field basis, whenever that field is deferred, to disable early execution.
F
It's a little bit tedious to do that for every resolver if you don't want to have early execution, but you could always just programmatically wrap every resolver, I guess.
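The priority-grouping idea can be sketched as follows; the `priority` value a resolver would read off its resolve info is hypothetical here (say 0 for the initial result and 1+ for deferred payloads). Keeping one loader instance per priority means initial-result loads are never batched behind deferred loads.

```javascript
// One loader per priority level, so loads issued while producing the
// initial result never share a batch with loads for deferred payloads.
const flushed = [];
function loaderForPriority(registry, priority) {
  if (!registry.has(priority)) {
    const keys = [];
    registry.set(priority, {
      load: (k) => keys.push(k),
      flush: () => flushed.push({ priority, keys: [...keys] }),
    });
  }
  return registry.get(priority);
}

const registry = new Map();
loaderForPriority(registry, 0).load('user.name'); // initial result
loaderForPriority(registry, 1).load('user.bio');  // deferred payload
registry.forEach((l) => l.flush());               // two separate batches
```

The trade-off being debated is exactly this: the pattern works, but every resolver (and every library they use) has to be aware of priorities for it to help.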
B
Since we designed this stuff additively, people can enable stream and defer without changing their resolvers. That means we cannot require cooperating resolvers by default, and that's the situation for every implementation.
B
People already have a schema and resolvers, and they upgrade to graphql-js 17 or whatever, and it suddenly starts working with stream and defer. The person is happy, the entire front-end team is happy, they start using defer, and then they discover this thing we have to think about: the deferred part technically ships separately, but they receive it at almost the same time, and the time difference between the initial result and the deferred part is basically non-existent.
B
Basically, in some cases. So your solution is exposing stuff to the resolver, but that only helps if we can change the resolvers.
B
We cannot trust that a person doesn't have something like batching, or interlocks, or any other performance pitfalls. So I would argue that delayed, deferred execution is the way forward, since we retrofit this feature without requiring people to opt in at the resolver level. People can opt into stream and defer at the schema level; they can add or remove the directives.
F
If a server that allows defer and stream upgrades to graphql v17 and exposes the defer and stream directives, clients will get the new format, but they may not get all the potential benefits of defer unless the resolvers, where necessary, are adjusted. Now, at the end of the day, I'm not actually sure whether that interconnection of the initial result and the deferred payload will ultimately lead to a performance impact that the client won't find beneficial. Meaning, it's sort of tricky.
F
It's not necessarily true that every batching that connects the initial result and a deferred payload is going to be the nightmare scenario of the very expensive field and the very cheap field. It could be that most of those are the same cost and are overall just not going to add extra latency.
F
So I think it requires profiling in particular examples to see, and what I would argue is we definitely need to explore it, but it might require a sort of generic extension to DataLoader.
F
One that is aware of these different queues, and it may require some rewriting of resolvers overall, but I think we may find that it...
F
Basically, it's a new feature that clients won't get the ultimate benefit from unless the servers adapt accordingly. So a server may choose to enable it everywhere except where it sees that it's not as beneficial as we think it could be, and then start attacking resolvers and trying to optimize. I think that could be one plan of attack, but you'll never get worse performance than an operation that doesn't have defer and stream.
F
Because of this issue, you just may not get all the benefits that you might expect. But I do see your point: we're releasing a new feature that clients may not see the immediate benefit from, and even if it's no worse than before, we sort of promised it would be better, so that would be confusing. But I'm just not sure how often. If that happened in 99% of the cases, that would be very bad.
F
Overall, we'd look foolish. But if it happens in one percent of the cases, then I think everyone would be happier that early execution is on and enabled by default. So what is the problem scenario in practice? Is it 99 percent, one percent, somewhere in between that's much closer to one end of the spectrum, or is it 50/50? I think we need to explore that.
B
And I just understood that it's enough to have only one resolver affected: if you have only one resolver that uses DataLoader, or something custom, and this resolver is tied between the initial result and the deferred result...
F
If it's the very expensive field that's being batched between the initial and the deferred payload, and they're both equally expensive, and they don't help each other out by being executed as a batch, then I do agree. But I'm not sure...
F
...that's the most common scenario, where the resolvers that are batched are the expensive ones and the batching won't overall lead to a result that the client will want. Meaning, I would assume anything in the initial result is not going to be expensive at all, otherwise it would be deferred, so a data loader that includes both... Basically, to make a long story short, I shouldn't make too many assumptions. I'm just not sure that we can accurately predict without getting some concrete examples. Certainly I can't.
B
Yeah, that's why I say we cannot provide performance tests: whatever synthetic test we write will be biased by the person who wrote it and their assumptions.
B
Look at what people get in the representative case. Basically, my point is that you get 90 percent of the performance boost from deferring stuff, and you lose maybe 20 percent by not doing early execution.
B
But
you
prevent
yourself
from
from
the
backing
really
complex
situation,
and
you
prevent
yourself
from
like
embarrassing
when
stuff
is
stripped
like
immediately
and
like.
B
You
need
to
track
it
in
every
resolver
and
we
need
to
explain
these
Concepts
to
graphql
newbie,
the
white
guy,
who
start
like
several
about
graphia
tutorial.
We
need
to
include,
like
part,
explaining
what's
priority
in
the
resource
and
like
also
your
books,
saying
like
you,
cannot
join
batch
or
interconnect
without
priority
stuff
stuff
with
different
priority
levels.
You
cannot
like
even
explaining
what
is
is
complex
and
I
would
say
most
people
like
in
90
cases.
B
They
would
prefer
to
have
Simplicity
and
would
like
little
bit
performance
and
instead
of
like
a
little
bit
more
performance
but
like
suddenly,
you
need
to
learn
New,
Concept
and
track
it
and
have
like
all
libraries
that
you
use
to
understand
like
priority
levels.
F
I mean, I agree with you that it is definitely a new concept; it's sort of a new concept that flows from the promise of GraphQL and concurrent execution. I think it's akin to, you know, one of the first problems that people come across when building GraphQL servers, which might be something like how they build...
F
...there, for you to maximize performance, you need to be aware. So I do think it's a new concept, but my main concern is sort of a utilitarian one, frankly. I think Benji makes a good argument, or he made it offline to me.
F
I
can't
recall,
if
you
put
it
in
print
but
I,
think
I
I
agreed
with
it,
where
we
had
that
discussion,
which
is
that
the
spec
should
really
match
the
reference,
implementation
and
I
know,
but
I
know
a
long
long
time
ago,
or
maybe
not
so
long,
depending
on
your
frame
of
reference
Ivan.
We
had
a
whole
question
about
how
much
graphql.js
should
differ
from
the
spec,
but
I
do
think
in
this.
F
This
is
a
pretty
big
difference
to
have
early
execution
not
be
in
the
spec,
but
to
have
it
be
in
graphql.js
and
from
what
I
recall.
Benji
was
arguing
that
they
should
be
the
same
and
and
I
kind
of
think
also
they
should
be
the
same,
but
I
do
want
to
make
early
execution
an
option
for
those
people
who
want
it.
F
So
that's
kind
of
where
I
fall
out
that
because
I,
don't
necessarily
think
that
for
sure
it
should
be
on
by
default,
at
least
when
it's
publicly
released
I
think
in
the
experimental
stage
when
we're
releasing
it
for
people
to
play
with
it.
I
think
we
might
want
to
encourage
people
to
use
it.
F
So
we
can
see
if
it's
actually
a
problem
in
the
vast
majority
of
cases,
you
know
sort
of
mess
with
people
in
that
way
to
you
know,
let
them
perform
a
natural
experiment
for
us,
but
I
definitely
would
want
it
out
there
in
graphql
JS
and
because
of
that,
I
would
want
there
to
be
spec
language
that
clearly
lays
out
how
it
works,
alluding
to
what
Rob
was
mentioning
before.
So
that's,
not
necessarily
the
best
argument,
but
that's
sort
of
my
you
know
what
what
it
boils
down
to
for
me.
B
I'm
I'm
personally
thinking
for
the
same
reason,
I
explained
it's
also
a
problem
in
graphql.js.
It's
like
you.
You
need
to
understand
what
you're
doing
I'm
like
I
work
in
C,
plus
plus
for
a
long
time
with
like
multi
trading
and
even
for
professional
programmers.
It's
hard
to
understand
like
how
things
interconnected
or
like
what
resources
are
shared
and
when
just
example
about
rctp
Connections
or
like
other
resource
pools.
I
would
say
it's
like.
B
Are
you
like
dying
resolver
execution
to
something
with
with
don't
know,
wait,
it's
it's
not
only
happening
in
the
resource.
It
might
imagine
you
calling
some
other.
B
You have arguments that these interconnections may not be harmful, but we cannot build widely used libraries assuming the edge cases don't matter.
B
From an API design perspective, I think we should go with wait-before-execution, deferred execution, both in graphql.js and in the spec, by default. Maybe in graphql-js, if people request it and provide benchmarks showing why it's really important, we could enable early execution as an opt-in or something, but by default I think it should be deferred execution.
A
If
that's
the
default,
I
really
think
that
we
need
like
a
global
way
to
opt
out
of
it
and
then
maybe
a
resolver
local
way
to
opt
it
back
in,
but
I
I
would
really
hate
to
like
say
you
have
to
have
this
penalty
of
deferred
execution.
If,
when
you
know
that
this
resolver
is
like
that
is
not
going
to
be
affected
by
it
and
and
then
like
once
it
still,
it's
deferred,
there's
no
way
to
to
undo
it.
If,
like.
B
In this case, you need this mechanism per resolver, because people do modular work with resolvers. I don't know how widely it's done, but some companies use handwritten GraphQL gateways where each team maintains a particular directory with their own resolvers. So if we want to do...
F
Yeah, I think we may one day, with this feature, want to declaratively label which resources a resolver uses, and pave the way toward more advanced query planning. I mean, again, overall, to sort of quote from what Benji dumped into the chat that I see...
F
You
know
he's
he's
suggesting
that
defer
is
in
his.
In
his
opinion,
a
replacement
for
issuing
a
graphql
result
waiting
for
the
result
and
then
querying
additional
graphql
data
based
on
the
result.
I,
don't
think
I
would
disagree
with
that.
I
just
would
think
that
would
think
that
you
know
it's
it's
overall
I
mean
you
can
still
do.
That's
still
always
an
option
to
you,
and
so
it's
not
I'm
not
sure
how
much
of
a
benefit
defer
would
be
to
clients
that
you
know
to
to
that
option.
F
I
mean
you
can
have
client-side
deduplication,
meaning
what
we're
doing
in
terms
of
deduplication
you
could
do
client-side
I
am
I
thought
that
early
execution
was
actually
one
of
the
main
features,
not
the
primary
feature.
Obviously,
that
was
increasing
the
speed
of
the
initial
result,
but
I
thought
it
was
the
main
features,
because
you
could
always
increase
the
speed
of
the
initial
results
just
by
sending
separate
requests
and
that
option
is
still
always
available.
F
So
I'm,
not
you
know,
I
think
I
thought
that
this
Balancing
Act
between
increasing
the
speed
of
the
initial
result
and
providing
you
know,
total.
You
know,
decreased
total
result.
Time
was
basically
the
main
benefit,
so
you
know
I
I,
I
still
think
I
mean
I
I.
Think
if
we
have
a
global
I
think
we're
already
basically
seeing
that
people
are
requesting
this
feature,
I
mean
the
main
driver
Rob
here
of
the
of
the
champion
of
this
feature
is
asking
for
it.
So
I
think
we
already
have
like
people
asking
for
it.
F
In terms of whether it's too difficult to use, whether it should be included but enabled under a flag: I don't know. There's my two cents. I think it's a pretty important feature.
B
One qualification: even with deferred execution, you still have performance benefits, because you're not calling the intermediate resolvers twice. Like, you have intermediate fields, and even with deferred execution, one query is faster than two separate queries. The first is beneficial because you're not doing a database lookup for, whatever, each intermediate field twice. So I think that's basically, like, a major saving in the defer case, and yeah, I'm kind of like...
B
If you feel strongly, I'm okay with experimental. One thing I'm worried about... I'm okay if we keep this open, put, like, a label saying we should discuss it one more time before releasing stream and defer as a non-experimental feature in graphql-js, and definitely before we merge it into the spec; here it's just to do an experiment.
B
One caveat: I'm worried about how we, how we, like, get feedback. I think most of the people that will try stream and defer will not do, like, deep performance metrics, especially on experimental stuff, in, like, hobby projects, demos, or trying to run something small. How do we plan that people will really debug this issue and report it?
B
If we have it for years, right, somebody will discover it and say, like, in my particular case it's problematic. And another argument: if we do early execution by default, people will learn about it and they will think it's always the solution. Yeah, I'm kind of, I'm partly skeptical about us getting real feedback about it in a short-running experiment.
F
I'm not sure if that's what I meant. I mean, I would think that we could release it before the spec change is finalized, early execution, I mean, and I think it would then be adopted. You know, as we saw already, clients would then try to match the new format, but the new format's not going to change based on early execution versus deferred execution, and then they're gonna start, you know, even large...
F
F
...particular technology. And I'm not sure that we wouldn't... yeah, I'm not sure, maybe I misunderstood; I'm not sure why we wouldn't get some feedback that it's not working as expected. And if we don't get that feedback right away, even if those edge cases exist, I think because we've identified this issue early, we should be able to educate around it. I mean, it's potentially one of the, you know, major issues with this new feature.
F
Maybe major, potentially a confounder, but potentially not. So I think we would have high visibility about it. Yeah.
B
A
A
Sorry, sorry, one second. The issue with DataLoader is specifically only if you're doing this version of it. It wouldn't be a problem if we did version two, which is where deferred fields aren't deferred entirely until all the other resolvers are done, but are delayed by one tick or something, right?
F
No, the issue... well, in a later tick there could be, by chance... it could be that something from lower down in the query of the initial result happens to coincide with the same tick as that deferred result.
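The tick coincidence being described can be sketched with a toy batching loader. This is not the real `dataloader` package, just a minimal stand-in that flushes once per microtask tick: loads issued on the same tick share one batch, so if a deferred resolver happens to land on the same tick as an initial-result resolver, their keys end up in the same batch, entangling the two payloads.

```javascript
// Minimal DataLoader-style batcher: keys requested during one tick of
// synchronous work are flushed together in a single call to batchFn.
class TickBatchLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;   // (keys) => Promise<values>
    this.pending = [];        // { key, resolve } collected this tick
    this.scheduled = false;
  }

  load(key) {
    return new Promise((resolve) => {
      this.pending.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush after the current tick's synchronous work completes.
        queueMicrotask(() => {
          const batch = this.pending;
          this.pending = [];
          this.scheduled = false;
          this.batchFn(batch.map((p) => p.key)).then((values) => {
            batch.forEach((p, i) => p.resolve(values[i]));
          });
        });
      }
    });
  }
}

// Record which keys get batched together.
const batches = [];
const loader = new TickBatchLoader(async (keys) => {
  batches.push(keys);                 // one entry per database round trip
  return keys.map((k) => `user:${k}`);
});

loader.load(1);
loader.load(2);                       // same tick as load(1): shared batch
setTimeout(() => loader.load(3), 0);  // later tick: separate batch
```

After both flushes, `batches` is `[[1, 2], [3]]`: the same-tick loads cost one round trip, the later load a second one. The open question in the discussion is how often a deferred field's load lands on the same tick as an initial-result load in real workloads.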
F
So that's what I'm saying: I'm not sure, I'm not sure. I think we really need to have a real-world experiment to see, like, how often does that happen, and even when it does happen, does it overall maybe improve the overall effect for the client?
F
You know, because you have, you know, batching between the initial and deferred. We don't even know that, necessarily; presumably it will slow down the initial result to some extent, but we don't know how much, and, you know, in which kind of cases. But in theory it could happen even if it delays by an initial tick, you know, sort of by chance, if things are on the same tick later.
D
To use DataLoader more than once in the same resolver.
B
You don't need, like, multiple usages of DataLoader there. Like, you have, for example, a user and friends, and friends is the same type as user. So if you're accessing the same type through different intermediate fields, you can have a situation where stuff can coincide.
F
But it would have to be, if I understand, even in the same resolver, or even, like, again, if it's the same object, it would have to be that the ticks match up, and that, you know, isn't...
F
A necessity. It's possible that the ticks will match up, but in my mind it seems somewhat unlikely. So...
B
A question here: if you give me a schema using DataLoaders, my claim is that I can construct a query that will entangle the initial result with the deferred part, whatever schema you give me, if these resolvers use DataLoader. As an attacker, or a non-attacker, right, I can construct such a query, and this query can be, like, rare, or it can be, like, popular.
B
And also another thing: sometimes that can happen unpredictably, because some code in intermediate resolvers can be, like, optionally asynchronous, for example a cache. If you have a cache hit, you get something synchronously from the cache; if not, you're doing something asynchronously, and because of that, ticks can shift and, like, interact or not interact. It would be hard to debug. But my claim is that these queries that entangle the initial response with the deferred part are possible in any schema using DataLoader.
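The "optionally asynchronous" resolver described here can be sketched as follows (the cache and fetch function are hypothetical): a hit resolves on the current chain of microtasks, a miss takes at least one extra tick, so whether a field's completion coincides with a deferred field's batch can depend on cache state, which is what makes the entanglement hard to predict or debug.

```javascript
// Hypothetical per-request cache. On a hit the value is available without
// waiting on I/O; on a miss we go async, shifting which tick the resolver
// completes on -- and therefore which other loads it may coincide with.
const cache = new Map();

function loadAuthor(id, fetchFn) {
  if (cache.has(id)) {
    // Cache hit: resolves on the very next microtask, no fetch performed.
    return Promise.resolve(cache.get(id));
  }
  // Cache miss: resolves only after fetchFn's own async work finishes.
  return fetchFn(id).then((author) => {
    cache.set(id, author);
    return author;
  });
}
```

The same query against the same schema can therefore interleave differently from one request to the next, depending on what happens to be cached.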
B
Now, among the other, like, use cases that Benji pointed out: in high-performance servers, yeah, like, the connection limit can be a factor, you think, if you have, like, a big server with a lot of CPUs.
F
Yeah, I mean, again, I'm definitely not disagreeing with you. I think there could be an entanglement. What I'm arguing is that I'm not sure to what extent, overall, that's going to lead to problems, and, you know, sort of by definition I'm saying it could be unacceptable. I guess what I'm...
F
What I'm saying is that we've given solutions where developers can solve those problems, either by disabling it on a per-field basis, or by exposing the priorities so that they can then have separate DataLoader queues.
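One way the second mitigation might look, sketched here with an assumed priority value surfaced to resolvers (graphql-js does not expose `info.priority` today, it is purely hypothetical): keep one loader instance per priority level on the request context, so initial-result loads and deferred loads never share a batch.

```javascript
// Return the loader for the given priority level, creating it on first use.
// `context.loaders` is a per-request Map; `makeLoader` builds a fresh
// DataLoader-style instance.
function loaderFor(context, priority, makeLoader) {
  if (!context.loaders.has(priority)) {
    context.loaders.set(priority, makeLoader());
  }
  return context.loaders.get(priority);
}

// Inside a resolver (sketch; `info.priority` is an assumed extension):
//   const users = loaderFor(context, info.priority, () => new DataLoader(batchUsers));
//   return users.load(args.id);
```

Loads at different priorities get distinct loaders, so their batches (and caches) stay separate; loads at the same priority still share one.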
F
F
I think this is a feature that I would push for. The question, in my mind, is only the timing of when to push forward. Do we say that first we release defer with delayed execution, and then we push for changing the spec and changing graphql.js simultaneously to allow early execution?
F
Now, all of these mitigations are going to be not actually within the spec, because they're very much tied to the implementation. They involve the info object, you know, the context, data loaders, which are all not in the spec. So, in my mind, I think it would be beneficial to release both of these, you know, sort of as an educational tool, to release both of these simultaneously.
F
F
You know, versus delaying this feature until a later release, I don't know, let's say in another year after defer drops. Then I think we would have to argue that we would need to change the spec to handle, you know, how we do filtering, etc., etc., just like we're suggesting now, so that graphql-js could have this feature, even though it's sort of lined up, ready to go now. I don't know, it doesn't seem to be the best way to go. So again, I...
F
Would, you know, vote for experimenting and getting some real-world data by releasing it as is; then we can always pull back if that doesn't work out.
E
B
Yeah, over time. I can suggest, like, steps we can take to extend it, so, like, we can now release it.
B
You, as a proponent, can create a PR to DataLoader to add these features, have examples that show that DataLoader can work with priorities, and try whether it's hard or not for people to use, what the API would look like. Because, like, my position is we cannot implement it without showing how people can use it together with DataLoader.
F
I don't... I mean, I'm not sure I agree with that. I mean, you know, graphql was released without DataLoader, so I'm just not sure, overall, if I agree with that approach. I mean, I can try to work on it, but to me, the way in which these queues would want to be managed might depend on the resource. But, I mean, it's not like I'm fundamentally against...
F
You know, improving DataLoader. I mean, I already posted in the chat... there's a generic resource manager, not in the chat, on the discussion, that sort of provides this already. I don't necessarily feel like I can rewrite that priority queue library better than they did, so I think the solutions are already out there.
F
I mean, I can try to write something graphql-specific, like zero-dependency, if that seems necessary, but I don't think we should deem it necessary.
A
You sent lots of good comments; I'm going to add that to my notes. Yeah, I think that we should get more... I think we should probably talk about this at one of the main graphql working groups.