From YouTube: Incremental Delivery Working Group - 2023-02-27
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
C
C
Hey, I didn't get in touch, but would you like to do agenda items for the next working group, the main one I mean?
C
A
B
I haven't added anything yet. Yeah, I guess we'll see how the discussion goes today and see what we want to discuss on Thursday, if anything.
D
A
A
B
So since the last meeting I updated the top level of this discussion, expanded on all of the different options that we were discussing, and listed out how we have these three constraints now: one payload per path.
B
Each payload corresponds to the defer directive at that path, and the payload is only sent when all the data specified under that fragment is ready, though some of it may be sent previously. And then these are the options that we had, that we've been talking about, that still fit within these constraints. I put trade-offs for all of them, trying to be unbiased, but definitely let me know if I could adjust these in any way.
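For illustration only (not a query from the meeting, field names invented), a minimal sketch of what the "one payload per path" constraint described above implies:

```graphql
# Two separate @defer fragments at the same path.
query {
  me {
    ... @defer {
      name
    }
    ... @defer {
      bio
    }
  }
}
# Under "one payload per path", both deferred fragments at the path ["me"]
# would be delivered together in a single incremental payload once all of
# their data (name and bio) is ready, rather than as two separate payloads.
```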
B
B
So I think that should make it easier to understand where we're currently at, what's still being considered, and, whichever path we do go down, what trade-offs we're making.
E
Can you post the... oh, I think... oh, it's 65, yeah.
B
Yeah, so Yaacov had also posted a little while back about being less and less convinced that deduplication is worth it, and I think I'm starting to lean into that as well. So maybe we should talk through this a little bit. Benji, you mentioned how it could be a security problem; maybe we can go into that today.
F
Yeah, sure. I think this all harks back to what I was saying originally, which is that you can effectively inflate the payloads quite significantly and easily, while still making it look like it should be simple, which makes it harder for things like query cost analysis to really factor in just how complex or how large this payload might ultimately be, and that's especially problematic as well.
F
If those fields need to be executed twice, which under some of the earlier proposals they would have been, there are also issues that come out of this that aren't even related to security.
F
So there's the issue that was raised about, you know, if you're querying a list deep inside of deeply nested defers in two different places: when you evaluate the field for that, it might give you two different sets of results, and if you're querying different fields on those two different sets, then merging them back together might be nonsensical. You might not know how to do that, or which one wins, and I can't see how we can necessarily solve that.
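For illustration only (invented field names, not the query being discussed on screen), a sketch of the shape being described:

```graphql
# The same list field is reached under two separate deferred fragments.
query {
  post {
    ... @defer {
      comments {        # evaluated once for this defer
        id
      }
    }
    ... @defer {
      comments {        # potentially evaluated again for this defer
        author { name }
      }
    }
  }
}
# If the two executions of `comments` return different sets (or orderings) of
# items, there is no obvious way to merge `id` from one run with
# `author { name }` from the other into a single consistent list.
```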
F
Which I think is how the original system works, with sort of branching, right?
B
F
I don't think we have the same issues with aliased fields without stream and defer. Like, yes, that is inflating the size of the payloads, but you are also explicitly doing so. So you could have, for example, a validation that checked how many fields you have in any selection set and limited it to, I don't know, 50 or whatever you thought was a reasonable number.
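A hypothetical contrast of the two cases (field names invented), in the spirit of the point above:

```graphql
# With aliases, the duplication is visible in the selection set, so a simple
# validation rule that counts fields per selection set can bound the response:
query Explicit {
  user {
    a1: expensiveField
    a2: expensiveField
    a3: expensiveField
  }
}

# With a deferred fragment under a list, the selection set still looks small,
# but the number of incremental payloads (and repeated resolutions) depends on
# the runtime list size, which static cost analysis cannot see:
query Hidden {
  posts {               # a list of N items at runtime
    ... @defer {
      expensiveField    # delivered once per list item
    }
  }
}
```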
F
E
Yeah, so the statement here, also by Yaacov, I just read up on it: he's leaning into no inlining and no deduplication. That's also something I'd have concerns with; I think we don't have to deduplicate...
E
...necessarily all the data, I mean. I never was on the side of "we must deduplicate everything", but not being able to inline certain fragments, certain defers, could be really a problem.
B
Yeah, I want to separate that part out of it, because I want to talk only about deduplication, because I do think that inlining is important.
B
And yeah, I don't want... one thing.
C
Yeah, one thing that can be useful, if we decide to separate out inlining, because it's a connected topic basically, is...
C
...is, like, actually: what do you actually mean by inlining, anyway?
B
C
B
C
B
E
Well, why it's important, or where I see it's very important, is because when we have nested things, right: you have a stream and you have a defer inside of that stream, then it could get problematic. Also nested defers could become a problem, and typically what a server could do is remove the inner defer instance, and that would apply to that statement that Rob had, that we essentially get rid of them on the same path.
B
C
Oh, you mean, even so: if you have a list, different elements can have, like, the deferred fragment inlined for item zero, and item one not, and item two inlined... it's okay to have different stuff between items of lists?
C
I'm kind of in support of that, because what we discussed is somewhere in the middle: we deduplicate stuff, and once we start deduplicating against the initial response, we open the box of the client having to do patching. And if we start with that, and the client has paid that price, I think it should go the whole way towards full deduplication. But if we keep, right...
C
C
Right, the other way: no patching at all, because, as Rob said, stuff gets easier then. If somebody subscribes to a particular fragment, you get this fragment as a whole, with every field duplicated, even with the initial response. So we don't need to do any patching; you just get it. You don't even need, in a sense, to keep the initial response in memory on the client itself. The application can keep it for its own reasons, but the client doesn't need to do any patching.
D
C
The thing is, advanced clients would really need to do masking: if different callbacks subscribe to different fragments, and they happen to be on one level, then I need to, like, hide subfields from some components.
C
So either way, I prefer full deduplication a little bit better, but if we cannot achieve full deduplication, I'm for: if we don't guarantee it, we duplicate data.
B
Yeah, that's kind of where I'm starting to lean as well. I mean, I think the last six weeks or so of discussing the deduplication shows that, if it was easy, we would have figured it out by now. So I want to see: is no deduplication a viable option, or is there a strong reason that we have to rule it out completely?
C
Surprisingly interesting observation: we return back to aliases. No, wait: aliases are good for query cost, because you can calculate query cost; it's predictable there.
C
If we return to full duplication here, our payload is big, but it's predictable. The only unpredictable thing is whether a fragment gets inlined or not, but in query cost analysis it's the same thing as whether an item is null or not: without executing the query, I don't know if a particular field will be null or not, so you calculate query cost for the maximum case. So query cost analysis becomes, like, easier for stream and defer without deduplication.
E
Also, why I was leaning a bit against full deduplication is because it becomes a lot more complex, not only on the server side but also on the client side. And I think if the defer feature is simpler, it has the potential to have wider adoption, whereas if we go and make it super complex to even implement it, then not a lot of GraphQL servers will do it.
E
To see the other side of this thing: it's not that you cannot achieve the deduplication fully and do this patch style, but suddenly the client has to be implemented for it, has to hold on to data, has to wait until it has a valid state to patch onto components; there's so much involved suddenly. That's why I was...
E
For me, my main concern with all of that was: we need to merge certain defers first, and then we can guarantee a certain execution cost. Also, not even looking at cost analysis as a feature, but more that we can guarantee the runtime performance of the server much better.
B
Yeah, I mean, we definitely have to document that there is a cost for defer. There's the overhead of the payload. There's the overhead of executing fields more than once that would have been merged if the defer wasn't there.
B
A
B
The cost could outweigh the benefit, and I think that, for public servers, we need to make it clear that there needs to be some kind of limiting in effect to prevent clients from DoSing you.
E
Yeah, exactly, and this is then a good argument that the complexity on the server becomes simpler, and you could deny requests by analyzing them in a predictable way.
F
So, on top of worrying about actual, deliberately adversarial clients, there's also just naive implementation that we need to worry about, where you can have the multiplication from lists, for example. When a developer is working on that internally, maybe everything looks fine, right? But then, once it's been in production for three months, six months, whatever, and someone's got a particularly large list, or they've got a list that's got, you know, nested large lists...
F
Suddenly that could blow up, and that might not be something that they've done deliberately, but it is something where, if our behavior is to send through lots of duplicated data, they might think: why is GraphQL doing this? Like, clearly GraphQL is dumb for doing it this way, whereas, you know, maybe we could have done it in a more efficient way. Sorry, my dog was just going; I'll be back in a second.
A
C
F
F
I want to say that the issue isn't as bad as it was when I first talked about this, which was back in October. Back then, because we were talking about having labels and stuff like that, we did have this alias problem that effectively we were just talking about, even where we might expect people to be constructing these queries reasonably themselves, because we were talking about people adding an alias every time...
F
...they had a defer, and you could end up with a lot of them and a lot of multiplication there.
F
But already we've effectively decided to get rid of the label, and getting rid of the label and effectively collapsing the defers together I think massively reduces the vulnerability surface on this, just on its own, without doing any additional work. So I think that's a big step up that we've already... I think we've pretty much agreed to do that.
F
So it's not as huge an issue as it was. If we can do any more to help reduce it further, I think it feels like we should be able to, but yeah, it's not as straightforward as we might hope. I find it quite frustrating that we can look at queries and we can say: if I was to rewrite this query...
F
...we could rewrite it as this, we'd get all the same data, and it would be very efficient. And we can see that; we can pretty much come up with the rules for that. But it's just not the way that the spec itself is written, and so it's quite hard to put that into the field merging algorithm, for example, because it's more of a recursive algorithm.
B
C
B
What we're doing when we look at it and see what you could do is, like, lookaheads into the depth of the fields, and we're looking at simple examples. It's like, it could be done, but I think we're underestimating the work that our brain is automatically doing of going down and looking at fields that may not ever be resolved to figure out how it should be done, right?
E
Yeah, it's also like you can do that with a query plan, but I mean, the agreement was not to have the requirement for a query plan, because these kinds of things... query planners do analysis of the structure as a whole and can even do multiple traversals. So what we do in Hot Chocolate is multiple traversals over the query to figure out the query plan and then have that compiled somewhere. But that's not what the, like...
E
If you look at the GraphQL spec, it goes level by level, selection by selection, and that's a problem.
A
E
F
What we originally proposed, or what we proposed a while back, which is effectively that we write the wording in the spec to say you can choose to not include fields that you've already served, effectively, or subtle words, something like that, would be enough. It would allow Hot Chocolate and other implementations to drop out fields that they know shouldn't be needed, but for the spec text itself to remain relatively tight. But the main issue with that is it means that it's less predictable for clients what they're going to be receiving.
F
That's not necessarily the best default; like, shouldn't everyone just do it the good way? You know? Yeah.
E
But the issue with that is the complexity of implementing a GraphQL server. Like, when I started implementing the first bits of our GraphQL server, it was super simple, because we had graphql.js and the spec; it was pretty much straightforward. When you require people to implement an execution plan, that is a whole different story. Like, we read into a lot of university material around database...
E
...query plans to figure out how to do them, so I wouldn't require people to. But I mean, like what you propose, in a non-normative note or so, that you can do that. In this case you optimize the transport, essentially, and for clients it shouldn't make any difference, because if they already have the data, it doesn't matter if we don't send it twice; they just have to deal with the case that these fields could be omitted.
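A hypothetical illustration of that "may omit already-sent fields" wording (invented field names, and no claim about the exact payload envelope, which is still under discussion):

```graphql
query {
  user {
    id
    name
    ... @defer {
      name    # also selected in the non-deferred part above
      bio
    }
  }
}
# The initial response contains user.id and user.name. Under the proposed
# non-normative wording, a server could choose to send only bio in the
# incremental payload for the deferred fragment, omitting name because the
# client already received it; clients just need to tolerate the omission,
# since the merged result is the same either way.
```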
B
E
The assumption I had was: you don't send data that you already sent down, so the client already has this kind of data.
C
B
And even if we were to rewrite this back so it did have a query plan, I don't think that is... I mean, obviously a query plan is very good in a lot of cases, but you are necessarily doing work that may not need to happen when you do it that way, right? Like, you could be spending time looking at fields that would never even be...
E
So, on average, following an execution plan, execution is faster, but there was a lot of argument around that, and I remember, I don't know which Facebook engineer, I can look it up, but they didn't implement the query plan, because they said maybe we never come to this execution level, and with an execution plan you actually have to figure out: it could be this type, or there's an interface, so you get quite...
E
You have to look at a lot of cases that could happen. Now, just...
F
F
It's much simpler and can be introduced as just a new step at the beginning of the execution phase in the GraphQL spec, as a completely parallel thing. Even things like the field... sorry, the fragment... yeah, the field merging from fragments could be resolved at that stage, and then the rest of the GraphQL execution could suddenly become significantly simpler: everything that we've got for field merging in GraphQL execution currently could just be removed, because it's already been resolved in that first step.
F
So we could do it that way, but I'm not saying we should; I'm just saying we don't need a full-on query planner to solve this problem.
E
F
G
I mean, I'm tempted to go in two different directions here. One is that, well, I appreciate the concern... I mean, basically the point you just raised really makes me think that, by default, we don't have to be so strict. I mean, people who want to rewrite their queries, and people should be encouraged to, I think, rewrite all their queries to reduce the complexity...
G
...you know, should do so, and we can encourage tooling that does that, rather than, you know, sort of complicate the baseline spec. I mean, basically what we're saying is: we recommend rewriting to reduce complexity, and you can use complexity analysis tools as well as query rewriters, and we suggest that. So that's one direction to... yeah, go, go ahead.
E
No, just to interject that, if we do such a thing, then we need to have spec text, because you cannot just say "use this tool", because everything is JavaScript.
G
No, no, I don't think we should, like, endorse a specific tool, but we can... meaning, there is JavaScript tooling, but there's also tooling, that I'm less familiar with, I'm sure, in other languages, where we would suggest to use, you know, complexity analysis, operation complexity analysis, and query rewriting, operation rewriting, to meet those goals.
G
You know, use those two in tandem. I don't think we necessarily have to mandate that within the spec; we could just have a note that both of those are incredibly good ideas.
F
So, excuse me, let's change the subject just slightly. Kwaku raised that issue three weeks ago about these nested lists, and this is related to, but separate from, this whole discussion of whether we should fully deduplicate or not, right? This is actually a legitimate ambiguity in the data that a client is going to receive, that can cause them to not know how to build up that response in memory, and I think that is an issue that we need to solve, and I think how we go about...
F
...solving that may well have an effect on the rest of this discussion. So I think we should probably dig into that a bit more, build out the examples of that, you know, see how much of an issue it really is, and that may well encourage us to follow particular paths when it comes to the rest of these issues that we're discussing.
B
Is this an issue that comes from when you have the combination of both deduplication and the same fields being executed more than once? You only have this issue if both of those are happening?
F
So if you have two different... Like, I'm not sure that the example query here is actually vulnerable to it in the current situation. Remember, previously each defer would actually be deferred; at the moment we're merging them. But if you forget that we're merging them, in this situation you would be evaluating that list field twice, and you'd be sending the user patches on that list field.
F
B
What about this one? Is this the same issue, or...?
F
Imagine that that second payload didn't have the ID in either of those selection sets. So inside of the defer there are no IDs: you would be receiving back a list of comments, which might be the same length, and then for that you'd be receiving back authors, which might be the same or different lengths, and you'd be trying to merge that back into the ones that you fetched synchronously, but you might be combining data from two completely different nodes.
C
C
Yeah, so we're back to, like, a spectrum: no deduplication, best-effort deduplication, full deduplication. So in this particular case it's simple; there are other cases we discussed, this one and the rest of the examples, which are the cases where we don't know for sure if we can deduplicate or not without breaking constraints.
B
So I'm not sure whether I totally agree with this or not, but, playing devil's advocate here again, I'm going to say that this is totally fine. If, the second time, when this defer happens, you did branch, you did execute again, and you got a different result, that's... the same thing could happen if you used aliases; the same thing could happen if you had broken this up into two queries, and that's something that clients need to deal with.
B
We have, like... GraphQL doesn't have global object identification, but there's the Relay spec that uses IDs for that, and as long as your IDs aren't being deduplicated with different results, then your client is going to know that the first result it got here is out of date, is stale, and...
E
C
D
C
E
No, whether we execute it twice depends on what you mean by resolvers. We distinguish between just a data resolver, or sync fields, which essentially just read a property (in this case it would be cached, it's the same resolver, we don't do anything), or, if you mean real data, an async resolver where you have a database call behind it. We distinguish between them.
A
B
E
F
E
F
That's not an issue, though, because you still end up with an object that is consistent, right? It's got a foo and a bar; they might have, if "a" was a random number, for example, two different values, but that's fine. The issue with Kwaku's stuff is effectively that we end up not knowing how to merge it back to give you that one final resulting object. So your examples, your counterexamples, were using aliases; if you use aliases, you don't get this problem. You still have a final, unambiguous object.
F
Sure, they might have different values in those two different aliased fields, but they are representable. Or, when you've got multiple queries: if you've got multiple queries, you've either got a situation where you've got a normalized cache, and if you've got a normalized cache, then you'll have identifiers on the nodes, and then you get your guarantees through that, and that system is intelligent enough to say, whatever query comes later, if it causes conflicts, it just blows away what was there before, so that can sort of solve itself.
F
What the issue is, is that if we have this situation in stream and defer... I think the end goal of a stream or defer query is to end up with a big JSON object that you've built up over time, that is the equivalent to what you would have got had you run it without the stream and defer; it just, you know, came in through chunks. But at the moment there are situations in which we might not be able to build...
F
...that final object unambiguously. Like, if the values that came through for mostRecentComment... initially it was a list of three things. Ah, it's going to be one, because it's "most recent comment", but imagine it was a list: if it was three for one of them, and then, you know, two in the deferred one, what do we do? Do we throw away one of the old ones?
F
You know, how do we mesh them together? And that's, I think, really where the issue comes from. And the proposal that I had, that we were discussing last week, wouldn't have that issue, because effectively the field that exists at that path would only ever be executed once, so it would not cause this branching, and because the branching isn't there, this problem cannot exist.
C
It's also important for me, one thing I just want to say here: is stream and defer used to get stuff that eventually merges? If you need one big object and you wait for it, why do you use defer and stream in the first place?
C
I always assumed we were discussing UI components, plugins, something where some component or some plugin, you know, some part of a system, says "I'm okay to wait for that data", and when the data is delivered, the component is triggered back, saying "hey, your data got delivered", and it renders that particular data, not the whole object.
C
D
C
A new example we can add: a defer on the top level that also uses blogPost and also mostRecentComment, so we can trigger, like, further deduplication. And in that case, with an initial response and one incremental payload, we know for sure the incremental payload will be delivered after the initial response, so we can define the end object after merging.
C
B
C
C
If you have different paths, do you have the same value? In a sense, it's not like a pure function: you're not guaranteeing that resolvers, given the same input, produce the same output in the context of one query. But that's okay, it's what we already have in the spec. What we're discussing breaking here is: is a path uniquely tied to one value? A value can be duplicated, but do we guarantee that one particular path always results in the same value?
H
I would just add that it's very easy to change the implementation we have in graphql-js to add a cache, a local cache, and ensure that every path has the same value. Even if the spec is written as, you know, allowing branching, it's very easy to just modify it a little bit to say: use the locally cached value for this execution, to avoid re-execution. I mean, that's an easy switch that we can make.
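A minimal sketch of the idea being described, not actual graphql-js internals: memoize resolver results per response path within a single request, so that if the executor reaches the same path again (for example via a deferred fragment), it reuses the earlier value instead of calling the resolver a second time. Names such as `createFieldMemo` and `memoizedExecuteField` are invented for illustration.

```typescript
// Sketch only: a per-request cache keyed by response path, wrapping a
// hypothetical hook that performs the underlying resolver invocation.
type Path = Array<string | number>;

const pathToKey = (path: Path): string => JSON.stringify(path);

function createFieldMemo() {
  // One cache per operation execution.
  const cache = new Map<string, Promise<unknown>>();

  return function memoizedExecuteField(
    path: Path,
    resolve: () => Promise<unknown>, // calls the field's resolver
  ): Promise<unknown> {
    const key = pathToKey(path);
    const cached = cache.get(key);
    if (cached !== undefined) {
      // Same path reached again (e.g. from a deferred fragment):
      // reuse the earlier result so the path keeps a single value.
      return cached;
    }
    const result = resolve();
    cache.set(key, result);
    return result;
  };
}
```

This also lines up with the garbage-collection point made just below: once no pending deferred work can reach a given path again, its cache entry can be dropped and the memory reclaimed.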
H
Well, I don't think it would be worse than, you know, in Big O notation, doing the operation all at once, and even if we stream, that cache implementation could use a kind of garbage collection: for entries that are no longer being used, that memory could be deallocated.
H
C
C
C
A
C
If it's, like, a hard requirement, then we should provide all the context, saying it can be tricky to implement and other things. But I want to measure the temperature in the room, like how people think of it, and if it's a hard requirement, it affects our decision criteria.
C
No, no: the thing that I'm asking about is basically, should we branch? Like, for example, when you send the initial response, one path has one value, and inside an incremental payload the same path has a different value, like the problem we discussed the last 10 minutes.
D
C
C
Another question is: no matter how many times it appears, once or many times, should it have the same value? It's a different question. So the question is: you can duplicate the same thing, the same path, but with the same value; the question here is, should there be a requirement that the value is the same?
E
Yeah, I'm a bit concerned now after these examples; I never really thought about that. But now I'm thinking: where it comes from at Facebook is actually the batching with @export directives, right? That's where defer and stream come from, so it's essentially a way to branch off, or essentially to have one query, but it still is technically two queries running in the back end.
B
D
B
I'm pretty sure all their clients are writing into a store, and then they just wouldn't have the issue, because they're not trying to assemble it back into one response for the whole query; they're just writing into a normalized cache. And... exactly, right.
E
But then you cannot just remove the IDs, because you already sent them, right? Right. Yeah, where I wanted to go with this is: this deduplication, for me, it could break these things if...
F
If we did, yeah, but only depending on how you do that deduplication. So, for example, the pattern that I encourage would mean that the field that the deduplicated leaves are on would itself be guaranteed to only execute once, so there wouldn't be any ambiguity introduced, because there couldn't be two different underlying representations.
F
The query rewriting form that we talked about before, that would reintroduce this problem.
E
Yeah, yeah. So what Benji says is essentially: if you look at the queries here, for mostRecentComment we execute mostRecentComment just once, because we executed it in the non-deferred part anyway, and then we just defer the parts, like bio in this case, out in a second payload. It depends on how you execute it, right?
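A reconstruction in the spirit of the example being discussed (the exact query on screen is not in the transcript; field names follow the ones mentioned):

```graphql
# The parent field appears both outside and inside a deferred fragment.
query {
  blogPost {
    mostRecentComment {
      id
    }
    ... @defer {
      mostRecentComment {
        author {
          bio
        }
      }
    }
  }
}
# Under the approach described here, mostRecentComment is resolved only once,
# as part of the non-deferred selection; the deferred fragment then only
# resolves author { bio } underneath that same comment. Because the parent is
# never executed a second time, the deferred data cannot "branch" onto a
# different comment, so merging stays unambiguous.
```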
B
B
The options that we have to address this right now are either breaking the constraint of one payload per path, and doing what Benji's or Ivan's proposals were, sending the data that's shared ahead of the deferred fields being resolved; or adding some kind of execution resolver cache, so that when we do get here, we reuse the value from the cache without calling the resolver again.
B
Okay, so on Thursday I'll add an agenda item. I'll talk about how the way that things are working now is with this branching, that this is the type of query, you know, a query of note, that we should talk about, and how the trade-offs that are involved to address that are those two things, and then we can see how important it is, how those trade-offs weigh against this issue. That's what you're suggesting, Ivan, right?
C
One small thing I would add, like I was commenting before, to make it even more... to show, like, an even bigger picture: I would make another defer on the top level in which, on top of what we defer, we include blogPost and mostRecentComment. So it's not only the initial response and one defer competing, but the initial response and two defers on two different levels, which don't match completely for mostRecentComment.
A
F
I think one thing that I would like us to think about a little bit more, which may simplify things, is: do we, and I know I've brought it up before, do we really, really, really need multiple levels of defer? Is one level enough? Can we just ignore it once you've seen the first defer or the first stream?
F
B
A
F
But writing the description of the algorithm that I proposed before would be a lot easier if you didn't have to deal with nested defers. Basically, either a field is deferred or it's not, and that would then be the end of the decision, and that becomes much, much, much simpler to then do the field merging in the existing GraphQL spec.
C
In a model where the entire query is assembled from fragments tied to particular components: what would you do if you have a component that initially was top level, and later it is reused in the context of, like, another component, which also does defer? So in a model where components are independent, the only thing we can do is say that defers inside defers get automatically inlined, but we cannot error on it.
F
E
But I think it's still... it could become impractical. But let's have a look at this. I mean, for this, feedback from the client folks would be welcome, like Matt and others.
others.