From YouTube: GraphQL Working Group (Primary) - 2023-05-04
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
A
How was your short week going? Short in theory, yeah, or not.

C
A
That's great. Nice weather is rolling in where you are, or not so much? Yeah.

C
It's a little bit overcast today, but it's been really, really pleasant, yeah.
A
Things are good, though, things are good. Yeah, hectic, yeah. My eldest daughter just finished her first year of university, and now she is moving into her first apartment by herself. So yeah, very interesting times around here.

B
A
Time flies, without a doubt. We have a huge spread in kids, though, because I have a seven-year-old and a nine-year-old as well. So quite a spread.
B
A
It keeps it interesting, yeah. That's right. It was nice: the older sister was the babysitter for quite some time, but now that she's out it's like, oh, we don't have an in-house babysitter anymore.

A
Exactly. Now, don't get me wrong, she was pretty quick with her invoice. That's the way it is. Hi, Rob.
A
How are you doing, guys? How are things? You're still in New York, right? Well, Jersey. Jersey, yeah. How's the weather? Actually, your weather is very similar to the weather where I am, so I probably already know the answer.

A
F
It's been a gloomy few days, but.
A
So
do
we
have
a
fairly
light
agenda
today?
It
looks
like
a
few
things
on
there,
but
not
too
much,
which
is
exciting,
because
we'll
get
a
lot
of
time
to
talk
about
the
incremental
delivery
stuff,
which
will
be
fun
for
sure.
G
A
Yeah
fairly
fairly
light
today
looks
like.
E
That's all right; sometimes a focused meeting is a good meeting, in which case we might not consume the whole time, which is totally cool.

E
One important item, yeah: we also had a light month last month, so I'm probably going to skip over our prior meetings review, because it has actually been pretty steadily focused on incremental delivery for the last handful of meetings.

E
So my sense is that everyone in the room has got as much of an update as they need; otherwise, feel free to look through those. Otherwise, let's see, we've got quorum here.
E
A
B
E
Well, hopefully we'll see Yaakov shortly, but I can get us kicked off. Hello, everybody, welcome. It is May, holy moly; we're making our way through the year very quickly. But what better crowd could there be, a better one to spend a Thursday with, talking about GraphQL? Of course, by everyone joining here, we all agree to the spec agreement, participation guidelines, contribution guide, and code of conduct; feel free to click some links in the agenda.
E
If you ever want to brush up on what's in those. Let's do a quick round of names to faces.

E
We can go in the order that is listed in the agenda file, and I'm at the top. So hello, everybody, my name is Lee.
B
F
E
Thanks, everybody, and hopefully we'll see Yaakov shortly. Doing a quick review of the agenda: I'm going to skip over reviewing primary and secondary meetings and action items, since we were focused. I have one other thing that I didn't explicitly add to the agenda; I probably should have, and I'll update the file after the fact, but just a quick update on GraphQL Conf, coming up later in the year. The CFP is open for a little bit longer, and this is a crew of people who have extremely interesting things.
E
I will probably also be tapping a handful of folks to participate in the actual program committee, and so, if that's something that is interesting to you, then reach out to me, because I'm going to start to build a panel of folks, some of whom will come from the foundation board and some of whom I'd like to come from the technical side.

E
That's all the update there. If anybody has any questions about it, I'm happy to answer.
E
And Yvonne, welcome. We did intros already, but I'll let you say your name.

E
Add yourself to the agenda file when you get a shot. Hey, we've got Yaakov. Yaakov, we just did intros; feel free to do yours. All right, I'm Yaakov. Thanks, friends. You may have just missed it: I was mentioning a quick update on GraphQL Conf, which is coming up, and that the CFP for talks is still open for a little bit longer, and just encouraging everybody here to propose a talk, because everybody here has got really interesting stuff worth talking about.
F
Yeah, well, so we've been meeting every week in our incremental delivery working group, and I think that over the past few months we've made a lot of progress, and so I want to report back on that. Basically, just to give a reminder of the big issues we're trying to solve.
F
Basically, right now we have a version of it that's merged into main of graphql.js, and I have my spec PR that follows what's in there. What's happening in that version is what we've been calling branching, and that means that whenever, as you're executing, you come across a deferred fragment, you simply take the selection set under that fragment and begin executing it. That's a pretty simple algorithm, but it brings some downsides, and one of those is result amplification.
F
So you have all these separate defers, and there are a lot of fields here that overlap. Put that inside of a list, or a list of lists, and you could end up with lots and lots of payloads.
F
F
And here's one example of a problem that we want to make sure we don't have. Let's say we have this "most recent comment" field. It's both inside a defer and outside of a defer.
F
We really want to make sure of what happens if it does get executed twice. If it does, and it gets deduplicated, then you could have a really weird result, where the most recent comment changed in the time between the initial execution of the field and the deferred execution, and you try to merge those back together, and that's bad; that's wrong data. And even if we didn't do this deduplication and we returned the whole objects here.
F
Then you end up with a situation where you can't really get what we're calling a reconcilable final object, because you'll have fields at the same path with different results for them, and it's not clear how you could merge that into a reconcilable result. So this is a preview of another document that we came up with, which we're calling the solution criteria, and that's a long list of things, sorted by critical, major, and minor, that we've been evaluating against. I'm not going to go into this in this meeting.
F
But the link is here if you want to take a look. So I'm going to jump into what our latest proposal is. This was a ton of work from Benjie, Yaakov, and everyone that's been coming to these meetings every week. In this version there's no branching, there's no duplicate delivery of any fields, and the risk of response amplification is greatly reduced.
F
We're not doing our execution deduplication by caching. We're not saying "execute it twice, but the first time hold on to a cache of the results, so then you don't call it a second time," because we think that could put some memory burden on servers.
F
We previously discussed methods where you look at a query and try to rewrite it to remove the duplicated fields; this is not doing that either. We do have consistent delivery of fragments, meaning that once you get the result of a deferred fragment, you know that you have all the fields that are there. We're not going down the path of merging by labels anymore; labels are still in this proposal, but they are purely metadata.
F
They don't affect execution at all. And there are some nuances about null bubbling that I'll get into in a little bit. So there's a bunch of examples here that I put together to try to explain all the different aspects of how this works. The first one here is a bunch of fields that are in your initial result and a bunch of fields in the defer. If you look carefully, the only difference between the two is that I is in the initial and J is in the deferred.
F
In addition, there's this __typename field up here. And so you get your initial payload, and that has all the fields down to H and I, and we have a new "pending" array that tells you: these are defers that are being worked on in the background on your server; this is what you can expect a result to come for. We give an ID to it, and the path is the path of where the defer is; this one's at the root of the query, so it's an empty array. Then inside of "incremental" we have...
F
This is a very similar example, but now there's another defer nested inside of the first defer, and this is demonstrating that when you get the results from the first defer, which is here, we're not going to resend the fields that overlap between the two. Let's say J and K are in here; they're not going to be sent again.
F
So you have to use your previous results to know that the whole thing is resolved, but it's only sent once. And now that we have enough data to complete the second defer, you'll get the second "completed" with id 1, which corresponds to that pending entry.
F
It's going to break them out into three group field sets, because there's this one slow field A that is part of the red group, G and slow field B are part of the blue group, and then E and F are part of both groups. So there are three sets that are independent, and that's how we determine how the incremental objects are returned.
F
So in this example, because slow field A returns first, you get the data with E and F and the data here, and then, when slow field B is done, you get G and slow field B. These are in the same data object because they both belong to the same set of defers. And yeah, feel free to interrupt me at any time if there are questions or anything, because I'm doing a lot of talking.
F
And so, in the same example, when slow field B resolves sooner, the same three incremental objects are delivered; they just come back in a different order, where now the ones that are associated with the blue defer are returned together, and then the one that's left for the red defer comes later.
F
Yeah, this one is a similar example where there's a list item, and these fields are on the same list item, but this one defer has ID and this one defer has value. So, because list and item are members of both defers.
F
They come back in their own incremental object, but then, once we get inside of each object, we have to execute the ID and value separately, so that whichever one returns can come back, and that leads to getting the data like this. So say ID comes back first: now you'll have these three fields for the three items in the list, and then the three values later.
H
Hey, Rob, would you mind scrolling back to the previous example? The one with the potentially slow field.

G
D
H
Potentially slow field B, I guess. I'm kind of already thinking about the server-side implementation. So basically you're doing a dedupe, right, if I'm understanding correctly? For incremental one and incremental two: basically, if the cheaper fields were already delivered with the first one, you want the second payload to not deliver them again.
H
So I'm just thinking about the server side: sometimes there might be race conditions, where the second payload didn't know the first one was already delivered, so it might send down the same data again. Do you think it would be okay for us to say that duplicates are okay as long as they're the same value? If that makes sense.
F
I haven't really thought about it, maybe. The way that we were modeling the execution was that there should be three separate executions here: the one with E and F, the one with potentially slow field A, and the one with G and potentially slow field B. And then, separately, there are two different objects in your memory, and they hold on to references to those executions.
F
So they're both going to have two references, but they should each be pointing to that same instance of the execution for E and F. So when this one's delivered, we can store that this has already been delivered. Now, when the second one is completed, you should know that it was already delivered, so you don't have to send it again.
H
F
Yeah, because they're split up by the unique set of defers they belong to: there's the unique set of being only in red, being only in blue, and being inside both blue and red.
H
F
I think there might be some question about that, but that's a small detail that we could get into later. Yeah, and this one just shows that defers can be at the same path: this billing and this previous are at the same level under "me", and so.
F
So this shows some null bubbling, and I think this might be the most unexpected part of it, or maybe a little bit surprising.
F
So there are two defers here. We're going to say that, in this example, quux is non-nullable and it returns null, and baz resolves before quux does. If there weren't defers here, then this quux would blow up and you would get null for bar. But because this defer finished first, we already have bar sent, with the results inside of it, and now we don't want to.
F
We can't go back in time and change this field; it was already sent, and I don't think we want to tell clients to start rendering something and then take it away. So what we're going to opt to do is basically cancel out the whole fragment: we're not going to return data for it, and we're going to say that the "completed" entry is going to have the error that led to that conflict.
I
No, I do have... I know I've brought this up in the working group, but I do have one. One of the values of GraphQL, at least in the response format, is that what you get in the response is essentially what you are interacting with on the product side. With the JSON response, from a product perspective, you write your keys, you get this tree back, and it's very obvious how you stitch the data you got from the initial response into the product itself.
I
We're trying to solve here a potential performance problem in the N-by-N defers case, but at a cost, from a client perspective: if I don't have a smart client, I just have a JSON response reader, basically; I essentially need to make my responses...
I
But this feels like, when I go to my re-render and I hit the pending payload, the deferred fragment, the work to then make sure that I actually render all of the sub-deferred pieces feels non-obvious. It's non-obvious how I would build that, unless I'm keeping a mutable tree of pending, maybe pending, maybe completed responses that I'm pushing into every time.
D
I'm sorry; you know, it's been a gradual process of deduplication we went through. My first stage was, I think, a bunch of proposals to deduplicate against the initial response, and once we decided on deduplication against the initial response.
D
The client needs to maintain a mutable, or mergeable, or some data structure of some type anyway. So a difference here is that previously the client needed to keep only one initial tree, but merge into it multiple times, potentially with every incremental delivery it gets: you need to take the initial response and merge into it now.
D
C
Yeah, I'd also like to comment. Matt, you mentioned that this is a performance issue, and that is originally what sparked this discussion, but actually Quay, who's here with us, raised a really good issue, which is this.
C
This lack of reconcilable final objects: this situation where, if you query one thing in one area and then that same thing again with different fields, you might not be able to stitch them back together again, and you can't build the final objects. Now, if your client is operating on a fragment basis, that may not be as much of an issue, but the clients who treat it as a final object want this final reconcilable object, delivered over time, that they can deal with.
C
This is actually a major issue. Now, we've got two things here that are potentially causing this: one is duplicate execution of the field, and the other is duplicate delivery of the field, and we can solve just the execution. We could, for example, cache the execution of each field so that it never gets executed again.
C
That would get rid of that problem, but we'd still then just be delivering things multiple times. It just seems that, if we're solving the execution problem, solving the duplicate delivery problem at the same time, especially if we can do it using the field merging algorithm, which is what I'm proposing at the moment, does make a lot of sense, I think. I'd also like to add that the algorithm that I've proposed does effectively append-only modifications, so it will never overwrite a key that already exists.
C
It will only ever push to a list, in the case of stream, or add new keys to an object, and it also does not do deep merging. So it's literally: here's an object, you've got your existing object, you add these new keys in, and that's it; there's no deeper traversal than that. So I definitely see where you're coming from, but we've tried to limit the effects while still gaining as many of the benefits as possible.
I
We actually have a client that uses the responses as-is and tries to build a product of some form, even if that's a feed of to-do list items or something like that. Because I know the first defer/stream, when we built it at Facebook, was very much built for specific product use cases, and yeah, I don't know. I might be being a little curmudgeonly and possibly unnecessarily derailing, but that, to me, is the barrier.
G
Yeah, I wanted to leave a little bit of space for that one, because this is a similar but slightly different question, and that is: given the current state, this proposal where we're deduplicating.
G
Would a client realistically be using the pending and completed fields here, or would we just be appending in as we get things, and then the only information the client really needs is hasNext? Maybe the server is using some of this information internally as part of its bookkeeping, but basically: can we simplify the over-the-wire protocol?
F
The expectation that I have is that the UI developer is pretty much writing a component that corresponds to the different defers, and the pending and completed is how the client is going to know that it's okay to render this component now, because we have all these fields, versus it's.
F
Okay to render this one, especially when there's overlap between them. And we had also discussed it being optional for a server to even execute a defer, and so the pending would be how you know that it wasn't just inlined for performance reasons or something like that.
I
This makes sense, but I guess the question is: I'm the client developer; how do I actually get that F field's value? What is the actual algorithm that I have to work through when I'm in my product code, given the server responses that I have, to know whether or not I should be trying to get this F field, or whether it's not coming back? That, I think, is a little bit more clear.
I
That's the pending payloads bit. But then, once we have the pending, once we say yes, this pending payload has come back: how much of the response do I need? How much of all of the deferred responses do I need to look through? Do I need to do a big-O-of-n traversal on all of the responses to get to "oh, maybe F was actually returned in the red incremental," and how do I know where to look for it, basically?
F
I think the assumption was that once you get a payload returned, you have to apply all the fields that are in the incremental to the prior response, and at that point, once you see a completed, you know that at the location where the path is for that given label, you have all the fields necessary to render.
I
That forces a specific implementation of state onto the client: the client must implement state in a very specific way. Whereas today I can have a GraphQL client that doesn't do defer/stream, but does do, potentially, subscriptions, where the state model is, basically, I just use the JSON as-is, and I can traverse that JSON directly; I can traverse the response directly. I don't need to store an intermediate object to keep track of what I've gotten previously.
I
F
I
And whether that store is just a tree that you're updating as you get the pending payloads, adding to that tree, or whether it's something else, you can't just take the wire response and read that from memory directly, whereas you can today. I have clients in the wild where their store just is the JSON response.
F
I
It's completed, so now that it's completed I'm going to do a render, where the render object is basically this full list payload, and there should be some algorithm that I can use, given the completed payload, to know, as I'm traversing, where in that payload to be looking for the traversal bits.
I
F
Just for my understanding: would a simple version of that algorithm be something like, you write the initial data into an object, you start with that, that's your base, the thing you're starting with, and you merge every incremental in?
F
I
I'm saying, without the merge operation: assume that you have immutable data structures, because we do, and that you can't merge into the existing data structure that you got. What's the algorithm when it's append-only? Not even append-only; the tree is not appendable, it's just what you've got.
I
You can have a map of trees. The current algorithm for figuring out F, or rather for figuring out potentially slow field B, would be: walk through each incremental payload response with any ID, using the original. So you definitely need to keep state of what the original ID's path is; that's fine, whatever. But from that original path, now traverse through each of them until you find potentially slow field B.
I
J
I think... I definitely think two things. One is, like Rob and Benjie mentioned, we have an algorithm here that solves branching, I mean duplication, in the execution, and that's critical. And I sort of see the simplicity of not doing deduplication in the response, meaning sending the duplicated response like you're saying, but it's sort of also the case that these are just different.
J
You know, sort of serialization or wire formats, meaning you could have it, and it's much easier, I think, to go from the deduplicated data that Rob is showing back to the duplicated data. Meaning you could have a little, as I put in the comments, you could have a little wrapper that sits between your naive client and the server that feeds the tree. At the moment it gets a completed, it sort of feeds the tree at that path that it's saved as a pointer to in memory.
J
Using that algorithm, it just produces that subtree at that path for the client, and now it's un-deduplicated, so it would probably have additional information, because it's the fully reconcilable object, but it could create a new object at your path there. So.
I
B
I
Instead, the spec, at least in my mind, is more about the access pattern: when you are trying to get field C under B, what must be available? At least from the client's perspective, it's more about the access. And we have had completely different wire formats in the past; we used non-JSON wire formats that were spec-compliant, because we controlled both the server and the client.
J
D
J
So then we're in a little bit of a bind, because let's say we want to support both naive clients and sophisticated clients. It's sort of like: okay, well, we solved the execution thing, and now we're just talking about data formats that are interchangeable to some extent. But now we're just talking, I guess, about how to specify it best. Would that be a good summary, or not so much?
I
It's essentially what the algorithm on the client is. The thing about the JSON response format, as it exists today in the spec, not counting defer/stream, is that the response format basically specifies the algorithm for accessing the data. Here, the response format is not the algorithm for accessing the data. And again, it's not that this is... if we have this, then we need.
I
We probably also need to include in the spec what that algorithm for the data access actually is, not just what the algorithm is for sending the data over the wire, and this basically forces that coupling of both. And that's fine; that's just an additional coupling.
C
I think that this essentially comes out of the execution/delivery split that I was discussing earlier. If we are, in the spec, making sure that things are only executed once, and we're not using things like "now the server must have a cache" or whatever, then, effectively, at the moment GraphQL executes and delivers a field in the same instance, right? It's all the same thing: you execute it, you return it, and that's what you send to the client.
C
So if we are making it so that the execution isn't duplicated, then, in the way that the spec works currently, we're also making it so it's not delivered twice; that's kind of an inherent thing about that pattern.
C
So if we wanted to solve the execution problem, which is absolutely essential, but without removing these additional field deliveries, we would either need to split up execution and delivery and say, okay, execute it once but then deliver it in all of these multiple payloads, or we would need to make it so that execution does caching and says, oh, if I've already executed it, grab it, but still return.
C
It as part of this thing. I think it makes the spec significantly more complicated. And though I agree that the wire format is an interesting concept here, really, I think a lot of the, I mean, certainly the HTTP JSON wrapping is basically: take the stuff from GraphQL and return that in JSON format. It's not there to compress it or decompress it or whatever.
C
We don't concern ourselves with that. So I'm a little bit concerned if we were to, for example, deliver everything as big things and then say, at the network level, you should throw away those things and then re-add them again on the client side. That's going to make interoperability for the majority of GraphQL clients, which are JSON over HTTP, much more complicated.

I
C
Whilst we're on the topic of problems, one other thing that I wanted to comment: in the current proposal, and in fact every proposal that we've had so far that talks about field deduplication, the order of the fields within the maps is no longer guaranteed.
C
So if you have an A, a deferred B, and then a C, where C is not deferred, then it's probably going to ultimately become A, C, and then the B, because that comes later. But things get more complicated when you have a mixture of defers and they come in different orders, and things like that. So there is some complexity there; if we were not to deduplicate in the way that Matt's suggesting, then that problem would go away. I'm not sure if it even really is a problem.
C
Whether having unordered maps versus ordered maps is a significant concern at this point, when you're already using an incremental delivery mechanism where you're going to have these intermediate objects that effectively are going to grow over time, or they'll be replaced with something that's grown, I don't know, but it is worth at least acknowledging.
F
The solution that you're getting at, Matt: does that mean that it's fundamentally not able to prevent this type of thing, like a public GraphQL server sending extremely large amounts of data, or.
I
Essentially, the way I would probably try to do this is make the response format for the incremental payloads a nested tree response. We sort of have all the data in the payload, even in the incremental payload as it exists today; we have that data available, it's just the algorithm for pulling it out.

I
We're kind of deferring that to "oh, that just needs to happen on the client by mutating this state, this tree that we're building up," and I think you can actually represent the incremental payloads as essentially subtrees, just like we do with normal data: you provide a mechanism for where those subtrees need to be stitched together. And if you have that, and you're guaranteeing, "oh, this subtree, oh, here's where it has more deferred data."
I
So when you actually get that deferred data, you know to keep walking through; you can just walk through the tree, and you end up with the deferred... break the trees up and flatten them into a list of trees or whatever, but you should be able to know: okay, I hit this point in the tree; that means I need to go look at the next one; that creates a pointer to the next tree.
I
Then I can just keep traversing; that creates a pointer to the next one. I don't think the actual algorithm on the server for deduplicating and all that is inherently incapable of this. I just maybe need to attempt to write out exactly what would be required, but yeah, that's essentially the way I see it. It might require us to go back to the idea of having defer change the format, like having some tag in the data. I think that is a trade-off, and there's a reason why we didn't want that trade-off, but maybe there is enough value in not ending up with these flattened lists of things you have to merge.
D
I actually thought about this for a long time, and it sort of equates to your earlier proposal of another format. Realistically speaking, it should be JSON-compatible, and with JSON-compatible, I want to surface a problem, if somebody can crack it, because I've been trying to solve this puzzle for a long time. The puzzle piece is that, as I hope this discussion ends up concluding, we cannot assume anything about the client, basically.
D
So if we cannot assume anything about the client, we cannot assume the client is able to parse the query or understand the AST. If the client cannot understand the AST, it cannot distinguish between scalars and structures, like GraphQL types. So if we decide on any special syntax inside JSON, a person who tries to attack or inject: people use arbitrary JSON in some systems, and inside arbitrary JSON you can use whatever JSON supports.
D
So whatever it is, a dollar sign, an at sign, arrays, special names, everything can be put into a JSON value. So a simple client, which cannot distinguish between custom scalars and fields mentioned in the query, cannot understand: is this a reference, or is it something somebody maliciously injected that looks like the references we define? If somebody can figure it out, it would be a breakthrough, like some way to escape values or something. I cannot do it without breaking compatibility, and JSON scalars are widely used.
D
JSON-typed things are widely used in the community, and in a lot of cases these things actually contain data supplied by a third party, so we cannot assume it's even sanitized. So, as was just mentioned here, I also think it's kind of a holy-grail solution to be able to inject something inside. I've been thinking about it for a while, and I didn't figure out how to do it, not with a new format, but within plain JSON.
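The injection problem can be made concrete with a small example; the `$defer` marker below is purely hypothetical, standing in for any in-band convention the format might choose.

```javascript
// If the format marked deferred references with an in-band convention
// (a hypothetical "$defer" key here), a schema-unaware client could not
// tell a real reference from third-party data in a JSON custom scalar.
const serverReference = { $defer: 3 };             // emitted by the server
const injectedValue = JSON.parse('{"$defer": 3}'); // arrived inside a JSON scalar

// Once serialized, the two are byte-for-byte identical, so a client
// that only sees the JSON has no way to distinguish them.
const indistinguishable =
  JSON.stringify(serverReference) === JSON.stringify(injectedValue);
```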
C
You could solve it in the same way that MIME boundaries work, i.e. specify, along with the initial payload, a magic string that will never appear in the document, and then use that as the identifier. Now, the challenge with that, when it's a long-lived string, is you can't guarantee that someone's not going to then take that and feed it back in, which could be an issue. But for incremental delivery?
C
I think it's meant to be, effectively, a point-in-time execution that is spread out, so you're not meant to be reflecting mutations that then happen based on the results of what you're querying. So hopefully it wouldn't be such a concern, or it would at least be deliberately malicious, in which case, you know, the client is party to it. Yeah, that's one option.
J
Something else we could do, and I haven't completely thought this through: right now, because each item, each tree in the incremental list, could appear more than once, we have a choice about which ID to send, and we've chosen to send the one with, I guess, the shortest subpath, or no subpath. But in theory we could.
J
We could change that to, you know, labeling each tree in the incremental list with all of the IDs that it's included in, and if that extra information is given to the client, they could choose, if they want to, when they see the completed payload. I mean, they would then have more choices, because we currently are only giving them some of the IDs.
J
They have no way of associating the completed object with those incremental payloads without going back to the final reconcilable object. But if they want to build some sort of map instead of a tree, if we give them that extra information... I mean, in theory they should have it, I guess, or they could derive it on their own, I think. But we could give them that extra information, and then it's sort of built into the response, a little bit of the tree shape. I haven't completely thought that through.
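Tagging each incremental tree with every ID it belongs to might look like this; the `ids` field is hypothetical and not part of the current format.

```javascript
// Hypothetical incremental entry carrying all of the deferred-fragment
// IDs it is included in, rather than only the shortest-subpath one.
const entry = {
  ids: ['0', '2'], // every defer this tree contributes to (hypothetical)
  subPath: [],
  data: { bio: 'Mathematician' },
};

// With that extra information, the client can index incremental
// entries by ID (a map instead of a tree), so that when `completed`
// arrives for an ID it finds the relevant entries directly.
const byId = new Map();
for (const id of entry.ids) {
  const list = byId.get(id) ?? [];
  list.push(entry);
  byId.set(id, list);
}
```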
I
So: what are all of the incrementals for ID 1? And now walk those paths until the path I'm walking is not there, going from both sides, from the data and from all of these incrementals, walking those paths. Basically, yeah, you can walk them; it's roughly equivalent to a merge, but you don't actually have to do the merge. You can just walk them all at the same time.
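The "walk instead of merge" approach can be sketched like this; the tree shape (a base `path` plus a `data` object) is my assumption for illustration.

```javascript
// Resolve a field by walking the initial data and every incremental
// tree in parallel, without ever materializing a merged object.
function lookup(trees, path) {
  for (const { path: base, data } of trees) {
    // Only a tree whose base path is a prefix of the requested path
    // can contain the value.
    if (!base.every((key, i) => path[i] === key)) continue;
    let node = data;
    let found = true;
    for (const key of path.slice(base.length)) {
      if (node === null || typeof node !== 'object' || !(key in node)) {
        found = false;
        break;
      }
      node = node[key];
    }
    if (found) return node;
  }
  return undefined;
}

const trees = [
  { path: [], data: { user: { id: '1' } } },          // initial payload
  { path: ['user'], data: { bio: 'Mathematician' } }, // deferred payload
];
```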
J
Right, it's sort of the equivalent, I guess, of what the server would have to do.
J
I mean, really, it should just... yeah. So I think it would sort of be the equivalent of what the server would have to do with deduplication of execution while choosing not to deduplicate delivery. I guess the server would actually have to do that, and so we're telling the client that they could do that. I mean, I think there's a better option for the client.
J
That would be to just build that final reconcilable object as kind of a wrapper, like the first solution. But basically, I think I sort of agree that this would definitely be a model. But, like you said, it would be a change that would force a certain vision of state on the client, and in my mind it's kind of worth that. So I'm not sure, if you...
F
Yeah, so that was helpful feedback, I guess. Yeah.
B
A
Do we want to talk more in depth about this in Monday's meeting? I don't know if you can make Monday's meeting, Matt.
F
Yeah, yeah. There are only two examples, so I guess we can do that real quick.
F
Yeah. So, a defer inside a stream. I don't think it should be too surprising, based on what we already went over for defer, but there's a pending for the deferred data, and there is a pending for the stream; there's only one pending for the whole stream, not one for each item.
F
The path points to where the stream is in the tree. The incremental data is not going to have a subpath, because the stream is always just pointing to where the list is; its items go inside an array, so we can return more than one at once. That's what we have even on the main branch of graphql-js now. You can get the completed without any items, because the stream can close asynchronously, and yeah.
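A streamed response sequence along these lines might look like this; the field names follow the incremental delivery proposal drafts, but treat the exact shapes as illustrative.

```javascript
// Initial payload: one `pending` entry for the whole stream, not one
// per item; its path points at where the list sits in the tree.
const initial = {
  data: { friends: [] },
  pending: [{ id: '0', path: ['friends'] }],
  hasNext: true,
};

// Follow-up: no subpath is needed, because stream items always land in
// the array at the stream's path; several items can arrive at once.
const followUp = {
  incremental: [{ id: '0', items: [{ name: 'Luke' }, { name: 'Leia' }] }],
  hasNext: true,
};

// The stream can close asynchronously, so `completed` may arrive with
// no items at all.
const finalPayload = {
  completed: [{ id: '0' }],
  hasNext: false,
};
```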
F
So here's a payload where there's no incremental at all, because we're just saying that the stream finished, and then here's the data from that original defer that closes out the whole response. Then there's the case where you have an error in either the stream or a list item.
F
Then you're going to get this case where we return the completed with errors for the stream, and you should take that as an indication. There's no way to go back and un-deliver the stream items I've already sent, but this is letting you know that this error would have bubbled up. This would also be the case if your underlying async iterable, or whatever it is, errors out after delivering some items.
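The failure case just described might be shaped like this; again, the field names are per the proposal drafts and the details should be treated as illustrative.

```javascript
// A stream erroring after some items were delivered: the items cannot
// be un-delivered, so the server closes the stream with a `completed`
// entry carrying `errors`, signalling where the error bubbled up.
const errorPayload = {
  completed: [{
    id: '0',
    errors: [{ message: 'connection lost', path: ['friends', 2] }],
  }],
  hasNext: false,
};
// A client keeps the items it already received and records the error
// at the stream's position.
```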
F
Yeah, that's all that I have for stream. In the previous version we did have a path on each stream item saying what the index of the first item in the list was, and I think there were a few reasons for that. One of the main ones was that, because of the way we were branching defers, it was possible to have multiple streams running simultaneously on the same field, which I think is not desirable.
I
Yeah, I think that the underlying server algorithm at this point feels strong. I don't think that there's a fatal flaw in how we are doing the deduplication. I think the potential fatal flaw is that we've been considering this very much from the server perspective: what is required to get this to work on the server?
I
What's required to make sure that we aren't wasting work on the server, and that we're not allowing injections? I think that is the strongest piece of the proposal so far, and I think it's very strong. But I don't think we've tried building a few kinds of toy clients, or whatever, with the model, whereas we've built basically toy servers with the model, and so that's... yeah.
I
Yep, exactly, yeah. And that's why, in my mind, it's all format questions now. It's like: what's the shape? Not: what are we even trying to do?
F
All right, anything else that we want to go over about this now?
A
On the Apollo side, we can definitely volunteer some time to help vet things once they're in the alpha, on the client side. Now, you know, our clients are different from other clients, so I don't know how much that's really going to vet things, but we can help smoke-test some of the ideas, anyway.
C
Awesome. Matt, just to ensure that I fully understand what you would like to see: are you effectively saying that, given you have this response stream that you've built up over time, and you have, let's say, a path to a field, you want a basic algorithm that will give you the value or values that could be used to build that, without using mutation?
B
C
All right, fantastic! Well, thank you, everyone, for this very interesting discussion. I think we are definitely getting closer to defer and stream becoming something that we can merge into the spec, but there's still lots to be thought about, lots of edge cases and things to be explored. So let's keep up the good work, and I'll see many of you on Monday. Yeah.