From YouTube: GraphQL Working Group - June 3, 2021
A: There's links there in the agenda in case you ever want a refresher. And then, as we always do, let's do a quick round the room just to put names to faces. We'll go in the order in the agenda doc, and if we missed anybody, we'll pick them up at the end. I'll start with me: hello everybody, my name is Lee.
E: So I think I'm next; I don't have to open yet. I'm Rob, I'm working at Shopify on API stuff.
H: Oh hey, I'm Brian, I'm with Apollo GraphQL, and I am also working on defer and stream, so I'm happy to talk about it.
G: Hi, I'm Evan, I'm with...
J: I'm Sasha, I work at Twitter, and I'm just here as a fly on the wall, so I hear about all the cool things.
I: Hi, I'm Jacob, helping work on defer and stream and graphql-tools schema stitching.
A: I think we've got everybody. Great. So, the last couple of folks, feel free to send pull requests and I'll get your names up on the agenda. I see we have, as per usual, Benji on notes. Anybody else want to volunteer to help out, especially since Benji is going to inevitably jump into the discussion at some point?
A: You'll find there's a link to the Google Doc at the top of the agenda file. I don't hear anybody saying anything, but if one or two people could jump in and help... oh, excellent. Thank you.
A: All right, let's take a quick look down the agenda and make sure that we've got everything here we want to talk about. We're going to talk about last meeting's action items, as per usual; I'm going to give a very brief update on the next version of the spec, and the schema coordinates.
A
I
haven't
talked
about
that
one
for
a
few
months,
but
I
think
that
one's
ready
to
advance,
similarly
so
with
full
unicode
support.
We
also
haven't
talked
about
that
one
in
a
little
while,
but
that
one,
I
think,
is
also
ready
to
advance
fragment
arguments
rfc
that
one's
on
matte,
defer
and
stream.
We
have
two
separate
agenda
items
for
I
imagine
that's
for
spec
and
then
future
looking
stuff
and
that's
everything
we
have
set
sound
right.
Anything
else.
You
all
want
to
talk
about.
A: All right, let's dig into action items. I actually updated the links in the agenda file first this time, because I always confuse the hell out of myself when I open this thing up and see a bunch of closed issues. So the first one is everything that's ready for review, and the second one is everything that's open. We still have 21 open actions, which is a lot. I suspect some of these are probably ready, and I just didn't review them all to see if they're ready for review.
A: First of these: schema coordinates, add functions to graphql-js for parsing and printing. Guess what, we're going to talk about that in a little bit, so I'm closing it. Next one: feature freeze on the graphql-js repo and pin it. I think this has come and gone and it's no longer frozen. Ivan, please tell me if that's accurate.
B
I
need
to
keep
up
with
like
reviewing
bunch
of
stuff,
but
in
general
yeah
everything
increase
everything
ready.
Nothing
major
is
expected
like
it
will
be
some
cleanup
after
typescript
migration
still
but
like
we
can
switch
some
things
to
numes
and
use
other
constructs.
A: Huge shout-out to Saihaj, who pulled the lion's share of that TypeScript migration. There was a massive amount of work there, so he did really awesome work. I got to pitch in at the end to help fix some things to use strict mode, but yeah, very exciting to see that we're finally on pure TypeScript. So that one is closed. Next up we have: write a list of topics to cover in the spec release marketing, and for this one I have a changelog.
A
I'm
actually
going
to
leave
this
one
open
on
brian
for
now,
because
in
the
next
marketing
review
meeting
we
want
to
pull
out
highlights
so
that
change
log
is
really
just
like
a
detailed
list
of
everything.
That's
changed
and
I
want
to
pull
out
some
highlights
so
that'll
help
with
the
press
side
of
things.
So
I'm
going
to
leave
that
open
for
us
for
now
and
I'll
probably
just
leave
ready
for
review
because
I'm
sure
it'll
be
done
by
next
time.
A: Next one: require argument uniqueness. Benji, anything to update on this?
D: I think this was on Ivan, actually. Yeah, I want...
A: Okay, I'll just leave this open as ready for review, just to note that you've grabbed onto it, in case; we don't want to lose track of it. It's exciting. Editorial review of the no-root-subscription introspection RFC: I think this is good to go, editorial is complete, yep, closed. And then the last one was...
A: Yeah, this one is good and closed, so I'm closing that one as well. I don't think we need to go through all of our other remaining open issues, but just to look: we have 17 open action items, so I might take a scan through some of these after the meeting and see if I can't get some of them ready for review. Glad to see we got some actions done, though.
A: All right, where'd my agenda file go... okay, moving on. Oh, the next one is mine. Okay, so next: an update on the next version of the GraphQL spec. Last meeting I mentioned that there was a small handful of remaining open issues, and I had set up a milestone to track them.
A: This is an asynchronous vote, so TSC members are reading this and then replying that they're good to vote on it. We'll probably force a vote at some point in the next couple of days, just so we can get this closed, because it has been open now for just shy of two weeks. But take a look at it.
A: If you haven't: I made a changelog file, which hopefully we can do for the next cut as well, that just lists out all the authors and all the changes, and it also shows a full diff. A lot of things changed, so use it if you want to get a sense of everything that's changed in the last...
A: ...it's about two years, a little over two years. Pretty wild; it's actually a lot of stuff that's gone into this. Take a look at that. We'll probably have that up and landed as soon as we can complete that vote. That's the update there; happy to take questions if there are any.
B: I have a small suggestion, and maybe a tiny discussion. We have an 80-character limit on the lines of the spec text, and the source is formatted to it, so every time somebody adds a word, everything gets reshuffled.
D: Yeah, I found that too when I was reviewing it. What I did is I used a custom git diff algorithm that ignores all whitespace, not just whitespace at the beginning and end of lines, but everywhere. It's a command that I often use when Prettier changes code and I want to make sure that nothing in particular has changed.
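A minimal way to reproduce that kind of whitespace-insensitive comparison with stock git is a word-level diff; the transcript doesn't name the exact command Benji used, so the sketch below is an assumption, not his actual workflow:

```shell
# Hypothetical reproduction: rewrap a paragraph, then compare word-by-word.
set -e
cd "$(mktemp -d)"
git init -q .
printf 'GraphQL is a query\nlanguage for APIs.\n' > spec.md
git add spec.md
git -c user.name=demo -c user.email=demo@example.com commit -qm 'initial'

# Rewrap the same words across different line breaks.
printf 'GraphQL is a query language\nfor APIs.\n' > spec.md

# A plain `git diff` reports the reflowed lines as changed, while a word
# diff marks no {+added+} or [-removed-] words, since the word sequence
# is identical.
git diff --word-diff=plain spec.md
```

Here a pure rewrap of the paragraph still produces a hunk, but the word diff marks nothing as added or removed, which is what makes an 80-column reflow reviewable.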
D: I did consider proposing that we adopt that, but the issue with it is that it effectively puts all of the diffs in line with the curly braces, and it looks very weird and unnatural to people who are used to reviewing diffs.
D: Another alternative that I considered, given that we can now format the spec with Prettier: keep the spec formatted nicely in the repo, but then, just before doing the comparison, format both the previous and the current revision with Prettier using the absolute maximum line length (you know, 9900 or whatever). That way, none of the line-break changes will be included in the diff.
A
This
seems
worthy
of
of
a
full
discussion
because
it
seems,
like
we've,
got
some
alternatives
to
consider.
So
maybe,
let's
take
an
action
to
open
this
as
an
issue
and
hash
this
out
good
suggestion.
B: I brought this question up because I reviewed the PR just before the working group, and it was my first experience of it. I read every individual thing that I merged, a bunch of stuff, and I read all the other stuff that other people merged, so I'm up to date on that. So it's not blocking me; it's just, experience-wise, things that we can do better.
A: Yeah, I think the one thing it might block is that Benji's had a long-standing PR open to apply Prettier directly to all of our source files, and we should just decide: do we want to keep an 80-character line length, or do we want infinite line lengths for the actual source? Sounds like that discussion will have an impact on that.
A: Okay, I know we've got more stuff to talk about, so we can move on; looking forward to resolving that conversation online. So, schema coordinates update. I've been doing some work over the last couple of months to get this into a state where it can move from draft to accepted, I believe. The spec text itself has not actually changed.
A
It's
remained
at
draft
for
a
number
of
months
now
I
think
since
april
and
I've
been
working
on
a
reference
implementation,
the
reference
implementation,
I've
required
some
other
changes
which
have
only
merged
earlier
today.
Actually,
so
I
think
the
only
thing
left
is
to
take
a
look
at
that
pr
and
make
sure
that
it's
doing
all
the
things
we
wanted
to
do
and
if
we
believe
that
that's
the
case,
especially
since
the
spectex
itself
hasn't
changed,
just
looking
to
see,
if
there's
any
opposition
to
moving
this
to
approved.
B: I reviewed it just before the working group, so I didn't have time to leave a comment, but I think it's actually very helpful that you created a subsequent PR to adopt it for error messages, because that way schema coordinates are actually used for something. And a second thing: it actually uncovered a couple of issues for me. The first is the meta-fields with angle brackets, which are kind of a hack in a sense, because they're not parsable: __typename, __schema and __type.
B: They're fields, but they're problematic fields. __schema and __type are easier, because you can at least attach them to something; you just don't know the name. It's hard to figure out the name of, say, the root type, because it can change. But __typename is weird, because it's attached to every output type but not input types. So, in a sense, if people start using it for something...
B: ...we need to make a decision on what to do with these fields.
A: Yeah, right now the spec text reads that a meta-field does not have a schema coordinate; the actual wording is that a schema coordinate must not refer to a meta-field. I'm happy to talk about that specific case within the bounds of that PR, because what you mentioned before is accurate: it's a bit of a hack. It's a way to make sure that we can print those, even though they're not really schema coordinates.
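For readers unfamiliar with the RFC, schema coordinates are strings such as `User`, `User.friends`, `User.friends(first:)` or `@deprecated(reason:)`. The sketch below is an illustrative toy parser, not the graphql-js reference implementation; it deliberately skips details such as the full GraphQL name grammar and the meta-field restriction discussed above:

```javascript
// Toy parser for the basic schema coordinate forms:
//   Type       Type.field       Type.field(argName:)
//   @directive                  @directive(argName:)
// (Simplified: real validation also rejects forms like `Type(arg:)` and
// coordinates referring to meta-fields such as __typename.)
const COORDINATE = /^(@?)(\w+)(?:\.(\w+))?(?:\((\w+):\))?$/;

function parseCoordinate(s) {
  const m = COORDINATE.exec(s);
  if (!m) throw new Error(`Invalid schema coordinate: ${s}`);
  const [, at, name, field, arg] = m;
  return at
    ? { kind: 'Directive', directive: name, argument: arg ?? null }
    : { kind: 'Member', type: name, field: field ?? null, argument: arg ?? null };
}

console.log(parseCoordinate('User.friends(first:)'));
// { kind: 'Member', type: 'User', field: 'friends', argument: 'first' }
```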
B: One of the things I think we need to consider is usage outside of error messages, usage where it's used for something, and to see whether __typename creates a problem for it.
B: Except for default values, throughout the entire GraphQL spec we prefer structured data, so in introspection it's not a string.
A: Interesting. Maybe we should move this conversation offline, into the context of that PR; I hadn't considered a use case for a non-string representation of these. I want to make sure that it's valid, and we'd need a specification for that if it's something that people are going to use; I think right now we don't have that. Okay, happy to leave this as draft still and not approve it until we've resolved these questions.
B: The thing is, this issue was uncovered by a second PR that started using schema coordinates. So if somebody else has ideas on how schema coordinates can be used in the reference implementation or in the spec, I think we shouldn't discuss it theoretically; but if somebody can suggest places where they would want to use it, or things that would improve the developer experience...
K: Where's the best place to leave that feedback, on the RFC? Sure.
A: All right, moving on: full Unicode support. This one has been at proposal for a very long time, I think a year. I am looking to move it now to draft; I believe we have pretty high confidence in the direction that we have. I actually brought this to this group last month, which was before we had feedback from some Unicode experts. So I gathered some feedback from a handful of folks...
A
I've
worked
with
in
the
past,
who
are
unicode
experts
and
gave
some
feedback
and
then
made
some
changes
to
the
rfc
so
I'll
detail
through
some
of
those,
because
it's
easy
to
lose
track
of
them
in
just
the
comment
threads.
A: That was a little bit weird, because we didn't forbid non-ASCII control characters, and I think I found the original RFC that added those; it was on a little bit of shaky ground. So I'm going to propose getting rid of that and just making it very, very simple: sources are just Unicode scalar values.
A: So I think we need to keep doing that so those patterns don't break, and I added a subsection that describes how they work, and then incorporated all of this into the PR. Actually, the implementation PR might be an easier way to read how this is going. So if you want to understand exactly what's happening here, I'd recommend reading these in the opposite order: starting with the code and then reading the spec.
A
So
I
know
this
has
been
open
for
a
bit
and
there
have
been
updates
in
the
last
few
weeks.
But
if
everyone's
feeling,
good
with
the
directions,
is
heading
I'd,
love
to
move
this
to
a
to
a
draft
rather
than
just
a.
B: ...proposal. I looked through the code and everything sounds reasonable. One thing is the complexity of variable-length escape sequences.
A: Good question, yeah, and that's why I'm not asking to jump all the way straight to approval. I actually feel highly confident in where both the spec text and the implementation are right now, but I do think that's the one open question: whether we want to allow that. Our Unicode experts sort of frowned on it, but I think we've got a trade-off to make between doing a thing that's weird and doing a thing that could break people, and I want to avoid breaking people.
C: So the main change of this is that we now allow Unicode code points, right? Because all the type information, and the actual query logic, is still ASCII-only; comments and string values can contain Unicode, and essentially, if I want to put all those emojis in there, I would use Unicode code points.
A
It
adds
surrogate
pair
escape
sequences,
which
is
sort
of
the
legacy
way.
People
have
been
doing
this
just
because
of
a
hack
of
how
the
lexer
works
in
graphql
gs
and
graphql
java,
at
least
we're
adding
I'm
proposing
to
add
just
explicit
support
for
that.
But
if
you
take
a
look
at
the
spec
text,
I
kind
of
call
out
that
this
is
legacy
and
if
you're
printing,
a
string,
you
shouldn't
print
it
that
way.
But
if
you're
parsing
a
string,
you
should
understand
it
correctly.
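Both escape styles under discussion already exist in JavaScript string literals, which makes them easy to compare side by side; this illustrates the two notations, not graphql-js's lexer:

```javascript
// U+1F600 (a grinning-face emoji) lies outside the Basic Multilingual
// Plane, so in the legacy style it must be written as a UTF-16
// surrogate pair:
const legacy = '\uD83D\uDE00';

// A variable-length escape names the code point directly, exactly as
// JavaScript's own \u{...} escape does:
const modern = '\u{1F600}';

console.log(legacy === modern); // true: both denote the same text
console.log(modern.codePointAt(0).toString(16)); // "1f600"
```

A parser that accepts both forms decodes them to the same string, which is why the legacy surrogate-pair form can be kept for reading without ever being emitted when printing.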
A: I think my biggest open question at the outset was whether we were handling the source character changes appropriately. I feel pretty confident that this is the right direction now; it went through a couple of changes in the last few months while figuring this stuff out, and I think it's the right way now.
A
So
I'm
pretty
confident
that
you
know,
barring
this
last
conversation
around
whether
we
feel
confident
and
continuing
to
include
the
legacy
style,
double
escape
sequence,
that
this
is
in
good
draft
state,
especially
since
the
pr
is
essentially
fully
working
and
has
a
full
test
suite
you
support
opposition
to
that.
A: It doesn't tie to a specific version, which is what most other languages also do. Okay, the code is designed to be pretty forward-compatible, so...
B: I mean, as a value: we have GraphQL's String scalar...
B: It has validation for the values it accepts. So if I send a surrogate pair as a literal inside a query, the surrogates are transcoded into a proper code point. But what if I send a surrogate pair outside of the query, say in the query variables payload? Are the surrogates transcoded or not? Because one thing people can potentially do is inline arguments; people do transformations and simplification on queries, especially processors like proxies or gateways or other things.
A: I agree with you generally that we should have the same... The Unicode rules that we apply to the grammar don't apply to values that come in, in the same way, just because the GraphQL spec doesn't have purview over them. We basically say you use whatever encoding...
A: ...you want for values, as long as they adhere to the defined coercion rules. And for a long time I think we've said that string values are Unicode text, so this PR actually clarifies that a bit, which hopefully helps the point that you're making. But it doesn't say anything about escape sequences or surrogate pairs or anything like that for a non-literal value. So if it's in the GraphQL language, then yeah, we've got to read the characters and interpret them in some way.
A: That's our job. But if it's, say, a JSON body or something, we leave that to the JSON spec to define.
A: So as long as your internal string can represent the value, then you're good to go; but that's an implementation concern. That's a good point, though. I think we'll take a careful read through the text and make sure; I'm pretty sure we already say that a string has to be Unicode text.
A: But if there's anywhere else in the spec where we'd want to repeat that point, maybe in the serialization section or something, that might be a reasonable place to do that.
A: Yep, take a look at the implementation PR, which might be a good place to start. Yeah.
A: I'll also note that I made a GitHub group called "implementers" and invited a somewhat random chunk of people to it, a lot of folks that are here, but also some folks that don't regularly attend working group meetings but do maintain GraphQL implementations, at Benji's recommendation. So thanks for that, good recommendation. That'll give us a way, if these things come up and we want to let implementers know, to ping them on GitHub, provided that they accept the invite and join that group.
A: Yeah, Andy and I have been working on this together; he's been doing it in graphql-java at the same time, so that we at least have that. But Java and JavaScript have sort of the same quirk, in that they internally represent strings as UTF-16, and so the lexer pass is particularly quirky: you have to convert code units to code points on the fly. My hope is that the vast majority of languages don't have to do that and can just do something more reasonable, or have sort of native Unicode text or UTF-8 or something that gives them... yeah.
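The UTF-16 quirk described here is easy to see in JavaScript itself, where `charCodeAt` exposes raw code units and `codePointAt` does the surrogate pairing a lexer would otherwise do by hand:

```javascript
// 'a' followed by U+1F600, a supplementary-plane character.
const s = 'a\u{1F600}';

// .length counts UTF-16 code units, not characters:
console.log(s.length); // 3: 'a' plus two surrogate code units

// charCodeAt sees the raw code units of the surrogate pair...
console.log(s.charCodeAt(1).toString(16)); // "d83d" (high surrogate)
console.log(s.charCodeAt(2).toString(16)); // "de00" (low surrogate)

// ...while codePointAt combines them into the scalar value a lexer wants:
console.log(s.codePointAt(1).toString(16)); // "1f600"

// String iteration steps by code point, not code unit:
console.log([...s].length); // 2
```

A lexer working over such strings has to advance by one or two code units per character, which is exactly the on-the-fly conversion described above.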
C: ...something that's more reasonable. We actually have the same UTF-16 strings, but to have a fast implementation we are parsing directly on the byte stream, the UTF-8 bytes, so that it doesn't expand in memory. So I would have a look.
C: Yes, that's the downside. And then there's a 404 when I click on the link.
D: I haven't checked that it's visible. Was it the link in the docs that you clicked? Sorry, in the notes? Because I may have just pasted it in wrong.
A: Oh, there's something about the visibility; you have to be part of the organization, I don't know. Okay, good to know, I'll figure it out. Still kind of getting a handle on GitHub's visibility rules.
A
Good
action
happy
to
do
that.
Any
other
feedback
thoughts,
ideas,
comments
on
union
code,
support.
D: Do you want me to assign that action to you, Lee, to create the document? Cool.
A: All right, thanks everybody. We'll leave that as a proposal and keep hacking on it. Next up is fragment arguments. Matt, this one's yours.
F: Maybe not; maybe that specific implementation isn't performant, but it'll provide the right behavior. So I've got that up on graphql-js, plus a validation rule to turn it off by default, or rather to not allow you to write fragment arguments by default.
F: The harder part is actually going to be defining the validation rules to provide good errors, just because we're now in a weird spot. Previously, arguments were always defined in one of two places: variables are defined at the operation root, or you have field arguments, and the type of that field argument is defined by the schema.
F: Whereas now we have arguments whose type is defined within the executable document itself, and that is a little bit of a weird thing to resolve with our current setup for validation.
F: Yeah, I agree with that. One of the potential validation rules that might simplify the executor is one allowing only constant values to be passed to the arguments; that would prevent an implementation from having to deal with getting a variable from the query that then gets passed through.
F: So if we, for instance, allowed argument definitions of variables where those variables are used, say, three fragments below where they're defined, then you would have to carry all the arguments that are passed through all the way down during the collect-fields phase.
A: Wanting to keep the scope limited seems very worthwhile, but I can see that being a tough restriction.
F: That was one of the options in the original issue, the one that was posted in, what, 2016?
F: That was one of the options that Dan laid out, but now that we have this giant corpus of actual GraphQL, that would basically mean only new queries could use it. You couldn't put a fragment with arguments into an existing one; it would be difficult to add a fragment with arguments that then calls a fragment without arguments.
F: Yeah, and that's definitely a discussion point. If having that mixed within the same fragment is a worry, we should definitely discuss the pros and cons on the RFC.
A: Great, so the next steps are to continue work on validation and on the current execution. My other thought is, you know, the PR that you have up now is still in that kind of intermediate state where it's added but then disabled.
A
I
would
love
to
see
if
we
can
get
this
just
like
through
the
full
process.
If
we
need
to
chop
it
up
into
pieces
from
the
landing
things
in
graphical
js
point
of
view,
then
then
so
be
it,
but
certainly
from
just
like
a
clarity
of
understanding.
What's
going
on,
you
know.
F: I'd also be interested if there's anybody else interested in helping push this along, because there are probably four or five different validation rules that need to be written, and I'd love, if someone's interested, to have their help. So, I don't know if it's... this is probably Ivan's purview.
F: But I don't know if it's in a state where we should add it to a fork or a branch within graphql-js, rather than it coming from my own personal repo, and allow other people to make PRs against that; or whether we should just be pushing to commit it to graphql-js as is and iterate against master.
A: With basically what you have now, the pull request open, you can always allow other maintainers to contribute to your PR so that multiple people can add to it, and then just start building a test suite. So it's like: here's all the things we want to allow and disallow, and use that as the way to gut-check your validation rules.
B: That's why I'm okay merging it even without all the validation rules, as long as it has this one, so that a person has to explicitly disable just one rule; and disabling validation rules is hard enough that it's an explicit decision, you need to write, like, ten lines of code to enable it. So maybe, instead of a branch, we can just merge it as is, with comments everywhere that it's experimental; but I'm against a fork.
E: Hey, sorry I haven't been able to make the past couple of meetings. I don't really have too much, but I just wanted to give an update. We've been getting feedback; most of it has been either related to the transport, or bug fixes, or code improvements, not too much on the spec in general. So I'm working on a couple of refactors based on that. I rebased the graphql-js implementation onto main, changed everything to TypeScript, and Ivan is going to publish an experimental version soon. Other than that, I'm waiting for 16 to be released; then we could start merging some stuff in to get ready for the version 17 alphas. And that's pretty much it for now.
A: And Brian, I know you wanted to continue the conversation from there, so now that you've got updates...
B: So right now we have, like, a stable branch and we apply fixes if somebody discovers something. The idea is instead to have main be the unstable one, publish alphas, and set up automatic release publishing after every PR; so every time something is merged, some alpha 10, 20, 50 is published, and we cut stable releases, like, once a year or once every half a year and stabilize them for some time. So it's reversed: right now main is stable most of the time and unstable for a period; the idea is to make it the reverse.
B
So
if
we
discover
like
based
on
feedback
some
problems,
but
like
all
you
probably
knows
in
javascript,
like
a
system
in
knows
like
problems
is
pure
dependencies,
so
we
cannot
publish
too
many
breaking
too
many
new
versions
of
graphql
js,
because
every
time
a
new
breaking
new,
like
module
releases
published
everybody
needs
update
per
dependency
and
that's
huge
problem
for
ecosystem
yeah.
B: So, to clarify, the idea is: in the next-working-group, next-month timeframe, we publish 16.0.0, wait a couple of weeks, maybe a week or two, to get feedback that everything is okay, and then we branch it out into a separate branch, switch main to 17.0.0-alpha.1, merge stream and defer, and start working on it.
H: That's great, yeah. I'm excited to see the progress on the defer and stream stuff in graphql-js. I have, like, one drive-by code review thing; maybe I should just do it in the repository, but graphql-js is an early adopter of async iterators, right? One thing that bothers me a lot is that often in graphql-js (first subscriptions, and now defer and stream) there's always that promise of an async iterator, which I personally think is an anti-pattern, and I could go into that.
H
But
but
I've
always
thought
that,
like
yeah
like
so,
I
think
the
graphql
js
execute
function
is
going
doing
either
it's
either
doing
it's
either
returning
a
promise
or
it's
returning
an
async
iterator
if
it's
like,
if
it's
a
incremental
right
is
that
is
that
how
it's
gonna
work
yeah?
I
I
always
thought
that
it
should
just
be
promise
or
async
it
or
not,
promise
of
an
async
iterator
but
anyways.
I
just
wanted
to
get
like
a
like
a
a
read
on
like
yeah,
where
defer
and
stream
stand.
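The distinction Brian is drawing can be sketched as follows; `executeWrapped` and `executeIncrementally` are made-up stand-ins for illustration, not graphql-js's actual API:

```javascript
// The shape being criticized: a Promise that resolves to an async
// iterable, so every caller must await before they can even loop.
async function executeWrapped() {
  return (async function* () { yield 1; yield 2; })();
}

// The alternative: an async generator IS an async iterable immediately,
// and can still await work internally before yielding its first value.
async function* executeIncrementally() {
  yield 1;
  yield 2;
}

async function main() {
  // Wrapped form needs two steps (await, then iterate)...
  for await (const v of await executeWrapped()) console.log(v);
  // ...while the direct form needs only one:
  for await (const v of executeIncrementally()) console.log(v);
}

main();
```

Both forms deliver the same values; the objection is purely about the extra `await` the wrapped shape forces on every consumer.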
H: We're also planning on implementing defer and stream in Apollo Server, which is exciting. I do have a couple of questions about the spec, which I actually posted in an Apollo issue. I don't want to be the spokesperson for Apollo, especially because we haven't really been participating as a company in the working group recently; these are just my personal opinions. So, I have questions about the patches as they come in incrementally.
H
Like
I
mean,
is
there
like
a
topological
order,
like
is
so
like?
The
first
patch
is
like
the
initial
result,
and
then
there's
this
path
right
and
is
it?
Do
I
have
to
treat
it
like
like
make
your
pee
like
in,
in
the
sense
that,
like
I'd,
recursively,
create
objects
like
does
that
make
sense,
rob
or.
H
Is
that
reflected
in
the
stack?
Yet
not
a
hundred
percent.
C: Essentially, these are the deferred tasks, and you put them in a queue. It couldn't be any other way, because as you execute your graph, you come to a fragment that is being deferred, you put it away for execution later, and you add these deferred tasks one after the other. I mean, it still depends on how you execute them.
H: Yeah, definitely. I could just keep going and asking questions, or do this async; at any point you can stop me. What is the relationship between the incremental delivery stuff and batched queries? This is a little bit confusing, because incremental delivery also has this concept of batching on some level, in the sense that multiple patches can come in at the same time.
C: That is actually very cleanly done, because incremental delivery is actually part of the GraphQL-over-HTTP spec. So that is transport, and we are now using the same spec for batching and for anything else, because it describes how you can incrementally deliver content. So it's very, very good. If you look at the GraphQL-over-HTTP spec repository, there is, I think, a PR that Rob opened.
H
So
there
is
in
a
paul
client
there's
this
there's
like
a
bunch
of
links
and
the
batching
one
is
where
you
send
a
bunch
of
operations
as
like
a
json
array,
and
you
receive
a
json
array.
I
guess
that's
not
what
batching
is
anymore
on
your
side.
C: About a year ago, work started on the GraphQL-over-HTTP spec; I think Ivan was initially pushing that, but nowadays a lot of people work on it, and it is trying to specify the transport. Until now we didn't have something like that, so this is all there. Batching is not really specified yet; we have markers there that we want to do it, and we have examples from Apollo, from our implementation, and from other implementations.
C: So there's still a lot of work to be done, but it kind of feels logical to have this incremental delivery spec do the transport for these things.
H: Yeah. Does anyone have a multipart/mixed parsing library yet? Because I just didn't do it.
C: He built this frontend and I validated a lot with him, and I think Rob also knows; and Marius also worked on the transport stuff.
H: Cool. One question I have is this: the decision to make defer work only on fragments. I did read something about that, but one of the things that we at Apollo are kind of wondering is whether it's an arbitrary limitation, insofar as you can have multiple inline fragments on the same, you know...
C
So
putting
just
a
deferred
frontman
on
a
single
field
and
could
lead
to
people
putting
on
multiple
fields,
just
these
defer
friends,
because
they
just
actually
defer
a
bunch
of
stuff.
But
it
essentially
leads
into
a
very
bad
performance.
C
Issue
where
you
defer,
essentially
very
small
fields
and
lots
of
times,
and
with
this
frontman,
it
becomes
more
clear
that
okay
you're
deferring
this
block
essentially
so
while
it
could
be
on
theoretically
that
this
is
an
artificial
limitation,
so
you
could
put
it
on
the
field
it
kind
of
it's
more
making
that
more
clear.
This
is
a
deferred
block.
This
is
something
expensive.
I
mean
you're,
not
making
your
your
whole
request
faster
you're,
just
prioritizing
prioritizing
delivery,
and
that
makes
it
very
explicit.
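As an illustration of that granularity (schema names invented), defer attaches to a fragment, so one whole block is deferred as a unit rather than many tiny per-field defers:

```graphql
query ProfilePage {
  me {
    name                       # delivered in the initial payload
    ... @defer(label: "bio") {
      # one explicitly deferred block, not a per-field directive
      biography
      avatarUrl
    }
  }
}
```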
C: The second thing is that in Relay we also have this label and this fragment approach, so it fits very well into that composition approach; but Matt knows more, I think.
C: People ask why they can't just defer a single field, but then they start thinking about what the impact of deferring is, and stuff like that. There was also this discussion: is defer not really a "may defer"? And it kind of is, because the server doesn't have to defer. Hot Chocolate, for instance, inspects whether you're just deferring scalars that were fetched anyway in the resolver call before; in that case we actually won't defer the stuff. We analyze the query plan of our execution first and then see.
H
Right, but relatedly, that's actually something the mobile engineers are currently having a lot of issues with: the idea that an operation which has deferred fields can come in all at once, if the server decides that's the appropriate thing to do. I'm not too familiar with this because I'm mostly working on the JavaScript stuff, but in Kotlin, for instance — for Apollo Android — we're doing code generation beforehand. So, just in general:
H
what are your thoughts on how defer and stream should work with code generation and type safety, especially for environments like Kotlin?
C
H
C
C
And then have this nullable, and it's essentially patched. So we use a reactive way to interact with our data, and then the graph completes — it's nullable at the beginning. That's the way we did it. I don't know how others deal with that, maybe also — yeah.
F
So we have actually had fairly significant problems with this as well, but the reason we had problems was because, in our native code generation, a big chunk of it is using type models. So if you just have, like, a GraphQL User and you're trying to access that in Java, you have no idea what was deferred or not, and you can't even tell, from a UI perspective: am I going to be getting stuff in the future?
F
F
Another thing that has helped a ton is that we switched — we've transitioned from type models to request models (request models still have this exact same problem), to fragment models. Relay basically has a fragment-model approach. If you have strong boundaries in your models — where they're going from fragment Foo through to fragment Bar that was deferred — it's very easy to say: oh, Bar was deferred and it's not here yet, so we'll give you back a null and you just can't interact with it. And then, when the callback gets called again and it is there, you can interact with it.
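The fragment-model boundary just described can be sketched in a few lines. A hedged illustration only — `FragmentRef` and the field names are invented for the example, not any real client API: the deferred fragment reads as `None` until its patch arrives, and the same accessor returns data afterwards.

```python
class FragmentRef:
    """A boundary around a deferred fragment: None until its patch arrives."""

    def __init__(self):
        self._data = None

    def read(self):
        # UI code gets None while the fragment is still in flight.
        return self._data

    def fulfill(self, patch: dict):
        # Called by the client runtime when the deferred payload lands.
        self._data = patch


# Hypothetical usage: `bar` is a fragment deferred inside fragment `foo`.
foo = {"name": "Ada", "bar": FragmentRef()}
before = foo["bar"].read()            # None — render a placeholder
foo["bar"].fulfill({"avatarUrl": "https://example.com/a.png"})
after = foo["bar"].read()
```

The point of the strong boundary is that "not here yet" is observable at exactly one place, instead of every field of a flat type model being possibly missing.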
F
F
Yeah, we actually rely on — I don't know if this is what you mean by label, but we extend our types: if we have a deferred fragment, we add a local-to-the-client field, something like fooFragmentDeferred, and when we get the deferred responses we have a pass in our parser that sets it to false or true depending on whether it came back. But that's a very implementation-level detail.
C
No, that's also not what I mean: on the defer directive you have a label. Oh, you —
D
C
By the way, we use it in our implementation as a mandatory field, so you just have to set it.

I just had a question for implementers — I raised it on the spec — in terms of the hasNext field. I'm just wondering if anybody who's implemented this already is relying on it yet. I know that in a previous iteration it was called isFinal, and I think it was, you know —
C
The reason for that is that we wanted to keep the — so, when you have this isFinal on there, then you would need to put it on every response. That was a discussion here in the working group: isFinal would have to be on every response to mark it as not final, because then you're looking for it. So we flipped the
C
meaning — it was flipped to hasNext — because then you can actually send the response as-is when there's nothing else: the response wouldn't be changed. And if there is a hasNext on it, then we know it's actually a streamed response where we have multiple parts to it.
F
Facebook actually originally implemented it with isFinal, and I ran into a bug when I was trying to have a client that understood defer, using isFinal, hit a server that didn't implement defer and so never sent isFinal. We would just never close — we would assume that we still had more information coming, forever.
I
So, if I understand correctly, the main use of hasNext is on the initial response — meaning it's really the hasNext: true, rather than the hasNext: false, that you need, to make sure
I
that there are fields still missing. The hasNext: false —
C
You can essentially omit it — you don't need it, right? Because if there is a hasNext: true on it, you know there's another payload coming; as opposed to, if there is nothing, you know it's final. And that's actually how GraphQL works today: if you have a server that doesn't support defer, it sends you a response that doesn't have hasNext on it — it has no extra property on it — so you know it's a final result.
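The convention just described — hasNext absent or false means final, true means more parts are coming — makes a plain non-defer response a valid single-part stream for free. A minimal client loop, as a sketch only:

```python
def collect_payloads(payloads):
    """Consume payloads until one arrives without hasNext: true.

    A plain response from a server that doesn't support defer has no
    hasNext key at all, so the loop ends after the first payload —
    the backwards-compatible behavior discussed above, and the reason
    the earlier isFinal design could hang forever against such servers.
    """
    received = []
    for payload in payloads:
        received.append(payload)
        if not payload.get("hasNext"):  # absent or false => final
            break
    return received


plain = collect_payloads([{"data": {"me": {"name": "Ada"}}}])
streamed = collect_payloads([
    {"data": {"me": {"name": "Ada"}}, "hasNext": True},
    {"data": {"bio": "..."}, "path": ["me"], "hasNext": False},
])
```

With isFinal the loop would have had to wait for an explicit terminator that a non-defer server never sends; with hasNext, silence terminates.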
C
That's the state of today. So with defer now, when the server says "okay, I have something here that is deferrable, and I actually defer that content," I send down a response that says: okay, this is not the final response, there is something else coming — a next result coming, essentially. Okay.
I
But I mean, is it not the case that — I guess it isn't. Okay, so then my next question is: it gets into a little bit of an interesting interaction with streams. Right now, in the graphql-js implementation, you can stream a field where the back end is also getting an async iterable — an iterator stream, let's say — and in that case the server won't really know.
I
You know, the server on those payloads always says hasNext: true, even if that's not necessarily true — because it doesn't really know there will be another payload until it checks for it.
I
Right. So I mean — I guess for streams it's not really that important, you know. But the question in my mind is: is hasNext really the right term? Do we want to change it to, maybe, mayHaveNext or mightHaveNext? It seems like it's doing something different for —
I
I
F
Like, from the client perspective you just get a JSON response. It's unclear whether fields — especially if you're deferring a fragment that might not apply, say inside of an interface where your fragment is on some concrete type — it's unclear whether you are going to get more or not, or whether it's even possible for the server to give you more or not.
H
For streams in particular: we talked about deferring individual fields and how that could be a misfeature, or could be misused in some ways. I'm also wondering, are there ways that we could not make streams —
H
if you know what I'm saying: you can do a poor man's subscription with a stream directive, right? You can do pagination. There's no arbitrary — there's no limit to how many connections, or how long the connections can be open for, for instance (although with HTTP 1.1 you can only have six connections). Is there anything that we're doing to make sure that stream doesn't expand in scope to include things like pagination, or subscriptions, or stuff like that? Is that —
F
M
E
It is called out in the RFC — and maybe also in the spec edits — that it's really not intended for long-lived connections, and it's really not about pushing data at a later time but about getting part of the data sooner. In theory, the total length of the whole request shouldn't really change much compared to not having the stream there.
E
A
You'd expect the final payload to come within a number of seconds, and you'd probably want your transport to be different, right? Hanging HTTP gets problematic — if you're hanging for longer than 30 seconds you get weird error cases — but hanging HTTP with multiple payloads is totally fine if you expect all those payloads to show up in the course of two or three seconds. So we should be pretty clear about that, right? That we don't want people to be using stream as a replacement for subscriptions.
C
C
B
The distinction is pretty clear: subscription is event-based — something happened, you get a payload — but stream is data-based. So, for example, it's not only for pagination: it can be used when you're generating data, like when some data is computationally heavy and you're generating it and putting it in. So it's based on data and not events. And I think it's maybe specified somewhere, but it should be clear: stream is about data, and subscription — the entire root — is about events.
H
Yeah, my spies are saying that React 18 is on the horizon. Has there been any thought about defer and stream with regard to reveal-order stuff — like SuspenseList's revealOrder? I don't know if you're familiar with that.
D
I have questions over how, or whether, stream and defer should also relate to subscriptions. I think it's quite clear that they shouldn't be applicable on mutations.
D
But on subscriptions, to me it's less clear. My gut instinct is: only support it on queries. But the whole point of stream and defer is to let you get part of your payload faster, right? So if your subscription is asking for real-time events, you might want those real-time events instantly, even if it takes an extra couple of seconds to, you know, generate the avatar URL that's going to come along with it. So it may be valuable to have stream and defer for subscriptions.
D
But I'm concerned that the multiplexing there may get more challenging, and you may end up with multiple subscription payloads from subscription events all having resolved, while still having the stream and deferred things continue to pile up in the background and potentially get quite expensive. So what's the status quo there? That's the first question — Rob?
E
K
E
The way it works for subscriptions is basically that the async iterator — which may yield another async iterator from the original event — gets flattened into one. And so that means you're not going to get the next event until the last deferred payload from the previous event comes. So it's possible that you could have events from the subscription coming in pretty quickly and a very slow resolver underneath, and I believe that even now, without defer, that would cause them to get backed up.
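The flattening just described — each subscription event yields its own little stream of payloads (initial result plus deferred patches), and the outer stream doesn't advance until the inner one is exhausted — can be sketched with plain generators. A hedged stand-in for the async iterators in graphql-js, not the actual implementation:

```python
def flatten(events):
    """Flatten a stream of events, where each event may itself be a
    stream of payloads (an initial result plus deferred patches).

    The next outer event is not pulled until the inner stream is
    exhausted — which is why slow deferred resolvers back events up.
    """
    for event in events:
        if hasattr(event, "__iter__") and not isinstance(event, dict):
            yield from event  # inner stream: initial payload + patches
        else:
            yield event       # plain event with nothing deferred


def event_with_defer(n):
    # Hypothetical subscription event whose selection set deferred a field.
    yield {"data": {"tick": n}, "hasNext": True}
    yield {"data": {"extra": n}, "path": ["tick"], "hasNext": False}


ordered = list(flatten(event_with_defer(n) for n in (1, 2)))
```

Note the strict interleaving: every payload of event 1 precedes any payload of event 2, which is exactly the back-pressure behavior being discussed.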
E
So this is not solving that in any way, because, like I said, the idea is only to get some data sooner and not to affect the length of the request — so it's still an issue with defer.
E
D
Yeah, I think it does make sense, and you raise a good point, which is: if you don't have defer currently and you're still requesting that same data, you're still going to wait for it in the same way. I think the serialization you mentioned — where it waits for all of the previous streams and defers to complete before it then resolves the next payload — is sensible. And what that effectively means as well is that those events would pile up — the events themselves — but the resolution doesn't even necessarily have to start until the previous one is fully complete.
D
So I think that does actually help to not have too much back-end work and memory usage pile up. Interesting that you've implemented it for mutations — actually, thinking about it, I can see how that would be useful. Has there been any pushback against that?
E
Not that I know of. I think it's pretty common, especially when you use Relay, that you would have a fragment for a lot of your data, and that fragment would get spread on the query for the initial fetch, and then also on a mutation for when you want to update it. And that fragment may have another fragment, which has another fragment that's deferred somewhere down the line — so you wouldn't want to disable it on the —
D
Yeah, yeah, that does make sense to me.
D
E
One weird case with that is the fact that multiple mutations in a single operation are executed serially. We discussed that for a while last year and, I think, came up with the conclusion that what ends up happening is that each mutation is only waiting for the initial payload of the previous one. So this is different from how it's working with subscriptions — which is a different case — but that first mutation is going to happen.
E
C
Being executed that way can then cause some challenging things, because you expect that the underlying fields actually return the changed state of the server. So you do the mutation, and you have the query fields beneath it that should reflect the changed state of the server — but those could now represent a completely different result, because we are queueing them while doing other stuff. We can't capture it yet, we don't want to waste that performance, so we do the other mutation, maybe the third mutation.
C
F
I guess the most logical resolution of that, to me, would be: the server could choose to implement defer so that it just ignores it underneath the first mutation field — makes that entirely serial — but in the last mutation field, deferred values are allowed. An implementation of defer within mutations that just ignores the defer is definitely better than disallowing defer within a mutation.
C
Yeah, that's actually what we do — we're ignoring them in mutations. But if we then specify how to execute them in mutations, I think that could be problematic for a lot of users who implement that and don't realize what kind of effect it actually has on the contract you're giving your user — because you'd actually expect that these would then somehow capture the change and send it to you later, since you want to defer the processing.
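The "just ignore @defer in mutations" behavior described here is easy to express as a server-side policy check. A sketch with invented names, not any real server's API — it leans on the earlier point that @defer is a "may defer" hint the server is free to decline:

```python
def should_defer(operation_type: str, field_is_deferred: bool) -> bool:
    """Decide whether to honor a @defer directive on a field.

    @defer is a *may*-defer hint: the server is always free to inline
    the result. This policy inlines everything inside mutations,
    matching the "ignore defer in mutations" behavior discussed above,
    so mutation side effects stay strictly serial.
    """
    if operation_type == "mutation":
        return False
    return field_is_deferred


q = should_defer("query", True)
m = should_defer("mutation", True)
s = should_defer("subscription", True)
```

The client still sends the directive; it simply gets a complete single-part response back, which is valid under the hasNext convention.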
C
H
Have you considered how this all works with a normalized cache, or an entity-based cache? Because if you have multiple operations which have deferred fields — is it last write wins? Do operations come in order, and that's the older data? Or, like, how should all these —
H
So is it last write wins? Let's say you have operation one and operation two, and operation two happens later — but then the deferred fragments in operation one come in later than operation two. Is it, like — what?
F
F
It wouldn't matter at all, right? Because if you're just putting in the last value — well, a query is read-only, so even while you're executing a query, the data might be changing underneath you. In the middle of your execution somebody might have made a write to a database while you're on one resolver, and then once you get past that resolver the data is suddenly different underneath you.
F
F
H
C
Yeah — I mean, the deferred task also captures, or can capture, things. Like, in our server implementation for queries, we have this scoped context where we preserve, essentially, the execution context and put that in the deferred task.
C
I mean, there's a lot to these things. That's why I'm critical also on the mutation side: rather than specifying a flow for defer in mutations and making that formal, I would really ignore it in mutations, because it causes so many problems for people who then start building a GraphQL server and don't think about these things. I would really ignore it — or would even make it explicit in the spec that it should be ignored.
C
However, you could do that for subscriptions. A subscription, essentially, is like a query that's event-driven: you get an event, you execute essentially a query, and send the result down — so it could be sent down incrementally. I don't know how important that is to most people on subscriptions, but I don't see a problem there. Mutations, though, are where we have all the side effects, so we shouldn't mess with them.
H
Oh, here's a question: how do errors work with incremental delivery? Errors can also have paths, right, in the sense that specific fields can error. So can an error come in before the deferred fragment comes in? Like, is that —
C
You could, in each response — each response could be complete; it's like a complete GraphQL response, and it could have an error.
B
C
Like — you could have this deferred task, yeah. You delivered the main response, which might have had no error, right? And then you start work on your next resolvers — the ones that you actually deferred — and there happens to be, maybe, a null violation. Then you have to report that in that deferred piece.
H
Sure, yeah. But is the path relative in that, like —
C
It's the complete path, because what you get — you get this patch. Actually, in our GraphQL IDE we implemented that already: when you do defer, we complete the graph. We are applying the patches onto the original response, and you see how your graph completes. So it should always refer to the complete graph, because that's essentially what the user then uses, and that's also easier from the execution side.
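Merging patches onto the original response, so that paths always address the complete graph, can be sketched like this. Hedged: the shapes follow the `{data, path}` payloads discussed above, with list indices in paths handled as integers — real payloads carry more fields (label, errors, extensions).

```python
def apply_patch(response: dict, patch: dict) -> dict:
    """Merge an incremental payload into the original response in place.

    patch["path"] addresses a node in the *complete* graph, so errors
    and later patches can all use the same absolute coordinates.
    """
    target = response["data"]
    for key in patch.get("path", []):
        target = target[key]  # str for object fields, int for list items
    target.update(patch["data"])
    return response


resp = {"data": {"hero": {"name": "R2-D2", "friends": [{"name": "Luke"}]}}}
apply_patch(
    resp,
    {"path": ["hero", "friends", 0], "data": {"homeworld": "Tatooine"}},
)
```

After the patch, the client sees one completed graph; an error reported with path `["hero", "friends", 0, "homeworld"]` would point into the same merged structure.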
C
I think yes, because we have that in the algorithm — Rob wrote that, by the way. Also, what I said about the deferred tasks: that's all in the algorithm.
H
Yeah, it's in the algorithm, but some non-normative guidance — just something — would be helpful, I think. Maybe
E
laying out what you have to do. Yeah, I think there's another section that goes through each of the new fields that are on the patch responses, and I'm pretty sure it says — maybe that could be clarified more.
C
But we should actually capture those things on the pull request, or in issues. I also opened issues, I think, when I had something like this, and Rob then merged them in.
H
C
Because if we just discuss them here, they get lost, right? Yeah — it's good, this discussion is really good, because we also talked about mutations, and I haven't implemented the last bits that Rob actually did, so I haven't read about this mutation behavior. We just ignore the defer directive for mutations, but I will also make my notes.
E
Yeah, I definitely need to get more feedback on how the mutations should work. What we had, I think, is based on feedback that we got from Facebook last year, but it's definitely a less conservative change than what it could be.
D
My gut says that you should do the same thing as you're doing with subscriptions: you should wait for the mutation and its payload to complete before you go on to the next mutation. We've been talking about this for a little while — do you want me to write down any actions to put in the notes, or are you happy just pulling them out for yourself later?
H
A
Awesome — thanks for that great discussion, everybody. All right, that's the last of our agenda items.
D
I'm certainly interested in discussing merging this Prettier pull request because, obviously, it's quite large and I don't want to keep updating it.
A
D
Yep, sounds good. I think it will make diffing — next time we do a spec cut, it will make that diff a lot easier, I think. So this will be good.
A
That's the hope — that's kind of why I was thinking that should be the very first thing to land after the spec cut: we can do a diff from version cut to version cut, and then we can also do a diff from version cut to version-cut-plus-one, which is just the version cut with Prettier applied.
A
So you can see the actual changes that have been made, instead of just seeing the Prettier rewrite — Prettier is just going to clobber everything in the spec text, as it should — and hopefully that doesn't make it hard to review things going forward.
D
What I've found is that the actual generated HTML is pretty much identical. The only difference now is because we've got the GitHub links that link to specific lines of code, so those obviously change — but everything else is identical. So if you trim out that data tag, it's literally byte-for-byte identical before and after Prettier in the generated HTML. That's what we like to see.