From YouTube: GraphQL Working Group (Primary) - 2023-07-06
B
Huge props to denji, who did like a monumental amount of YouTube uploading the last couple of days.
E
It's all semi-automated now; well, I still have to run the script every now and again, and there are rate limits, so I can't do it too often, but yeah.
It's made a huge difference, so hopefully we'll be able to get these meetings up within a few days now, rather than months as it has been before.
B
That's huge. I think getting it into a script is the smart thing to do. I did the same thing for building our agenda files, because I used to manually copy and paste them. When there was one a month, it was very reasonable to do that; now that there are three, there are too many subtle things to screw up. We're all engineers; why aren't we writing scripts for all this stuff in the first place? I don't know why we only figured out doing that now.
B
Maybe give a couple minutes for folks to show up. I see we'll have a slightly tighter audience today, which makes sense; I think it's a holiday week in the US and some folks may be traveling, and it's also getting pretty dang close to midsummer in Europe.
B
All right, we can go ahead and get started. We've got, let's see: one, two, three, four, five, six, seven, eight, nine folks listed as wanting to attend, and six here. You can count that as quorum, and hopefully a handful of folks who want to join later will join later; a lot of them introduce themselves at that point.
B
All right, kicking it off: welcome, everybody, to the July edition of the primary working group meeting. As we are all here, of course, it means we agree to the spec agreement, participation guidelines, contribution guide, and code of conduct: all fantastically written documents, if you ever want to go read them. Great.
B
Beach reads; it's a good time of year for that. As per usual, we'll just do a quick intro of attendees. Welcome, folks who are joining; we're just at the introing phase. We'll do a quick around-the-room in the order we see it listed in the agenda file for folks who are here, which is great for putting a name to a face, especially for anyone who's watching this on YouTube later. I'm at the top of that list: hello everybody, my name is Lee, and I'm the lead of the GraphQL project.
F
Koi's gonna be 30 minutes late, but did want to talk about the defer execution stuff, so it's good that that's last.
B
All right, welcome everybody, and thanks for getting yourself on the agenda list, Stephen. We have a slightly updated GraphQL working group notes doc; I see a handful of folks popped that open to help take notes. Thank you for doing that.
B
One thing about these meetings is remembering to build a Google Doc to write notes in, and we've gotten into this slightly informal practice over the last handful of months of just building one Google Doc per month and using the same doc to write notes in for both the primary meeting and the two secondary meetings. It made me think: why don't we just have one doc where we can add a header and write in the notes every meeting? Then we don't need to think too hard about making new docs every time, and there's one less annoying thing to have to do.
B
So I did that. You should be able to find that notes doc; it's just labeled as "continuous", but we will set up a new header at the beginning of each meeting. So thank you for doing that, and especially while Benji is speaking, if anyone is willing to chime in with note-taking, that would be very helpful.
B
Yes, I think there's the same risk that there has been in the past with the other notes docs, which are similarly available for anyone who joins the meetings to hop in and help take notes. I certainly hope that nobody abuses that. The valuable thing to do after the meeting wraps up would be to copy out the section and translate it into the notes that we actually file into GitHub; that would avoid graffiti actually breaking us in any significant way, and we'll clean it up if we ever spot any. But fingers crossed the community is good and that does not happen.
B
No, no. Let's take a quick look at the agenda. I do think it's useful to look back at the last couple of meetings; one of the things that the agenda bootstrapping script does is auto-link the previous two meetings for any given meeting, and we can take a quick look at that. It's been a while since we popped open action items. I don't know how diligent we've been about filing them necessarily, but it's probably worthwhile to spend a few minutes just doing a little bit of a cleanup exercise.
B
Benji has got an update for us, for those on the TSC, on how we wanted to use the mailing list, and then the major topic for the day is the execution model for defer. That is a relatively tight agenda, which is totally fine, but is there anything that anyone would like to talk about today that is not listed on the agenda that we would like to add?
B
Cool, I'm hearing silence. One thing that I'll have, and I'll bring it up when Benji gets into things: the TSC mailing list topic is probably a good announcement topic.
B
But otherwise, let's do a quick review of prior meetings for folks who were able to attend those. The secondary APAC meeting got canceled; we had no agenda for that one, so that one's pretty easy: it didn't happen. That's a little bit par for the course. I would say that the typical standing attendance for that meeting fluctuates month to month.
B
It's
either
on
or
off,
and
if
it
is
on,
then
we
have
usually
about
a
half
a
dozen
people
in
the
room
that
includes
the
handful
of
folks
from
atlassian
who
are
based
out
of
Sydney
and
a
couple
other
cities
in
Australia
and
so
they've
been
finding
it
very
useful
and
we've
gotten
to
get
through
a
handful
of
good
topics.
B
Historically, that is. Just last month we did not have a topic to talk about, so we canceled it, and then there's the secondary one, which I think also ended up getting canceled due to a lack of agenda. Benji, check me on that, whether that was correct.
B
Cool. Actually, I'm kind of curious, for folks in the room who have been keeping their eyes on these, whether my hypothesis holds. In the first handful of months of this year we had a lot of participation across all three meetings per month, a lot of continued discussion, and I had therefore sort of come to the conclusion that it was really good for us to add those secondary meetings. But the May ones were thin.
B
If not canceled; I can't remember. In June, both of the two secondaries ended up getting canceled due to no agenda.
B
My hypothesis is that we're just getting into the midst of the summer, and in summer things slow down a little bit because there are more vacations and things going on in people's worlds. Does that hypothesis resonate with folks? Does anyone have an alternative hypothesis on what's going on, and on whether these secondary meetings are still useful?
E
I think because of that, and because the people involved with that are often the people who bring topics to the other meetings, we're all lacking time to do anything else at the moment. But hopefully, once we've got stream and defer sorted, I think momentum will pick up again in the main working group.
D
Yeah, and when I'm at conferences it's really one of the most awaited features. But we have a version out there, and I always have to say: okay, you can use it, but it's completely different; but it's coming. You can see there's a lot of anticipation for this feature.
B
Yeah, I think it makes a lot of sense for us to be somewhat single-tracked on that, given just how much work and complexity there is to figure it out. It probably would be useful for us at some point, and we can certainly do this asynchronously in a GitHub thread or something, to back up a little bit, maybe looking into the second half of the year or ahead a little bit into next year, and ask: what are the priorities that we'd like to see happen across GraphQL, and what's the current state of each of them? Because I know there are a number of initiatives in various paused states, where either the champion for them has gotten busy or other things have changed. Client-side nullability is in a bit of that state; input unions are in a little bit of that state. There's a handful of interesting, exciting work that we've realistically back-burnered to focus on, yeah.
D
Especially client-side nullability: it's also something that, when I talk to people, is a very anticipated thing. It's a bit like the other things, like schema coordinates and oneof; there are so many good things, but at the moment they're overshadowed by this one big effort. But I think when this is through, you can do all the smaller ones.
D
As well, yeah. I can see that, because it's a super utility: you get these kinds of error boundaries in there, and this null shaping on the client side. Especially for client developers, yeah.
B
My opinion is that that kind of error-boundary, client-controlled nullability stream of work is probably of equal importance to the stream and defer work; it's just an incrementally smaller change. It's still a decently large change, but obviously stream and defer is just a monumentally big evolution of the work, which is why it's demanded so much time and attention anyway.
B
The reason why I bring it up is that I sense there's a little bit of a correlation between these sort of core meetings thinning out a bit and the group having gotten fairly single-tracked. That is extremely acceptable, given how much work that is, but I do think there's probably an opportunity for us to start to pick up some of the other threads in the coming months.
E
So the oneof implementation finally got merged into graphql.js only a week or two ago as well, so that is gently progressing in the background.
D
Yeah, we've also had it in Hot Chocolate for one and a half years or so now, and it's used and we get good feedback on it, but we only did the input side eventually. Did you see the JavaScript implementation also doing the output side, or...?
D
Yeah, I waited on the output side because it was not clear if we want to do that or not.
B
Just doing a quick search for the things that are late-phase: there's defer and stream, which we're getting dialed in, of course; default value coercion; and schema coordinates, which has been flagged, yeah.
D
Yeah, I think the main thing about client-controlled nullability is a bit like: there was a syntax discussion that we had, although I like the syntax more now that I've gotten used to it. I think the main issue people were discussing was the list thing, with the brackets. I think it's just polishing.
C
My understanding with CCN, and I followed this one pretty closely, is that the biggest thing that has blocked it from finishing is that Alex Riley really has, you know, not championed it recently after he changed jobs. That's totally reasonable and okay and understandable, but I think somebody needs to take ownership of it again, and we need to just make it happen at some point.
I
I talked with Alex about kind of trying to pull this along. I'd hoped to have something to bring up today, but there's basically a slew of PRs that are in various stages. My read currently is that we could actually maybe push it along to the next RFC stage, based on the criteria, but what's unclear to me now is whether there are finer points where we maybe don't have sufficient consensus, around maybe error boundaries and the discussion with the Relay folks, because there's more complex behavior required there. I think that's the main outstanding thing.
I
Yeah, the syntax with brackets; but I think with that we can always just start simple.
D
Yeah, there was a syntax discussion, and there was a general behavior discussion. Also, the syntax notes that at the moment are defined in the JavaScript implementation and in the Hot Chocolate implementation: it's not the right names that were used there; I think that was also in transition. But I think it would maybe take a couple of weeks to iron this out; it's not completely far off.
F
I think the key error boundary bit with Relay is that Relay error boundaries live on fragments, and client-controlled nullability cannot, because you basically end up with error boundaries that consume sibling fragments. Because of the way that client-controlled nullability bubbles up, it has to be on fields, and we don't have aliased fragments even as an option. That, I think, was the key Relay issue with the error boundary bits.
F
With aliased fragments, I think client-controlled nullability, basically having keyed fragment response values as an option (not even as a requirement), basically resolves the client-controlled nullability error boundary problem.
C
I think this is the fragment isolation RFC, yeah. Okay.
B
Right, and we'd run out of time, and as a result progress was stalling a bit. So probably what we need, whoever is going to champion it (whether Alex returns to be the champion, or somebody else wants to pick up the championship), is a clear person on the Relay team who's ready to be their partner, to make sure that the path through works for client tools.
F
Yeah, there's not a clear one; I'm probably the clearest Relay person at this point, because Relay is in all sorts of weird states. But the other thing is, I think this was mostly happening before we had the three meetings a month, and one of the things I wanted to circle back on is that I think the three meetings a month operate as pressure-release valves for discussions like this.
F
If you are in a groove and having continual discussion, then you basically get to have that discussion every week and move a little bit forward without necessarily having something formal, like the defer/stream working group's formal weekly meeting. You don't need to set up that structure in order to keep making progress, which I think is really healthy, even if sometimes we don't have anything to make progress on.
C
I mean, it seems like the error boundary stuff is a pretty significant blocker, and championing this work is not going to make a lot of progress unless we resolve that specific issue. I think on the other questions, the error boundaries and the stuff with the list syntax, it seems like we've got pretty close to consensus, and there are a couple of things to sort out. But if we have problems with Relay error boundaries, then, Matt, you're saying the only way forward on that that we've currently seen is fragment isolation?
F
It's not that we necessarily need fragment isolation. It's that I can't think of a client (assuming most clients work like Relay) that will be able to take full advantage without basically compiling away a big chunk of the client-controlled nullability annotations. That's the key: what if we introduce the spec change and then, on the client side, it's compiled away 90% of the time, because 90% of usages are using a smart client? It's like that.
C
The basic contention here is where, you know, you mark something not-null within one fragment but there's another fragment that doesn't: where you have conflicting designations between two different fragments.
F
It's not even that. It's that I can have a sibling fragment for a component that doesn't need this field, where the field is annotated as never being allowed to be null. That field bubbles up unless you have fragment-bound error boundaries, and if it bubbles up even one level in the path and makes the level above it null, that can wipe away a completely unrelated (well, related only in the parent field) sibling fragment.
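The bubbling Matt describes can be sketched in miniature. This is a hypothetical Python simulation, not Relay's or any client's actual code, and the field names are invented:

```python
# Hypothetical sketch of CCN-style null bubbling: if a field that one
# fragment marked as required comes back null, the shared parent object
# is nulled out, wiping sibling fragments' data with it.
def apply_required(parent, required_fields):
    """Return parent, or None if any required field is null/missing."""
    for field in required_fields:
        if parent.get(field) is None:
            return None  # bubble: the whole parent object is nulled out
    return parent

# Fragment A marks avatarUrl as required; sibling fragment B only reads name.
user = {"name": "Ada", "avatarUrl": None}
print(apply_required(user, ["avatarUrl"]))  # None: fragment B loses "name" too
```

The sibling fragment never asked for `avatarUrl` to be non-null, yet it loses its data, which is the case fragment-bound error boundaries would contain.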
B
I remember the frame that we had gotten to that seemed the most promising, and this was roughly when the champion started to lose the ability to focus on this. One part was appreciating the fact that there was this sort of parallel thing that we wanted to make a priority of:
B
What would it actually look like to have modularized fragments, designing them in parallel a bit, at least to the degree that the design space we were interested in for fragment modularity would feel like a natural evolution of client-controlled nullability rather than in conflict with it?
B
That was one where co-designing with someone on the Relay team would have borne a lot of fruit. The other part was appreciating the fact that, because this is part of the query rather than part of the schema, and Relay has a compiler, there's always room for Relay to do clever things, like: I will actually interpret this nullability thing locally when I read from the store, rather than when I send a query out to the server; interpreting the developer's demands.
B
Rather than blindly passing it through: when you appreciate the fact that it's input to a compiler and you've got control, all of a sudden the idea that adding the feature will cause everything to blow up gets less scary, and you realize that it's a tool. And knowing that Relay already does something similar with some somewhat custom directives, it was not immediately clear
B
what the answer was, but at least the design space was like: oh okay, that's interesting; we can explore that and find a thing that unblocks the main thread of work, while also leveraging the intent of the tool for the output of a compiler. So, suffice to say, it's not a thing that we would answer today.
I
For client-controlled nullability, assuming that we want to be able to get this merged before, you know, bigger plans with fragment modularity, it seems like the main thing we need is confidence that this isn't going to block future plans with fragment modularity; just make sure we're not going to design ourselves into a corner. Beyond that...
F
Yeah, I think the consensus was that it's okay if, for now, Relay basically completely wipes it away from what gets sent to the server. But it would lose a lot of its value if all clients end up having to do that, because there's just no way to make modularity (like component composition) work without that compiler step removing a hundred percent of the client-controlled nullability annotations.
I
Maybe a useful exercise would be for someone involved with Apollo Client to see, realistically, what this looks like for Apollo clients. Is it still useful without clever client stuff?
C
Definitely. I mean, we've got the implementation of client-controlled nullability for our code-generated models that Alex Riley already implemented, and we merged it into an experimental branch, and it was working great.
C
All we were really doing was consuming the client-controlled nullability annotations to change the nullability of fields on the generated models. And if you're assuming that the server is giving you the data that it's supposed to give you, and that if a field is marked required then it gives you an error, everything just works fine, because we have an execution layer on the client that actually validates that every field is there, based on the nullability of the fields on the generated models.
C
I can definitely see how that would cause issues. The way that would function today in our implementation is that if the field is null, you just don't get any of it: that entire entity is marked as null, right up to the next error boundary above it. I see why, especially in a world where fragments are being used as component models, that really breaks a lot of things, and fragment isolation, like you said, Matt, really does seem like the natural progression to solve that.
C
I do agree, though, that that feels like a developer decision slash error, in that this is what this tool does and this is how the GraphQL specification works: if you say a field is required, it's required; it has to be there, and if it's not there, then you don't get that entity. But I think that's not necessarily the intended behavior when you have these component fragments.
C
It's also not something that developers are going to be shocked or offended by; if you need that, then you have to make the field nullable. You'd have to say: oh okay, well, I have to make this nullable now. My thought was that the first real use cases for this were fields that are part of the identity of an entity, right: ID fields and things that you want to ensure are there on every single operation. And we actually wanted to build something.
C
On top of this, with a schema extension, you could indicate certain fields that are required fields on the entity, and our compiler would automatically add those fields. Just like we add the __typename field to every entity, we would add those required fields to all of those entities and make them CCN-required, so we guarantee you have them, and you can use them to make schema-based models work much better.
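A rough illustration of that codegen pass, as a hypothetical Python sketch (the `!` suffix follows the proposed CCN "required" designator, and the per-type configuration standing in for the schema extension is invented):

```python
# Hypothetical codegen pass: inject an entity's configured "required" fields
# into every selection set, the way __typename is injected, marking them with
# the proposed CCN "!" (required) designator.
REQUIRED_BY_TYPE = {"Product": ["id"]}  # assumed to come from a schema extension

def inject_required(type_name, selections):
    out = list(selections)
    for field in REQUIRED_BY_TYPE.get(type_name, []):
        # skip if the selection already asks for the field, marked or not
        if field not in out and field + "!" not in out:
            out.append(field + "!")
    return out

print(inject_required("Product", ["name", "price"]))  # ['name', 'price', 'id!']
```

The effect is that identity fields are always fetched and always CCN-required, without every query author having to remember them.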
B
All right, moving us on to agenda items. Considering that this quick discussion ended up eating some time, in what I think was a healthy way, there is a clear appetite for us to get through that one.
B
If anyone here is interested in picking up championship from Alex, I think there's a lot of room to do excellent work, and we've got a decent amount of road ahead of us that has some clarity. I see Iman and Kawaii joined a few minutes ago; if you want to intro yourselves, that'd be awesome, and then we'll move on.
B
Welcome, folks. We are actually right at the top of the agenda, and we just got done talking about some of the other priorities that were on the plate that have been absent recently, and why.
B
That was a little bit of fun. Next on the list is a TSC mailing list update. Benji, I'll let you take it from there, and I've got one thing to add on top of that.
E
Sure, hi folks; excuse me. So you probably know that the TSC mostly doesn't do much, right? The whole point is that the working group does all the stuff, and the TSC just handles the few little things that need some special legal oversight or whatever.
E
So we have a TSC private mailing list for this that gets used incredibly rarely. I went to send something to it the other day and noticed that it still had the TSC as set up as of last year, so it's not even up to date with the latest TSC. I've been working with Jory to get this updated; however, I have sent some emails to TSC members and haven't received replies or acknowledgments, so it's possible that the email addresses that we have for you are not the best email addresses. So I'd like to request...
B
The follow-up I wanted to spend just a couple of minutes on was doing a quick assessment of the communication tools we're using: are they working, any feedback, any thoughts? The communication tools that we have are, one, of course, the primary GraphQL working group and GraphQL spec GitHub repos, which both have discussion channels and threads turned on. The TSC has the mailing list that Benji described.
B
It
also
has
a
there's,
a
TSC,
specific
graphql
discussion
forum
thing
as
well,
which
again
gets
used
somewhat
rarely
for
TSC
specific
things
which
don't
come.
B
A GitHub forum, thank you. And then, appreciating that at one point in the past we also had a Slack channel: we had deprioritized Slack to move everyone over to Discord, and my sense is that that has worked reasonably well as a public forum and has been kind of awful at building private forums.
E
Yeah, so DMing me on Discord is probably the best way; but if you don't have Discord, then you can find me on Twitter, or whatever. Email: I have a public email address on GitHub, but it's so full of spam that I may well miss it.
B
The primary reason why is that I was somewhat curious about potentially trying Slack again, but this time, rather than treating it the way we had treated Discord, as just sort of a general public forum where anyone could show up (which, unfortunately, is just antithetical to Slack's pricing model), to instead limit it expressly to people who are working on core GraphQL projects. So GraphQL board members, TSC, and core working group members who show up to at least one, maybe two, meetings would get pulled into there.
E
So I can definitely understand why you might want to do that, because the Discord itself can get quite overwhelming: there are just so many channels, and lots of people just asking for help and stuff. My biggest concern with moving to a more private Slack is that any communication that happens there isn't going to be seen by the general public, which is going to mean that we're going to be dealing with the same questions over and over.
E
I think Discord is adding tools as well to allow people to better customize the channels that they join. It used to be that when you joined a server you would just see every single channel, which is a lot, but I think they're adding more tools there, and I'm also happy to help set up roles, if we want to give people roles to join certain chats and things like that, to help you narrow it down. Those are my thoughts.
B
Okay, we don't need to think too hard about that. If anyone has thoughts, feel free to shoot me a message; you know where you can find me. There's nothing that I want to take action on immediately; I just thought it would be useful to gather input. I appreciate that we've got everyone here that we want for talking about the defer execution model. I will hand it to Rob to take us through to the end of the meeting.
G
Yeah, so this is something we've been discussing over the last couple of incremental working group meetings that we have every Monday: the idea of, when you have fields that are deferred, when exactly should the GraphQL server execute them?
G
We've been calling the two options that we have "early" versus "delayed" execution. Just to review what each of these means: say we have a query where, in the root-level selection set, there's "a", and wrapped inside a defer at the root is "c", and nested under "a" is "b". With early execution, that means that when we begin executing the root selection set, which is "a" and "c", we're going to execute "c" pretty much right away. Now, that doesn't mean instantly.
G
There could be some kind of delay; we're going to let servers delay the execution of the deferred fields by as much as they want, but it's probably going to be sometime shortly after "a" begins execution. It's not going to wait for "a" to finish.
G
It's not going to wait for "b" to finish; it's going to be happening in parallel in the background, while we're also executing the fields in the initial result. This is what we have implemented in graphql.js; that's what's been in there for as long as we've had defer in it. It's not on the stable version, it's in the alpha versions, but that's what's been there for the past while.
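The timing difference between the two models can be shown with a toy simulation. This is a hypothetical Python/asyncio sketch with made-up latencies, not graphql.js's actual scheduler:

```python
import asyncio

async def resolve(name, seconds):
    await asyncio.sleep(seconds)  # stand-in for resolver work
    return name

async def early():
    # Early execution: the deferred field "c" starts alongside "a".
    loop = asyncio.get_running_loop()
    start = loop.time()
    c_task = asyncio.create_task(resolve("c", 0.2))
    await resolve("a", 0.1)          # initial payload ready here (~0.1s)
    await c_task                     # deferred payload ready here (~0.2s)
    return loop.time() - start

async def delayed():
    # Delayed execution: "c" starts only after the initial result completes.
    loop = asyncio.get_running_loop()
    start = loop.time()
    await resolve("a", 0.1)          # initial payload ready here (~0.1s)
    await resolve("c", 0.2)          # deferred payload ready here (~0.3s)
    return loop.time() - start

print(asyncio.run(early()), asyncio.run(delayed()))
```

With early execution the total is bounded by the slower of the two resolvers; with delayed execution the deferred latency stacks on top of the initial latency.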
G
What's the benefit of doing early execution? If you start something sooner, it can finish sooner: you can get the deferred results sooner. Without early execution, the amount of time that your server spends on a single query could be stretched out pretty significantly, because without defer, stuff in the same selection set is going to be executed somewhat concurrently.
G
But there are a number of issues that come up with early execution that delayed execution would address, and most of these relate to the fact that you don't want your deferred fields to slow down the delivery of the initial data, the non-deferred fields.
G
You could be a server in a single-threaded environment, and there could be synchronous code in your resolvers for the deferred fields that's taking up CPU time that could be spent on the initial data. You could have a bunch of resolvers that are hooked up to a single database connection that's not doing multiplexing; that means potentially slow queries in the deferred resolvers might get sent to your queue before the queries for the initial data.
G
That could end up slowing down the initial data, to the point that it's waiting for all the deferred data to be completed, making defer kind of useless for you. You could run into issues where you have DataLoader and, within the same tick of the process, you're sending IDs to it from both initial data and deferred data; they get batched together, and you don't end up getting the initial data until the deferred data that's batched with it is returned.
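That same-tick coupling can be sketched with a minimal, hypothetical batcher in Python (not the actual dataloader library; the keys and the 0.2s latency are invented):

```python
import asyncio

class TinyLoader:
    """Minimal DataLoader-style batcher: keys queued in the same tick
    share one batch, and therefore one round trip."""
    def __init__(self, batch_fn):
        self.batch_fn, self.pending = batch_fn, []

    def load(self, key):
        fut = asyncio.get_running_loop().create_future()
        if not self.pending:
            # schedule one dispatch for everything queued this tick
            asyncio.get_running_loop().call_soon(self._dispatch)
        self.pending.append((key, fut))
        return fut

    def _dispatch(self):
        batch, self.pending = self.pending, []
        async def run():
            rows = await self.batch_fn([k for k, _ in batch])
            for (key, fut), row in zip(batch, rows):
                fut.set_result(row)
        asyncio.ensure_future(run())

async def slow_fetch(keys):
    await asyncio.sleep(0.2)              # one slow round trip for the whole batch
    return [f"row:{k}" for k in keys]

async def main():
    loader = TinyLoader(slow_fetch)
    initial = loader.load("initial-id")   # needed for the first payload
    loader.load("deferred-id")            # deferred field, queued same tick
    start = asyncio.get_running_loop().time()
    await initial                         # pays the whole batch's latency anyway
    return asyncio.get_running_loop().time() - start

print(asyncio.run(main()))
```

Because both keys land in the same batch, the initial payload cannot be delivered any earlier than the fetch that also serves the deferred field.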
G
I want to call out that maybe delayed execution isn't the solution to these; maybe there are other ways to address them. One option could be that we provide more information to the resolvers to let them know whether a field is being deferred or not, and you could construct separate database connections.
G
Maybe we even just give you a promise that you could await; so, if you do want to delay your execution, you can await that promise in your resolver. Yakov actually wrote a wrapper around DataLoader that adds priority levels; that would help you prevent that kind of batching from happening.
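The idea behind such a priority wrapper might look something like this (a hypothetical sketch, not Yakov's actual code): keep one batch queue per priority level, so deferred (low-priority) keys never share a round trip with initial (high-priority) keys.

```python
import asyncio

class PriorityLoader:
    """Hypothetical sketch: one batch queue per priority level, so
    deferred (low-priority) keys never ride in an initial-data batch."""
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn
        self.queues = {}                      # priority -> [(key, future)]

    def load(self, key, priority=0):
        fut = asyncio.get_running_loop().create_future()
        queue = self.queues.setdefault(priority, [])
        if not queue:
            # first key at this priority this tick: schedule its own dispatch
            asyncio.get_running_loop().call_soon(self._dispatch, priority)
        queue.append((key, fut))
        return fut

    def _dispatch(self, priority):
        batch = self.queues.pop(priority, [])
        async def run():
            rows = await self.batch_fn([k for k, _ in batch])
            for (key, fut), row in zip(batch, rows):
                fut.set_result(row)
        asyncio.ensure_future(run())

async def fetch(keys):
    return [f"row:{k}" for k in keys]         # each call = one separate batch

async def main():
    loader = PriorityLoader(fetch)
    a = loader.load("a", priority=0)          # initial field
    b = loader.load("b", priority=1)          # deferred field: separate batch
    return await a, await b

print(asyncio.run(main()))  # ('row:a', 'row:b')
```

The initial key's batch dispatches independently, so a slow deferred batch can no longer delay the initial payload.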
G
So the big question is: what should we put in the spec? I think that either way we want to allow servers to have some control over this; I'm not sure there's a one-size-fits-all approach that we want to dictate for everyone.
G
Do we want to specify early execution but say that implementations are allowed to delay, or do we want to specify delayed execution but implementations are allowed to do things a bit earlier? You may be asking what the practical difference between those is; here's an example. I think that, overall, when you're writing the spec, delayed execution ends up being the simpler spec: when you don't start executing deferred stuff until the other stuff is done, you don't have to worry about error bubbling.
G
So if you don't start executing this deferred "bar" until you have already completed executing the non-nullable field that errors, what's going to happen is that this error is going to bubble up to "foo", and you're good: there's no second result. You're only getting the initial result; you're going to get "foo" as null.
G
If you had started executing bar before the non-nullable field that errors has even finished, you have to be careful to make sure that, once this error happens, you don't actually go and send some data that is pointing to a location that's been nulled out in a previous result. We actually discussed this issue a while back and came up with an algorithm that handles it, and we have this built into graphql.js and the spec draft.
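The hazard described above can be sketched concretely (hypothetical names; the real algorithm lives in graphql.js and the spec draft): once a non-nullable field error bubbles up and nulls an ancestor, any pending deferred payload whose path sits under that nulled location must be discarded rather than delivered.

```typescript
// Hypothetical sketch of filtering pending deferred payloads after null
// bubbling. If an error nulls out ["foo"], a deferred payload addressed to
// ["foo", "bar"] now points into a nulled-out location and must be dropped.
type Path = Array<string | number>;

// True when `prefix` is a leading segment-by-segment prefix of `path`.
function startsWith(path: Path, prefix: Path): boolean {
  return prefix.every((segment, i) => path[i] === segment);
}

function filterPendingPayloads<T extends { path: Path }>(
  pending: T[],
  nulledPath: Path,
): T[] {
  // Keep only payloads whose target location survived the bubble.
  return pending.filter((payload) => !startsWith(payload.path, nulledPath));
}

const pending = [
  { path: ["foo", "bar"] }, // under the nulled location: must be dropped
  { path: ["baz"] },        // unaffected: still delivered
];
const survivors = filterPendingPayloads(pending, ["foo"]);
console.log(survivors);
// → [{ path: ["baz"] }]
```

This is exactly the bookkeeping that delayed execution gets to skip, since nothing deferred has started when the error bubbles.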
G
Another thing that I'm not sure is really a big issue, but that I'm a little bit concerned about, is whether there is a backwards-compatibility concern with delayed execution, in that it would allow you to control the order that specific fields are resolved in; you could come up with a query that depends on that. So it's kind of like the nested-mutations thing that some people have talked about, where previously a lot of these would be executed in parallel.
G
Basically, if you took this spec and wrote it for early execution, it's pretty easy to say that you're also supporting delayed execution, because you could say: at this point in time, just wait longer. Whereas if you're writing it from the idea of delayed execution first, it could be that the way you write the spec depends on that, and it might be harder to switch to the other one if we wanted to allow it in the future.
G
And then there's the question of what we want to have in the spec, and also what we want to have in the reference implementation: what are we going to suggest other implementations do? I think that we would want graphql.js to match pretty closely to what's in the spec, but do we want to allow both, via configs at the execution or field level, stuff like that? So, basically, the question is: what do we want to specify? I just want to get everyone's thoughts on that.
B
This is super interesting from a first-principles point of view: what's the original reason why this is exciting? The original intent in getting into this was that the defer directive was intended to be a communication to the server that the client prefers the other values first.
B
It's saying: this thing is lower priority. And in those earlier conversations there was the idea that the server then gets to determine how to utilize that signal; it has more context and more information, and so it should do the thing that is most performant overall, while respecting that preference, which might mean ignoring it altogether.
B
If
it
decides
deferring
that
field
actually
will
be
for
performance,
so
it
will
not
versus
actually
doing
the
deferring,
and,
if
we're
going
to
align
to
that
same
thought,
then
that
would
imply
that
we
actually
want
to
write
this
in
such
a
way
that
it
provides
the
server
optionality,
which
may
mean
something
subtly
different
than
either
of
these
two
specific
things
like
it
would
be
quite
interesting
if
the
spec
actually
like
it,
especially
if
we
want
to
have
the
graphical
Jazz
implementation,
closely,
mirror
the
spec
text,
and
we
want
to
make
graphql.js
production
useful
to
be
able
to
make
these
kinds
of
decisions.
B
Then.
Presumably
that
means
the
spec
text
would
need
enough
enough
information
provided
to
allow
either
of
these
two
behaviors
to
be
performed
based
on
whatever
the
server
decided
was.
The
right
in
the
moment,
which
might
my
intuition
is
that
that
is
the
first
of
these
two
or
early
execution,
where
several
invitations
are
allowed
to
delay,
because
we
sort
of
describe
a
resolver
function
as
a
async
operation
which
gets
back
to
you
at
some
point
and
a
completely
valid
implementation
of
that
would
be
sleep.
B
1000
then
do
work,
that's
a
terrible
up
execution
but
like
that,
would
align
to
this
idea
of
delayed
execution,
and
then
we
should
just
decide
like.
What's
the
missing
information
that
we
should
be
providing
resolver
functions
and
I,
don't
know
if
it's
configuration
or
not,
but
maybe
it's
I
think
you
had
a
slide
earlier,
where
the
thought
was
past
the
parent,
like
a
promise
that
describes
the
parent
phase
in
where
it's
just
a
like
a
promise
of
void.
B
It's
that
way
you
can
decide
like
I,
actually
do
want
to
wait
until
the
previous
phase
fully
completes
before
I
begin
doing
any
work,
or
literally
just
still
like.
Are
you
deferred,
true
false,
which
could
then
allow
the
paraloder
technique
to
work
by
providing
a
priority
level.
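The parent-phase idea could look something like this (a sketch under assumed names, not an actual graphql.js API): execution hands each deferred resolver a promise of void that settles when the non-deferred fields complete, and a resolver that wants delayed-execution semantics simply awaits it first.

```typescript
// Hypothetical sketch: a resolver opts into delayed execution by awaiting a
// parentPhase promise that the executor resolves once the non-deferred
// fields have completed. Nothing here is a real graphql.js API.
async function run(): Promise<string[]> {
  const order: string[] = [];

  let completeParentPhase!: () => void;
  const parentPhase = new Promise<void>((resolve) => {
    completeParentPhase = resolve;
  });

  // A resolver under @defer that chooses to wait for the parent phase.
  const deferredBar = (async () => {
    await parentPhase;
    order.push("bar (deferred)");
  })();

  // Non-deferred fields execute and complete first.
  order.push("foo (initial)");
  completeParentPhase();

  await deferredBar;
  return order;
}

run().then((order) => console.log(order));
// → ["foo (initial)", "bar (deferred)"]
```

A resolver that ignores `parentPhase` gets early execution for free, which is what makes this shape attractive: the choice moves to the resolver rather than the spec.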
C
Yeah, I think it kind of mirrors a bit of what Lee was saying. I'm just unclear why we need to put this in the spec at all. I think it's worth discussing and thinking about whether we want to put something in the spec about this, but is there some reason, that I'm not clear on or that I'm missing, why we have to do this?
C
It
seems
to
me,
like
asley,
was
saying
that
you
know
the
server
has
more
context
on
how
this
is
used
and
should
be
able
to
make
either
a
decision.
Is
there
not
some
I
I
worry
that
adding
too
much
opinionated
text
about
how
the
Deferred
fields
are
resolved
in
The
Ordering
of
those
May
allow
client
developers
to
make
assumptions
about
things
that
are
locking
us
into
you
know
continue
to
do
things
in
a
way
that
later
on,
we
say,
oh,
maybe,
that
wasn't
the
most
optimal.
Maybe
that
wasn't
the
best
for
performance.
C
If
we
at
least
for
this
initial
pass
at
getting
this
into
the
spec
just
say,
a
deferred
field
may
or
may
not
be
returned
at
any
point.
In
the
multi-part
response,
that
seems
to
be
enough
flexibility
to
allow
servers
to
make
decisions
about
optimization
and
clients
to
not
make
any
not
rely
on
any
guarantees
about
anything
that
we
are
not
comfortable
ensuring
for
the
future.
Do
we
have
to
make
a
decision
here
on
this
and
put
this
in
the
spec.
E
What we're effectively saying is that deferred execution is fairly straightforward to actually specify and implement, because it basically follows the existing algorithm in the spec and just means returning a couple of things from various places rather than just one thing: here's the stuff for now, here's the stuff for later. When an error happens, all of that blows up together and it never triggers the later stuff, and it's really easy; it just sort of falls out of the algorithm without any effort.
E
If,
however,
we
want
to
allow
people
to
do
early
execution,
the
argument
is
and
I
think
it's.
A
good
argument
is
basically,
there
is
a
lot
of
foot
guns
if
you're
going
to
do
early
execution,
there's
a
lot
of
ways
that
this
is
going
to
go
wrong.
If
you
don't
handle
the
errors
properly,
then
you're
going
to
start
sending
data
that
doesn't
make
sense
to
the
client,
and
we
don't
want
to
do
that.
B
That's
a
good
point,
ideally
there's
major
classes
of
those
edge
cases
which
we
can
guarantee
correct,
Behavior
via
the
spec.
D
Yeah,
it
depends.
It
depends
also
a
bit
about
what
kind
of
Technology
yourself
around
on
like.
If
you
have
a
single
thread
server,
if
you
have
multi-thread
server
that
can
take
advantage
of
more
more
CPUs,
and
so
it's
difficult
to
make
a
definitive
like
I,
like
the
the
kind
like
there
was,
a
discussion
trained
by
Benji
and
I
like
to
like
give
it
give
an
outline.
What
what
is
a
straight
path
through,
but
basically
say.
Okay
here
are
maybe
two
input
boxes.
D
You
can
go
this
way
or
that
way,
or
even
both,
depending
on
your
use
cases
and
not
prescribed,
you
have
to
do
it.
This
way.
H
Just one thought, to modify a little bit what Benji had in his presentation. We're discussing what to specify, meaning:
H
I'm not sure if it's intentional, but it's being framed as a question of what we think the majority of people will use. I would say, and I think this is similar to what other people have been saying, that we have to write the spec in a way that accommodates what even a minority of people will use, and that basically favors writing the spec in a way:
H
That's compatible with early execution. In my opinion, the differences in the specification text are quite large, and it might be helpful, and I raised this in our weekly meetings, to even have an appendix that details both algorithms, meaning one algorithm in the main text and one in an appendix. I think that might be valuable.
C
One thought: it also seems likely that servers would be using a combination of both of these, right, dependent on their context. If this field really makes sense to defer; if it's already in a hot cache right now, we don't need to defer it; if we know the field comes back null, we don't need to defer it, right?
E
To be clear on what you're saying there, Anthony: the conversation around inlining, which is where you say, I've already got this data, I'm just going to write it straight out and not defer anything, is a separate conversation.
F
The difference between whether you choose early versus delayed is most likely going to be a difference of: do you spawn multiple threads for a single web request? And if you are in a situation where every web request lives in one thread, owns that thread entirely, and cannot spawn a second thread, then I think Kawaii has some empirical data, because that's how Facebook servers work.
J
Yeah, I think empirically, right, single-threaded. I think, Michael, you mentioned this: because it's single-threaded, and most functions are CPU-heavy in the beginning and then I/O-heavy later on. So, basically, empirically:
J
We just have to always defer it, regardless of how we think things are, because in the beginning everybody is haggling over the CPU resources, and then later on, when it's not as important anymore, we say: okay, maybe we can potentially inline, as Benji said; later flushes will start inlining stuff. One of the things I do want to mention is that, I think, in the beginning, at Facebook, or Meta:
J
We
had
this
conversation
in
a
in
the
context
of
what
should
we
name
the
directive
and
it
was
deferable
versus
differ.
So
basically,
like
I
think
you
have
a
slide
where
early,
which
one
should
we
specify
right
so
early
execution,
essentially
that's
like
deferable
would
be
the
name
for
it
right.
It's
like
okay,
saying
like
we
are
potentially
can
do
that
and
then
delay.
The
execution
is
where,
like
defer,
the
name
would
Maps
like
kind
of
more
closely
with
that
yeah.
So.
D
Yeah,
so
so
the
feedback
from
our
user
Community
was
like.
We
had
this
late
execution
in
the
beginning
or
the
late
execution,
and
we
switched
at
some
point
to
early
execution
because
of
user
feedback,
because
we
are
multis,
we're
executing
with
multiple
threats.
So.
D
J
Yeah, I think regardless, in the spec it would be good to have a note like: empirically, if you're single-threaded, delayed execution would probably be your preference; if you're multi-threaded, early execution would be the one. Just, yeah, practically, that's your choice.
B
For
what
it's
worth
I
like
that-
and
it
aligns
with
some
of
the
other
sort
of
tone
Norms
through
the
spec
of
like
using
these
note-
call
outs
to
provide
helpful
hints
on
how
to
use
these
tools.
F
Given
that
I
think
it
would
like
I
would
personally
prefer
a
simpler
algorithm
to
end
up
in
the
spec
if
possible,
so
it
might
make
sense
for
the
spec
to
explicitly
be
delayed
execution,
because
that's
single
threaded,
that's
like
okay,
you're
you're,
a
graphql
server
developer.
You
need
to
make
a
choice
and
you
have
one
day
to
implement
this,
implement
it
with
delayed
execution
because
that's
easier
and
you
can
just
follow
the
spec
and
then,
as
an
aside
here,
is
an
example
of
an
algorithm
that
fulfills
the
same
spec.
G
The way I was thinking we would write it was that we write it as if we're doing early execution, and there's a line somewhere, right before you execute the fields for the defer, that says something like: servers are allowed to delay this any amount of time, or you could delay for as long as it takes to execute the other fields that were not deferred. And then we do have these other functions in there that say:
G
This is how you handle filtering when null bubbling happens, but there could be a note on those functions that says: if you have delayed for as long as it takes to wait for the other things, then this whole function can be skipped.
G
That's kind of how I was thinking we would spec it: for early execution, but allowing delay. Then we're not writing two different algorithms for doing the same thing, but it's clear that you could skip over this more complicated stuff if you don't have to do it.
F
Brother,
the
the
algorithm
that
needs
to
go
into
the
spec
is
one
that
fulfills
the
response
that
describes
a
way
to
fulfill
the
response
shape.
That's
how
all
of
our
algorithms
work
like
even
our
like,
merge,
Fields
algorithm,
like
you,
could
use
a
different
algorithm
for
merging
fields
that
is
more
complex
and
maybe
better
for
your
specific
server,
but
it
must
produce
the
same
shape
of
response
as
this
spec
algorithm
and
I,
don't
think.
Early
versus
delayed
produces
a
different
shape
in
either
sense.
G
If
we
wanted
to
really
have
like
a
simple
spec,
then
we
don't
have
to
put
this
extra
Logic
for
handling
error
bubbling
with
early
execution.
We
could
write
delayed
and
we
could
say
you're
allowed
to
do
early
execution,
but
it
means
that
if
you
are
implementing
early
execution,
you
kind
of
have
to
like
make
sure
that
you're
doing
the
stuff-
and
there
isn't
like
a
good
reference
for
it.
Yeah.
F
I
do
think
that
having
a
normative
note
or
whatever
of
like,
if
you
decide
to
be
multi-threaded
here,
are
known
areas
like
the
fact
that
you
you
it
would
be
an
invalid
response
to
have
this
error
happen
and
still
have
the
Deferred
payload
right.
That
seems
high
value
to
me
of
like
these
are
anti-patterns
or
like
anti-invalid.
You
need
to
make
sure
that
your
server
does
not
produce
things
in
this
manner,
and
we've
seen
servers
produce
them
when
they
are
due
early
execution
rather
than
delayed
execution.
Something
like
that.
I
don't
know.
H
So
I
I
would
say
that
again
again,
we
want
to
make
sure
that
the
spec
we
definitely
want
to
advance
Simplicity
in
the
specification
I
mean
that
is,
is
like
a
lead
on
towards
correctness.
I
mean
the
the
the
simple
we
are,
the
more
likely
we're
actually
getting
it
right,
but
considering
that
again
that
that
a
minority
of
you
know
you
know
like
a
definite
minority
of
servers,
will
be
multi-threaded
and
I
think
even
in
the
single
threaded,
even
in
a
single,
a
single
threaded
environment.
H
If,
let's
say
you
have
a
persisted
operation,
you
know
where
you
can
plan
ahead
of
time.
You
know
you
might
you
might
still
want
to
use
early
execution.
I
mean
there
are
a
lot
of
scenarios,
I
think
where
you're
going
to
be
mixing
early
execution
or
you
know
allowing
it,
even
if
in
general
you
have
delayed
and
I
think
I
think
we
have
to
yeah.
You
know
it's
not
just
the
response
shape,
it's
also
meaning
we
have.
H
We
have
that
section,
but
we
also
have
a
whole
section
on
execution
and
it
feels
a
little
bit.
H
It
feels
a
little
bit
unhelpful,
I
guess
to
just
have
a
note,
as
opposed
to
the
entire
algorithm
that
we've,
actually,
you
know,
worked
out.
You
know,
we've
said
you
know,
you
know
over
the
over
the
course
of
you
know
a
couple
months,
meaning
if
we
have
that
available
to
us.
You
know
maybe
a
maybe
belongs
in
an
appendix
I
suppose,
but
I
think
I
think
it
behooves
us
to
like
you
know,
share
it.
C
Is
there
any
value
in
the
algorithm
being
more
defining
expectations
and
requirements
about
what
is
a
valid
incremental
response?
Given
the
previous
responses,
rather
than
like
the
graphqljs
spec
implementation
is
for
the
way
that
JavaScript
Works
in
a
single
threaded,
you
know
environment,
it's
not
necessarily
the
way
that
developers
in
other
languages,
with
other
tools
are
going
to
write
their
code
of
their
algorithm.
C
I
I
feel
like
defining
this
as,
given
you
have
made
a
field
null
or
a
field
has
had
an
error.
Any
other
nested
Fields
cannot
be
returned
in
incremental
responses,
and
a
number
of
validation
rules
like
that
that
just
Define
what
are
invalid
behaviors
can
allow,
maybe
some
more
flexibility
around
how
developers
Implement
early
into
the
late
execution
here.
J
Michael, correct me if I'm wrong, but I think even then, potentially, we can't just say: here are two pieces of pseudocode, for your early execution and your delayed execution, because the response, and that's what Matt was saying, right, and Anthony would say, the response to the client is exactly the same. The client doesn't really care, right? So I guess it's okay, right? We don't need to adjust that, so plus one.
D
Yeah
yeah,
we
did,
we
just
need
from
my
perspective.
We
should
Define
one
and
then
have
nodes
like
it's
that
explain
that
there
are
other
options
to
execute
and
that
this
is
allowed.
I.
Think
for
me
as
an
implement-
or
this
is
the
most
significant
thing
that
we
outline
that
that
you
can
drop
the
first,
if
you,
if
you
think
you
can
optimize
it
better
and
you
can
do
early
execution
if
you
have
the
use
case
for
that,
and
that
might
be
if
you
are
multi-stranded.
A
G
Yeah, but, like Yaakov was saying: why not just have the spec be the more complex one, that clearly lays out everything you need for early execution, instead of just notes saying be aware of these things? So.
F
I'd
argue
it's
similar
to
like
the
data
loader
versus
the
spec
execution
algorithm
like
the
species,
doesn't
include
the
same
algorithm
that
data
loader
uses,
even
though
data
loader
is
like
a
way
of
optimizing.
Your
performance
in
almost
every
server
should
use
a
data
loader
like
merging
algorithm
or
like
execution
whatever
on
top,
so
that,
in
my
opinion,
would
be
like
this
is
the
simplest.
This
is
the
simplest
way
to
achieve
the
behavior
of
defer
and
that's
probably
going
to
be
a
single
threaded
way.
F
D
B
How distinctly different do we expect them to be? I thematically agree with Matt's push to prefer the simpler algorithm and rely on: you're free to do whatever crazy version of this that has the same observable output. But I'm slightly worried that it is not exactly the same observable output.
B
If
the
spec
describes
the
thing
that
is
delayed
execution,
and
then
you
get
a
non-delayed
behavior
that
that
does
not
necessarily
align
to
that,
but
also
at
like
is
the
early
execution
thing
like
dramatically
more
complicated
spec
text,
or
is
it
slightly
more
complicated
like
if
it's
dramatically
more
complicated,
then
we
should
definitely
weigh
the
complexity
argument.
If
it's
only
slightly
so
then
it's
going
to
carry
a
little
bit
less
weight.
G
I
would
say
it's
it's
significant,
like
you
have
to
keep
track
of
our
bubbling,
and
you
have
to
understand
that
there's
these
payloads
that
may
be
impacted
and
go
through
them
and
make
sure
that
they're
canceled
out.
That's
that's
the
difference
really
and
I.
The
reason
I
like
doing
it
in
terms
of
early
execution
is
because,
like
you
could
describe
delayed
execution
in
those
terms,
it's
just
a
specific
amount
of
time,
you're
delaying
and
then
this
other
stuff.
C
So
I
think
what
I've
been
trying
to
get
at
here
is
that
it
seems
that
this
is
a
situation
in
which
the
status
quo
of
defining
new
functionality
in
the
spec
as
algorithms
maybe
doesn't
fit
perfectly
and
I.
It
seems
to
be
something
that
you
know
has
worked
really
well
in
the
past,
but
how
strongly
does
the
steering
committee
feel
that
this
needs
to
be
defined
as
an
algorithm,
rather
than
a
set
of
rules
can
be
depart
from
the
way
we've
done
this
in
the
past
and
Define
this
in
a
different
way?
C
D
In my opinion, yes. That is the value of the spec: when I started implementing this thing, I could straightforwardly implement from the algorithms and then deviate as I got more mature with it. If we did not have that, I would think that the graphql ecosystem would degrade for new servers.
B
The
value
of
having
the
algorithms
listed
explicitly
in
the
spec
is
only
partially
so
that
a
server
server
implementer
can
look
to
ensure
that
their
behavior
is
correct
and
more
so
that
really
it's
like
kind
of
our
own
weird
programming
language,
that's
implemented
in
in
our
brains,
as
we
run
through
this.
That
should
describe
appropriate
behavior
like
if
you
were
to
use
a
actual
graphql
server
and
gave
it
some
request
and
got
some
response
and
then
mentally
stepped
through
the
algorithm
steps
in
the
spec
and
said.
B
Wait,
that's
not
possible
like
how
on
Earth
did
it
give
me
that
there's
no
way
I
can
imagine
any
implementation
of
this
algorithm.
That
would
have
behaved
in
that
way,
then
that
server
is
not
spec
compliant
like
that
is
the
that's
the
value
and
in
doing
so
really
what
we're
doing
is
we're
describing
guarantees
to
the
client,
which
means
in
in
areas
where
we
want
to
provide
some
variability
for
a
server
implementer
to
have
some
Choice.
B
It's
it's
somewhat
counterintuitive
to
me
that
the
early
execution
path
ends
up
being
the
more
complicated
of
the
two,
because
I
would
imagine
just
explicitly
describing
the
like
first
weight.
For
this
other
thing
to
complete
then
go
do
this.
Other
thing
next
would
have
would
have
some
complexity
to
it
itself,
but
it
is
now
making
me
slightly
worried
that
if
the
early
execution
behavior
is
more
complicated,
however,
it
is
a
variant
of
the
behavior
that
some
server
authors
will
want.
Then
it's
actually
describing
like
real
behavior.
B
E
I think, from the client's perspective, the difference broadly, and there are some potential edge cases, but I think they are really edge cases, broadly the difference is: with early execution, your initial payload may be delayed further, but your total request time isn't likely to increase much; whereas with delayed execution, your initial payload is likely to arrive much sooner, but the total request time may increase. From the back-end perspective, this isn't a performance or a major issue.
G
Yeah. And about delayed execution: are we worried about clients depending on this type of behavior, where they have fields with side effects and they're using defer in a way that lets them control the order that the server executes them in?
B
But I do think, especially if the spec is written in a way that implies that such behavior is normal, while at the same time having a note that says execution can be more eager than that, it would be surprising and confusing for a client.
F
Yeah, I think Kawaii brought up, in the comments in the PR, or whatever thread, that we've been experimenting with only having a single deferred payload, no matter how many defers you have, because that can actually be, in most cases, more optimal, especially in our single-threaded environment.
G
Yeah
and
like
regardless
of
regardless
of
like
delayed
or
early
execution,
our
response
format
allows
that,
because
we
can
return
data
from
multiple
defers
in
the
same
payload
and
we
are
allowed
to
inline
any
defer
so
like
with
both
of
those
two
things
like
a
server
and
if
a
server
wants
to
do
that.
That's
within
spec.
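A sketch of what that flexibility looks like on the wire (field names and paths invented for illustration): the incremental response format lets a server flush several deferred results together in a single payload, with `hasNext` signalling whether more payloads follow, so batching defers together, or inlining them entirely, stays within spec.

```typescript
// Hypothetical sketch: combining the results of several @defer fragments
// into one incremental payload, as the response format allows.
type Path = Array<string | number>;

interface DeferredResult {
  path: Path;
  data: Record<string, unknown>;
}

// Build a single subsequent payload carrying every deferred result at once.
function combineIntoOnePayload(results: DeferredResult[], hasNext: boolean) {
  return {
    incremental: results.map(({ path, data }) => ({ path, data })),
    hasNext,
  };
}

const payload = combineIntoOnePayload(
  [
    { path: ["user"], data: { bio: "hello" } },
    { path: ["user", "friends", 0], data: { name: "alice" } },
  ],
  false, // nothing further to stream after this payload
);
console.log(payload.incremental.length, payload.hasNext);
// → 2 false
```

Both deferred fragments arrive in one flush; a client sees the same final data as if each had been delivered separately.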
G
Interesting comment from Matt: an alternative that would make early execution basically as simple as delayed execution is that we could make this a valid response and require clients to drop it. Then the server doesn't need to know about handling filtering with null bubbling.
H
Just one other consideration: we're going to want the graphql.js implementation to match the specification text. And I know we've been talking, or emphasizing, again, that one of the main distinctions will be single- versus multi-threaded, but considering that we have these helpers for resolvers out there, I think even in the single-threaded context, especially with persisted operations, you very well may want to allow early execution.
H
Even
in
you
know
not
in
every
case,
maybe
not
even
the
majority
case
but
in
but
in
a
you
know,
a
non-negligible
minority
of
cases,
and
so
we
may
want
to
possibly
disable
by
default
but
allow
it
in
our
in
our
JavaScript
implementation,
popular
JavaScript
invitation,
so
I
know.
The
main
motivation
for
the
reference
implementation
is
to
match
the
specification,
but
considering
it's
its
role
there
as
a
popular
JavaScript
implementation.
H
I
think
it's
still
something
we
should
keep
in
mind,
so
I'm
just
wondering
if
we're
allowing
you're
allowing
for
the
possibility
of
maybe
having
the
early
execution
algorithm
specified
along
alongside
an
appendix
or
something
like
that,
I
still
think
we
should
specify
it
in
some
way
that
allows
us
to
get
it
into
the
graphql.js
implementation.
K
B
I think maybe my closing thought is: I would love for us to explore that algorithm space a little bit. I'm feeling pretty underwhelmed about the idea of two separate algorithms being described, one in the core of the definition and one in an appendix, that need to be kind of forever co-managed with anything that gets added in the future.
B
You
need
to
like
make
sure
that
the
appendix
one
doesn't
broke
in
like
there's
a
version
of
that
that
feels
fairly
fragile
and
and
I'm
wondering
if
there's
a
sort
of
like
one
notch,
less
complicated,
eager
execution
algorithm.
That's
that's
reasonable!
There's
something
there
where,
like
playing
with
the
the
spec
text,
hopefully
finds
a
sweeter
spot
of
simple
but
flexible.