From YouTube: GraphQL Working Group - February 3, 2022
A
B
Week is going well, thank you, especially after seeing Hot Chocolate add federation support.
B
B
C
Yeah, I mean, at the moment you have to dig up the proto files, which are... I think they are not in the specification, so there.
B
Yeah, yeah, if you run into... yeah, we'll talk more offline; feel free to ping me anytime if you have any questions about that, because there's a lot of stuff we could be doing better to help people with it. So I'd love to hear your thoughts on that, for sure.
F
And thanks again for putting the links to those comments into the deprecated-inputs thread.
I
All right, let's see if we've got any last-minute pull requests... seems not, fantastic. All right, let's go ahead then. Welcome, everybody; yeah, happy February, second working group meeting of the year. Looking at our agenda real quick: of course, by us all being here, we agreed to the spec membership agreement, representation guidelines, contribution guide, and code of conduct. Links are all right there in the agenda if you ever want to peruse them.
I
L
Was one of those silences for me? I guess I'm on my phone, so I didn't see this list in the same order. Hey, I'm Eloy from Microsoft, Amsterdam, and we are starting a GraphQL at Microsoft meetup, and there's already a lot of participants, so I'll be bringing this news to everyone we have. We now have the meetup every first Monday after the first Thursday of the month, so that is fun scheduling stuff.
A
Yeah, I'm Matt, I'm at Meta now, I guess, in New York.
H
Sorry about that, I definitely was... I'm Yankov, an individual contributor working on schema stitching.
I
All right, we're aligned to the list and the marks are on file, so fantastic. Great to see all your beautiful faces again. We have a pretty tight agenda today, so hopefully that means two things: one is that we can go a little bit deep into some of these, and two is that we will end on time or earlier.
I
Let's take a quick look at what we've got on our plate. We, I think, have a handful of open action items; hopefully we can use a couple minutes to dig through those, and then the meat of the meeting: we've got three things to talk about. One is deprecation of inputs, another an update on defer and stream.
I
All right, let's take a quick look at actions. I'm gonna take a look at anything that's ready for review. We have one thing marked ready for review: client controlled nullability, "add to the list of unanswered questions, acknowledge the error boundary." Alex, maybe you can give us a quick update on what's happened since this time, or since the last meeting. I know you were sort of in input mode, trying to make sure you understood all your open questions.
K
Yeah, is it cool if I share my screen real quick?
I
Yeah, go nuts.
K
Okay, so we have access to discussions now, so I opened up a bunch of these discussion threads. There's three: one for the list syntax, another for null propagation, and another for error handling. I've separated out error handling and null propagation, since I could see a world where maybe those are two different pieces of syntax, or they're handled separately, something like that. I've added polls to the top of these just to get where sentiment is; totally non-binding.
K
Just, you know, see where we're at. There's different threads for each option; feel free to, you know, throw yourself behind whichever option you like the best, and I'm hoping that gets discussion moving along and we can land on something, yeah.
K
I don't know if we have time later; maybe we can talk more about what's going on here, but that's what's up now. And if there's anything else I can do to make discussion easier, more pleasant...
K
I tried to do what I could to make it a lower barrier to entry. Before, you kind of had to read the entire thread to figure out what's going on, so hopefully now you don't have to do that anymore.
I
This is awesome, thanks for leading some experimental process through reviewing our RFCs. This one's definitely got some complicated pieces, so I like this model, and that's a good idea. If we have got time at the end of the meeting, then maybe we can take a minute to dig through those as a group, at least give everybody context, and then we can break and let people follow up within those threads, yeah.
I
Thanks for the quick update, though, much appreciated. I went ahead and closed that one that was marked ready to review. Just taking a quick look at the rest of our open items: I'm taking a look at our projects list, and we still have a handful of actions that are many months old, so I might have to come take a look at these later to figure out which ones are super old.
I
Alternatives... I'll go ahead and close this one, then, since you got some, and if we want to do follow-ups we'll just open new ones, that's great. And one of these was just opened recently, so that's definitely not going to be ready for review yet. Okay, I think that's probably all the ones we have open to review. Anyone see any of those open actions, if you all are looking at the same thing I am, anything that seems like it should be ready to close, or there's updates on?
E
Yeah, I have one, actually: the input... the oneOf RFC. One of the community members has actually started work on the graphql-js implementation of this. I've not had time to review it fully; I've had a glance over it, it looks fine. OneOf itself is quite a simple change to the GraphQL spec, so fortunately the changes in graphql-js itself are not particularly complicated, but if anyone else wanted to take a look at those changes, that would be great.
E
I think it's actually on a fork right now, rather than as a pull request to graphql-js; I can encourage it to be moved to graphql-js, if that makes sense.
E
The other thing was the security policy. I think it's one of the open issues somewhere, maybe, but I have opened an RFC, so if any TSC members can have a look at that, and anyone else as well, of course... we need a couple of approvals on it.
E
I think Matt's already approved it, but yeah, it's what we discussed last month, just now formally filed. I've also gone ahead and opened a pull request on the spec as well, with a much smaller security policy file that effectively links to the TSC one. So yeah, getting both of those merged at some point soon would be good.
I
Nice, yeah, I just shared a link to that, and feel free to take a look. Hopefully... I think this is all pretty non-controversial.
I
The one thought that I had was we should make it really super obvious what to do if you want to disclose a security issue, and I'm not in love with the idea that it's just "go find a TSC member and message them"; not a super scalable approach. But it looks like Brian's going to help us set up an email, or I mean an email account, security at graphql.org, which is the recommendation. Like it, but yeah, thank you for getting that put in place.
I
The other ones that I see that might be interesting to just get a gut check on: there's two that are both marked "everyone review." One is everyone reviewing spec text changes for the argument uniqueness RFC. I think that was from, like, summer of last year, but we were at a backlog for opening up action items.
I
So I think those are just seeking feedback so that they get unblocked.
I
All right, I think that's all of our action items.
I
Let's keep going. Steven, I'm gonna hand it to you to talk about the deprecation of inputs.
F
All right, thank you, Lee. So, deprecation of inputs: this is a PR, an RFC, that is at stage two currently, and so the goal today is to get it officially to stage three or, if not, identify, you know, action items on that. So, quick context on this for those not familiar: we're talking about the @deprecated directive, which previously could not be applied to either arguments on fields or input objects, and so this changes it so it can. It's been discussed.
F
Maybe you could possibly do that. Since then, it's been implemented in graphql-js and also graphql-java, and it was implemented with a validation that just made it impossible to deprecate something that's required, and there hasn't been any pushback from that. And so the latest revision of this goes ahead and specifies, using the stronger wording of "shall" or "must," that you can't deprecate something if it's required. And then the other pending thing was just, you know, having it...
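As a rough illustration of the rule being discussed (a hypothetical sketch with invented names, not graphql-js code or the spec's actual algorithm): an argument or input field counts as required when its type is non-null and it has no default value, and the validation rejects @deprecated on such fields.

```javascript
// Hypothetical sketch of the validation described above (not graphql-js code).
// An argument or input field is "required" when its type is non-null and it
// has no default value; per the RFC, a required field must not be deprecated.
function canDeprecate(field) {
  const isNonNull = field.type.endsWith('!');
  const hasDefault = field.defaultValue !== undefined;
  const isRequired = isNonNull && !hasDefault;
  return !isRequired;
}
```

So a nullable `String` argument may be deprecated, a `String!` with no default may not, and a `String!` that does have a default is fine again, since the client can simply omit it.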
F
You know, it's been fully implemented, with the validation and everything. So, any discussion or thoughts on that?
I
I think the only thing that would block this getting to stage three is an editorial review from me, and this is pretty short, so I'm gonna preemptively put it on stage three as a forcing function to make sure that I do that. But considering that it's short and this has been implemented for a while, unless anybody has objections, we're gonna assume consensus here that this is the right thing to do.
I
Well, that was a fast action item, amazing. All right, Rob, I'll hand it to you to give us an update on what's going on in the world of defer and stream.
J
Basically, I've updated the spec so that whenever the execution algorithm comes across a defer or stream directive and it starts executing the fields from that directive, it keeps a reference to a data structure that has, what would be in JavaScript, a promise of the whole execution, and that gets passed down to ExecuteSelectionSet, through CollectFields, through ExecuteFields and CompleteValue, through all those functions. And when another defer or stream is encountered, the reference to that data structure is passed through, and as part of that execution we're specifying that if this parent data structure is passed in, it has to wait for that one to be finished before it itself can be completed.
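In JavaScript terms, the ordering rule Rob describes could be sketched like this (an illustrative toy with invented names, not the spec text or the graphql-js implementation): each subsequent payload holds a reference to its parent's promise and awaits it, so a nested deferred payload can never be delivered before its parent.

```javascript
// Illustrative toy of the ordering rule (names are invented for this sketch).
// A child payload awaits its parent's promise before delivering, so even a
// fast child is delivered after a slow parent.
const delivered = [];

function deferPayload(parent, label, execute) {
  const record = { label };
  record.promise = (async () => {
    const data = await execute();       // execute the deferred selection set
    if (parent) await parent.promise;   // wait for the parent payload first
    delivered.push(label);              // "deliver" the payload
    return data;
  })();
  return record;
}
```

Even if the inner fragment resolves first, its payload is only pushed once the outer fragment's payload has been delivered.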
J
So in JavaScript it just looks like having another `.then`: if that thing exists, you make sure you wait for it, and that has the effect of never getting a payload from a defer that was inside of another defer before that parent one was resolved.
J
Similarly, when you're doing a stream, each field has a reference to the previous one, so that you don't get a field out of order; you don't get a higher-numbered index field before a lower-numbered one. And all the combinations of stream inside of defer and defer inside of stream should be accounted for with this.
J
I have, like, a more detailed explanation of this, and I have a bunch of examples. I think, notably, the only one that might be confusing is this case.
I
So, just to repeat what you just said to make sure I understood it: if you mark a fragment spread as deferred, and that has another fragment spread within it that is deferred, those are just essentially merged; it ends up being, like, one total set of things that are deferred at the same level. Is that right? Right.
J
Yeah. To me, conceptually, it makes sense, and I think it solves that original approach of never having a reference to a field that doesn't exist. It's a little bit more conservative than that, so, like, maybe if there are other optimizations where we only want that behavior but not the entire tree preserved, that could be worked out in the future. But I think this is, like, a conservative way to move forward.
C
I
Yeah, yeah, I feel like this is going to be actually the hardest part of this whole process: the documentation after the fact, right? Like, luckily, Rob, you've been doing a great job, and having a super high quality of documenting decisions along the way, so hopefully it'll be fairly straightforward, but I can imagine building, like, an entire...
J
I'll definitely keep reading through this spec and see where it makes sense to add more examples, versus... we do have these, and I'm happy to transfer this repo to the graphql org at any point, if that makes sense, or move these issues somewhere, whatever's easiest for everyone.
I
Actually, that's probably a good idea. Since I know you've got other collaborators working on this with you, if you want to put this in the graphql GitHub org, I'll just give you admin rights over that repo.
I
A
Rob, I think I missed a little bit on the... so, I see your four examples. The example that you were saying can be a little bit confusing: was that basically on example C? I don't know if you explicitly do this, but is this: if you have the top fragment spread, and then underneath the top fragment, if you have a spread immediately, like, above homeworld? Is that what you were just describing as being, like... yeah.
C
J
I
I was going to say: at this point, this is probably beyond just, like, approve or reject, just because it's detailed, right? So I think, you know, people can read this and, if there's extra feedback, provide it. But maybe what's helpful, Rob, is: what's your confidence with this? Do you see...
J
I feel like this is the most conservative approach, so I think it's possible that there are scenarios where we could be trading some performance for making it easier for clients to digest payloads, and I'm happy with this, because handling any of those other scenarios would be more complex than what this is. So I feel like, if that's needed, it could be looked at later, whereas this just...
I
Yeah, okay, I think that aligns super well with the philosophy we've had going into this proposal all along, which is: you don't defer something because you want to get it earlier, which would be the sort of side angle, right? It's like, if you could deliver that deferred fragment earlier, then why not? Some servers might even decide to not even begin preparing those fields until the previous fragment has been completed and delivered, just to optimize time to delivery for that first bit.
I
So this feels really good to me; like, the constraints here are very, very easy to describe, and, you know, it creates a lot of behavior that seems quite reasonable and intuitive.
J
All right, cool. So the next one is for non-nullable fields that are in defer and stream payloads. The spec says that if you encounter a non-nullable field that does return null, that null should bubble up to the first nullable field.
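The null-bubbling rule being referenced could be sketched roughly like this (a hypothetical miniature, not the spec's actual CompleteValue algorithm): a null in a non-nullable position raises a field error, and the nearest nullable ancestor absorbs it by becoming null itself.

```javascript
// Hypothetical miniature of non-null bubbling (not the spec's CompleteValue).
// A null in a non-nullable position throws; the nearest nullable ancestor
// catches the error and becomes null itself.
function completeValue(nullable, value) {
  if (value === null && !nullable) {
    throw new Error('null in non-nullable position');
  }
  return value;
}

function completeObjectField(parentNullable, childNullable, resolveChild) {
  try {
    return { child: completeValue(childNullable, resolveChild()) };
  } catch (e) {
    if (parentNullable) return null; // the null "bubbles up" to this field
    throw e;                         // otherwise it keeps bubbling
  }
}
```

The question in the discussion is how far that bubbling should be allowed to travel once the response has been split into separate defer/stream payloads.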
J
So my thoughts on it were that, if a deferred fragment bubbles all the way up to null, are we actually breaking any of these guarantees about nullable fields that GraphQL makes? Because we're not specifically saying that field is null; we're saying that this fragment isn't here anymore.
J
So I feel like that's kind of like a fragment that was skipped or not included; it's more that the whole thing errored out and you didn't receive it. Whereas stream is a little bit more complicated, because for stream we're actually returning the fields inside the list, and if those are non-nullable, then we have to say it's null. So I was thinking just a validation rule: prevent stream on non-nullable lists.
J
That would just avoid the problem entirely, and maybe we could address it later with client-controlled nullability. But for defer, I think it might be okay to just bubble the null up to the data field on the fragment, and then clients should treat that as "we just don't have this fragment and it's not ever coming."
C
Yeah, essentially, we have broken up the request, so we cannot do the null propagation over the whole result; otherwise the client would need to do that. I actually see the defer in this case like the old batching approach with the export directives, just with a nicer syntax, and there you would have the same effect, so...
I
Yeah, I agree. I feel like you probably don't even need to limit this with the validation rule. The mental model that I've had, and you can tell me if I'm under-informed, was that there's an equivalence between defer and stream and assembling independent queries. Some of those independent queries could be, like, extremely annoying, and that's the value of this feature.
I
But if you were to create those independent queries, they would just kind of, like, independently error at the top boundary. Another way to think about this would be: you know that you're splitting these into separate incremental payloads in the stream, and now it's the client's responsibility to decide how they want to do that error-bubbling behavior. So they know that a streamed list is not done until the request is completed, and if they get a stream payload that is errored with no data, right...
I
The
error
has
bubbled
all
the
way
to
the
top
and
cleared
out
the
data,
then
that
client
will
still
need
to
have
some
appropriate
behavior
for
for,
like
raiifying
the
final
list.
Right,
like
it's,
got
to
take
all
these
streamed
payloads
and
turn
it
back
into
the
final
set,
and
it
could
decide
I'm
okay
with
having
a
partial
list
or
it
could
decide.
Oh,
I
got
a
like
a
top
level
error.
I
My
sense
is
that,
and
probably
why
I
see
joe's
comment
above
he's
like
seems
fine.
It's
like
if
relay
were
to
get
any
top-level
error
anywhere.
It
would
just
throw
the
whole
thing
out
and
try
again
later
and
so
like
maybe
we're
overthinking
it,
but.
A
Yeah, I would definitely lean towards stream just, like, treating it the same way as defer; like, it's the client's responsibility. There are a lot of cases where people will want to stream where the list has a non-nullable inner element, and what they're doing by having a non-nullable inner element is providing a contract. They're saying this list will never be null; if they ever do get errors, like, that's a major issue in their implementation that they have to fix up. And I don't think we should restrict good implementations from being able to use stream with non-nullable list items just because it is undefined what the client is supposed to do when a bad implementation ends up in this state.
E
We also just straight away can't do that, because it would be a conflict. At the moment, taking a nullable field and making it non-nullable is a non-breaking change; but if that query were to have a stream directive on it, you can no longer safely change nullable fields to being non-nullable, because that could break previous queries that were previously valid.
I
Rob, I guess, kick us back to the top of your doc: what's the sort of, like, stated current behavior and proposal?
J
Yeah, the current behavior is just that the null bubbles up as high as the data field, for both defer and stream. I guess I was just concerned about a client that has generated types that say it can't ever have a null inside of this array, but then it does happen because of here, and the client doesn't, like, explicitly know to do something about it.
C
But only the client can handle that, and I like what Joseph said: it's essentially that each of these payloads... there's no boundary between these.
J
Yeah, my thinking was just that putting the validation on there would just kind of kick the can down the road a little bit, but Benji's point about how that validation would cause that side effect, that's definitely a problem. So you guys think that it works to treat, basically, defer and stream as null boundaries? And that, yeah, so that's what I already have, so then I don't think there would even need to be changes, but we'll make note of it in the spec.
I
I guess my one other thought is: do we want to include, like... is there any missing information here about how to handle that error? There are, like, two subtle differences that could happen in this incremental payload. One is that we could decide whether or not to include the data field at all, which I believe is what we do.
I
Someone
can
correct
me
if
I'm
wrong,
I
believe
that's
what
we
do
for
a
just
a
typical
response,
if
an
error
bubbles
all
the
way
to
the
top,
or
is
that
no
that's
the
difference
between
an
error
before
the
data
generation
begins
and
an
error
happening
during
execution.
I
Yeah,
okay,
so
saying
corrected.
This
is
this
is
the
appropriate
response
in
that
case,
for
the
data
field,
but
the
other
would
be
do
we
want
to
have
some
kind
of
information
about
what
to
do
with
the
error
like
if
the,
if
that
path
can
have
a
null
placed
there
or
not?
You
know
like.
Is
that
a
safe
thing
to
do?
I
Should
we
kind
of
leave
it
to
the
client
to
know
enough
about
the
schema
to
decide
whether
it
can
naively
merge
the
value
at
data
into
the
path
there
or
or
whether
we
want
to
have
something
that
that
you
know
indicates
that
this
is
like
that
this
new
error
needs
to
propagate
further
or
like
this
is
not
safe
to
merge.
You
know
what
I
mean.
I
So
I
feel,
like
the
the
naive
client
that
you
would
really
love
to
be
fully
fully
functional,
would
just
be
anytime.
You
get
an
incremental
response
in
the
stream
of
responses.
I
You
look
at
the
path
you
go
into
your
data
structure
and
you
take
the
value
at
data
and
you
stick
it
there
and
then
you
just
like
repeat
every
time
you
get
a
new
thing
accumulating
a
final
result
and
if
there's
ever
a
case
where
that
is
not
the
right
thing
to
do,
then
we
should
make
sure
that
we
include
explicitly
in
a
piece
of
information
that
makes
that
the
case
like
you
shouldn't,
need
side
side
knowledge
in
order
to
know
whether
that's
a
safe
operation
to
do.
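The naive client Lee describes could be sketched like this (an illustrative toy, not a real GraphQL client library): take each incremental payload's `path` and set its `data` at that location in the accumulated result.

```javascript
// Illustrative toy of the "naive client" merge: walk the accumulated result
// along the payload's path and set the payload's data at that location.
function applyIncrementalPayload(result, payload) {
  const path = payload.path.slice();
  const last = path.pop();
  let target = result;
  for (const key of path) target = target[key];
  target[last] = payload.data; // a null here may violate a non-null type,
  return result;               // which is exactly the concern raised above
}
```

If `payload.data` is null for a path whose type is non-nullable, this naive merge silently plants an invalid null, which is the failure mode the group is trying to make impossible without side knowledge of the schema.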
J
I
Right, not necessarily changing it to nullable, but this is what I'm saying, right? Like, for this example that you've got up on screen, the async-iterable field is returning you an array of non-nullable strings, but then the payload says, "hey, index one of that array is the value null," and a naive client would just do that. Like, it would say, "all right, my naive client doesn't know anything about the current schema," and, to Benji's point, like, it...
I
But
ideally
you
could
build
a
naive
client
that
doesn't
need
to
have
complete
schema
knowledge
to
do
the
right,
runtime,
behavior
and
if
it
just
takes
the
path
and
inserts
the
data
at
that
path,
what
you've
done
is
inserted
a
null
there
and
if
someone
downstream
had
like
human
read
the
documentation
and
read
that
this
is
a
list
of
non-nullable
strings
like
great.
I
never
need
to
worry
about
doing
null
checks.
Then
this
could
result
in
a
null
pointer
error.
So,
ideally,
you'd
have
some
bit
of
data
there.
I
Such
that
your
your
like
minimal,
client,
that's
capable
of
handling
deferred
stream
could
say.
Oh
I
actually.
I
cannot
naively
merge
this
path.
Maybe
maybe
it's
subtle
like
killing
the
data
field
out
of
the
payload,
to
indicate
that
null
is,
is
actually
like,
not
the
appropriate
value
to
provide
here,
because
there's
a
difference
between
there.
Actually
we've
loaded
the
data
and
it
was
fine
and
the
value
is
null
versus
hey.
We
tried
to
load
this
data
and
we
couldn't
do
that.
I
Actually
that
speaks
to
a
slightly
different,
subtle
piece
here,
which
is
for
the
top
level
response.
The
data
field
is
always
an
object,
because
the
data
field
always
corresponds
to
the
query
body,
which
is
always
an
object.
So
if
you
get
something
that
is
not
an
object,
you
know
it's
wrong,
but
yeah.
These.
C
Yeah, until now, omitting it means we never got to the point of execution. But, I mean, anyway, we have a special case for subsequent payloads, because in subsequent payloads, as they already said, we are allowing scalars as data. So we could make a new rule for data, for what it means if data is not there; or we could, I mean, if we are talking about not very well implemented clients, maybe make it more explicit, I don't know.
J
Omitting,
it
is
kind
of
nice
because
then
we
could
also
do
something
do
the
same
thing
for
defer
where
diverge
a
deferred.
Payload
should
always
be
an
object,
but
but
it'd
be
a
nice
parallel
that,
if
it
errored
out,
because
it
all
bubbles
all
the
way
up
that
it's
we
just
don't
include
it
for
both
defer
and
stream
yeah.
A
What, in the case of defer... oh, if there's nothing, it'll end up just being, if there's, like, nothing in the deferred response, it'll just be an empty... like, what do we do when the connection closes, or if the server wanted to say, "I can't actually deal with that"?
C
But you also send hasNext; you don't send... you send a hasNext: false, yeah. In this case you have the same.
A
Right, right, right. Basically, there's the error of "the data was wrong," in which case, in a normal response, we'll have a null data field, a data key; whereas "I didn't even get to executing the data, because the server, like, panicked or something underneath, and I just want to close this, give you a hasNext: false."
C
Not only that; it's also, with async iterables, you await on this stream, and maybe at some point the stream just says, "okay, I'm finished." But you already sent down a payload saying "wait for the next," and then the stream says, "okay, I'm actually finished," and at the moment we send down a payload without data that says, "okay, there's no more payloads." And in this case, am I correct, we would have a conflict; but we could combine that with the errors.
I
This goes beyond naive versus non-naive client; you're just talking about, like, non-normal response closing.
I
Yes. Like, what happens if you're the last response in the stream that has hasNext: true, and then the stream transport closes; whatever you're doing, HTTP long polling, whatever it is, that thing's just... and you're done; your response, your stream, is done. You're like, "what does that mean?" Obviously that's incorrect, but we should have something in the spec text that describes how that scenario should be interpreted, like... you know.
I
No, no, no: no normal operation should produce that result, but, like, what is the appropriate way to handle it? You should probably interpret it as if the final payload errored fatally. And the other is: what happens if you say, "hey, I actually do have next," and then, through normal operation, you realize, "oh, actually, I did not have anything next"?
D
In the case of the defer, that should not be the case: you've got something that you deferred, so of course there's going to be something to work on. In the case of the stream, you would just payload out an empty array.
A
Right, exactly; otherwise, exactly the error case. I like the fact that we're considering data being gone as being, like, the "oh, this error bubbled up" signal. I think that case exactly describes why that might be a little weird.
H
I think just one, another angle toward, you know, always wrapping the data in a stream payload with an array is that, if we want to go in the direction of maybe adding arguments to stream payloads, like, I don't know, two at a time, three at a time, whatever at a time, that sort of gives us a jump on that.
H
Yeah, I mean, so far we're just doing single payloads, but it would be the array, the list, I guess, whatever; but, you know, we have the...
C
This all introduces new problems, because at the moment, when we patch, you say on which item you patch it; but when you provide an array, where do you put these items? You would need that for each item.
H
A path, right; so you need, like, a start path and an end path. You know, right now we have a path with an index, and right now you'd have that as the last element of that path. And, I guess, you know, this is talking about the future, but I guess you'd need another field in the payload to say what the end path is, if you have more than one element. I mean, really, it's just the list size.
E
Yeah, I'm glad you brought this up, myself as well, because I think often in these sort of back-end things, say you were pulling, like, thousands of records from a database table, it's quite common to batch them, like, 100 or 50 at a time. And at the moment there's quite a lot of overhead, because you have to put the wrapping payload around each of those individual entries, whereas being able to send all 50, and then the next 50, and then the next 50, more efficiently would be nice.
I
Yeah, then you just said what I said. We should recognize that not all of these server-side streaming primitives operate the same. The fact that the async iterator operates on a unit-by-unit basis is a JavaScript and C# thing, but I know the Hack language, for example, does batches of arrays.
I
So,
like
you,
you
emulate
async
iterator
by
having
batches
of
arrays
of
size
one,
but
you
could
have
batches
of
arrays
of
any
size
variable
size,
and
that
gives
you
the
ability
to
do
chunked
streaming,
and
this
actually
has
pretty
real
ramifications
on
graphql
query
execution
performance,
because
if
your
back
end
is
streaming
you
stuff
like
three
or
four
units
at
a
time,
and
then
you
are
sort
of
artificially
just
by
the
way
that
the
payload
process
works
chopping.
Those
into
one
then,
and
those
each
have
like
a
depth
to
execute
right.
I
You
could
be
like
doing
all
of
the
depths
of
execution
for
the
first
one,
then
all
the
depth
of
execution
for
the
second
one
and
you've
turned
a
parallel
data,
fetching
problem
into
a
serial
data,
fetching
problem
and
that
could
be
a
pretty
serious
performance
penalty
and
that
feeds
into
what's
actually
in
front
of
us,
which
is
like
what
do
you
do
with
nulls?
That
would
actually
sidestep.
I
yeah
come
to
your
point
earlier,
like
then.
I
That
makes
a
lot
of
sense
like
if
you
define
that
a
stream
the
for
a
defer
payload
the
data
field,
is
always
an
object
and
for
a
stream
payload,
the
data
field
is
always
an
array.
Then
a
null
for
either
of
those
is
is
very
clearly
node
data,
rather
than
the
value
null
for
some
particular
path,
and
then
it
does
mean
that
we
need
to
differentiate
something
like
maybe
actually
like
to
the
point
about
having
a
start
path
or
an
end
path.
I
You go to the path, you take the data object, and you do a merge; you say, "this object is being merged at this path." And for stream you're not doing a merge right now, you're doing a set: you're saying, "for this particular path, this value is being set here." But actually, if it were an array, it would be a concat, I guess, unless you could stream out of order. Is that something we're supporting right now, Rob?
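A chunked stream payload of the kind being discussed could be applied roughly like this (a hypothetical shape, since the actual payload format is still being decided in this conversation): `path` points at the list itself, and `data` is an array of items to concatenate onto it, with a missing or null `data` signaling that concatenation is not safe.

```javascript
// Hypothetical shape for a batched stream payload: `path` addresses the list
// itself and `data` carries a chunk of items to append in order.
function applyStreamChunk(result, payload) {
  let list = result;
  for (const key of payload.path) list = list[key];
  if (payload.data == null) {
    // no data: the chunk errored, so concatenation is not a safe operation
    throw new Error('stream chunk failed at ' + payload.path.join('.'));
  }
  list.push(...payload.data); // concat the chunk onto the streamed list
  return result;
}
```

Note this append-only shape assumes an in-order stream; the out-of-order "parallel stream" mentioned below would need per-item indices instead.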
I
Yeah, I think that's right as default behavior. I know there are some servers which support both: one is a literal, linear stream, and the other one is, like, a parallel stream, where you say, "I've got ten of these things and the order actually doesn't really matter..."
I
"...I just want you to send them to me in the order in which they're ready." And that's probably the wrong default behavior; like, the vast majority of lists that you'll find in GraphQL probably are ordered, and that might not even be something we want to support on day one. But having a payload that is flexible enough to support that behavior, should we ever want to get there... like, for example, we could say the path is the same...
I
We just remove the last piece of the path, which shows you your index, and then that gives you a path to a list, and then the behavior is: whatever the data field is, you concatenate it at the path. And if that data is null, then you know that that concatenation is not a safe operation to do. But that precludes the ability to use that same payload mechanism for an out-of-order stream, if that's something that we might want to support ever in the future, which seems reasonable.
A
I
Anyhow,
it
seems
like
for
this
particular
house
how
to
handle
non-nullables
for
data.
This
actually
seems
really
straightforward.
If,
if
the
behavior
for
a
naive
client
is
to
take
what
is
expected
to
be
an
object
payload
at
data
and
merge
it
into
the
whatever
you
find
that
path,
then
merging
a
null
is,
should
give
you
an
indication
that
something
has
gone
clearly
wrong.
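That naive-client behavior might look like this sketch. The field names and the error handling are assumptions for illustration:

```python
def merge_deferred(result, path, data):
    """Walk to `path` in the accumulated result and merge the deferred
    object into it. A null `data` is treated as "this payload failed to
    generate": the naive client refuses the merge instead of writing null."""
    if data is None:
        raise ValueError("deferred payload failed; client state is suspect")
    target = result
    for key in path:
        target = target[key]
    target.update(data)   # object merge
    return result

result = {"user": {"id": "1"}}
merge_deferred(result, ["user"], {"bio": "hello"})
# result is now {"user": {"id": "1", "bio": "hello"}}
```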
I
And if we can repeat that behavior for stream in some way, then that's probably the right path for this specific one. But yeah, I think we've got to make sure... one thing is batched payloads for stream.
I
By the way, both of these could be things that we decide not to support on day one, batch streams and parallel streams, but having a payload mechanism that is resilient enough to support those in the future seems wise. That way we don't end up having to create yet more response payload forms. That will still allow us to stay lean and ship this with a minimal set of features, and then we can say: all right, batch streams and parallel streams bring their own set of quirks.
I
That we'd want to work through, but at least, you know, we kind of know the ramifications on the response form factor.
C
I think also, I mean, with the path, I just reflected a bit on that, because it still can point to a specific item. Because if it's an array, it's like an insert: insert at this position. And then you still have the ordering, so we don't actually need an end path. If we say, okay, we have a chunk of three items, insert them at position two, then you start at position two, insert them, and you have everything before that. Like, seven, ten minutes...
I
In terms of what you're capable of supporting overall in the proposal, I think that's fine. The spirit of keeping this as minimally scoped as possible, so that we can land on something that actually gets out to people, that's the right call. But making sure that whatever we do land would be viable... it would really be a bummer if we come back to this and go: all right, an immediate follow-up we want to do is... turns out...
I
This database yields its rows five at a time, and how do I get query efficiency there? Well, you can't, because our payload response form factor doesn't support that. That would be unfortunate. So giving us some flexibility to support that...
I
I think that would be good. And it's just sort of coincidentally nice that both this non-nullable field problem and that problem feel like two views of the same problem. So that's giving me a sense that just our response form here needs to be minorly iterated, to be slightly more generalized, and hopefully that solves a bunch of issues.
I
Stream... I don't know if there's consensus there; that's what I'm saying. I think maybe we want to open a discussion thread about: what's the appropriate incremental response form for a stream payload, such that you could imagine how it would be extended to batch, how it could be extended to parallel, and how it could be extended to managing bubbled-up errors?
I
But I think I like that. So thanks, Jacob, for the suggestion; it's probably the right one. But yeah... can I just...
H
When you use the word parallel, are you referring to potentially out of order? Is that what you're referring to?
I
Yeah. We use the same list abstraction to model sets and lists, and the majority of them are lists; their ordering is important. For sets, you just model them as a list where you say: yeah, if there's indexing here, you just ignore it, because it's a set. If you wanted to stream a list, everything we have here is exactly what you want.
I
What's the appropriate argument you put in the directive? Is it a different directive? There's a whole design space there to go make sure we get right. But I think if you had a set of things and you didn't care about the order and you wanted them streamed, you're just like: I don't care which one you send me first, just start sending them to me in whatever order they're ready. Then really...
I
What you want to do is, you know, maybe under the hood you get an array of IDs and you've got to go fetch them all from your database, and you tell your database: stream these ID rows to me as you have them available, and you don't worry about what particular sort order you give your database.
I
Similarly, you know, you've got to prepare their subtrees, so you go do all the field completion for each of those payloads, and then, rather than saying I've got to wait for everything to complete for the first one before I can deliver it ahead of the second one, you just send whichever one is ready first. That would be clearly bad behavior for a list; I think that was the conclusion from the previous discussion, and the majority of lists are, in fact, ordered lists.
I
That is exactly the right behavior to at least have as a default, and probably the right behavior to just have, period, for an initial change to the spec. But just acknowledging the fact that people model sets as lists, that might be something that they very reasonably want to see this behavior be able to expand to in the future.
I
Yeah, my intuition, like I said... I don't want to have to make a decision here on exactly the right response shape; we should follow up, so that we can write stuff out and look at it. But my intuition is that two small moves will fix this. One is Yaacov's suggestion, which is that the data field is an array, and then the second change is to take the index out of the path and make it its own field.
I
The behavior is always merge: for an object, that's object merge, and for an array, that's concatenate. And the index information tells you where we expect that list's length to have been when you started that concatenation. And I need to stare at it a bit and think about whether my idea actually makes sense, to make sure that it works for parallel as well.
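Those two small moves could be sketched roughly like this. This is an assumption about the shape, not the proposal's final form: `data` is an object for defer and a list for stream, the stream index lives in its own field, and the client always merges:

```python
def apply_patch(result, patch):
    """Always merge: object patches do an object merge, list patches
    concatenate. `index` (a hypothetical field) is where we expect the
    list's length to have been when this concatenation starts."""
    target = result
    for key in patch["path"]:
        target = target[key]
    data = patch["data"]
    if isinstance(data, list):
        if len(target) != patch["index"]:
            raise ValueError("out-of-order stream chunk")
        target.extend(data)    # array merge == concatenate
    else:
        target.update(data)    # object merge

result = {"friends": []}
apply_patch(result, {"path": ["friends"], "index": 0, "data": [{"n": "A"}]})
apply_patch(result, {"path": ["friends"], "index": 1, "data": [{"n": "B"}]})
```

The index check is what would later let an out-of-order (parallel) stream relax the in-order constraint without changing the payload shape.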
A
Index has value for smart clients. Like, if a client is merging this into a graph, the index does have value. It's basically: what are we overwriting, and where do we start that overwriting? Because you could have queries coming in at the same time, updating the exact same path. So there's value, even if naive clients would just concat.
I
Yeah, and then also just to get parity with things like error message paths: if your error message path is misaligned with your merged data, you can get into confusing states. But I see what you're saying: on day one, if we say these things definitely always come in the right order, then array concatenation just kind of gets you there and you really don't need anything extra. My other thought is that adding this...
I
...as a separate field means that now you have a super clear delineator between what is a stream payload versus what is a defer payload. I guess you could also look at whether the data field is an array type versus an object type, but this is a good case where you might not be able to do that: if it comes back as a null, then you're like...
I
Oh, I guess I have to follow this path and see if that path leads me to an array or an object. But having some definitive piece of information where you go, all right...
I
I know for sure that this was intended to be a stream result, not a deferred fragment: that seems like a very useful sanity check for a client to have, to know whether it's about to do bad behavior or not.
E
So I've got a couple of other thoughts. One is, I think that stream and defer themselves effectively imply nullability anyway. I wrote an example into chat a little while ago, 20 minutes ago: if you've got three non-nullable fields and you were to set one of them to be null in your response...
E
So though they're not necessarily nullable, they are optional, and I know we have issues with the difference between those two terms. But similarly with a list: if you had a non-nullable list of non-nullable elements, and you were to throw an error for one of those elements for some reason, it wouldn't be right to just not add that item to the list, because then you'd believe that list had a certain length that was different from the actual length. Like, if it said there are actually...
E
...four people, but one of them threw an error, and now the list has length three: that's potentially problematic as well. So I do wonder... I mean, effectively, we do need to raise the fact that there were errors. We can't be too naive; otherwise, client assumptions are going to be wrong.
I
That's why I like the described client behavior: if you get an incremental payload and the data says null, that means that that incremental payload failed to generate, and the most naive behavior is to say, I can't trust anything here, so whatever my client's state was up until this moment, I'm going to throw it out and retry, or something like that. That's basically what Relay does today, is my understanding, and that would be fine.
I
The more sophisticated behavior would be to do the crawl and say: all right, if I were to escalate this error, is there something further up above that I could slice, and end in a reasonable state? For deferred fragments...
I
This is a little bit easier, because you can kind of mental-model it as two parallel queries, where one query succeeded and the second query failed. For streamed lists, what you described is very correct: you definitely don't want to merge in a null, or concatenate a null onto an array, like that.
I
But, like you said, something really important here is the difference between nullable versus optional, which I think actually is very relevant here. Because if, you know, field b says it's a non-nullable Int and you don't query field b, your payload is going to have a and c in it, and your client is going to be aware that that's a completely valid state, despite the fact that b can never be null.
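In client terms, the distinction is between a key being absent and a key holding null; a minimal sketch:

```python
payload = {"a": 1, "c": 3}   # non-nullable `b` was not selected: absent, and valid

def state_of(payload, field):
    """Distinguish 'optional' (absent: not selected, or not delivered yet)
    from 'nullable' (present with the value null)."""
    if field not in payload:
        return "absent"    # valid even for a non-nullable field
    if payload[field] is None:
        return "null"      # never valid for a non-nullable field
    return "present"
```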
E
Yeah, I think so. I think it's one of these things where, if we were to have these fragment fields, like what Matt suggested previously... what was the name of your new meta field?
A
Oh, before that: the isFulfilled one, yeah.
E
Yeah, exactly. So that isFulfilled would be able to resolve this issue: it would still be non-nullable if that fragment was fulfilled. So there's definitely overlap there, I think.
I
Yeah, my hope is that clients do this with a different marker, rather than with the meta field: the difference between the value null versus it not existing in the fragment in the first place, which they'll need for something that's deferred. We'll have to have sort of a similar parallel there: there's always going to be some intermediate state where that piece of data is not there yet, and it would be incorrect to say that that data is currently null.
D
But another question: what if we see it as a nullability boundary, so you can return the null? What's the server's responsibility after that? So, if it's a null that would make a certain part of the response bubble up: if it wasn't a boundary, then is the server still allowed to respond with a non-null further thing that would have also been within that larger boundary?
D
Is it allowed to respond with some of them but not others? Because, you know what's going to happen on the client: it tracks it up, it decides this whole subset is null, and it did that extra tracking, so not the most naive client, but then it gets something else that's in the part it already threw away.
I
Like, it shouldn't necessarily be the last payload, because if a client wants to do some nuanced error-management behavior, like basically emulating the null-bubbling error-management behavior on the client, then it would be...
I
It would probably not be the right thing to kill all other ongoing work on the server. For defer this seems... certainly if there are child defers below something that failed; I don't know if that's even a possible scenario to end up in. Basically, the same constraints that you talked about in the previous part of this discussion apply: if you break one of those constraints, then you're doing the wrong thing. So for stream, if our constraint here, by default or at all, the constraint...
I
...is that you can't send a later item in the list before you send a previous item in the list, and a previous item in the list was not delivered because of an error, then that's it: that's the last one for that particular list. But if you had two things, like two streams, two lists being streamed at the same time: for one of them, you know, you shouldn't get the later one before the other, so maintain your constraint; the other one might be able to continue to send payloads. Exactly, yeah. I agree.
F
Yeah. Go ahead, Stephen... For the case of fields a, b, c, where field b is deferred and comes back later as null: then, you know, if you were to apply the bubbling-up behavior client-side, it's kind of like Schrödinger's null.
F
You know, until you see it, you don't know. And so then you're retrospectively bubbling up and removing data you already had. And I wonder if it's worth specifying "don't do that", and saying that this is an exception to bubbling up. Because defer kind of creates a boundary, and so when you get that first bit of data, if it, as a whole, follows the rules...
F
...then that's okay, the same as if you'd done it in two queries. Because otherwise, just adding defer in there kind of changes the behavior: let's say there is a null that you would have seen, but with that defer you just never actually get that final bit that tells you about the null. Then you don't bubble up; but if you do see it, then you do, and maybe you've already acted on the data. Does that make sense?
E
It feels a little bit as if they've effectively got a skip directive on them, but you don't know whether it was passed true or false, right? When you defer something, you may or may not get it, depending on whether an error is thrown or not.
G
I think the question is: should we specify it here in the spec at all, or leave it to the particular case and implementation? Because there might be absolutely different cases when all this happens. Sometimes the streamed or deferred thing is like a stock ticker: who cares if we skipped a few? Or a chat in a legal firm, where it's a whole different bunch of things. So basically, what I'm saying is: can the specific behavior be left to the implementers, so that we just don't specify it?
I
Behavior here is hard to define, because you're starting to factor in product requirements for how this API is going to get used, and we can't decide that on behalf of folks. I do think it's reasonable to have an opinion, like: this is what we think reasonable behavior should be in most cases. But rather than it being a requirement, it would be a suggestion, and pointing people towards the thing that they should be thinking about is more important than actually spec'ing it.
I
Well, this was certainly a fruitful discussion. I feel like we got through a bunch of sticky problems and have some interesting ideas. Rob, what else do you want from us at this point?
J
I guess what I'm mostly worried about is that changing the stream payload at this point is pretty big and might need a lot of additional input from a lot of people.
I
It might; I don't think that's crazy. I think you could keep them rolling in parallel, and especially if you get to a state where everyone has really high confidence that defer is in its final form, then we should ship it, and if stream still has some open questions, we can do that.
I
I don't know that you necessarily need to decide that now. I think it's actually been quite helpful to be thinking about them simultaneously, because a lot of the runtime mechanisms are similar, and in the cases where they're different, it's quite interesting where they're different, and it has design ramifications for both of them. But yeah, I agree with you.
I
Let's make sure that we don't block defer on the ongoing work with stream, but also that we don't limit ourselves from doing the right thing for stream just because we've gotten as far as we have so far.
J
Yeah, definitely, definitely. I bring it up a lot because it does seem like a lot more people are looking to use defer than stream, and stream seems like it has a lot more...
J
There's been a lot more feedback on that. But this case with the non-nulls: it also seems like they should be handled together. So, the next steps for this...
I
Yeah, and I think the design bikeshedding is probably going to be the easiest part of that. Actually, the tough part is what you just said: what are the ramifications of this on all the other decisions we've made so far, and on the implementation? Is all this reasonable to roll out, or is there some other pretty significant thing that we're not seeing as to why this could be super difficult to do?
J
Yeah, I think the biggest ramification is that it would unblock this non-null thing, potentially, by allowing us to return data: null unambiguously, right? Yeah.
I
Are there other things in the implementation built so far, or other parts of the spec surface area, that all of a sudden get way more confusing? We're looking at this through a really narrow lens, through this specific problem, but you have a lot more context on the whole surface area of where these things touch.
J
Yeah, yeah. I have to think through that, because I hadn't thought of that possibility. Yeah, totally, yeah.
H
Sorry, I thought I heard some consensus before, but I'm not sure if there really was; it sounds like I'm being told that there wasn't. And I guess we could continue discussion on that same issue that we're looking at: stream as an error boundary.
I
My sense is, I don't know, I'm hearing mostly agreement about the remaining open problems and what we want to do about them, and not a lot of back and forth with different proposals. So I'm actually feeling really optimistic that we'll burn through this stuff pretty quickly as soon as we get a shared understanding of what we're talking about; we just need to write it down.
I
Yeah, yeah, that's what it sounds like to me as well. So, great, awesome. Well, thank you, Rob; that was certainly more than 20 minutes, but that's okay, because we had the time, and that was a super productive conversation.
I
Who put the "sauce" agenda item up? I don't have a name.
G
So, basically, I see it as a really minor thing and quite a logical thing to do. Essentially, I'm not proposing to add something to the spec, but rather to remove some unreasonable restriction which seems out of place.
G
Recursive fragments, or fragments that form a loop. So I bring up the benefits: I show that even the GraphiQL introspection query contains a rather strange, multi-step, ladder-like fragment to retrieve and unwrap the type of a field, and this can be essentially...
G
I'm new to Zoom, so I might not be very efficient, but basically this is the type fragment that GraphiQL uses to retrieve the information. It makes an assumption about the maximum complexity of the wrapped type, and it doesn't look pretty anyway. And instead, this is the TypeRef "Rec", "Rec" standing for recursive.
G
And another case, for example... the cases for this are all around the place. We use recursive fragments to unfold chains of data references. Another one is, for example... oops, sorry, okay... probably those who work with Microsoft Outlook know there is a page where you can look somebody up and see the chain of responsibility: the managers of an employee, going up to the CEO. And it can be like 10 people long in a corporation like Microsoft. But now the question...
G
Basically, the spec says that fragments should never form loops because they result in infinite recursion. But that's not quite true: as you see, a fragment can be recursive if the self-reference is inside the selection set of some of the fields. You can still hit an infinite loop if the data itself is looped, and that's potential trouble. But what I argue is that, in any case, the server implementation, the framework that implements GraphQL, should have multiple circuit breakers against abuse or errors in the queries.
G
And now... I understand that concern, but what about the client who says: I don't care, I know it's limited? Like, if I want to show the chain of managers of my managers... And basically, the client and server are usually from the same kind of team; the client app is downloaded from the server, right? So basically, I know that this is a limited chain of responsibility, probably no more than 20, and I don't want to build this multi-step fragment.
G
In most cases, this would be a limited chain, right?
I
Yeah. This is not the first time that we've talked about this, although I think it's been quite some time since the last. I was trying to find the references to the previous discussions...
I
I couldn't find them, so it's probably from the early days of the working group, but it was very intentional that this limitation was added. One of my favorite anecdotes is that one of the members of the original GraphQL team was always upset that we called it GraphQL and felt that it should have been called TreeQL, because we really talk about modeling and returning trees, not graphs. And we talked about this recursion problem, really motivated by exactly the example that you put up here for introspection, of, you know...
I
Introspection is, in fact, a graph, and you really want to be able to query it as a graph, but we have limitations in our response form: the response is always a tree. So I think three major concerns have limited us from addressing this in the past. One is two variants of a security problem, the first of which is just infinite recursion, which, to your point, there are ways to put limitations on, but that limitation would not be a pure removal of something, right?
I
We would need to specify what the behavior is when something is an infinite recursion: how should it halt, and when? The second one is an overlap between a security issue and a data quality issue, which is entering the same value multiple times and ending up with multiple iterations of it in the payload. There are other examples where this happens, but recursion is a way to, you know, generate that. So you can imagine something is cyclic...
I
Ideally, you would like to identify that as cyclic right away and immediately halt at the point that you find that it's cyclic. Some servers do not have the ability to do that, for various reasons, and they may loop through, you know, a dozen times before they finally decide: hey, I've been going pretty deep, something's probably going wrong here, I'm going to stop. And I don't remember what my third one was.
I
Oh, the third one was evaluating what the cost of not having this is, and whether it's high enough that it warrants addressing all these kinds of easy-to-get-wrong security and data quality ramifications. And that one, in the past (not to say that past decisions always have to hold in the future), but that decision in the past was no, it is not, because it is always...
I
It is always easy to use a combination of a fragment and a significantly nested type to model exactly the behavior that you wanted to have gotten from that recursive fragment. So, for example, the example that you put up here for the recursive return types of a field: in practice...
I
It's a little bit different, in that there's a fragment that describes all the fields that you want to get there, and then there's a separate fragment that describes that recursively, and it literally just looks like ofType, spread of fragment; ofType, spread of fragment; ofType, spread of fragment. And you can just read how many levels deep that is: if it's six or seven or eight, you can just count them. And that is essentially the de-sugared version of what we would do if we had fragment recursion.
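That manual de-sugaring can be generated mechanically; a sketch, where the selected fields are a trimmed-down assumption in the spirit of GraphiQL's TypeRef fragment:

```python
def desugar(depth):
    """Unroll the would-be recursive fragment to a fixed depth,
    producing the 'ofType { ... ofType { ... } }' ladder by hand."""
    inner = "name kind"
    for _ in range(depth):
        inner = f"name kind ofType {{ {inner} }}"
    return f"fragment TypeRef on __Type {{ {inner} }}"

print(desugar(2))
# fragment TypeRef on __Type { name kind ofType { name kind ofType { name kind } } }
```

Counting the `ofType` occurrences in the result is exactly the "just count them" reading of depth mentioned above.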
I
So, you know, I think the cost of screwing this up is really high; that's part of the reason why we haven't addressed this in the past. You can imagine a poorly built client failing to handle these security issues and creating a DDoS vector for themselves, and so we'd have to be very, very careful to make sure that certainly our reference implementations didn't suffer such a fate, but also that we were very crisp about what it would mean to have a person fragment. I think, the last time we talked about this...
I
If I'm remembering correctly, I think it was Joseph and I that were investigating this. The path that we were going down, before we ultimately decided to cut it loose and not do it, was: if a fragment was recursive, we would have a validation rule that asserted that it had a directive on it that told you the maximum depth. So you had to specify a depth. And we started looking at this, and looking at all the validation code that we were writing, and we were like...
G
Okay, thanks. I wish you had added this to my original post.
G
Yeah, but the thing is, what really strikes me is that the same arguments regarding recursion and its dangers apply to any programming language or programming system; however, all of them are okay with it. Everybody probably knows the dangers of recursion, right? If you are really stupid and directly call yourself... I've actually seen such a case:
G
Try, catch, handle the error; and inside the catch, the error-handling code, you know, recursion inside the catch. It was a real case in a real application, and they were wondering why the services came up and died completely. But what I'm saying is that, while these arguments definitely have some sense, why are we so different from other languages? All of these things apply equally to any other system.
E
What other APIs support it?
I
Like C#, yeah. This is actually really important for static analysis; it's essentially the halting problem, right? You really want to be able to be given a query (and there are whole tooling chains out there that do this), and part of the thing that I think we should hold close is to make sure that we're enabling those tooling chains that try to look at a query and give a prediction of that query's complexity: roughly how many units of data might it consume, min and max?
I
You know, can it become a DDoS vector? And that's already a relatively tricky problem, but a manageable one. But as soon as you start adding in programming-language-like features, you make it considerably harder, because you run into the halting problem, and you essentially just have to run the query to know how long it's going to take, which is probably not where we want to be from a query-safety point of view.
A
Yeah, I mean, there is past precedent of... like Bazel, for instance: Google's build configuration language is basically a subset of Python that is guaranteed to halt. And they've had... it's very strange what has to be done, and it's hard, right, from a language design perspective.
A
But we do... like, currently GraphQL does, and part of that is that GraphQL's goal is different from a generic programming language, right? The goal of GraphQL is not to enable arbitrary program creation.
I
...going to be limitations to static analysis, and I don't know that it's fair to say that we do it perfectly. There are things that we can do statically that we know we have complete knowledge of, because of the nature of the language. There are others that we don't: query analysis, or rather complexity analysis, is a good example. There are just inherent limitations, because you just don't know; under the hood there is...
I
...you know, Turing-complete, arbitrary execution going on. But I think, rather than viewing it as a binary of things we want to preserve or not preserve, we should view these as trade-offs. And for a data query language, especially one that's publicly exposed: the nice thing about a programming language is that you probably are vetting all the code that's running; you hire or pay the people, or you review the code, or there's some mechanism to control what code gets executed. Often, for GraphQL, or any API surface...
I
...that's not always the case. You're going to take arbitrary input, and you need to be able to do some analysis to know that it is safe to execute, and that's an important thing. But that's not to say that there are hard lines about what we can and can't do; it's just a consideration that we need to weigh against the other things that we want to do.
I
We should just view this as a de-sugaring step, right? You should be able to take a query with a recursive fragment and convert it into one that has no recursion, and use the max depth to do that. But then, as we got to that, we looked at it and said: how frequently do we expect this to be encountered, how painful is it to manually do that de-sugaring yourself, and how complicated is it to implement this de-sugaring? And, net, we realized that this...
I
...this wasn't the most important thing for us to do at that moment in time. But our conclusion wasn't that this was not a useful problem to solve. We...
I
You know, we saw it as an important problem to solve, and it may be that new things have come to light in the many years since we last talked about it that make this a more useful thing to do. I think, if I were going to give some feedback and some direction for next steps in doing this, it would be fleshing out, rather than this being purely a removal...
I
...what the other things are that we would need to add to the spec to ensure the same level of safety that we have today. And then the second is a more real motivation of the problem. I think you've done a good job of highlighting the abstract cases and the specific case for introspection, but generally, for a typical developer, how frequently do they encounter this? You know, beyond the example use cases of an organizational chart: do people write organizational-chart APIs all the time and we're missing something?
I
Is this a common request from the community? Or is this something that people come across every once in a while and are kind of grumpy that they have to do something that feels like it could be written more reasonably if they had recursive fragments? Or is this a regular pain point in API design? That makes sense.
G
Yeah, but I want to add one simple note. Again, I'm a server-side developer, and I feel like the spec in general speaks too much about server-side execution rather than leaving the server implementation to take care of things. I feel like the goal should be expressing, in the most simple and concise way, what the client wants to be delivered.
G
Not how. And I think allowing recursion, and the fewer restrictions the better. How to do this, and what kind of trouble the server will run into, is up to the server, like the N+1 problem: you know, it's hidden inside and there's nothing we can do about it; it's about an efficient server implementation taking care of this, right?
G
So I feel like the spec, at least, should stay kind of away from the troubles that a server can potentially have. Yes, there might be loops, but we will take care of that; as a server developer, I can say so. The benefit is fewer restrictions and being more like other languages. By the way, an example of a query language that allows loops is SQL: in SQL you can have common table expressions which reference themselves.
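That SQL capability can be shown concretely. Below is a small runnable sketch using Python's built-in sqlite3; the `employees` table and its rows are invented for illustration. The common table expression references itself to walk a reporting chain, which is the same shape of problem as the organizational-chart example in the recursive-fragment discussion.

```python
import sqlite3

# Build a tiny in-memory org chart (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, manager_id INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [(1, "ceo", None), (2, "vp", 1), (3, "engineer", 2)],
)

# The CTE 'chain' references itself: each recursive step joins one more
# level of the reporting chain, analogous to expanding a recursive fragment.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, depth) AS (
        SELECT id, name, 0 FROM employees WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, c.depth + 1
        FROM employees e JOIN chain c ON e.manager_id = c.id
    )
    SELECT name, depth FROM chain ORDER BY depth
""").fetchall()
print(rows)  # [('ceo', 0), ('vp', 1), ('engineer', 2)]
```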
I
Yeah, to be clear, I'm not saying that this is not a thing we should do. I just want to provide the appropriate context to make sure that, as we move forward, we do so in a way that factors in past and current concerns. And your framing of the intent of the spec, I think that's super fair and very aligned with how this group has thought about it.
I
Thus far, the spec needs to describe to a client what to expect as a result, given their query, and sometimes they need a mental model for what the server is going to do. But there's actually a line pretty high up in the spec that says that, as long as the server has equivalent behavior, it's happy for servers to implement these in whatever form they do, as long as it produces a result that would be visibly equivalent to the steps described by the spec.
I
For the case of recursion, you know, you'd want some mechanism to describe: am I going to get two levels deep, ten levels deep? As a client, what are the boundaries of the things I should be able to expect? And if you were to implement the same thing in, you know, multiple services, what is the allowed variance and what is the constraint? You want to be able to specify those things so that clients know what to expect and how to interpret their payloads.
I
But beyond that, I completely agree with you. It's really important to allow a lot of flexibility to the server; that's an important part of how servers evolve.
I
We have ten minutes left. Do we want to use them to quickly go through the open discussions that you have, Alex?
I
Oh yeah, sure, we can do that, just to give people a sense of what you've opened up since last time.
K
Yep, yeah, here, let me share my screen again.
K
Cool, yeah. So, like I said, it's split into three things. Let's see: there's four options for list syntax; there's not a ton to say here. I've linked previous discussions for each of these. I'm still not sure what we do to, you know, move forward with this discussion, really, but I'm looking forward to hearing what more people have to say. And then on the null propagation side.
K
It was the last thing that he introduced in the other thread: where nulls propagate from exclamation points to question marks; there's non-destructive exclamation points, where, you know, no change is made to the data; and then there's the status quo, where it's treated as non-nullable and it propagates with the current non-nullable rules. I've added examples for each of these. Alex, your...
E
K
Yes, yeah. So basically, I think, in our discussions, people were attributing a pretty wide variety of behavior to the question mark, you know, in terms of what it does.
K
I think that marking things non-nullable and error catching are two pretty distinct things, and so I'm hoping we can talk about them separately. Then, if we decide that we want the behavior that we can mark things nullable, and we want the behavior that errors can be caught, we can discuss whether the question mark should do both of those, or whether those should be two different pieces of syntax, or something like that.
E
Yeah, totally. I was just thinking that one way of presenting one of these combinations, right, is that the exclamation point makes something non-nullable and the question mark does the opposite, and that is the only change. It's a very straightforward thing to think about, and it feels like that should be presented as a cohesive option where they both work together. In that way, it feels like that is option one, except option one doesn't actually specify the behavior of the question mark. Do you have?
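A sketch of the combination being described (client-controlled nullability operators are a proposal under discussion, not current GraphQL, and the field names are invented): the `!` asserts non-nullability on a field the schema declares nullable, and the `?` does the opposite, with no other behavior attached.

```graphql
query {
  user(id: 4) {
    name!        # schema says String; the client treats it as String! here
    nickname?    # schema says String!; the client treats it as String here
  }
}
```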
K
No, I don't think I've specifically mentioned the question mark except for in this option, but I can add something. Yeah, I'll figure something out.
E
I know we effectively get combinatorics here: here are the behaviors for the exclamation point, here are all the behaviors for the question mark. But I do think that the one where it just changes the nullability and does nothing else, and just pretends that that was what it always was in the schema, is a cohesive option that should be analyzed.
K
Sure, yeah, I'll find a place to introduce that one for sure. And then, yeah, so I have those three here, and then, second... Zoom is covering up my tabs; I can't get to my tabs. There we go. And then the third discussion is error handling. The options are: we have no error handling; errors caused by client-controlled nullability are treated as distinct from all existing errors; and query syntax is introduced to catch errors, which is sort of what people have been talking about as an option for question marks so far. I used different syntax to try not to confuse things, although, I don't know, maybe that confuses things more. We can...
K
We can talk about it. But each of these also has a bunch of sub-options that we would then discuss if one of these options is chosen, and a lot of those are sort of implementation details, you know: how do we achieve the thing that we're trying to achieve?
I
Matt,
maybe
you
can
help
us
poke
joe
to
get
some
attention
on
these
since
a
lot
of
his
bold
feedback
led
to
the
creation
of
these
asperger
grounds,
to
make
sure
that
we're
iterating
to
the
right
spot.
K
Yeah, I can do that. Oh yeah, also, Matt, there was an error handling option that I think you introduced at one of these discussions, and I don't remember if you still wanted it to happen, but where a field marked with, like, a catch syntax could be either the piece of data or an error.
A
I think that, as a solution, would be very nifty and clean from a design perspective if we were doing a clean-room solution, but it is also probably very far out there from how it works right now. So I'd, I'd be a hard...
K
Yeah, okay, that's the impression I got of what was happening. Cool. And then Roman, I think, had a suggestion; he left me some early feedback, so I've got to edit some stuff. But that's what's here so far. I want to try to direct discussion away from the main spec thread and into these discussion threads; I think it'll be easier to follow and easier for people to digest. They can, you know, write huge walls of text in favor of their favorite option without, you know, clogging up the discussion for everything, so they should feel free to do that.
I
Yeah, thank you for getting these open. I know only a couple of people have weighed in so far, but it's already way easier to follow along with what's going on, so thanks for doing that. And this poll thing is cool; I had no idea that was something you could do. That's super nice, and it's helping us spot where there's already firm consensus, I think, like list syntax is getting pretty dang close to consensus there, versus where there's active discussion.
K
I think it's definitely helpful. Something I want to point out, though, in case people feel uncomfy about this: you can vote multiple times in these polls, and you can vote for multiple options. So if people are, you know, worried about voting security in our GraphQL polls, that's fine, we can talk about that. But yeah, I mentioned that these are, you know, non-binding, just to get a sense of sentiment.
L
Awesome. Can I just, sorry, can I just ask about the @catch directive? It piqued my interest, seeing as I've dealt with errors as unions and this came up previously, and this seems like an interesting way to be able to add that to an existing schema.
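For context, the errors-as-unions pattern being referred to usually looks something like the following sketch (all type names here are invented); the catch syntax under discussion would let a query opt into a similar data-or-error shape without modeling it in the schema:

```graphql
type User {
  id: ID!
  name: String!
}

type NotFoundError {
  message: String!
}

union UserResult = User | NotFoundError

type Query {
  user(id: ID!): UserResult!
}

# Clients then select on the union to handle each outcome explicitly:
query {
  user(id: 4) {
    ... on User { name }
    ... on NotFoundError { message }
  }
}
```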
L
You were, and I'm asking you, Matt: you were saying that it was not something you would pursue in the context of this nullability discussion, right? I would love to... okay, yeah. Was there anything written up previously, or was this discussed in the working group? I'll need to dig it up then.
K
I
D
I
K
I
All right, folks, that's time. I love when we have a small number of agenda items; we get to go deep into each of them, so really useful discussion. Thanks, everyone, for your participation, and I'll see you all next month. Thanks, everyone, goodbye.