From YouTube: GraphQL Working Group - January 7, 2021
C: Let's just drop in and give a quick update before the TSC meeting. Sounds good.
E: The list is also pretty small today, which I'm not that surprised by, it being early January, the first of the year. So if we do have something that we need to take a vote on from the TSC, we might need to run that virtually.
C: Yeah, I think we've got to run it virtually anyhow, because that way we've got a nice easy paper trail for it.
C: So, I guess before we get the meeting started, I should give a quick overview. Basically, I was talking to Scott Nicholas here at the Linux Foundation, the attorney who helped get everything set up here for the spec membership agreement, and basically we are on the v4 version of the JDF documents.
C: That's the specification membership agreement that everybody has to sign, and as it turns out, the v4 version of the documents requires a vote and actually a signed document from all contributors for every release, which is something we've been trying to avoid. The idea was, you know, we signed it once and then everything's good to go from there. On the v5 documents, which were developed after GraphQL was set up and after JDF was integrated into the Linux Foundation...
C: ...they don't have that. You basically sign the membership agreement once and then that's good for all future releases. The bad news is that the documents change. The good news is that there's an upgrade provision built into the existing documents, so the TSC just basically needs to vote, and we'll do that over email. Generally, for other projects, it's easier to run these types of votes over email, because then you can track who still needs to vote and so forth.
C: If we don't have critical mass, it's probably better if I just send out an email anyhow, so I can also drop that into the meeting minutes here, Benji. So I'll open up a PR and basically summarize what I just said.
E: All right, the tiny list looks pretty stable, so we can get started. I have merged some last-minute PRs, so if you've got the agenda up, give it a reload so you've got the latest one. And welcome everybody to the first working group meeting of 2021.
E: Hopefully 2021 is an uphill climb rather than a downhill descent into madness, and this group remains a nice thing to do every month. It's always good to talk shop with all of you and make progress on things, so I hope it can continue to be that for the rest of the year.
E: As always, we start off with a reminder that by participating here we agree to the membership agreement, participation guidelines, and code of conduct. There are links for all of those in the agenda.
E: In case you want a reminder of what those are. And then let's just go through everybody really quickly and put names to faces. I think everybody here is a familiar face, but it's always good to have a good video record of who's here. I have come up with a much simpler sort in the agenda list for this year, which is: my name stays on top, and then all other names below that I just sort by last name alphabetically.
I: James? Yep, hey everybody, James Baxley, director of engineering at Apollo.
G: Mike Cohen, I work for Indeed. We're doing lots of things with GraphQL, and I'm interested in how the spec is progressing and how I can help.
B: Hi everyone, I'm Benji. I maintain the Graphile suite of tools and I'm also interested in various tasks on the GraphQL spec itself.
J: Okay, I'm after Mark. I'm Matt, I'm at Facebook working on GraphQL on the client side.
C: This is Brian, I think I'm next. I'm with the Linux Foundation. I help support both the GraphQL Foundation as well as the GraphQL project, and I'm here to talk a little bit about some changes to the GraphQL specification membership agreement.
E: I think I have some people in here twice, which is probably my fault when I did the sort, so sorry for that; I'll clean that up. Thanks everybody, great to see all of your faces in the new year.
H: A PR with my entry, yeah. Hi, I'm Andi, working on graphql-java and also working for Atlassian. Hey everybody.
E: Sorry to have skipped over you, Andi; glad you're here. It looks like we have our standard note champion, Benji, already organizing the notes. Mark had mentioned that he would be able to help take notes; he's not here yet. In the case that Benji's engaged in discussion, is anybody willing to volunteer to be the backup?
E: We are going to review the previous meeting's action items, then look at advancing "no introspection at root of subscription operation", which is a mouthful, but Benji will explain that, I'm sure. Then we'll discuss the schema coordinates spec, the default value coercion update, one advancement in the query ambiguity discussion, and then updates on the defer and stream proposal.
E: Completely warranted, yes. I see that I'm merging your pull request, Matt, that added that. Thank you for that.
E: I see we had a couple of additions, so anybody who's here who didn't get a chance to introduce themselves, feel free to do so while I'm merging these extra paragraphs.
E: Oh, I see you've got a link there at the bottom, yeah, considering...

E: Confusing. Oh, it looks like, yeah, this was just a mistake in tag editing back when we were reviewing this.
E: But since it is merged... actually, you know what, I'm going to double-check this after the meeting, so I'm going to leave this one open on me just to double-check that nothing else weird happened. I'm like 95% confident that this can be closed, but let me review it afterwards.
B: Absolutely, sounds reasonable. And then we're up to November. There was a task to give feedback on the schema coordinates suggested GraphQL spec appendix, which was on Mark. I'm not sure if Mark's here yet; doesn't seem to be.
E: He's not here yet, but he did mention that this seems ready for review.
E: Yeah, and there was really good discussion about this in the last two working groups, and I suspect Mark will show up a little bit later because he does have something on the agenda where we'll continue to discuss. So I think this is safe to close as well.
B: Do you want me to close that, or... go for it. And then the other one is to convene a meeting of the GraphQL Input Unions working group. Hey everyone, there's going to be a meeting of the GraphQL Input Unions working group on the 21st of January at 7pm UTC; details of it are in Slack. You can also find out more about it in issue 565 on the working group repo.
B: It's also linked on the agenda for this meeting as well. I believe that's done. Excellent.
B: Lovely, okay, so that's just the one that you'll be looking at again. Does anyone else have any other action items that they feel can be closed at this point?
E: Looks like there's a couple at the bottom here that are all assigned to me. I'm going to make a note to resolve those today after the meeting, so those are definitely still open; they're just about GitHub infrastructure.
F: Figure out how to dump the list. I decided not to do it over the holidays, because I didn't want to overload people's inboxes right after the holidays. I will send it to everybody, and we already discussed a format, so it's like a soft ping asking: do you want to contribute or not?
E: I'm trying to remember the context of this one; I'm pulling it up in the notes. So this was about managing the number of committers.
F: Yeah, because we have a bunch of people, and there are even people who never contributed anything, mostly from Facebook, and even some non-Facebook contributors whose last contributions were years ago. So the idea is a soft notice.
F: Asking: do you want to still contribute or not, and please respond within one month if you're interested in continuing to contribute. Basically, if you don't respond in one month, or if you say "I'm not interested", we just remove those people.
E: That makes sense. I'm thinking, in retrospect on this one, that if someone...
E: ...is not showing up to contribute, then it seems that they might also not respond to messages, and so we might speed this up by just proactively culling that list. We can always add people back to the list if they decide they want to come back: if someone who's a previous contributor who had commit rights shows up later with a pull request and wants commit rights, we can just give them back those rights. So I'm totally comfortable doing this without the notice.
B: Are we happy? Does Brian want to talk at this point? Brian added his note quite late; I think it's made its way to the top of the agenda.
C: Yeah, I dropped in a summary of what I said at the very beginning here, but just for anybody who may have missed it: one of the things which we found when we were planning the spec release at the tail end of last year was that the current version of the documents we're using for the GraphQL specification membership required a substantial amount of overhead, because we need to get signatures every time we want to cut a release from everybody who contributed.
C: That would be pretty unworkable, and it turns out that one of the things that happened when JDF was absorbed into the Linux Foundation, or became part of the Linux Foundation family, was that we changed the documents; we revved them up to v5, and that includes removing the requirement where everybody needs to sign a document every time you do a release. The process of upgrading from v4, which is what GraphQL is currently on, to v5 is fairly straightforward: it's just a vote by the TSC.
C: Once I get those documents, which I'm expecting hopefully today or tomorrow, I'll send them out to the TSC for review so they can vote, and we can remove that requirement of chasing people down for signatures every time there's a release. So it's kind of unilaterally a good thing; it's just something we need to get done. And that's my update.
B: Thanks Brian. I'm sure everyone will agree to that, but out of interest, is that a majority vote, or does everyone on the TSC need to agree?
C: Generally, for something like this, just as a matter of practice, I do try and chase everybody down and get a vote from everybody. Not because it's required, just because it's good to make sure that everybody has read through it and feels that they understand it well enough to vote on it.
E: I suspect the Slack TSC channel and email simultaneously is probably the best way to cast a wide net.
B: There was a mixture of opinions, but we've decided that the simplest route forward right now is to disallow it, since it already doesn't work in the reference implementation. Then we could open up the spec in future to allow additional fields, or define what it would mean for there to be an introspection field at the root of the subscription.
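As a rough illustration of the rule being discussed (the operation and field names here are invented, not taken from the meeting), a minimal sketch might look like this:

```graphql
# A subscription operation must have exactly one root field, and under
# this proposal that root field may not be an introspection field.

subscription Valid {
  newMessage {      # OK: a single regular root field
    body
  }
}

subscription Invalid {
  __typename        # Disallowed: introspection field at subscription root
}
```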
B: So to progress this, I've edited the spec pull request, which I think is pretty simple: seven lines added, so a very small edit. And I have written a pull request to graphql-js that implements the change.
B: In doing so, I think I discovered that the graphql.js validation of this previous requirement may not work quite right when it comes to fragments, and I think that's something someone else has already noted as well. So I actually ended up having to make quite significant changes to the validation structure within graphql-js.
E: Definitely, especially since the spec change itself is quite terse.
E: You definitely have the PR open; looking at the js one, there's definitely a lot of work happening here. I guess there are just lots of arguments in these, and there's some recursion happening, so it's definitely not trivial. It deserves a thorough review, yes, but it definitely looks comprehensive; looks like you've got a bunch of tests. I feel very comfortable moving this to stage two.
E: Anybody agree or disagree with that?
F: I'm interested in that change, but I totally agree that first we need to disallow any field; we need to make the spec sound, and then discuss allowing __typename at the root.
E: Sounds right to me. Great, we'll move this up to stage two, and the next actions on this are: one for Ivan to give that more thorough review and see if there's any structural change needed with that pull request, and then also an action on me to do an editorial review of the spec change.
E: Let's move on. I think Mark is still not here, so maybe we'll skip over the schema coordinates spec in the chance that he arrives late. And with that, Benji, you've got the one after that, so we'll just let you keep on going and talk about default value coercions.
B: It seems that if you have these... sorry, I'm not sure if you can see my webcam feed, but if you have the various queries that I've outlined in spec pull request 793, which I'll send a link to in the chat, all of those look like they should return the same result. But that doesn't actually seem to be the case, and that is basically because the default values on arguments and input object fields are treated as, I guess, pre-baked.
B: So we check that the objects can coerce to the targeted types, but if we've got something like a default value on one of the fields, and we don't specify that field in the parent default object, then that field might be omitted, in which case it's going to be seen as null, which might break a non-null requirement, for example. So there are cases where this goes wrong, but also, more broadly, I would just say that it doesn't meet the expectations of people who see it.
B: If you see a default that is equal to the empty object, you would assume that passing the empty object to that field would give you the exact same behavior, and that is not the case currently. It turns out that this is actually semi-complicated to solve, due to the way that GraphQL works and due to where the defaults are actually implemented in the spec. At the moment, we just use the default object directly; we don't do enough checking on it.
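A sketch of the asymmetry being described (the type and field names are illustrative, not the actual examples from the pull request):

```graphql
input ExampleInput {
  number: Int! = 3
}

type Query {
  # The schema-level default for the argument is the empty object:
  example(input: ExampleInput = {}): Int
}

# Intuitively these two queries should behave identically:
#   { example }              # argument omitted, so the default {} applies
#   { example(input: {}) }   # {} passed explicitly by the client
# but only the explicitly passed {} goes through input coercion, which
# fills in the nested default (number: 3); the schema default is used
# as-is, so `number` can end up null and violate the Int! requirement.
```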
B: But the main issue that I have at the moment is that there is a situation where you could define default values in such a way that an infinite loop could occur, and I do not know how best to write into GraphQL spec language that that shouldn't be allowed. So at the moment I've just said "do not allow an infinite loop", but we really need to fill that out with actual spec language, and that is something I'm struggling with.
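A minimal sketch (with invented type names) of the kind of cycle that would have to be ruled out:

```graphql
# Two input types whose defaults refer to each other. Fully expanding
# either default never terminates: A's default needs B's defaults,
# which need A's defaults, and so on.
input A {
  b: B = {}
}

input B {
  a: A = {}
}
```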
B: The next thing to do would be to actually build an initial implementation of this for the reference implementation, assuming we decide that this is something we want to fix. Alternatively, we could try patching it by just tweaking the actual validation that we do on the default object, say this is an artifact of how GraphQL was defined, and put up with the fact that passing an empty object and setting the default to be an empty object are different.
B: So the way that I see it is that when we build the schema, we can at that point populate the default values and actually treat them as if they were provided at run time, but have a value that we then use; memoize it, effectively. But I think we definitely need to have effectively two values: the default value that the developer specified, and then the digested default value that applies all of the defaults.
B: And I think that is the best route; that is the route that is least surprising to a user of GraphQL who doesn't know the intricacies of how the spec works and how the implementation works. To me, that is by far the biggest issue. The fact that we actually break our promises in GraphQL is a massive issue, and we should fix that.
B: But I think also the fact that it breaks that expectation is almost a bigger issue, because it just doesn't do what a user would expect it to do, and I think we should fix that.
E: So the good news is that this is fixable; it's just, unfortunately, really kind of ugly and complex to fix. There are plenty of other cases in the spec where recursion is possible and we explicitly thwart it, both in the spec language and definitely in the reference implementation, and they're some of the most complex parts of those things, which is definitely unfortunate, but perhaps...
E: That means that someone could produce this schema and deploy it to production. Maybe there's some integration test that triggers that memoized case, but because technically it's not until run time, this broken schema could make it through and result in a broken service, and I'm wondering...
B: Yeah, so I think that this is an invalid schema. It is currently valid, like it validates in GraphQL currently, but I don't think it should be a valid schema, and I think people looking at it would see that this looks like it should never end.
B: So what I've put into the spec pull request is effectively part of the validation of the arguments for fields, or the input fields of input object types. We already have a requirement that the default value there is coercible, so effectively we just extend that to say it's coercible and it does not cause an infinite loop. Effectively, it would be at schema build time that you would get that issue.
E: The comment came from Ivan about whether this happens at execution time or build time; it's really an implementation detail, because it is so easy to memoize.
I: Yeah, because I think the fact that we don't know how to specify this beyond just "don't cause an infinite loop" suggests to me that the actual implementation of this is going to be really difficult, and maybe we should do the implementation first and figure out what's feasible to implement before we try to write something in the spec that turns out to be impossible to write. You know, we don't want to accidentally put the halting problem into our spec.
F: Recently, like last year, we added the rules that forbid infinite recursion in input objects, and we have a prevention of infinite recursion for fragment spreads, so technically checking for recursion is not something new; it's already there. But every time, in graphql-js, it's like plus 600 lines, like 400 lines of code, and we didn't figure out how to generalize the algorithm.
F: So every time we just copy-paste parts of the checks, but it's not unique in that sense. I also want to mention: I left some notes previously, some objections to this proposal. I actually thought about it for a longer time and I changed my opinion. It can still increase our complexity budget, but at the same time, as Benji said, it's the expected behavior.
F: So thinking not about implementers, but about the clients who use GraphQL, it meets their expectations and it's a simplified mental model of GraphQL for them.
F: So I'm back in favor of this change. One concern, even apart from infinite recursion, is that previously the default value was propagated just one level: for a field we used the default value and stopped there. But now we need to go deeper and deeper; even if it's not recursive, we still need to go down every level, and that makes me nervous. At the same time, I agree with the purpose of the change. And by the way, about...
F: ...about the check during schema build time: I think Benji actually put it in section 3, the type system, and checks from section 3 are done at schema build time. So all the checks are done at schema build, before execution, during schema validation. We have a special function to validate the schema: not SDL, not the query, but just the schema. So technically we've already prescribed that this is done during schema validation.
E: Let me be a little bit more stringent on whether it's a problem we're solving. So, one problem that absolutely is important and should be solved...
E: I think having default values for input fields is reasonable, because that means you can provide partial information at runtime. But is it reasonable to have a default value of an empty object at a query call site? Because if it's not, then we can resolve this in a much cleaner way, even though it might restrict some possibilities.
B: So let me tell you the story of how I came across this issue, and unfortunately it does mean that, yes, I do think it is a sensible thing to do. So in PostGraphile, when you request a collection of resources, you can add a condition, should you want to. By default we don't go as deep as, for example, Hasura does with deep complex queries, but we do allow you to do simple things like equality.
B: So if you want to say "show me all the posts that aren't archived", you could say: all posts, condition, archived false. Now, what you might want to do is to make it so that there is a default condition, which is "don't show me posts that are archived", and then you can flip it, so you can actually explicitly set it true or false if you want to. For that to work, because it's under the condition argument, the condition needs to have its own default.
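A rough sketch of the PostGraphile-style pattern being described (the names here are illustrative, not PostGraphile's actual schema):

```graphql
input PostCondition {
  # The schema-level default hides archived posts; a client can
  # flip it by explicitly passing archived: true.
  archived: Boolean = false
}

type Post {
  title: String
  archived: Boolean
}

type Query {
  # The condition argument needs its own default so that the nested
  # `archived: false` default is applied even when the client omits
  # the argument entirely.
  allPosts(condition: PostCondition = {}): [Post]
}
```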
B: Now, you could populate that value, but if you were to go the kind of route that Hasura has in their complex filtering, you're going to have to populate quite deeply through that tree. But, more importantly, there are situations where conditions... I don't know how to express this.
B: It sort of builds up the complexity: actually setting things so that all of your default empty objects have all of the relevant things in them is an expensive thing for you to do as a developer if you're doing it manually. But it's also something that you would expect to be done for you, because you've already set the default properties on the fields.
B: So you would expect them to already have been set, in my opinion. So the main issue that I had is: I set the default condition...
B: Sorry, I set the condition for "don't show me archived things", and then I set the default for the condition to be an empty object so that it actually was invoked, because otherwise it was null and it was ignored and it didn't work. I thought that was a bug in my code, but it turned out to be an issue in the GraphQL spec itself.
E: And I mean, that conclusion I definitely agree with: the fact that it is confusing. I know that I said that this is controversial; I just want to illustrate that there's a potential second way out here, and I appreciate that it moves the complexity rather than totally solving it, but I'm curious if it results in less confusion.
E: The alternative path here concerns every default value written into a schema. I guess part of what makes this tricky is... I think what I'm saying makes sense here: if it's part of a query text, then that query text should be expanded, because that is something that is provided by a client; if it is part of a schema, that is something provided by the server.
E: I think it is not totally unreasonable to say a server should always give you fully formed information, right? And this kind of adheres to a principle that you see with interfaces as well: if you look at a type, you see all the fields you could query on that type.
E: ...without going to the documentation to learn about what it is that you're looking at. And I wonder if the same principle applies here for default values: if you have a complex default value, an input object default value, and you're seeing something like this very first bit here, "type Query, example(object: ExampleInputObject = {})"...
E: The casual observer reading this, before they click through ExampleInputObject to see what the heck that thing is, looks at it and says: the default value for this is nothing; it's an empty object with no fields.
E: Certainly it would be surprising if you get a null or something totally busted there; that's clearly broken, we should fix that. But I think it's also somewhat surprising to find out that, in fact, that empty object had more properties in it, which only happened because ExampleInputObject put them there.
E: But this is why I'm asking about this: is this a real problem that people are having that we need to solve? Because in my experience, most of these input objects are used in one, maybe a few, cases: there's a mutation field that has that input object, or there's a query field that has that breakdown, and there are like three variants of it, maybe. Is it reasonable, in that case, to say: the value that goes here is going to be fully formed?
I: So I have a thought inspired by the parallel with interfaces which you just brought up, which is that, yeah, in the SDL, objects obviously have to have the fields of the interface that they implement. But in the code that defines the schema, in languages that do it that way, you don't. So in Ruby you can implement Node, and then you don't also have to manually redefine field id; that just happens automatically when the schema is generated.
B: I think there's one subtle thing that this previous discussion doesn't cover, and that's where these things are in use. Showing that an object has different defaults may be something that a user wants to know. So if you look again at the example with the number: you might already know that ExampleInputObject's number has a default of three, so when you see the empty object, you know, oh, it's just going to use the default behavior for all the properties.
B: If we had another property and we wanted to make it so one particular field uses the same input object but has a subtly different default, like where it specifies an additional field in there, that makes it more obvious: maybe it defaults to type cat rather than type dog, or has no default for that. Otherwise it's much less obvious for the user.
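A sketch of this point (the names are invented for illustration): the same input object used at two call sites, where spelling out only the deviation makes it easy to spot.

```graphql
enum AnimalType { CAT DOG }

input AnimalFilter {
  type: AnimalType = DOG
  number: Int = 3
}

type Query {
  # Relies entirely on AnimalFilter's own field defaults:
  animals(filter: AnimalFilter = {}): [String]

  # Overrides just one nested default; writing only the difference
  # makes the deviation from the norm obvious to a reader:
  cats(filter: AnimalFilter = { type: CAT }): [String]
}
```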
B: If you specify that object, in what you see through GraphQL introspection, to be that deeply nested object with all these extra properties, it's very hard to spot that this one thing is different from the norm. I think there's a lot of redundancy in indicating that: if you've already specified in the schema that the number always defaults to three for this input object, then you're putting number three wherever that input object is used. Take the PostGraphile example.
B: If it's a condition, those conditions are used anywhere that those collections exist, right? So at the root of the schema, and also on any things that have relations to those tables; loads and loads of different places. You're going to end up specifying that same default field over and over and over again in your schema. It's going to be big; it's going to be hard to read and reason about. I do understand the point of view of saying...
B: ...well, if it's fully specified, you can just look at it and you don't need to look at the other defaults. But again, that could also be solved with client tooling, without adding that extra...
B: It basically comes down, for me, to saying: if you see an empty object in the schema as the default, it should be the same as if you pass an empty object to the schema. That just seems to me to be a very straightforward, easy-to-understand definition. Though I suppose what Lee is saying here is: well, we would never have an empty object, because it would always be specified, so the situation wouldn't arise.
J: Yeah, so I think this perspective, that what you see when you pass the empty object versus what you get should always be the same, makes sense. But on the idea that you might not want to specify every field: we already have that situation with, for instance, interfaces, where we have concrete types that implement the interface and nine out of ten of the fields return the same type, or are of the same type, but one of the fields is, like...
J: If interface Foo has field bar, where bar is another interface, and the concrete type has ConcreteBar as its type, you still have that "okay, I just copied ten interface fields and only changed one", right? So that seems pretty parallel to "okay, you have to specify everything", basically.
J: Yeah, I'm actually coming down very much in favor of: if you have an input object with a default value, you have to fully specify what it looks like, all the way down, at the point of the SDL definition; which just simplifies execution a lot.
E: Benji, the premise that you framed, which you said has some intuitive sense to it, I completely agree with. I think that's probably part of why, the last time we discussed this, that was the direction we were going down, because that premise does make a lot of sense: whatever you put there as a default...
E: ...it makes sense that it would behave the same as if you had just provided that as a client. But what I'm responding to here, and why I'm proposing not a perfect alternative but a potential one, is that there are two separate pieces of complexity that unfold from this. One is the new concept that has to be introduced in the spec language: the difference between a default value and a coerced default value.
E: There's now a step to take a written, spec-first schema document and turn it into a final form; that part is some incidental complexity. And then there's also this infinite recursion case, right? We've talked about infinite recursion in input objects before, and decided that as long as input objects couldn't self-refer, you couldn't create a cycle.
E: Then we had nothing to worry about, because as long as they describe plain data, types can be recursive and data will not be. As soon as we introduce this nested default value resolution scheme, where anything along that path can be complex input objects, we get this infinite recursion problem. And so this is why I'm not necessarily saying that the path being proposed is a dead end and this other path is the right thing.
E: What I'm saying is: is this an important problem, such that solving it is worth the cost of these additional pieces of complexity? Is it worth the implementation cost of solving the infinite recursion, and worth the incidental complexity of the difference between coerced default values and regular default values?
E: Or is a simpler way the right budget of cost to value? I think the way that we might answer this question is by looking at more concrete examples, queries that people are actually writing. If there's a motivating example from PostGraphile or from Hasura, or from some of these schemas that really heavily use input objects and default values within those input objects, then that might motivate it, right? If we're struggling to find a case where nested input objects with lots of default values exist, then maybe this is too much; but if we find that there are actually a bunch of those cases, and that out of the box Hasura and PostGraphile...
B
The question is how we write it in spec language for this case, rather than how I write it in code. I think I could write it in code in about an hour; it's basically just tracking where you've been and seeing if you're trying to go there again. And in terms of implementation complexity, we already have this situation where we're doing this for the variables that we pass through query documents.
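The "tracking where you've been" check can be sketched as a small recursive walk. This is an illustrative sketch, not the graphql-js implementation: input object types are modeled as plain dicts (field name to a pair of nested type name, or None for a scalar, and its default), and the tuple of types already being coerced on the current branch detects a cycle before it recurses forever.

```python
MISSING = object()  # sentinel: field has no default value

# Hypothetical schema shapes: field -> (input type name or None for scalar, default).
FLAT = {"Options": {"limit": (None, 10), "offset": (None, 0)}}
CYCLIC = {"Filter": {"limit": (None, 10),
                     "nested": ("Filter", {})}}  # self-referential default

def coerce_default(type_name, value, schema, path=()):
    """Bake nested defaults into `value`, tracking visited types to catch cycles."""
    if type_name is None:          # scalar leaf: nothing more to fill in
        return value
    if type_name in path:          # we've been here before on this branch
        raise ValueError(f"default value cycle through {type_name!r}")
    result = dict(value)
    for field, (field_type, default) in schema[type_name].items():
        if field not in result and default is not MISSING:
            result[field] = coerce_default(field_type, default, schema,
                                           path + (type_name,))
    return result

print(coerce_default("Options", {}, FLAT))   # {'limit': 10, 'offset': 0}
try:
    coerce_default("Filter", {}, CYCLIC)     # Filter's own default revisits Filter
except ValueError as err:
    print(err)
```

The same bookkeeping can run either memoized at schema build time or during execution.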
B
So we already have the default that is specified by the user, which could be the empty object, for example, and then the value that is actually used by the server, which is the baked version of that default value. So we do already have the situation you outlined, but you're right that we don't have it purely on the schema side.
B
One of the reasons I don't think that's particularly important is that the fix I've applied happens effectively during execution. Now, we can memoize it and do it at schema build time, but it's the same code during execution; I've just copied the execution code from what we do with variables.
B
So in terms of spec changes, in execution it's literally an extra line. That's really it: let coercedValue be the coerced value of the default value, done, use that. It seems to me pretty straightforward to do, and then there's the validation stage.
B
It might be that I need to find the relevant part for fragments in the spec and rewrite that wording to reference types and fields. But I think one of the important decisions we're making here is: are we going to solve this as part of the GraphQL schema, so that people don't have to think about it, or are we going to punt this problem to every single schema implementation, so that every single developer has to discover it? Anyone writing schema definition language, for example, would have to actually write every single value out if they're using these defaults. I think it's something we can solve without a huge amount of fuss in the GraphQL schema, and save a huge amount of human effort outside the GraphQL specification itself by doing so.
G
This is Mike Cohen from Indeed. We actually encountered this problem. You asked for some examples; I do have a schema that we use in production where we hit it.
G
We bumped into this problem and were a little bit surprised. We had specified some of the fields on the nested input types as required and specified some default values, and found at runtime that they weren't present. So what we did was sort of lift those up and specify the individual fields explicitly as part of the default value specification, which is, I think, Lee, what you were suggesting as an alternative. So we definitely hit it, and I'm happy to share the example if it's interesting or useful.
E
It would definitely be interesting, and I want to be clear: this is a super important problem to solve. The fact that you can end up with an unintuitive result, especially the motivating example here, where you can have a schema that declares something non-null and then get null, is a risk for NPEs that will knock out servers. This has to be solved. The question is: what is the path to solve it?
E
Benji made a really good point, which is that with these nested input object default values, we could get into the recursive case with just the provided input as well, not just the default values. So the blast radius of figuring this out is bigger. Complexity cost is just one thing to keep in mind here; the other is, as you mentioned, solving this or saving time for people. I do think this gets back to a related principle.
E
To be honest, I don't totally know; I just want to make sure that we're talking through all of the potential options. I do want to make sure that we're solving for readers of schemas more so than for writers of schemas.
E
So if you're writing the SDL and you have to type a lot, I'm sorry: presumably way more people are going to be reading that thing than you writing it, so it's probably the right thing for you to write more if that means it's going to be easier to read. Now, if it won't be easier to read, and you have to write more, and it is more painful to read, then it's the wrong trade-off. But that, I think, is probably the crux of figuring this out.
E
Then that would indicate that that's the path we should follow, where these default values that appear in the schema are always fully formed. Or are there a lot of motivating cases, or even just a handful of particularly egregious ones, where having to do that actually makes the schema particularly less readable? To be crisp about what I'm asking for here: a couple of specific examples of edge or corner cases within real schemas, where we can see the real effect that this decision might have.
B
I think it's worth noting that the issue you're currently discussing is kind of a third problem on top of the previous two, and it is solvable with the spec edits that I currently have by adding one additional small change: when you output the value in introspection, you just use the coerced version rather than the raw version. Then it's solved.
F
There's a property of SDL and introspection that you can make a cycle: you can take the SDL, pass it through introspection, print the SDL again, and end up with the same SDL. I think it's a super important property to keep. So we cannot expand a value from the SDL and use the expanded value in introspection, because when a client prints introspection back into SDL, they will receive different SDL.
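The round-trip property can be stated concretely: SDL printed from introspection should match the original SDL. A deliberately tiny toy (plain dicts, not real GraphQL types or the real printer) showing why baking the coerced default into introspection breaks that fixed point:

```python
# A field default as written in SDL vs. its coerced form with nested defaults baked in.
written = {"filter": {}}              # SDL says:   filter: Filter = {}
expanded = {"filter": {"limit": 10}}  # coerced:    filter: Filter = {limit: 10}

def introspected_default(default, expand):
    """Return what introspection would report for the default value."""
    return expanded if expand else default

# The round trip holds only when introspection echoes the default as written.
print(introspected_default(written, expand=False) == written)  # True
print(introspected_default(written, expand=True) == written)   # False
```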
E
Right. I think that is a path. If we wanted to preserve the property that Ivan has pointed out, we would have to use the technique that Evan suggested, which is to push it below the layer that the spec has visibility on. The implementation would allow you to generate that thing, and it would be exposed up in a way that adheres to the spec, rather than the spec doing that coercion itself, or implying that there's a coercion to be done between what the schema says is there versus what you see come out of it.
E
There's going to be incidental complexity in that path as well, because we don't have an algorithm right now that declares some value fully formed while ignoring default values, which is essentially what we would be looking for here: to say, if you've got some input object and there's a number field, and it takes three, that number field has to exist in this particular form.
E
That would be a new thing for us to have to go build, so yeah, more incidental complexity. Just to put that out there: I think there's probably some complexity in either path we take to solve this. Actually, I think that helps narrow it down.
F
On complexity, I kind of agree. One thing to note, which wasn't obvious to me, is that default values for input objects are part of the coercion of the input object, basically like the coercion for scalars, and we use recursion on input objects to put default values in place.
F
Basically, for every input type we now need to define two functions, coerce and validate, because we need to validate that the default values we specify are valid against it. So I agree with this: even if we stick with fully specified values, there's still complexity, and every input type should have those two callbacks, coerce and validate. And one thing I would strongly like, and it's a strong position for me:
F
It should be the same for scalars and input objects. It should be the same algorithm, because it is the same algorithm right now. So if we define a fully specified value, it should be fully specified in all senses: fully specified as a scalar and fully specified as an input object.
I
Oh okay, he was just saying that default input objects in general are potentially pretty low value, so maybe we should just forbid them entirely and short-circuit this entire conversation, which is interesting, because Shopify basically doesn't use them.
J
It's not out of the question for us. It's an extreme thing, but I think they are used, though very, very rarely, within Facebook. At least, that may be a history issue, because I believe most of the copy-and-paste code used to build new types happened before we allowed input objects to have defaults, so that may not be representative. But within Facebook, yeah, they're rare.
J
And the other thing here is: could we get 95% of the value if we just say you can have default scalars? Part of the problem, though, is that the horse is kind of already out of the barn. It would be difficult for us to release a new version of the spec that disallows defaults on input objects, given there are schemas that exist with them.
E
Yeah, we're really not going to do that. What I'm suggesting here is not to take them away; it's just to make them annoyingly complicated.
E
By saying that if there are any additional default values within those, you've got to repeat them all the way down. And going back to what I'm asking for here: I just want some use cases. Mike, I think you mentioned before that there were some cases within a schema that you had some purview over. If schemas can list out all their nested defaults and it's just not a problem, then I'd pitch that we go that route. If we find that, even if there's only a few, there's just enough cost there that it's painful, then probably the original proposal here is the right one: we should just restrict recursion. And that's not an impossible thing to do; there are plenty of other places where we prevent recursion.
G
Our examples are generally around our job search operation. I haven't looked at every single one, but this is the one that comes to mind; I think it may be the only place where this is relevant for us. It's basically a search operation and, as you can imagine, somewhat intricate. I think probably every other operation does not have this.
L
Right, I think what you're asking for is examples where it would be prohibitively horrible to explicitly specify every component of the default input object. We've definitely got customers, real-world schemas out there, that use default input objects where they even specify multiple levels of object within a default, which is kind of ugly, but they're already doing it. So I think that's not a counterexample to your simplicity proposal.
E
I'm also maybe overcorrecting here by responding to the discussion that Matt and Evan were having. Obviously we're not going to go all the way to removing things, because that would break stuff, but just to push the discussion that way for the sake of argument: even just looking at whether there are such cases.
E
It sounds like in the schema that Matt and Evan have purview over, they could just pull literally all the examples and list them out, and that would be interesting to look at. And so, Mike, if you've got lots of examples, then maybe just picking the ones that are most relevant to the discussion would be helpful. I've been on this one for a while; I am motivated to get this solved, because I think the root issue here is pretty concerning. But I do want to make sure we make a good decision, and if we end up doing the original proposal based on additional data, then I'll feel really good about that.
E
Also, Benji, if you wouldn't mind pulling some examples of cases you found within PostGraphile use; and I know you did a little digging into Hasura, and I know that they heavily use input objects.
B
Honestly, I've not used Hasura yet, so I don't know whether they actually use any defaults on their filters, but I know that they do have deeply nested filters, so you can imagine a situation where specifying a default filter might be beneficial. Maybe I'll reach out to Tanmai and see if he's got anything. That's a great idea.
G
This is maybe a really small detail, but one of the things that the GitHub API does, and I think somebody just linked to their API: if I recall correctly, they include just one input object for operations that represents all of the arguments. That may be something to consider. I think that's pretty prevalent in the community, because it makes consumption by clients a bit simpler when there are a lot of inputs.
L
All right, so for me, just tell me at some point if it becomes important to see something, because I'm looking at schemas that are not mine, they're from our customers, so I can't share them. But if there's something that is going to make a difference in the conversation, I can try to follow up to either just share the pattern or try to get permission from someone.
L
But I just looked at one example and searched, and it's the search people are mentioning in the chat; I did the same thing, and it's got over 200 cases of it. So, like you said, clearly people are using it, at least the people who decide it's a pattern, maybe not for good reasons, just because they didn't know GraphQL well; but the people who get into it might start using it again and again.
B
I would also point out that if we look at existing schemas, we're going to find things that are fully specified, because it literally doesn't work if we don't fully specify them. So we have to worry about a little bit of a bias in the sampling there.
B
But yes, it's good to find out where it is being used and whether putting defaults in different locations would make sense. For example, GitHub use it for orderBy, and they fully specify what the orderBy is by saying the field and the order.
E
Yeah, that's right. If people are listing out default values, or they're omitting fields because of the bug that you found, Benji, they are probably broken and not performing the way they're supposed to, or they are just listing everything out. So I guess specifically what we're looking for are cases where either you find that someone is very likely facing this bug, in which case, especially since you have clients who are building these schemas.
E
You should let them know; but then also cases where there's a really complicated-looking one, especially in the scenario where the input object being used as a default value has a type that itself has default values that are being repeated there.
B
Is there a good collection of GraphQL APIs? I know, Ivan, you had one at one point, and there's APIs.guru. What I'd really like is just a repository that is full of GraphQL schemas, literally just GraphQL schema files. That would make answering these kinds of questions much more straightforward.
F
We actually wanted to build that. The idea is that we have the list in the README, and we also have a copy in JSON, and with that JSON I think it's possible with a GitHub Action. So if anybody is going to contribute that, I will gladly merge it: an Action that uses the list and dumps schemas to a GitHub Pages branch. Yeah, I agree it would be useful for research projects.
B
I wonder if Apollo have anything, because they obviously have their schema store. I wonder if they know which of those schemas are public and if they could produce something like that relatively easily.
F
That's what I meant: we will accept a contribution if somebody implements a GitHub Action that dumps introspection. Inside this repo it's not only a README with the list, but also the registered files, so it's possible to get introspection, in theory. In practice, you also need things like client keys for some of the APIs.
A
I do want to mention something. A few years ago there was another person that joined us often, his name was Erik Wittern. I worked on a research paper with him called "An Empirical Study of GraphQL Schemas", where we actually collected around 8,400 GraphQL schemas from GitHub projects. But I don't think we open-sourced this repository; I think there's some restriction there.
A
We cannot republish publicly accessible data from GitHub due to usage rights, but maybe I can think of some way where I can let you use these schemas somehow.
E
That would be amazing. Even just running the kind of simple search that Evan suggested, like looking for equals, space, open brace.
A
Yeah, and the study actually collected much more than that, but we ensured that all the schemas were unique. Also, some schemas were divided into multiple different files within a repository, and we found ways to combine all of that together. But I'm not sure how I can let you access that kind of detail because of usage rights.
H
In GraphQL Java we recently added a schema anonymizer function, which lets you produce an anonymized version of a schema that retains all the structure.
B
It might be valuable, as Andi says, because it means that you could potentially share the anonymized versions, we can find which of those are relevant, and then you can decide whether or not the original versions can ultimately be shared, by checking the license or whatever, for just that one schema rather than 8,600.
A
Actually, now that I think of it: if I can do the anonymization and let you access that kind of data, and we do identify schemas that could be useful within this data set, I also have references to the specific repository and the commit associated with it. So in that case I can probably at least link you to the schema, and I won't be directly sharing data with you.
E
That sounds awesome; that's an amazing data set to have. But even without that context, I think it would still be useful, because really what we're after here is: are we seeing this kind of shape, these complex and nested default input objects, in real schemas? So even if the context were lost, just knowing that that shape existed and was found in a real schema that someone was using would help.
E
All right, I know we spent a lot more than 15 minutes on that, but it was worthwhile. I'm going to do things a little bit out of order, just because you've done a lot of talking and I'll give you a little bit of a break. We'll move on to talk about the defer and stream updates from Rob before we move back to the agenda.
K
Yeah, not a huge update again. I think we're mostly blocked on the JavaScript side by the TypeScript migration, but we do have the official experimental branches of graphql-js and express-graphql. Liliana and I talked at a couple of meetups, and we wrote a blog post which the GraphQL account tweeted.
K
So I think anyone who would be interested hopefully knows about these branches and can try them out. We got a bunch of feedback about a few things on an issue that I had posted, and I have two small things that have come up that I want to discuss.
K
The first one is that in the stream directive we have the initialCount argument, which we had originally said would be a required argument. Benji recommended just making it optional and defaulting to zero. I don't really have a super strong opinion about it; I made it required out of conservatism, rather than building in assumptions about it not being there. But I just want to get opinions on which direction everyone thinks we should go with that.
K
Right, so on this: you put the stream directive on a list field, and whatever you set initialCount to, you'll get that many results in the initial payload, and then each result after that will get streamed. So if you set zero, in your first response you're going to get an empty array, and then each result will come back; if you say three, you'll get the first three in the initial payload.
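This initialCount behavior can be sketched as a payload split: the first initialCount items go inline in the initial response, and each later item arrives as its own patch. This is an illustrative sketch, not express-graphql's actual wire format; the field name "friends" and the patch shape are made up for the example.

```python
def split_stream(items, initial_count=0):
    """Split a list field's results per @stream(initialCount: n)."""
    inline = items[:initial_count]  # delivered in the initial payload
    patches = [
        {"path": ["friends", i], "data": item}  # hypothetical patch shape
        for i, item in enumerate(items[initial_count:], start=initial_count)
    ]
    return inline, patches

inline, patches = split_stream(["a", "b", "c", "d"], initial_count=3)
print(inline)    # ['a', 'b', 'c']
print(patches)   # [{'path': ['friends', 3], 'data': 'd'}]
```

With initial_count=0, the inline list is the empty array and every item is streamed, matching the proposed default.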
K
The main use case is: you're showing a list on your screen, and above the fold of the page you can only fit so many items, so you'd set initialCount to however many you can fit. You get those immediately, while everything below the fold comes in asynchronously.
K
Yeah, I think that's reasonable. We were originally talking with Kewei and Jafar a while back about getting really exotic with how things could get batched together, where the server defines it, and instead of getting one result at a time you could get groups of them. But I think we decided not to do that because it gets really complicated very quickly, so I was just trying to be conservative with arguments. But yeah, I think it's pretty nice to just use stream without any arguments at all.
K
So I could make that change.
F
So even if, during resolution, the server has some items ready, it should still wait and send them afterwards?
E
The server has information about what data is ready to go, and requiring it to withhold data that is ready seems counterintuitive. If I ask it to stream, and it's got something, maybe just the act of preparing the parent body brings one or two items from that list into cache, and they're just ready.
E
Then why not send them out of the gate? Presumably, almost always, especially in this case of initialCount being zero, you would expect that empty array. But if you said initialCount three, and the first streamed batch from the back end has five in it, why would the server not just send you five?
F
Yeah, I agree with Rob. We even have another argument here: do you have a predictable stream, a predictable shape of responses? If it can be loose, you can get everything in one response, or in a couple of responses; if there are only five elements in the collection in total, the server can send one response and not send subsequent ones. So I agree with Rob.
K
So initialCount two would mean you get two inline in the initial response, with everything after two being asynchronous.
B
I think the usage of a term like "inline" is probably clearer, independent of what Ivan just said, probably clearer than "initial count", because it's not clear when you're streaming whether the initial count is what you're going to get in the first follow-up payload, or whether you're going to get it as part of the initial GraphQL request. It might read as: when you start streaming in a moment, once you've resolved everything else, then give me the initial five; whereas really what it's doing is inlining it into the query response, which is different from the rest of the stream.
E
You could do some bikeshedding on the name, and if you decide the name still makes sense, then that's totally fine, but it's probably a good thing to discuss, especially with Kewei and crew.
E
I suspect that there's enough behavioral complexity in how streaming works, just by necessity, that no matter what we name these things, we'll need some explainers around how to use streaming data. So I wouldn't hold out for being able to completely self-describe this with the names alone, but it is worth making sure the terms we're using here are the best ones we can think of. I also do want to challenge requiring that initial set to be a particular size.
E
Maybe that's the right thing; I know your team and Kewei's team have talked about this a lot, so maybe it is. I just want to push on it a bit, because it seems unintuitive to me. The case that you made makes sense: hey, I want an initial two, because I know I'm building some objects that are expensive, and the server has 100, and building 2 is a lot cheaper than building 100, and that's what I want.
E
But I, the client, know no better than you, the server, how long you take to produce information. So me telling you to go build two items versus three items or four items, that's the part that seems unintuitive to me. I would expect the server to have a lot of information about what is or isn't fast.
B
To expand on what Lee just said: it's de facto a minimum anyway, because if the server doesn't support stream, you will get all of the results inlined.
K
Well, we had said that if the server doesn't support stream, it should pretty much flat out reject the directive. And we talked about a similar idea with defer, where we were asking: should the server be allowed to say, I know this deferred fragment can be loaded quickly, so I'm not going to defer it at all?
K
What we came up with was that the spec would allow the server to do that, but we would make it really clear that you should have a pretty advanced use case where you are able to know these things, because we really want to discourage people writing servers that say they support defer but then just ignore it. The reason is that when you're writing something, it might be better to split it up into separate queries instead of using defer.
K
I don't think I'm explaining that very well, but: if I'm writing some UI code, I have a choice where I could use defer, not use defer, or split it up into separate queries. The order of best performance might be defer first, separate queries second, and no defer at all third. You wouldn't want to write it with defer and then end up getting the lowest-performing option when you could have done the alternative.
K
The point is, we said we would highly encourage that if a server supports defer, it should honor it, unless you're really, really sure that not deferring something is better; we would allow that case. I think it makes sense to do the same with stream, where the client should be able to reliably depend on the data getting streamed, but if the server really knows that sending more than the initial count is better, we could leave that open in the spec.
J
I'm sure Rob and Kewei have discussed this, but internally Facebook has had a huge amount of usage of stream. Internally we require initialCount, and it means: this is what you will get. But we also have an escape hatch for when you want to allow the server to define what to do.
J
It's a boolean saying: I, the product, am abdicating the responsibility down to the server. It's not clear to me that that's the actual correct spec version of it, so probably the action item on me is to get Kewei to chime in and explain exactly why this choice was made, so that we can decide concretely what makes sense for the spec. I will say it does make sense.
J
There are very explicit use cases where, if you're trying to get something on screen as quickly as possible, the cost of the server getting the 100 items in the list might be less than the cost for the client of downloading those hundred items over the network.
E
I think what you're suggesting is kind of what I'm proposing, which is that the server shouldn't just ignore the requirement because it happens to have something, but only in the case that it knows better. That's the principle I'm asking that we deduce out of this: how can we make sure the requirements of the client and the knowledge of the server combine to produce the best outcome? Because you can also imagine the case, and we're talking about stream and not defer here, but if you put a defer on a boundary, it will always…
E
Come back split, always, as a rule. And you can get into the situation where you actually have a local cache, so all the data is available, but because of that requirement the UI will break if you happen to draw it in one pass instead of two passes. Now what you've done is put a weird redraw blip in your UI.
E
That blip is unavoidable, where what you would probably expect to happen is: if all that data is available in a local cache, where you don't even need to go to the server to get it, then you probably shouldn't defer; it should just all render immediately. But that goes against the rule that it has to come back in futures.
E
So as long as there's room for knowledge below the query boundary to make decisions that go against what the query says, fine. I don't want to end up in a situation where you're saying: yeah, well, the spec says X, but the spec is too strict.
E
If we had to break the spec in order to get reasonable behavior, that's exactly what we're trying to avoid. We want the spec to describe the constraints, describe the normally best behavior and the ideal outcomes based on these principles, and then allow clients and servers to get to something reasonable.
K
Just as we already had an escape hatch for defer, where the server doesn't have to do it, we could have the same for the number of items that get sent over in stream. It just needs to be clear that that doesn't mean a server should say it supports stream and defer and then just ignore them all the time, because that's a bad developer experience.
K
Okay, and then the second point was that for the directive locations in the schema there's a bunch of options, and for stream it's FIELD, but that isn't really statically saying it's a list field. I haven't dug very much into it, but I get the sense that adding a new location that represents an existing concept could be a breaking change.
E
I think I understand the issue being proposed here; the solution being proposed, I don't know that I necessarily agree with. The purpose behind directive locations was so that arbitrarily defined directives, produced by anyone, could have a base level of validation that all tools would agree upon, even though the tool had no knowledge of that directive. So if you came up with Rob Richards' super cool deferred directive, and that was the name of it, you could say: hey, it goes on fields. A spec-defined directive doesn't need to rely on only the locations that are available to arbitrarily defined directives; if this were going to be an arbitrarily defined directive, maybe that would be the right thing. The other piece is that the directive locations describe parts of the spec AST, and this would imply that a field and a list field are two different kinds of structure in the schema, and they're not really.
E
So my proposal would be: leave it alone, say that you can only put it on the field in terms of directive locations, but you probably don't want to wait until runtime to say, hey, you put this directive in a place where it's not allowed to go. If this is going to be a directive that's defined in the spec, then it's worthwhile to also include validation logic to make sure that it's used correctly, and that validation logic seems reasonable.
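A minimal sketch of what such a validation check could look like, using a toy representation of field types rather than the actual graphql-js API (names like `isListType` and the node shapes here are placeholders, not graphql-js exports):

```javascript
// Sketch of a validation check: @stream may only be applied to fields
// whose type is a list. Toy type representation; real graphql-js differs.
function isListType(type) {
  // Unwrap non-null wrappers, e.g. [Item]! is still a list.
  while (type.kind === 'NonNull') type = type.ofType;
  return type.kind === 'List';
}

function validateStreamUsage(field) {
  const hasStream = field.directives.some((d) => d.name === 'stream');
  if (hasStream && !isListType(field.type)) {
    return [`@stream can only be used on list fields, not on "${field.name}"`];
  }
  return []; // no errors
}
```

The point is that this check runs at validation time, so a misplaced @stream is rejected before execution ever starts.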
E
Yeah, during query validation, not during execution, but yes, yeah.
K
Okay, that was it. I'll make those updates for initialCount, and I'll make sure that we have the validation rule in graphql-js and in the spec. And I guess, just to anyone else: Benji's been reviewing the spec, which is great, and just any other eyes we can get on it would be...
E
...helpful. Thanks again for your persistent work on this. I feel like every month there's updates and every month there's actions, and this is the one that, when y'all first proposed it... I think it was at the beginning of the year that we started talking about it. Yeah, one year ago. It felt like it was going to go on forever, and actually this is one of the most complex additions to the spec, and it's happening really fast in comparison to almost everything else, even though it feels like it's taking a long time.
E
Welcome, Mark, we didn't forget you. We held your agenda item in hopes that you would show up to talk it through.
M
Yeah, sorry about that, everyone. I had a rough couple of days there, but thanks for waiting. So I guess I'll get into it, then.
B
Before we start with Mark, would it be okay for Andy to have his topic? Because I know he needs to pop away.
E
Yes, thanks for watching out for that. Let's talk about alternative meeting times before you have to disappear. Andy, over to you.
H
Yeah, thanks Benjamin. I just wanted to quickly ask if it would be possible to maybe alternate the meeting times sometimes, or adjust them on a scheduled rotation, so that it's a little bit friendlier for the Australia and New Zealand area.
H
It's a little bit brutal, and, yeah, just moving two hours ahead, for example; the more specific ask would be making it start at six a.m. The downside of moving two hours ahead is, especially for the Indian area and Asian area, it would get quite late. So there's no perfect time to cover the whole world, obviously, but maybe we can alternate it sometimes, and yeah, I wanted to open it up for discussion and ask.
E
For Pacific coast US this is first thing in the morning, so the idea of pushing it back by two hours is very palatable. I know that pushes things back, I think, especially for folks in Europe; that gets a little bit trickier. I know this is already a little bit after working hours.
H
So for Europe it would be pushed a little bit after working hours, but I would argue still into the awake hours. It depends a little bit whether the goal is to keep it in working hours or just reasonable, but I agree, for Europe it would be pushed a little bit to the edge of the day. It would start at 9 p.m.: like, I see on the meeting planner that for Helsinki, Finland it would be 9 p.m., for example; for central Europe, like Germany itself, the starting time would be 8 p.m.
H
Ivan, I think you're our most persistent Europe resident at the moment, or one of them. What is your opinion?
B
Okay, I'm okay with this. I'm in London, well, the London time zone; I'm actually on the south coast. But I do wonder whether we need to consider the knock-on consequences for the technical steering committee, since we're meant to meet at the beginning of each session. Yeah, I just wanted to make that point. Actually, I'm happy whatever.
F
Can we, like, include everybody in this call, and reach everybody from the technical steering committee, and ask them to specify not preferred hours but possible hours? And if everybody is okay with, like, moving it two hours back, we can, because right now, if it's okay for everybody, we don't need to build complex machinery. I would say, like, people on this call and the technical steering...
E
...committee. I'm okay with this, and I kind of agree with you that predictability is ideal, just because it is one less thing to be confusing, but I'm also okay with the time shifting.
E
...part of the reason why you brave the early wake-up to join us. But I suspect, perhaps, if the time wasn't so inconvenient, that perhaps more people in your time zone or nearby areas...
E
And similarly, it's relatively late in the day in Europe, and we've had actually pretty reasonable attendance from folks who are in Europe, but I imagine if we permanently pushed it out by two hours that might suffer.
H
You're right, and I actually agree that you should consider the bigger picture and make it suitable for the biggest number of people most of the time; that should be, like, the underlying goal. And this would probably call for maybe even three different meeting times we alternate, to cover the whole world at a reasonable time.
H
If you want to go down this road, that would be also totally fine for me. It was based on my personal experience, but also, like you said, it's a bigger issue: we want to make it as open as possible for everybody around the world, I think. And we don't have anybody from Asia right now, probably because it's the middle of the night there; I don't know, it's like one a.m. or so.
H
That could definitely be a reason, and I know, for example, at least one person, I think, who wants to attend more often from this area, and he could not make it, or it's very hard to make it, because it's in the middle of the night.
E
It sounds like, at a bare minimum, moving this meeting time forward by two hours sounds palatable to the Europeans on the call, and seems also reasonable to people in east and west coast US North American time zones.
E
So at least we've got some choices there. Yeah, I'm seeing some calls for Doodle polls, so I'll take a look at that too.
E
Thank you, yeah, thanks for raising this. Okay, Mark, over to you: schema coordinates update.
M
Cool, thanks. Hello, everyone. So yeah, I have spoken about this a lot in the past few working group meetings, but just as a super quick 30-second reminder for those who have not attended any of the previous meetings: schema coordinates is the spec that takes the existing convention of referring to, say, the name field on a User type as "User.name".
M
So it kind of writes that up as a formal spec and adds some more things onto it as well, such as field arguments. The past few months, the RFC and the spec edit have been out on GitHub, and thanks, everyone so far who's reviewed those; I think we made a lot of great progress.
M
Where we left off last time was that field arguments were kind of the last thing that people had some discussion over, kind of what the syntax of that is going to look like, and there were, like, two main contenders. You know, if you want to search for businesses, you would have query dot...
M
...business, and maybe business has, like, an id field or something. So you would have query.business.id to refer to the id argument; or you might have query.business and then brackets, id (maybe someone can type it in the chat: "id", colon, close bracket). So it was kind of like what that syntax should look like, and I think we wanted a bit more consensus before moving forward and merging stuff.
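For readers of the notes, the two candidate syntaxes look roughly like this (a sketch of the discussion, not the final grammar):

```javascript
// Two candidate ways to refer to the `id` argument on Query.business:
const dotForm = 'Query.business.id';     // dots all the way down
const parenForm = 'Query.business(id:)'; // parentheses mark an argument

// One property of the parenthesized form: it is syntactically obvious
// that `id` is an argument and not some deeper field.
const looksLikeArgument = (coord) => /\(\w+:\)$/.test(coord);
```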
M
So we've had a lot of discussion on the GitHub threads, and now I guess I'm here to take any discussion that people wanted to have in person, see where that consensus lies, and see how and where we make next steps. And I guess I look to Lee for his sage counsel on how we do that.
E
Well, first I'll just do some sort of mechanical things here. This is a little bit overdue, but since you split those docs into two different parts, I got the appropriate labels on the actual RFC, which is just the RFC document, and it looks like most of the discussion thread there is settling out. So I'm just going to merge that one, and it's got the appropriate RFC document tag; that doesn't mean that you can't change it.
E
The whole point of kind of adding that in is that any additional discussion on that RFC document, if necessary, can be added as additional pull requests. Then I just moved the Proposal RFC 1 label over, so if we wanted to talk about advancing that, we can; I just wanted to make sure that we have that in the...
E
So I'll be honest that I haven't had a chance to catch up on the changes that have been made in the last couple of weeks; I think I did a review after the meeting last time, and Mark, I know you've put a bunch of work into it between then and now. But I guess what we should talk about here is whether this should be lifted from RFC 1 to RFC 2. It seems like the spec changes have advanced quite a lot; they're in a really solid state.
E
Benji is also pretty happy with it. So those are probably in draft form; they can probably use some review, but it looks like at least Benji's really happy with them. And Mark, you mentioned that there's JavaScript changes in place; is that right?
M
I think that was one of the open questions, and we said that we will not make any JavaScript changes. I think there's space for some, like, library-land utility helpers, for example if you have some documents that you want to turn into a set of schema coordinates, things like analysis, but I think we said that there are probably enough open questions there, and, you know, behavior that needs to be decided, that we'll just leave that for library land and we won't make any changes. So I'm happy with that.
M
I think we're in a pretty similar state to last month, but we just said that, since there was this open question over the syntax of field arguments, we wanted just some time for it to marinate and let people have discussion. I myself was, like, 50/50 last month, but having thought about it, I just kind of agreed with what I think Benji was saying, and I think they'd had a lot of discussion there.
M
So I'm myself even more confident now about the option proposed. But yeah, I think we wanted to leave it out there just for a little bit more time and see if any more discussion happens. I haven't seen any more discussion than what we last had, but I guess that's part of why I'm here today: to see if there was anything anyone had in person. But beyond that, I don't think there's anything else hugely controversial that should block us from moving forward.
E
In terms of JavaScript or reference implementation changes, I would suggest at least that we add the parsing logic: there should at least be, like, a parser and a printer function, right? So that, given one of these schema coordinates, you should be able to parse it into an AST, and given one of these ASTs, you should be able to print it back out into a coordinate.
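A minimal sketch of such a pair, assuming a simple coordinate grammar of `Type`, `Type.field`, and `Type.field(arg:)` (the AST shape here is hypothetical, not what graphql-js would ship; directive coordinates like `@deprecated(reason:)` are omitted):

```javascript
// Parse a schema coordinate into a tiny AST, and print it back out.
function parseSchemaCoordinate(source) {
  const match = /^(\w+)(?:\.(\w+)(?:\((\w+):\))?)?$/.exec(source);
  if (!match) throw new Error(`Invalid schema coordinate: "${source}"`);
  const [, typeName, fieldName, argumentName] = match;
  return {
    typeName,
    fieldName: fieldName || null,
    argumentName: argumentName || null,
  };
}

function printSchemaCoordinate(node) {
  let out = node.typeName;
  if (node.fieldName) {
    out += '.' + node.fieldName;
    if (node.argumentName) out += '(' + node.argumentName + ':)';
  }
  return out;
}
```

Having both directions makes round-trip unit tests trivial to write, which is the main reason to ship them together.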
E
I think that's probably a minimum, because that would give people the tools that they need to actually use this. And then the two additional ones, that I certainly wouldn't require in order to move this RFC to 2 but that seem like they might be the most useful utilities, are: given a piece of a schema...
E
If you have some reference to some part of a schema, can you create a schema coordinate from it via some function? Like, there's some function that you put a piece of schema reference in, and you get a schema coordinate out; that one might be tricky, because you don't always have the back references to know how you found it. But the other one, that I think is probably very useful, is: given a schema and a schema coordinate, return out... basically, follow the algorithm as written in the spec, right? Given a schema and a schema coordinate, return out the thing that that schema coordinate refers to. I wouldn't require that to go to RFC 2 by any means, but that seems like probably the most useful thing.
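A sketch of that lookup over a toy schema object (hypothetical shapes throughout; the real version would walk a GraphQLSchema). It returns a tagged result so the caller knows what kind of element came back:

```javascript
// Follow a coordinate like "Query.business(id:)" into a schema and return
// both the element found and what kind of thing it is, or null if absent.
function resolveSchemaCoordinate(schema, coordinate) {
  const match = /^(\w+)(?:\.(\w+)(?:\((\w+):\))?)?$/.exec(coordinate);
  if (!match) return null;
  const [, typeName, fieldName, argumentName] = match;
  const type = schema.types[typeName];
  if (!type) return null;
  if (!fieldName) return { kind: 'Type', element: type };
  const field = type.fields[fieldName];
  if (!field) return null;
  if (!argumentName) return { kind: 'Field', element: field };
  const argument = field.args[argumentName];
  return argument ? { kind: 'Argument', element: argument } : null;
}
```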
M
Yeah, that makes sense. So just to clarify: we'd have these two parse and print functions in graphql-js, but you don't think that this blocks the RFC advancing?
E
For the RFC advancing I would like at least the parser and printer, and that one should probably be very easy to go build, I hope, because the syntax here is relatively straightforward; and just because that will allow you to write the unit tests for it. It's just kind of the other side of the coin of making sure that we've got all of our i's dotted and t's crossed with anything that is language related.
E
And so I would say, as long as we've got those up, ready for PR, that's the bar for stage two that we've kind of written down: that the changes are ready for review means that they're RFC 2 ready, and then once they're merged and released, then it's accepted. And so I'll just kind of leave it to you to decide.
E
If you want to go further than that and add this additional piece which implements this logic (given a schema and a schema coordinate, return out the part of the schema that you're talking about), that sounds fun to write. I think that one is probably going to be pretty useful to have, but if you don't get to it, I think that's okay.
M
Yeah, totally happy to look into that, and I'm sure we'll have discussion on the eventual PR to talk about future functions you could add there as well. That sounds great to me; I'm happy to go away and do that homework. The one other little agenda item I had for the schema coordinates spec (and maybe we just discuss this next time):
M
What generally happens when a new thing happens in the GraphQL language or the spec? Does it just quietly get added and people have to, like, know about it, or is there a Twitter account? You know, I don't know how to tell people about this, basically; there might be some library authors who would care about this.
E
Oh, this is a great question. So we have a GraphQL marketing committee now, which is just a couple of people from the board who control the Twitter account. What we've done in the past, that I actually think has worked out pretty well, is encourage the spec champions to write a blog post, or whatever form is most reasonable, and then have a lot of folks share that out. That way you get the credit for putting the work into this.
E
You can write the blog post explaining it: what it's for, why it's useful, how it works, and the backstory or whatever will make a good blog post, and we can tweet that from the GraphQL Twitter account and send it around in all the places you want to send it around. So that's the one that I think is the most useful for sort of real time.
E
As things get approved and added to the spec, then these blog posts can come out. And then (you showed up late, but Brian was here earlier, describing the kind of last phase of the process to ratify the spec) there's some sort of legal mumbo jumbo that we've got to get through, but once we've done that, another piece will be sort of a fully formed changelog of, like, here's everything that happened since the last official cut of the release.
F
One thing I actually, like, at least suggested about this function, and I think it's actually the most useful function: you give a schema coordinate and a schema, and you receive an object. The problem is that, like, you don't know what you receive. An enum value and an argument are both, like, just objects with properties, so we basically need to return, like, the type of the thing (is it an argument, is it an enum value?) and the thing itself. So that's a question. And I also thought about another function: you give a schema coordinate and it returns...
F
...what it points to. So, is it a type? A type is a valid schema coordinate, right, just a type name, the shortest form. So you can distinguish a type, you can distinguish a directive, or you can distinguish an argument inside a field; but if it's "TypeName.something", you don't know: is it, like, a field name? Is it, like, an enum value?
E
This is part of the reason why I like the process of writing the reference implementation at the same time as writing the spec text: it helps you think through these things with real code, and you can throw unit tests at them in a way that helps you look for edge cases. Actually, I'm coming around; I still won't block RFC 2 on it, but I think it'll actually be really useful.
E
It just kind of gut-checks the logic that's written for the semantics. Because, I mean, it's not huge, but there's a good couple dozen lines of semantics there, and those semantics would basically, like, literally almost be line for line: you could just copy that out, put it in as comments, and start filling in the implementation, and you'll probably find there's a one-to-one match. And if you do that and you realize, like, oh crap, there's some case that we missed that...
E
...I need to handle in order to find this one particular thing, or, hey, the way we phrased that is actually, like, really weird, it doesn't match the kind of code you'd have to write and we should kind of refine that: those are the kinds of really nice things that you get out of writing the reference implementation at the same time as the spec text. But what Ivan is suggesting is probably also, like, in the realm of implementation detail, I think, and you'll probably figure that out.
E
If you start kind of writing some code here, like, how much of this is "oh, this is a JavaScript-ism: I need to return back the type of the thing in addition to the thing itself", whereas if this was Ruby then maybe you wouldn't do that, because you could just kind of check what type the thing is itself, because it's a Ruby object. So that's probably a good thing to keep in mind for the JavaScript-specific implementation.
E
Ivan, you brought up some great points; I'll let you guys take that to the PR discussion thread when the PR arrives. But yeah, this is good stuff that you'll probably deduce from writing the code.
B
Cool. And before we move on, this is related but slightly off topic: one of the things that we've been talking about with the schema coordinates is potentially expanding them, and what the other use cases are.
B
So I just wanted to raise that I've been talking with Danielle Mann from the Apollo team, and we were discussing different ways that it could be used. I just wanted to highlight this comment that I have on pull request 746: basically, there's a few different ways that Apollo use paths like this already; they're normally doing it in HTML.
B
Obviously, so they have, like, the different types that are clickable, with an arrow between them, but I quite like the idea of using the greater-than symbol to say, like, you know, follow this path. We might go with more of a dash-greater-than or something, but we can use it for things like indicating what the history is through the GraphiQL documentation sidebar, for example; that way you could permalink an entire stack of documentation.
B
You
could
also
use
it
like
when
you're
deprecating
a
field,
maybe
you
deprecate
a
root
level
field
and
say:
instead,
you
should
request
it
through
this
path
through
the
graphql
schema.
So
that's
another
idea
and
we
could
have
graphical
automatically
even
expand
that,
for
you
potentially
there's,
also
ideas
of
yet
linking
between
documentation
of
various
things
and
showing,
where
particular
fields
and
types
can
be
accessed
from,
and
we
could
even
use
it
for
things
like
tracking,
like
usage
characters
of
various
parts
of
the
schema.
B
So
this
is
all
things
that
extensions
of
the
schema
coordinates.
Rfc
could
be
used
for
if
people
have
more
ideas
of
where
this
could
be
used,
now
would
be
the
time
to
raise
them
so
that
we
make
sure
that
whatever
syntax
we
use
in
schema
coordinates
is
useful
for
these
other
purposes,
and
if
it's
not,
if
that,
isn't
a
problem
and
it
shouldn't
stop
schema
coordinates,
but
it
it
does
help
us
to
decide
between,
for
example,
doing
the
the
query.business.id
versus
the
query.business
parenthesis
id.
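As an illustration of the idea, a path could be a separator-joined chain of coordinates. This is purely hypothetical syntax from the discussion (the separator itself, ">" versus "->", is exactly what is undecided):

```javascript
// A hypothetical documentation "path" built out of schema coordinates,
// e.g. for permalinking a trail through the docs sidebar.
const path = 'Query.business(id:) > Business.reviews > Review.author';
const steps = path.split('>').map((step) => step.trim());
```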
M
I think probably the RFC that we were just looking at, because that's where the theoretical discussion of the syntax has mostly been taking place. But I'm wondering if it makes sense to start, like, a new place to talk about schema coordinates 2.0 or whatever the syntax is, to avoid, you know, riders on my bill, basically. I assume it's going to be, like, a separate spec proposal, but obviously, as Benji says, there might be some semantics of the future imagined 2.0 thing that changes the syntax...
M
...that we have in the current spec. So yeah, keeping it in the RFC proposal sounds good, but let Benji decide, because he proposed this.
B
I'm happy with it just going on in the RFC proposal for now; I think that's fine. Should it be the issue or the pull request?
M
Oh yeah, I was going to say probably the issue, because that's where the majority of the discussion around this is taking place.
E
It's fine for that to happen there. This is one of my gripes with GitHub: it's oftentimes hard to find active discussion on PRs that have been merged.
E
So, barring anywhere else, this is a fine place to continue discussion. If there's a specific topic you want to dive into deeper, which is how schema coordinates could evolve, then maybe it's worth opening a new issue and just kind of referring back to these things, so that we have a non-closed place to continue discussion.
G
I just wanted to provide some examples where I think this would be useful for us, and indeed, as a preview: we have a way to track usage of different schema elements to support our deprecations, and this would have been fantastic.
E
That's great. And Evan, I think you've got opinions too about advanced paths into the schema, as opposed to just coordinates.
I
Yeah, we have paths internally, and as part of our public tooling now, for informing clients about deprecated usage, deprecated API usage.
M
You want me to, like, commit everything and just have a stream of updates? That makes sense. I also wonder if there's, like, I don't know, a CodeSandbox or something, a way of, like, linking graphql.js and whatever. Yeah, per your suggestion I'll just commit everything, so they can review line by line as we're going forward. Sounds good.
F
You can reach me, and okay, like, yeah, we can even schedule a call or something if it would be a particularly complicated problem. It's just, like, I'm personally not sure what to put in this utility function, or how to name this thing, or how it should look; that's why I'm, like, suggesting making it iteratively, and that'll save us.
M
Okay, awesome, yeah, thanks. We'll do the mind meld required, then, for this, and we'll try and get something out. Appreciate it, and yeah, thanks again to Benji in particular and everyone else who has helped with this and pushed us along. Thank you.
E
Yeah, thanks for your hard work on this. Okay, we've bounced around the agenda a bit; I think we have one left, which is query ambiguity.
B
I was going to say I'm happy to punt this to next month if you want. Basically, I did a big topic on query ambiguity before, and I basically made all of the spec edits I possibly could, so that we could see which ones would make the most sense, I think, consistently.
B
One of the issues that I found is the use of the term "query error" in the spec, where the error can happen in a mutation or in a subscription, as well as a query; but it can also just happen as part of validation of the operation document itself, which, again, we call "query". So it's the use of this word "query" to describe so many different things. It makes the text less clear...
B
...about what we're referring to. We do, I believe, have the concept of a GraphQL request, which is where we actually issue, to, you know, the graphql function: here is the schema, here is the document, here are the variables, here's the context, go and do the thing. I believe we call that a request, so I'm suggesting that we rename "query error" to "request error".
B
This may not be sufficient in itself; we might want to actually go through and rewrite a few other things to use the word "request" rather than "query", but we can pull that from the original thing. I just wanted to test the water before I put any more time in here, to see what people's general thoughts were on this.
E
It makes sense to me. I might do an annoying thing and suggest that, while we've opened the can of worms, we should look at how we describe errors in general.
E
This is kind of related to the idea of a glossary, which I know we have an open action item to investigate; but I think maybe what we're missing, barring having a full glossary, is just, like, literally one place where we say: in GraphQL there are two different kinds of errors that we describe, and here's what we call them. So, just a mini glossary specific to the various kinds of errors, because we also have validation errors and field errors, and they all kind of have a slightly different behavior.
B
Me, I'm going to have to watch that back in the Zoom recording, because I was trying to write it down whilst also listening, and I missed a little bit of it, I'm afraid. But yes, that sounds desirable, and I imagine there's going to be a whole bunch of those as I try and solve query ambiguity, so I should prepare myself.
E
I'm happy to help with that, but if you want to take a crack at it as part of this, then that would be great.
B
Would it make sense to do it just as a separate thing, to try and keep things small and easy to advance?
E
It could. I'd just like to... I'm going to tag this up to be a stage one RFC, just because, I think, everyone... this is clearly an extension of the query ambiguity thing that we all agreed was a problem we're solving, so I think that qualifies it for stage one.
E
I would at least like us to investigate it. If it's going to end up being, like, a pretty major, complicated change to overhaul how we describe errors, then I'm totally on board with it being a follow-on, separate RFC; but if it ends up being actually kind of small, and it's just a way to describe what a response error is...
E
Evan, can you describe out loud what you typed in the chat?
I
Sure, yeah. So a lot of schemas have, like, an error type in the schema: an object type that is returned from mutations, for things that are not, like, at the GraphQL layer, but are, like, you know, some business validation failed or something like that. And I've seen, like, everybody calls them something slightly different.
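For illustration, one common community pattern for the domain-level errors Evan describes is a union on the mutation payload (a sketch of the pattern, not a spec recommendation; all type names here are made up):

```graphql
type CreateBusinessSuccess {
  business: Business!
}

type BusinessValidationError {
  field: String!
  message: String!
}

union CreateBusinessResult = CreateBusinessSuccess | BusinessValidationError

type Mutation {
  createBusiness(name: String!): CreateBusinessResult!
}
```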
I
Sorry, I know that's, like, piling onto the random additional work that Benji didn't really sign up for here, but if we're doing errors, that's the thing that bugs me about errors.
E
That one does seem maybe tangential enough that it deserves its own RFC, but I'm all for listing all of our gripes with errors so we can improve them. I think the spec could be much more clear about how we talk about errors.
B
Yeah, I think that's an interesting idea, and there's a real mixture of opinions in what I've seen about how you would model errors as part of, for example, unions on the payload types of mutations, and things like that. But, in my opinion, I don't think as an ecosystem we've got to a point where that's sufficiently advanced that we've determined what that non-normative note in the spec would be about that.
F
So I have a little bit of context on this, because I mentored Carolyn for Google Season of Docs, and she collected a bunch of questions that the community has, and this was one of the questions: how to do validation errors. So yeah, it's definitely not part of Benji's effort to clarify the word "query", but as encouragement...
F
If
some
somebody
village
is
interested
in
clarifying
this
non-normative
knowledge
can
look
like
our
errors
are
intended
to
like
fake,
unexpected
errors
and,
if
somewhere,
a
part
of
your
domain,
and
you
have
like
a
field-
and
you
want
to
have
some
selection
on
them-
you
should
define
them
in
schema.
Even
that
sentence
will
clarify
things
for
a
bunch
of
people,
because
right
now,
they're
like
wondering
why
why
we
don't
have
for
errors
and
why
you
cannot
predict
error,
content
and
stuff
like
that?
So
definitely
not
part
of
like
benches
ever.
F
But
if
somebody
else
willing
to
do
it's
like
important
question
for
community
not
to
go
on
like
on
the
site,
how
you
should
do
it
like
you
and
your
also
interface
or
something
else,
but
just
to
say
like
we
know
this
is
an
issue.
You
know
that
like,
if,
if
you,
if
something
if
errors
say,
is
part
of
your
domain,
you
should
define
them
in
schema,
not
rely.
E
...having somewhere where we just, like, basically the first place where we say "response error", we just kind of describe what that thing is. But otherwise this change looks really good; so, stage one at least, and we'll go from there.
I
That's it from me on our agenda. Yeah, I've got to run now, but I think we're almost done, we are done with the agenda anyway, so have a great day, everybody.
E
Thanks, everybody; thanks, everyone, for coming. We got through a lot today, so thank you, and I look forward to seeing everybody next month.