From YouTube: GraphQL Working Group - 2022-09-01
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
How has Thursday been unfolding for all?
B: Awesome, all right: well, let's get things kicked off since we have most folks here, and anyone who gets to join late we'll just welcome in. So thanks, everybody, for joining the very first day of September, the September GraphQL Working Group meeting, for which I sent an invite out. Hopefully you all got it; if not, I'm posting the link in the chat. First thing, as per usual, of course, since we're all joining: intros.
B: We will go through the order as it shows up in the agenda, because doing it in the order of Zoom attendees is always a little bit of a mess, and I'll leave myself on top since I'm the one talking first. So hello, everybody, my name is Lee.
B: Awesome, welcome all. Mike, I think you're the only one that I don't have on my agenda attendees yet. ("Yeah, I'm doing that now, sorry; PR'd, and I'll get it merged.") Thanks, everybody, for doing quick intros, especially since we've got a couple new faces. Welcome, everybody. Benji, I see, is already hot on the tail of the notes and has his beautiful background projection, so we can watch him do it as he does it.
B: Let's take a quick look over the agenda, make sure we're covering everything we want to today and nothing's missing. I added a couple pieces to the agenda just within the last hour, so if anybody else wants to tack anything on, let me know now. As usual, we'll take a quick look over action items, see if there's anything we can close out, and just remind ourselves what's on our plate.
B: Benji very helpfully did an update to how we talk about meeting cadence in the TSC, which is on theme with updating our meeting times in general, which I had a proposal for a meeting or two ago (I think one meeting ago), and I have some updates on that.
B: Looking for some feedback on expanding the subtype RFC that Yaacov has been working on. And then I noticed that our agenda is a little bit thin, so I figured if we have time, perhaps it'd be interesting to pick up the schema metadata / applied directives topic of conversation, as opposed to one specific RFC around that. That's a decent amount of things. Anything else anybody would like to talk about today that we don't have on this list?
B: Let's then take a quick look at any action items. I'm just going to pop open these links here to see... we have none tagged ready for review, which is okay, but we do have a couple that were opened from the last meeting that might be worth just double-checking on and seeing where they're at. One was Benji's.
G: Well, if we do do that discussion later that you mentioned, the schema metadata one: it is one of the things that I've posted near the bottom of that discussion thread. I've talked about a few additional use cases that I think the other solutions to metadata don't solve and this structure does better, but other than that, I think, you know, we've already got quite a few use cases for it documented.
B: Okay, I'll leave it open for now, just as long as that's kind of in flight. Thank you for the update. The other was polling folks on what they feel the domain of the spec should cover; this was based on a conversation that Roman had started.
A: Yeah, I put a note there. I asked (thank you) for more time on this, to figure out the next step. I feel like the action item says "what do people think it is now", while I think the problem is what it should be, and I want to come up with more solid arguments and a presentation about it. If you ask me what it is now, I will give an answer that I will not be happy with myself.
B: Yeah, no rush at all. Just thank you for writing in the update yesterday; that's very helpful. Awesome, that is all we've got floating right now. Oh, I noticed that our projects got kicked into Projects Classic. Thanks, GitHub, for renaming your projects.
B: I don't think there are any here that are fully closed. We have some actions that are very old, so we may at some point need to go digging through some of these. I think we have one that's almost six months old, but we'll leave it.
B: Okay, Benji, I'll hand it to you to, I guess, probably just update everyone. Let us know how you want to use the time to talk about the change that you've made to the TSC document.
G: Sure. I don't think this actually needs very long; it's mostly just to let the TSC know, though I think most of the TSC noticed it today or yesterday. So, as you proposed changing the meeting times, Matt pointed out that that doesn't mesh great with what our TSC wording currently was.
G: So what I've done is tried to make some minimal changes to the TSC document that allow us to manage the fact that meetings might not be monthly (they might be more frequent than monthly) while still allowing TSC members to be treated as attending members. There's been some discussion on that, and I think it's already been approved by a significant number of the TSC: one, two, three, four, plus myself; five of the TSC.
G: So I think the plan was we'll leave it open for review for the next 72 hours, and then we'll go ahead and merge it, I guess.
B: Yep. Thank you to you both: Matt (still not quite on the call yet, that's okay) for catching it and doing the earlier review, and Benji for making the change. It's the right change, and it's also a good segue for me to give some updates on those meeting time changes.
B: I'm sure most of you have seen the discussion thread. I had a chat with Elisa, the PM, and she actually framed a different problem, which is that we have kind of a mess from a calendaring point of view. Luckily, this meeting is more or less entirely operated through our GitHub markdown files.
B: But there is a calendar invite (I think there are probably multiple calendar invites); they've been floating around different calendars, and it's become a bit of a disaster, and anytime Elisa wants to go through and just get an understanding of what's happening when, she gets confused. So it was opportune that she framed that problem she was trying to figure out how to solve...
B: ...at the same time that we've been talking about updating things. So there's a link in the agenda doc to a Google sheet, where I'm just trying to get a full list of all of the events that happen over the entire domain of the GraphQL Foundation.
B: And when have they been happening, and what's a new cadence that we can align them to. There are subtle changes to the proposal that I originally had, which are worth talking about here, just to make sure that if there's any significant feedback, I fold it in. The most significant change for this crew is:
B: I think it gets tricky where the bi-weekliness shifts based on whether months have four or five weeks in them, and that can get kind of messy, especially when the board meeting is monthly, and I was worried about those colliding with each other. So instead I have a setup where we'll keep the first Thursday as we have it, and then we'll just add the third Thursday as well. That way we've got a solid two a month, which means half the time...
B: ...it'll be three weeks between that one and the next one, but that'll keep us on the monthly cadence. We're also shifting the time, sort of splitting the difference between what I proposed before (which was much earlier) and the time slot that we have today, which I think helps it line up a little bit better for a handful of folks who gave me feedback. So that would be 10:30 a.m. to 12 Pacific time, which means it would start a half hour earlier than this call usually starts, and it would end one hour earlier than this call usually ends. So, fairly close.
B: Hopefully that means minimal re-calendaring for everybody who joins this call on a regular basis. And then for the Asia-Pacific working group, we're adding that at the end of the day Wednesday, North America time, which would put it conveniently also on Thursday morning for Asia-Pacific. So you can kind of think about the working group as a Thursday morning call, depending on where you are (US or Asia-Pacific), or a Thursday evening call if you're in the EU. That made sense in my brain.
B: ...so that we don't end up in a situation where we have them colliding with each other or on haphazard days. But that is net new. So, first: feedback on the cadence of this meeting; do people feel like that's good, any feedback there? And then second: feedback on this subcommittee sort of reservable-hour-blocking kind of scheme.
C: So the one thing that would be good is to really have one calendar with all the events that we have.
B: Yeah. At the very least, our Zoom account will be sort of the master in command of all of these, and then anything that is publicly visible (which I think is everything except the board meeting and my 1:1 with Elisa) will be on one calendar that everyone can see, and hopefully that makes everything much easier to follow along with.
G: Lee, if I'm reading your spreadsheet correctly, it does seem to indicate that you've put the same time for the two working groups on the first Thursday and the third Thursday. But what you were saying before suggested that they would be at different times; or am I misreading it?
B: Cool. How about this subcommittee meeting scheme of having sort of a blocked-out chunk of time twice a month that subcommittees can reserve hour blocks out of?
C: Yeah, I like that, because I missed... I wanted to attend a couple of them, and then it somehow always slipped, and it would be good to have it, yeah, more sliced into one block there.
D: I'm confused a little bit about the spreadsheet, because I looked at it at the last minute, but I would like it if you post an updated version inside the discussion; I will answer it there. One thing: I'm always for moving it earlier, because we've shifted it; it started at 7 p.m. my time initially, and now it's like 9 p.m. ...
D: ...my time, which is a bit late, and every time we shifted it later it took more energy to connect. Since now we have a proper meeting, shifting it earlier and finishing one hour earlier means that for me, instead of 11 p.m., it ends at like 10 p.m., which is way better for my sleep schedule. So I'm definitely for it. As for the part of the proposal about subgroups...
D: ...I will answer asynchronously if you update it, because initially I looked into the discussion and it's not there yet. So, yeah.
B: Yep, I appreciate that; this is how the process is, and you have not had time to dig into it yet. Since everyone is in sort of broad agreement that this seems correct, I'll get this into the discussion thread.
B: My sense is that, aside from the board meeting timing, which I need to dial in with Elisa, everything else here looks good, and we will open up sort of a reservation of time slots on these second and fourth Thursdays and use those for the subcommittees.
B: I think my preference would be to prefer the second Thursday, because it kind of slots right between the first and the second working group meeting. So you can kind of do the working group, have some thoughts, go to the subcommittee, have something come out of it, and then come to the third one with something to talk about. But we'll have both; that way there are ample times to choose from.
G: Slightly tangential, but I was talking with Elisa about trying to create just an index somewhere of the working groups that we have. I suggested that maybe the actual github.com/graphql would be the right place: add a README there, and it can link out to the various different working groups. That might be a good place for it.
B: Well, that's a good idea, yeah. My original thought was to put that at the front of the working group repo, just listing out that there are subcommittees and here's where the repos and timings are. But certainly having a directory at the top level is a really good idea; I like that.
G: Cool, yes, sorry about the delay there. So this one's quite straightforward. Well, actually, it's less than straightforward. In the spec we detail that the response can contain extensions; I think we detail that errors can contain extensions, and various things like that. What that's good for, when we do that, is it means we can reserve the top-level space.
G: So we can add any other additional things that we need to in the future without introducing any problems or conflicts, and people that use GraphQL can add extra features inside of extensions; that's their place to do that. So, for example, things like persisted operations could be identified by something inside of extensions without having to have it specified by the spec.
G: However, when it comes to requests, we don't actually strictly document the shape of the request. What we do is document what the request must contain, but we don't say how it has to contain that. And the reason that I bring this topic up is that we've been trying to formalize the GraphQL-over-HTTP specification, and for that we can say, well...
G: So I think that we probably should. So I've added a small paragraph that indicates you may also find that there are extensions. I'm not sure whether this does belong in the spec or not, so I was really seeking an opinion on it and, if we feel it should live there, what exactly it should say.
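To make the shape under discussion concrete, here is a minimal sketch of a GraphQL-over-HTTP request body carrying implementation-defined data under a top-level extensions entry. This is illustrative only: the persistedQuery key is the Apollo-style convention mentioned in the meeting, not something the spec mandates, and the hash value is a placeholder.

```javascript
// Sketch of a GraphQL-over-HTTP JSON request body that carries extra,
// implementation-defined data under a reserved top-level `extensions`
// entry. The `persistedQuery` shape is the Apollo-style convention
// mentioned in the discussion, shown purely as an illustration.
const request = {
  // `query` may be omitted when the server can look the operation up
  // from the persisted-operation hash instead.
  operationName: "GetUser",
  variables: { id: "42" },
  extensions: {
    persistedQuery: {
      version: 1,
      sha256Hash: "placeholder-hash" // hash of the full operation text
    }
  }
};

// A lenient server reads the entries it knows about and hands anything
// under `extensions` to extension handlers (or ignores it), rather than
// rejecting the request for containing an unknown top-level key.
function knownEntries(body) {
  const reserved = ["query", "operationName", "variables", "extensions"];
  return Object.keys(body).filter((k) => reserved.includes(k));
}

console.log(knownEntries(request)); // ["operationName", "variables", "extensions"]
```

The point of reserving extensions this way is exactly what G describes: the spec keeps the rest of the top-level namespace free for its own future use, while implementations get one well-known place for their extras.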
B: Because the purpose of explicitly defining extensions in the other places was that we were already talking about a data structure, and we were specifically concerned with naming clashes within that data structure. The spec defines the data structure of the response, at least at a high level: there is a map, there are key-value pairs, and there was the potential problem of, if I wanted to add something else into the response...
B: ...where would I put that, and if I named it something that collided with the spec in the future, how do I circumvent that problem? But execution does not describe a data structure; there's just, as you mentioned, a list of things that you need: you need the schema, you need the document you're executing, and then optionally a couple other things. Because of that, we don't necessarily need to talk about it in terms of a data structure.
G: So the main concern, and the main reason I feel that maybe we should put it in the spec, is that when we come to document, for example, GraphQL over WebSockets, or GraphQL over other protocols, they will likely also need to specify extensions. I mean, if you're doing persisted operations, for example, you need to do that over whatever channel you are issuing those requests on, and I'm concerned: I don't want each of those specs to each be responsible for defining this same concept...
G: ...over and over again; we need kind of a central location for it. But equally, I agree with you, because there is no structure defined for requests. So that's what makes it a little bit more tricky.
B: Is this maybe a non-normative note? Because we have a note in that same section that you're editing that mentions the fact that GraphQL doesn't require a specific serialization format or transport, and that those are chosen by a service.
B: Is it worth adding a note that says something similar to what you're framing here: that the list above is the requirement for GraphQL to function, but it's not the limit; you could add additional, implementation-specific values to a request if necessary? And maybe that's a good place to mention the potential problem of a collision with a GraphQL-required field.
A: Where can the collision be? I think it's inside the request, right: it's where now we have variables, the query, and now extensions. It's on the very top level, this field. I like it, actually; it seems like the equivalent of, you know, some headers in HTTP: send whatever you want and get whatever you want.
A: Clients and servers get together, use this additional field, and kind of standardize it, so that clients and client tools know about it. No surprises.
E: I do think this is something that is important to specify somewhere, whether it be in the GraphQL spec or the HTTP spec; I don't have a great handle on that.
E: The only thing is, Apollo started using this sort of informally, as an extension of the spec, for these persisted queries about four years ago or something. And an actual challenge that we did have, which made it hard to have clients optimistically start using these extensions and then have them ignored by the server, is that there were actual servers that, for POSTs for example, would parse the JSON and say: oh, unknown...
E: ...thing. And again, I'm not positive whether this needs to be HTTP-level or spec-level, but servers should not return an error if there is an extensions field at the top level, whereas you can imagine wanting to return an error for other things: you know, a typo, an operation name with an underscore or something; that's an error, it's a typo. But yeah, preventing servers from rejecting things with extensions is, I think, the important goal here, not specifying what they should do when they get it.
D: Extensions are important and are actually spreading, because initially we had only one extensions, in the response; then we added extensions in errors; and now we're adding extensions inside the stream and defer payloads. So now, even inside the spec itself, we have up to four places where extensions are used. So I would suggest making it a proper section and describing it there; maybe that's the middle ground.
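As a sketch of the "four places" D is counting, here is an illustrative response plus a follow-up incremental payload, each carrying an extensions map. The incremental-payload field names follow the in-progress defer/stream proposal and are assumptions, not settled spec.

```javascript
// Illustrative sketch of where `extensions` maps can appear, per the
// discussion: the response itself, each error, and (in the defer/stream
// proposal) each incremental payload. The `incremental`/`path` field
// names follow the in-progress proposal and may change.
const initialResponse = {
  data: { user: { name: "Ada" } },
  errors: [
    {
      message: "Field unavailable",
      path: ["user", "age"],
      extensions: { code: "UNAVAILABLE" } // per-error extensions
    }
  ],
  extensions: { traceId: "abc123" }, // top-level response extensions
  hasNext: true
};

const incrementalPayload = {
  incremental: [
    { path: ["user"], data: { bio: "hello" }, extensions: { cached: false } }
  ],
  hasNext: false
};

// Count the `extensions` maps appearing anywhere in a value.
function countExtensions(obj) {
  if (obj === null || typeof obj !== "object") return 0;
  let n = Object.prototype.hasOwnProperty.call(obj, "extensions") ? 1 : 0;
  for (const v of Object.values(obj)) n += countExtensions(v);
  return n;
}

console.log(countExtensions(initialResponse) + countExtensions(incrementalPayload)); // 3
```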
B: Just in order to do a request, these are the total set of things that need to exist, which maybe means we could be clearer about what is necessary there. But maybe the simplest possible change is to add just another thing to this list...
B
That
just
says
an
implementation
defined
set
of
extensions
or
transport
specific
data
as
necessary,
as
well
just
to
david's
point
making
sure
that,
if
a
service
sort
of
rejects
something
be
like
wait,
this
seems
not
aligned
with
this
piece
of
guidance
that
we
could
accept
implementation,
specific
additional
information.
I
think
the
common
one
that
usually
sticks
out
to
my
mind
is
headers.
So
people
have
asked
a
bunch
of
questions
about
headers
before
like
is
it
okay
to
add
custom
headers
to
an
http
request
that
includes
that
runs
a
graph
skill
execution?
G: Okay, so going forward, it sounds like it might be a good idea to add an entry to the above list that speaks more generically about extensions, rather than using the backticks and using the word "entry", and then potentially adding a small subtitle that talks about why there would be extensions. Also, I think this is why I didn't want it to be a non-normative note: specifying that it should be a map.
B: Well, I mean, I brought up headers kind of intentionally as a counterpoint to the map. Requiring a map implies that you're going to capture this as a JSON body or something as part of the submission, which may make sense; but maybe it's a multipart form, maybe some of this data is in a header, maybe it's something that comes out of a cache and gets side-loaded.
B: Yes, we use a map for the entirety of the response, and in some cases for input, for variables, but I guess I'm missing the value: it seems like it adds restriction without a lot of benefit, because if we define that it must be a map, that means you cannot use a header to supply some of this metadata, which feels like an unhelpful restriction.
A: I think it's okay: if anybody wants some extensive structure, put it under a well-known key inside the map. I think it will also be easier for GraphiQL and other tools to show it. At least at the top, we say it's key-value pairs; beyond this we don't care, and they can actually reserve a grid or something in the UI to show this stuff. It will be much more challenging if we don't specify any structure at all.
H: Like, headers don't work; content types don't necessarily work like that. That seems very, very restrictive, especially for things like extensions. I feel like headers...
B: That seems reasonable, because I do agree that that section feels underspecified, and it is in some cases unclear: the schema, for example, is provided by the server, not by the client, and they're just sort of intermixed, as in "this is the full set of things that you need to do an execution". We could be clearer with that.
E: I'll just throw out there that a challenge with using headers is that there's typically a length limit on the max size of all the headers. So if you're thinking about putting long things in them, headers can be a challenge. And there's also the CORS issue: if every little feature you want to add involves a bespoke header, then, if you want to use it in a web context, it ends up depending on whether or not you've configured your server to accept all those headers.
B: If the spec made it not possible to use a header for a feature, that seems like it's not adding a lot of value, and it's removing value. Your guidance here is sound, but there are definitely cases where a header could make sense, and so I think the goal that I want to uphold here is that we put as minimal restrictions on the transport as possible.
B: Benji, it seems like you have two possible paths, and it's probably up to you to decide which one you want to take. One is sort of the minimum viable path of just mentioning the fact that such an extension could exist and to be aware of it; the second is the more maximalist path of improving this section and doing a better job of defining what the actual, in-total input to an execution is, in the form of a struct. And once it's a struct, then there's this question.
A: I'm afraid that, just making it a separate section, there is nothing there except these two sentences. Essentially it says it's free-form key-value pairs, it's optional, and there's not much more to describe.
D: We can describe fields as optional, not as required; the optional fields that we know for sure can be used, like operationName or query, or just not make them optional.
H: Hi. So I have spent some time over the past couple weeks getting the spec PR up to date with all of our previous discussions, so I think that is ready for some thorough reviews.
H: Also, the graphql-js PR that has been open for a while has been merged to the main branch, so that's on, like, the 17 alpha.
H: All the changes are behind the experimental execution functions, so not the standard execution. But yeah. And there was another issue that Benji had opened that I wanted to discuss a little bit; I'm going to share my screen, one second.
H: So what's happening here is: we have previously discussed nulls bubbling up from inside of deferred or stream payloads, and this is kind of the opposite situation. You have some fields that are being deferred or streamed, and, kind of adjacent to that...
H: ...there is a field that could encounter a non-null error, which bubbles up to the parent of both this field and the fields that are being deferred or streamed, causing the place where you would be setting these deferred values to have become nulled out. Because the defer execution is happening in parallel, it's not cancelled out by this. So I kind of want to get thoughts on this.
H: I wrote up a pretty quick implementation of something that would strip out these deferred fields, and it's a little bit clunky, in my opinion. Basically, what has to be done is, at least in graphql-js, we kind of don't really know...
H: ...we know that a non-null error happened and it has bubbled up somewhere, but the mechanism we're doing that with is just throwing and catching JavaScript errors, so we don't really have a mapping of "this error corresponds to these fields that have been nulled out".
H: We will look at that response and basically go through all of the executions that are happening for a defer or stream, look at their paths, and see if their paths have been nulled out on the response. If so, and it's a stream with an iterator, we can call the iterator's return; otherwise, we can discard it, and that ends up with none of these being sent to the client.
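The filtering step H describes can be sketched as follows. This is a simplified illustration, not the actual graphql-js implementation: given the response data after null bubbling and the paths of pending defer/stream executions, discard any whose path now points into a nulled-out region.

```javascript
// Sketch of the filtering step described above: after a non-null error
// has bubbled up and nulled out part of the response, discard pending
// defer/stream executions whose path now points into (or below) a
// nulled-out region. Names are illustrative, not graphql-js internals.
function pathIsNulledOut(data, path) {
  let node = data;
  for (const key of path) {
    if (node === null) return true; // an ancestor was nulled out
    if (typeof node !== "object") return false;
    node = node[key];
  }
  return node === null;
}

function filterPending(data, pending) {
  return pending.filter((p) => !pathIsNulledOut(data, p.path));
}

// After bubbling, `hero` was nulled out, so the defer under it is
// dropped, while the unrelated stream survives.
const data = { hero: null, reviews: [] };
const pending = [
  { label: "heroDetails", path: ["hero"] },
  { label: "reviewStream", path: ["reviews"] }
];
console.log(filterPending(data, pending).map((p) => p.label)); // ["reviewStream"]
```

Walking the path from the root is what makes "bubbled" nulls visible: if any ancestor along the path became null, every deferred payload under it is unreachable and can be dropped before it is ever sent.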
C: Didn't we initially say that these defers are null boundaries, like we don't care when something happens within those boundaries? Now it's happening on the main request, you say, and that leads to tasks being cancelled.
H: Yeah, here's the example. It could be on the main initial payload; it could be on a defer or a stream that has kicked off other, more downstream defers and streams.
E: There are sort of two issues here. One of them is that you can actually send some data to the client and then, five seconds later, discover that something below it, you know, nulled at the wrong time, and that thing was actually sent; the client is going to have to do some high-level understanding of "this thing that actually got sent has been nulled", and that is basically unavoidable.
E: I think the other question is what happens if those two things happen close enough in time that you would be sending them in the same batch, essentially, or in parallel. And so this is more about...
E: ...should we try to do the best that we can with these conflicts when we're actually setting things at the same time? As opposed to (even in the future) the client still needing, no matter what, to be able to understand that a thing that was sent might now have to be nulled out.
C: ...because if we spawn something from another request, like from the main request, if we spawn a task from that, we cannot send this task before we send down the main task. So we are guaranteed to find this null violation on the server, yeah.
C: No, no; even if they are happening at the same time, the server has to hold them until the main request into which they are patched can be sent down.
E: I don't think that's the case. I mean, the very simple case here is basically what we have on the screen. Actually, I think what's on the screen would be a little clearer if the non-null on numbers was inside the brackets. The point is, that field is not deferred at all, or streamed; it should be sent at the very beginning, but it'll have to go boom...
E: ...if numbers returns bad stuff. I think this would be clearer with defer than with stream; I find it a lot harder to wrap my head around stream.
E: I think I was confused because I didn't understand what you were saying about it being a null boundary. That is a decision I had not seen.
C: Yeah, because at one point we said so: we could have sent the main part down, and now, in the deferred task, we actually find out that we have a null violation that would null out the original part that we already sent to the client. So we defined that the defers are actually null boundaries, and that means we don't have the problem and we can fix it.
E: The null boundary was discussed on which one? Yes... I will ask: what is the status of the overall GraphQL proposal about client-controlled nullability? Because it seems a little weird to put in what is essentially client-controlled nullability as a sub-feature of defer.
F: Yeah, I mean, just one thing to mention is that, as I posted earlier, we had this discussion about enforcing delivery order of payloads, and I think at the time we mentioned that it would be much easier for clients if we did enforce that, and I think this is one example. What David's mentioning is definitely one example; I mean, you could get a payload that would eventually have to be nulled out, and you would have to manage that. But there is...
F: We did raise the possibility, I think, at the time, of one day relaxing that requirement, or maybe adding an argument to relax that requirement, and then we would get into that situation; and possibly we might in the future. I guess if you would use such an argument to allow out-of-order delivery...
F: ...then this filtering might be just unnecessary, because you have other problems. But, you know, as we have it now, I think it's a great idea.
C: And out-of-order delivery doesn't nullify that, right? Because out-of-order delivery we considered for lists, not for these defer boundaries. So we would still have an order; out-of-order delivery is not a problem for the client to patch something in, we just have to...
F: ...tell it, yeah. No... well, actually, I'm not sure, you know. I think in my head we didn't rule it out completely, even in terms of defer, but maybe we did. You know, again...
B: Okay. I wonder... because I noticed in the thread there's this proposal to wait to begin the stream until it's known that it's safe to do so, and then there's a concern about that being a performance burden. But what actually matters here is the perceived result, and if there are optimizations, then we can always do this sooner. An equivalent...
B: ...is: if you have GraphQL caching, it may be the case that you've run all these executors long in the past and you've cached the results, and so you've already actually done all this work before. The spec says these steps must occur in this particular order, and you're just, you know, recreating them; but to the observer...
B: ...it looks as if the algorithm was run in a particular order, and the result makes sense. So I wonder if the way that you get yourself out of this particular pickle is: for anything that is either streamed or deferred within a particular selection set, you first wait until you've completed the value of that selection set, so you know that anything that could have returned null synchronously, or in the course of asynchronously resolving that selection set...
B: ...you're now to the point where you've finished it and you don't have anything that bubbles, and that's the moment where you say: now you begin doing your defers and your streams. With a note that says you could start your stream sooner than this, if you want to, for performance reasons; however, the result would need to match this behavior. So, in the case that something does cause the selection set to be nulled out, you would want to cancel that stream and make sure that you're not sending any payloads.
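The ordering Lee proposes can be sketched as: complete the selection set first, and only once it finishes without a bubbled non-null error kick off the deferred work. This is a simplified illustration under assumed names (executeSelectionSet, fields, deferred are hypothetical, not spec or graphql-js functions); an implementation may start earlier as an optimization, as long as the observable result is the same.

```javascript
// Sketch of "complete the selection set, then start defers": the
// deferred thunks are only invoked once the synchronous/awaited part
// of the selection set has finished without a bubbled non-null error.
// All names here are illustrative, not graphql-js internals.
async function executeSelectionSet(fields, deferred) {
  let data;
  try {
    data = await fields(); // may throw when a non-null field errors
  } catch (err) {
    // The selection set was nulled out by bubbling: never begin the
    // deferred or streamed work, so no stale payloads are sent.
    return { data: null, pending: [] };
  }
  // Safe point reached: nothing can bubble anymore, start the defers.
  return { data, pending: deferred.map((start) => start()) };
}
```

An implementation that starts the deferred work earlier (for throughput) would then need the filtering step discussed above to cancel it whenever bubbling invalidates its path, so the client-visible result matches this ordering.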
H: But with graphql-js being both, like, a very close implementation of the spec and a very widely used library, would it be weird if graphql-js wasn't doing what the spec described, and instead did this filtering for performance?
B
I don't think it's weird, actually. What might be... because I'm thinking of cases where it's not super intuitive which behavior would actually yield the best server-side performance, and a server-side author might actually want control over when these streams begin.
B
Like, maybe it's the case that the odds that one of the neighboring fields throws is fairly high and a lot of these streams get cancelled; or initializing the stream is just really expensive, and the whole value-add of doing the stream, deferring the payloads, just didn't materialize, because it was the initialization of the stream that was expensive rather than each payload.
B
B
Path, but allowed to de-opt, or vice versa, which would be kind of a nice way to illustrate
C
B
the spec behavior and the optimized behavior. But even if you didn't, even if you just did the optimized behavior, it wouldn't bother me that much that there was a deviation, as long as it aligned to something that was mentioned in the spec, and then you could sort of comment there and say so.
E
The issue, clearly, because I actually think I'd been in this discussion and, even having read this before, I was very confused: we are apparently all in agreement, which I hadn't understood, that we have a null boundary, in the sense that things happening inside the deferred stream do not affect things outside. You can null fields within the deferred stream without affecting anything about their siblings or their nieces or whatever. But this is about the other way around: this is about non-deferred nulling pushing down into defer and stream.
E
Is that accurate? Like that? Okay, I actually did not realize that. Yeah, I didn't realize the null boundary thing. I think it's a little... I'm curious whether folks doing client-side things were aware of that, like Alessia or anyone: essentially adding ignorable nullability to GraphQL for the first time is a little interesting. But yeah, David.
G
Think of it more as, like, a skip that's dynamic: if they can't populate it, then it's as if it was skipped. Those fields just don't get populated; they're not even there; they're not null. They just don't exist.
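The "dynamic skip" described above can be sketched as a small filter over pending incremental payloads: once a non-deferred field has nulled out, any deferred or streamed payload whose path falls under the nulled path is dropped rather than delivered. This is only an illustrative sketch; `prunePending`, `isUnder`, and the payload shape are hypothetical names, not graphql-js API.

```javascript
// Hypothetical sketch: drop pending incremental payloads that live under a
// path that a non-deferred error has nulled out. Paths are arrays of
// keys/indices, as in GraphQL error paths.
function isUnder(path, prefix) {
  // True when `path` starts with every segment of `prefix`.
  return prefix.every((segment, i) => path[i] === segment);
}

function prunePending(pending, nulledPath) {
  // Keep only payloads that are unaffected by the nulled subtree.
  return pending.filter((payload) => !isUnder(payload.path, nulledPath));
}
```

So if `["user"]` nulls out, a pending payload at `["user", "friends", 0]` is cancelled, while one at `["feed", 3]` is still delivered.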
C
D
H
D
Effectively, even if we wait for data to resolve, yeah, it's still not... even if we wait for the entire part, we still need to look into it. So we have the technical ability to track what was... The question here: previously, everything we tracked internally on errors we exposed publicly in the response, meaning we track the original path where the error originated, and we expose it as the path. So, back to Rob's PR: for performance reasons,
D
it definitely makes sense to not check the data every time, but for errors, to track what was blown away by each error. The question here: is it right that we should keep this as an implementation detail, or is it a separate thing, a separate proposal? Is it something that we're interested in exposing or not?
G
I think the client is capable of figuring out where that field is by doing the data traversal, which it will be doing anyway. So I don't think that adding the error blow-up path, as opposed to the error-originating path, provides much value, and I'd rather not add it unless we particularly think it would be beneficial. So I think: just track it as an implementation detail.
D
So, an example: the client gets a response and it has hasNext: true, so a next payload is expected, and the client needs to check whether a particular stream entry was blown away or not. So now the client needs to inspect, like...
A
A
C
Yeah, the discussion is more around whether the GraphQL engine would actually bubble that up further; and this logic, the client has to repeat it if it wants this behavior. But I think, anyway, if a user puts in the defer, this is a decision, and this has to be handled then; so you're anyway splicing your request. And if you look at the original implementation of that, the batch implementation: I mean, defer allows us to have that in one query, but originally these were two things.
A
A
D
D
H
I mean, I think that we could describe in the spec generally that, if you have this case, this stream shouldn't ever be returned to the client, because it's under a field that's been nulled out by something that hasn't been deferred or streamed.
H
We could... we don't have to describe it exactly the way I wrote it. The way that I wrote it, I think, is more of a consequence of that. This may not be true, but it seemed more difficult for me to keep track of, because the error path refers to the field that nulled out, and not how high it bubbled, and keeping track of that seemed more difficult, in graphql-js at least.
H
H
I still would just be hesitant to describe in the spec that you should wait to begin execution until the other fields have resolved, because I'm worried that someone who implements that is going to end up with defers stretching out the time it takes to do the whole request to many multiples of what it would have been without the defer, and...
B
B
How do we frame it in the executor definition? I'll look at the... right: subsequent payloads. Okay, the other idea is subsequent payloads.
B
At that moment you have the union set of every sub-path within... you have all of those subsequent payloads, and then you can cancel them and return up an empty set of subsequent payloads, because none of them apply anymore. And so, when you get to the top, all the subsequent payloads that you have, you've gotten from collecting that very first field on the query type.
G
So, from a spec point of view, I agree with you, Lee; that is certainly a way that we could express it. But from an implementation point of view: I actually implemented that in a personal project, and also implemented the other one, where we basically do it at the top level, and the other approach is, at least in JavaScript, significantly faster, because you don't need to create all these temporary objects, these tuples, that you're then throwing away; you can just do it at the root level.
G
D
B
There are also subtle performance improvements you can do for that, like doing late unioning: rather than actually merging lists at every particular step, treat the empty case as null, and when you're doing the merge, just keep a list of all of what you've gotten from the children, and then wait until you're at the very top step. You don't even have to do a flattened list; you could just sort of iterate the tree that you've collected there.
B
B
...those lists, when any one of those children have that. So in the case where the entire subtree has created no subsequent payloads, you've just noticed along the way that you've gotten a bunch of nulls, and therefore there's nothing to merge and you can just return up null. That may also have performance benefits; I don't know, I'm just kind of riffing on an implementation idea, but it's nice to say there's probably some variant there that allows you to use the non-mutable approach and therefore get this benefit of:
B
you know, "oh, actually, this thing hit an error scenario and should not be bubbling up any results", and therefore it also does not bubble up. Because I think the root of the issue here is that you are mutatively adding entries into a list before you know whether the behavior you're operating is going to successfully or unsuccessfully complete; and ideally, what you do is: once it has successfully or unsuccessfully completed, that is the moment when you merge it into that list.
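A minimal sketch of that immutable, bottom-up idea (all names hypothetical, not graphql-js API): each completed value reports its data plus the subsequent payloads collected from its subtree, and a subtree that errors and nulls out contributes nothing, so its payloads never reach the root list. For simplicity this treats every field as non-null, so any child error nulls the whole object.

```javascript
// Hypothetical sketch: bottom-up collection of subsequent payloads.
// Each child reports { name, data, errored, pending }, where `pending` is
// the list of deferred/streamed payloads its subtree produced (or null).
function completeObject(children) {
  const data = {};
  let pending = null; // "late unioning": only build the list when needed
  for (const child of children) {
    if (child.errored) {
      // The object nulls out: its subtree's payloads are discarded,
      // so they are never merged upward.
      return { data: null, pending: null };
    }
    data[child.name] = child.data;
    if (child.pending) {
      pending = pending ? pending.concat(child.pending) : child.pending;
    }
  }
  return { data, pending };
}
```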
B
B
H
I think maybe even just checking that immutable list at the place where the errors are thrown... maybe that could... that's doable; I'd have to look into it. Somewhere I do want to ask: do we definitely not want to consider the option that this could happen, and clients have to deal with it?
C
I mean, we are very well equipped to not send it down, and in this case we save bandwidth, because we already know it shouldn't be sent down right now.
B
Yeah, I think it's good to hold this. It also holds us to the sort of golden constraint that we've been trying to keep hold of this whole time, which is: subsequent payloads are definitely mergeable; like, if you've merged all the payloads in order, each net-new payload can also be merged. That drove the decision to order them in the appropriate way, and I think it's also probably the right way to think about what happens when the previous one had a problem.
H
Yeah, I'll look into that a little bit more, seeing if it could be resolved at the time of the error being thrown.
H
Yeah. All right, I also wanted to get clarification on what stage this proposal is at. I think that, actually, a long time ago we had said that it was stage two, but I think the spec PR still has the stage-one tag, so I'm not really sure exactly where we are.
E
E
For draft, it's "consensus that the solution is preferred via the working group": that seems probably the case. "Resolution of identified concerns and challenges": I don't know if that means that literally 100% of problems have to be resolved. Then "precisely described with spec edits", or at least close to that, and a "compliant implementation in graphql-js", which is good. So it seems we're at least very close to all those things being true, right?
B
I think that's right, yeah. It's totally reasonable for a draft to contain issues and errors that are getting worked out, where the result of fixing those errors is subtle changes. Obviously, the mutable-versus-non-mutable-list thing feels big in terms of the spec language, but in practice it's fairly minor relative to everything else; that definitely still feels like changes relative to a draft. So, okay.
E
B
Yeah, we would certainly want that for an accepted stage. Especially since this is by far the most complicated addition to GraphQL since its open-sourcing, which is part of the reason why we've been having sustained effort for so long, I imagine that there's going to be a lot of material outside of the spec to help people learn about it, and a little bit of stuff within the spec to help them learn as well. But okay, answering your question: it is, in fact, stage two.
B
I think you're right, Rob, that we did probably talk about that before and just forgot, and so this is more a matter of just getting that PR updated correctly. It's exciting! Congratulations.
A
A
Okay, but in the context of what Lee said, that this is the most complicated addition to the spec since open-sourcing it: I don't know if you already discussed this or not, but something occurred to me when I was reading the additions to the spec.
A
A
B
Yeah, that's a good point. If possible, I would prefer that this does not end up being a separate spec, because I anticipate a kind of forking challenge: basically from the point that this gets merged, every change to the main spec will always need to keep the asynchronicity behavior in mind relative to any other change that we make, and doing that all in one document will, I think, be a little bit easier.
B
B
From the rest: like, can you read most of the algorithms of execution and then just skip over the parts about stream and defer, and have it still really kind of make sense? In the case where a defer or stream isn't encountered, there's just a large branch of the algorithm that you can just kind of ignore, which might aid that. Right now that's kind of true, and, you know, maybe it could be a little bit better.
B
A
A
Nothing... yes, having, like the Relay spec, something on top of the basic spec, other specs that build on it, but living in separate documents. Okay, but...
C
E
I don't see a realistic way that this could be completely separate from the main spec; it really does change, fundamentally, how execution works. If we are concerned about making big changes, I would say... I mean, I think it would be nice to just get the whole thing done. We've been working on this, well, you have mostly, for many years.
E
One could consider, if we wanted to be a little cautious, doing this incrementally, merging sort of subsets of it. One can imagine maybe the first change that's merged into the spec is just defer and not stream, or one that pulls back on combining defer and stream with subscriptions, which, to be honest, I think has a very strange...
E
I mean, we don't use subscriptions much on our platform anyway, but the way that defer and stream work with subscriptions feels a little strange to me, and it's hard to imagine why people would, or how people could, really use it. So if we are concerned about the scope of it, one option would be, you know, starting with defer on queries and mutations only, and leaving stream, which has a lot of the complexities, and stream on subscriptions, for follow-up.
B
E
B
Good point; we brought that up before as well. I think anywhere where we're not feeling really confident, as counterintuitive as it is, we should in fact have the most restrictive behavior, because it means we can come back to it and add things in; as opposed to having a behavior that is less restrictive, which makes it harder to then go in and add restrictions in the future.
B
The adage of "if it is possible, it will happen" seems to apply quite a fair bit in the GraphQL domain, so you have to be careful there. But David, I think you make a really, really good point: if we're getting into a space where defer feels super baked and ready to go, and stream is just constantly getting hit by thorns, or if the interaction with subscriptions is "there be dragons" territory, I wouldn't...
B
I would certainly not be opposed to doing an incremental ship, where we put restrictions in place that protect our ability to come back to it and add it back in, while starting to collect feedback by shipping this out to the broader populace. I think that would also be really good. But I'm also with you: it feels like the set of problems we're encountering is getting smaller and smaller, and more and more corner-casey. So it does feel like we're getting really close.
B
Awesome. Well, I updated the PR to have the appropriate tag and dropped a comment as to why. Awesome work, and thanks, everyone who's been digging in and doing detailed review on the spec text. I think, you know, as we start to round out the implementation-specific quirks, we can get to the spec-text quirks.
E
Go ahead... one thing, and this is maybe a bit of a "make the subtext text" thing: of course people have been working on this project for many years at this point, and there are multiple published implementations out there, like Hot Chocolate and so on, but it's not a wacky coincidence that a bunch of Apollo folks showed up to this meeting, as folks may or may not have noticed, given what we've been doing, especially the past few weeks.
E
We are planning to, you know, be shipping the first support in several of our projects relatively soon. We are very aware of the fact that this is not finalized, either in the main spec or the HTTP spec, and we are very cognizant of wanting to not put stuff out...
E
We know that, on the one hand, we'll be shipping things to customers and then we'll be supporting it, but we do not want to be diverging from the future spec. So, as part of what we're doing, we're explicitly making sure that not only do we do content negotiation around, like, multipart/mixed in the first place, which is something that I've been opening PRs about on graphql-over-http and that I think everyone should do, but we're even planning to use a parameter in the Accept header...
E
...that, like, says: this is the 2022-08-22 version. And if a few little things change between now and it being merged, we will, you know, make it easy for our customers in the field to migrate over. So we're really excited about working with the working groups, but also about putting this stuff in the hands of more real users, which I think will get us a lot more feedback about this sort of practice.
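That kind of content negotiation might look like the sketch below, where a client pins the draft of the incremental-delivery format it understands via a media-type parameter; the parameter name (`deferSpec`) and the fallback behavior are assumptions for illustration, not a settled graphql-over-http convention.

```javascript
// Hypothetical sketch: a client advertises which draft of the incremental
// delivery format it understands, so a server on a different draft can fall
// back to an ordinary single JSON response instead of a mismatched stream.
function buildAcceptHeader(draftDate) {
  return [
    `multipart/mixed; deferSpec=${draftDate}`, // hypothetical parameter name
    "application/json", // fallback: plain single-payload response
  ].join(", ");
}
```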
B
That's great. I always have mixed reactions to that strategy, because we could end up getting stuck with the thing that we think is wrong; but also, biasing toward action is probably good in the name of feedback gathering.
C
Yeah, we already... for our GraphQL server we now support both payload formats for defer and stream, because we had the one that was very similar to the original result, and now we have the incremental one; so we're already paying that price for this, without talking.
B
For what it's worth, though, it does feel like we are far closer to done than not done in terms of describing the payload structures. There was a lot of iteration on that earlier this year, and, not trying to speak for everybody else, but at least I feel reasonably confident that what we've landed on is going to capture all the cases that we care about.
E
B
Yes, awesome. Well, thank you again, Rob, for being the consistent torchbearer on this one and making sure that we're always making progress. Fantastic work. Moving on: Yaacov, the next couple are yours.
F
Oh, okay. So I basically just want to draw attention to a few PRs. I have, like, five minutes on this one, 15 minutes on the next one, and five on the next, and I think I can basically... I'm actually now super excited about the schema metadata and applied directives.
F
So I don't know if it would be okay with everyone, but I basically want to draw attention to just those three PRs, just to ask for review, and I'll just briefly describe them. So the first one is the separate isSubType algorithm, and they're all sort of, you know, some of them are interconnected.
F
There's referencing, in different places, of this idea that types are subtypes of one another, covariant.
F
And I think it makes sense to sort of formalize that with an isSubType algorithm; it's also kind of useful for another item on the agenda. So that's basically all I want to say about that, and I think it pertains to item 11. The next item on the agenda was clarifying the ResolveAbstractType algorithm. Right now the spec just says that each internal implementation should have an internal function that is able to...
F
...you know, perform that; and graphql-js has a method on the abstract type that can resolve it. But it doesn't actually describe what should happen in the case of errors.
F
So, basically, I just took the graphql-js implementation and used this isSubType function to specify it. The motivation for clarifying that was that, a couple of working groups ago, we had the idea of allowing that internal function to sort of chase the interface inheritance tree, meaning an abstract type could be resolved to an interface, which could be resolved to maybe another interface, and eventually to an object type. I'm still not sure if we ever...
F
...need to do that, but I did notice that before we change the algorithm, we probably should clarify its current behavior. So those two items are basically just a call for feedback; I don't know if it needs to be done synchronously, but if people feel an urge, please feel free. And then this next item is an additional call for feedback.
F
You know, I've been jumping on these working groups a couple of times in the past couple of months, trying to work on expanding the subtyping. I actually don't have much new to report this month, but I did write up some of it, sort of linking together some of the different topics and seeing how they interrelate.
F
So, if it's okay with everyone, I'd like to basically stop there, because I'm just super excited about really hashing out this metadata stuff. But I should pause; I'm sure people got very excited about these three topics and maybe have some things to say, so I don't want to preempt myself if other people want to jump in.
B
For the separating out of isSubType: I'm pretty sure I know the answer to this, but if you could confirm: usually, when we change algorithms in the spec, we want them to mirror real functions that exist in the reference implementation.
B
F
You don't have to double-check if it's exact; I mean, that was basically what I was working off of. So if anyone who wants to review it wants to double-check my work: I think it might be called isSubTypeOf, you know, instead of exactly that name, but yes, indeed, there is such a function.
B
Because that would be my one request before merging this: that there is that parity. But if that already exists, then this just feels like fixing a glitch.
D
It's there, and it has exactly the same name; so it's, like, an example of the implementation PR happening long before the spec PR.
D
C
B
Yeah, okay. One of them handles named types and the other one handles types which can include non-null and list wrappers, which... I don't know if that's important; for the specific case that you're looking at, resolving the abstract type, it's used in exactly the right place in exactly the same way. So it seems like the right thing to do to me, but something to keep in mind if there are other places where we end up using it: to be aware of that difference.
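As a rough illustration of the named-type variant of that check, here is a standalone sketch using plain objects rather than the real graphql-js type classes (so the shapes and names are assumptions): an object type is a subtype of an interface it implements or of a union that contains it.

```javascript
// Hypothetical standalone sketch of an isSubType check over named types.
function isSubType(superType, maybeSubType) {
  if (superType === maybeSubType) return true; // every type is its own subtype
  if (superType.kind === "UNION") {
    return superType.types.includes(maybeSubType); // union members
  }
  if (superType.kind === "INTERFACE") {
    // Object (or interface) types implementing the interface.
    return (maybeSubType.interfaces || []).includes(superType);
  }
  return false;
}
```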
F
B
Cool. Is there an order of operations? I think for the first one, separating out isSubType, since that method literally already exists, I'm just going to go ahead and merge it right now. Let's call this an approved editorial change, because it makes the spec actually closer to the reference implementation.
B
F
I mean, in terms of the motivation, to be completely honest, I've had some hesitation since Ivan and I first discussed it, and definitely since the working group meeting, as to how valuable those changes would be. But I worked up this PR to, I think, better match what should happen, and if we decide to revisit that, at least we'll have a good working base. The idea is that, you know, you have to...
F
F
Yeah, I mean, I definitely just wanted to raise awareness of the RFC. So this is about the expanding-subtyping RFC, moving to the last of those three small items. I'm surprised that there aren't more people excited about interfaces being a part of unions; I think that might solve some...
F
Maybe
some
of
the
pain
points
around
adding
types
to
existing
unions,
but
maybe
I'm
wrong.
So
I
know
there
were
others
who
were
interested
in
in
these
topics.
So
you
know
I
just
sort
of
thought
that,
with
putting
together
this
rfc,
would
you
know
sort
of
link
together
a
couple
of
the
of
the
issues
and
feature
requests
and-
and
we
could
see
where
the
community
moves
from
there.
F
So, you know, given these issues, I just think it's good to have a document of sort of where we've come to. I think the document could be improved by integrating a little bit more of the feedback I've gotten from the previous working groups; I'd have to go back to the notes to refresh myself on what that feedback was in terms of the specific points, but I think this is a good starting point.
C
B
All right, sorry for the delay here, just getting...
B
...your PR in a good state. Okay, if we can, let's use the last 15-ish minutes or so to take a dig into a discussion on schema metadata and applied directives, which is my topic. But Benji, I feel like part of why I wanted to reserve the time is that you had assembled this presentation, which looked really nice; I wanted to give you an opportunity to speak through it.
G
Yeah, I made... I may have put too much time into that. We did also slightly skip over a topic, which was fixing ambiguity around when the schema definition can or should be omitted from the SDL.
B
I'm
totally
sorry
about
that.
We
did
yeah,
sorry,
let's,
let's
get
to
that
first,
so
that
was
supposed
to
be
earlier
in
the
agenda
and
I
my
eyeballs
whipped
over
it.
So
that's
fine.
G
No problem at all. So, effectively, we've got two paragraphs in the spec at the moment that I feel aren't clear. These were pointed out by Roman in a related edit, but rather than worrying about rewriting them to be something different,
G
what I would like to do is just clarify what they should specifically say currently. So, one of the things that we have: in the first paragraph we say that you can omit the schema definition when the query, mutation and subscription root types are named Query, Mutation and Subscription, which implies that they all exist; but they don't all have to exist for us to use this shortcut, so I wanted to clarify that. Separately, in the inverse situation,
G
we say that the schema definition should be omitted if the schema only uses the default operation type names, but it doesn't specify that it only uses them for the root operations. So, for example, if your schema is about viruses, you might have a Mutation type, and it shouldn't necessarily be the mutation root type unless you explicitly denote that. So I have carefully rewritten the wording of this to try and encapsulate what our intent is, and just be a little bit more crisp with the language.
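The rule being clarified can be sketched as a predicate over a simplified schema shape (the shapes and names here are assumptions for illustration, not graphql-js's `printSchema` internals): omission is only safe when every root type that exists uses its default name, and no non-root type squats on a default root operation type name.

```javascript
// Hypothetical sketch: can the `schema { ... }` definition be omitted from
// SDL output without changing the schema's meaning?
const DEFAULT_NAMES = {
  query: "Query",
  mutation: "Mutation",
  subscription: "Subscription",
};

function canOmitSchemaDefinition(schema) {
  for (const [operation, defaultName] of Object.entries(DEFAULT_NAMES)) {
    const rootType = schema.rootTypes[operation]; // may be undefined
    if (rootType && rootType !== defaultName) return false;
    // A non-root type named e.g. "Mutation" would be misread as a root.
    if (!rootType && schema.typeNames.includes(defaultName)) return false;
  }
  return true;
}
```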
G
In doing so, I've introduced a term, the "default root operation type name", which is perhaps a little verbose, being five words, but I think it is at least useful. So: quite a small change, hopefully editorial, I think, but I just wanted to get some feedback on that. However, I would really like to talk about metadata, so maybe we can just do that asynchronously.
B
The change makes sense to me. In terms of verbosity, I think "root" and "operation" are acting in synonymous ways in terms of their descriptive power, and you could probably just pick one; because operations are definitionally at the root, the shorter "operation" is maybe more accurate. But just pick one, and I think either is probably fine. Yep.
B
A
Can I ask... we had a big fight there regarding the second sentence and the formulation of the second paragraph. So, maybe not now, but please have a look at what we have here. In my understanding, my impression, when I'm trying to be a fresh reader: this absolutely makes no sense and produces more questions than answers, and so what I'm insisting on...
A
It's actually, I think, on my original PR. I'm trying to argue, first, that in general we should not prefer one way or another; we should not dictate when the, let's say, schema generator should skip it or not. Why?
A
But the second point is this second sentence, and Benji, if you can, please increase the font size or the scale and show this sentence, just to show everyone, and then let's think about it, and...
G
Oh, sorry, I was on the other thing, yeah. "Likewise, when representing a GraphQL schema using the type system definition language, a schema definition should be omitted if it only uses the default root operation type names."
A
So, basically, imagine a fresh reader: "Likewise, when representing a GraphQL schema using a type system..."; when is it not representing it, what else can it do, you know? And it turns out it actually speaks about the case when the schema is generated by a tool, as far as I understood from the explanations. I suggested: then let's change the wording here to "when generated from a tool", but it turns out that still doesn't cover the case, and so on.
A
G
Okay, so I'm not sharing the right way, but I'm just going to go with it, because it's too late now. So: we've been talking about metadata, and a lot of what we've concentrated on when we talk about metadata is metadata for the schema, as in for the entire schema. What I want to talk about is metadata requirements when you want to do small amounts of metadata. We want to introduce new capabilities and make them useful for lots of different situations, not just a few of them.
G
So, I see that we've effectively got two types of introspection. We've got the tooling that consumes introspection, so things like GraphiQL, documentation generators, schema composition, GraphQL Code Generator, things like that; but we also have this double-underscore __type field that you can use to introspect just an individual type. At the moment I don't see that widely used, and I think one of the main reasons it's not widely used is because we can't associate any additional metadata with it.
G
So, for this talk, imagine that we're building a web app. We've got a whole bunch of clients: the web client, the iOS client, Android, maybe desktop, all written in different programming languages. We want to be able to iterate quickly across all the different platforms, and we want to be able to distribute these sorts of small behavioral changes just via the GraphQL schema and have them reflected almost immediately, without having to wait for a new app store review and things like that.
G
So here are some use cases. One that I see quite a lot in the clients that I work with is the desire for these kinds of user-controlled tables, where you can filter them, change their sorting, and various other behaviors; and this is the kind of thing that could quite easily, if we had metadata, be controlled by that metadata.
G
G
Yeah. Then there are also things like shared validation logic. If you've got a form with validation logic like this, you may want to be able to share the rules for each of the fields, and it would be nice to share them in a way that could then be used across multiple of your applications and, importantly, to be able to change them on the fly.
G
If you find out that your email-address validation rule is wrong, maybe you want to just fix that in your GraphQL schema and have all of your clients reflect that new regex instantly, rather than having to wait for everything to go through app store review, or something like that, just to fix a very minor issue. It also helps to keep this logic synchronized.
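A sketch of what that shared validation could look like on a client, if rules were distributed as schema metadata; the `rules` shape here is invented for illustration, since today's introspection exposes no such field-level metadata:

```javascript
// Hypothetical sketch: evaluate server-distributed validation rules on the
// client, so fixing a regex server-side updates every client instantly.
function validate(value, rules) {
  if (rules.pattern && !new RegExp(rules.pattern).test(value)) {
    return "invalid format";
  }
  if (rules.maxLength && value.length > rules.maxLength) {
    return "too long";
  }
  return null; // valid
}
```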
G
So that is another use case for it. Yet another use case is gating features based on permissions and things like that; you could indicate, next to a mutation...
G
G
If it turns out that fetching a hundred rows of a particular collection is very expensive, maybe on the server side you want to drop that down to a limit of 50, and you want all of your applications to reflect that new limit; indicating that through introspection would be desirable, and that's something that even GraphiQL could benefit from, because it would then know what options to show when it comes to showing the collection in the sidebar. There's also a whole bunch of other stuff, like dynamic dropdowns, like what you've got on GitHub, for example; stream and defer, like saying what you should use for the initial count, or even whether it makes sense to stream or defer a particular field; and also information about the schema itself, like: what version is the schema?
G
So for all of that, I think we need to use the `__type` field, and we need metadata. The metadata might grow quite large, because we might be indicating a bunch of different topics, and when we request it from the client, we probably only care about one or two of those topics at a time. So, ideally, we would only pull down those little bits of metadata that we care about, not everything.
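One sketch of what a topic-scoped metadata request might look like; the `metadata` field and its `topics` argument do not exist in the introspection schema and are purely illustrative of the "pull down only what you care about" idea.

```graphql
# Purely illustrative: no "metadata" field or "topics" argument exists
# in the real introspection schema.
{
  __type(name: "Mutation") {
    fields {
      name
      metadata(topics: ["permissions", "validation"]) {
        topic
        value
      }
    }
  }
}
```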
G
But
my
main
concern
is:
I
am
concerned.
If
we
use
a
string
type
for
this,
we
string
apply
to
a
graphql
language,
for
example,
that
we
require
a
passer,
which
means
we
need
to
bundle
a
passer
with
every
client,
but
also
that
it
means
if
we
were
to
change
the
syntax
in
the
language
it
would
make
it
hard
to
if
we
were
to
add
something
to
one
of
our
metadata
types
that
needed
this
new
syntax.
G
Suddenly, all those existing clients that were reliant on that would potentially throw errors at the parsing stage, which doesn't seem desirable. At the moment, we've used that for the default value field, but generally, from a client perspective, we just care whether there is a default value or not. We don't actually care what its value is, unless you're, for example, GraphiQL and you actually want to render that; but it doesn't tend to actually be used.
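For reference, this is how the existing default-value case works: in the standard introspection schema, `__InputValue` exposes `defaultValue` as a `String` containing a GraphQL-syntax literal, which is why a client that wants anything beyond a null check needs a GraphQL value parser.

```graphql
# defaultValue is part of the standard introspection schema; it is returned
# as a String holding a GraphQL-syntax literal (e.g. "25"), so interpreting
# it beyond "is there a default?" requires parsing GraphQL value syntax.
{
  __type(name: "Query") {
    fields {
      name
      args {
        name
        defaultValue
      }
    }
  }
}
```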
B
This makes a lot of sense, though it challenges the way that I've been thinking about introspection to this point, which may also challenge some of the assumptions we've made around it, which is that introspection is a feature used by developers at development time, and the rest of the schema is used by the deployed applications in production. One example of leaning on that is that there are many people who use GraphQL who restrict access to introspection.
G
So
I've
worked
with
a
few
clients
that
have
actually
done
this
in
the
application
schema
and
the
problem
that
they
often
come
up
with
is
that
actually
the
functionality
relates
to
their
graphql
types.
It
relates
to
their
their
fields.
Their
mutations
things
like
that
and
linking
to
those
in
the
application
schema
is,
is
cumbersome.
G
It just doesn't feel natural, because you've got something that you're getting through one resource and then you're kind of not linking it tightly from another. And for most of them, it does feel like these aren't things that you're going to change very often; you would change them, you know, as part of your schema.
G
Maybe
you
even
define
it
alongside
your
schema
and
at
the
moment,
they're
effectively
then
pulling
those
out
and
then
re-inserting
them
into
the
the
schema
in
the
user
space
and
then
linking
them
back
together
again
in
the
client,
which
is
not
ideal.
F
Would there be a way to manage permissions on metadata, then, similarly to how permissions can be managed, I guess, on resolvers?
G
Yeah, and I think whatever we come up with for metadata, it would be ideal that it solves both of these problems. And that's my main concern at the moment: that we're making a few decisions, or at least we're leaning towards certain decisions, that I think will severely limit these use cases that I propose here. I absolutely think that we need them for, like, schema composition and stuff as well, but I don't think there's anything in what I've proposed that would prevent those things from working.
C
B
Sorry, no worries. I just wanted to wrap things up with one last thought and let everyone disappear, which is: the use cases here are great, and I think maybe the way that I'll fold this into our thought process around schema metadata is not necessarily to say that we should go suggest people use introspection in this way, but that, to the degree that they might want to, we should not end up in a solution that makes it unnecessarily challenging to do this.
B
That
might
be
the
most
helpful
way
to
think
about
this,
which
I
think
maps
closer
to
the
conclusions
that
you
got
to
around:
what's
safe
and
not
safe
about
a
parser
which
we
can
kind
of
dig
into
asynchronously,
but
thank
you
for
assembling
this.
This
was
actually
really
helpful
to
kind
of
think
through
the
cases
in
which
metadata
might
be
used
beyond
the
ways
in
which
we've
thought
about
it
so
far,
which
can
help
inform
the
shape
of
a
proposal.
We
want
to
assemble
with
that
we're
roughly
at
time.
B
So
thank
you,
everyone
for
for
hanging
in
there.
I
know
we
didn't
take
a
five
minute
break
this
time
around,
but
the
next
one
and
beyond
they'll
be
a
little
bit
short
so
appreciate
it
appreciate
everyone's
time
and
attention
on
everything.
Thank
you
to
everyone.
Who's
been
doing
all
the
excellent
work
and
see
you
all
next
time.