From YouTube: GraphQL Working Group - May 7, 2020
B: Lee, before we get started, do you mind if I give a few administrative updates here?

A: Go for it.

B: All right, I'll wait till everybody gets joined here. Just a few things on some changes that we're making to the way that people sign the specification membership agreement, and essentially what that means going forward. And then also a heads up in general that, for this particular meeting, we're going to add a password to it. We'll explain some of that stuff.
A: Okay, our participant list is looking stable; we had a last couple of people join in the last minute or two, and I have merged all of the pull requests into the agenda. In the future it would be great if all the pull requests could be up at least the night before; sometimes I kind of scramble to get the last-minute ones merged in.
B: Good, I will be quick and then I'll disconnect. So, a couple of things. As you know, in order to join one of these meetings, one of the prerequisites is to be a member of the specification, and that involves (in the past has involved) signing a document in DocuSign, which basically says "I or my employer agree to the various terms of this." The idea is that once you've signed that, then you can open up PRs, you can make contributions, you can join meetings, etc.
B: We looked at this process and realized that we have some tooling at the Linux Foundation that would make this maybe a little bit easier in the future. Essentially it's repurposing the tooling that we use for signing CLAs. In a nutshell, the way that this works is that if somebody who has not signed the spec membership agreement goes and opens up a PR, for example to add themselves to the meeting list for this particular meeting...
B: ...if they haven't signed the spec membership agreement, then they get a notice that says: you need to go to this place and get this form signed; take it to your employer and get added to the approved list, or, if you're not employed or working on behalf of an employer, sign it yourself. Once they do that, they get unblocked, they can make changes in the repo, and they can do all the things that they should be able to do.
B: Once they've done this... The big change from the way we've done it in the past to the way that it will be done now is that this is automated, basically done with GitHub bots, so that we don't have to manually compare lists or go back and see who has signed what. So that's one change; I wanted to give you a heads up that this is coming. It shouldn't affect any of you.
B: We haven't turned this on yet; I'm expecting it to be live, hopefully, sometime later this week, or at least ready if not live. I think probably the best way to do this would be a phased rollout where we turn this functionality on for various repos once we've made sure that everybody who's already signed is covered in the approved list and the tools, and no PRs are blocked. So we'll be rolling this out probably over the next month.
B: Once we get to the point of turning it on, usually the way that I do this is I'll turn it on in a test repo so that people can poke at it and see how it works. We can send out a notification too, so if you want to kick the tires, or if you run into a specific issue, we can address it there before rolling it out more broadly. So that's one thing that I wanted to bring up. Let's see.
B: Second thing, purely administrative: I've been going around to the various different groups that are using Zoom accounts and adding meeting passwords. Again, this shouldn't really affect you very much. Basically, the next time the working group meets, there will just be some trivial password in order to get in. We'll post that; I'll take care of updating the meeting invite and the agenda that's in the repo. It'll be a password on a separate line. The goal here isn't to keep meaningful contributors out.
B: The goal is to keep out people who are randomly guessing Zoom IDs and dropping in on the meetings unannounced. If you know where to find the meeting invite, then you can easily find the password; we're just trying to avoid the random-number Zoom bombers. So just a heads up that next time you will get a note on that. And then the last thing that I have here is the calendar. Lee and I talked about consolidating the various GraphQL calendars: he's got one...
B: ...and I've got one that's shared out through the foundation. We're probably going to consolidate on the one that was shared out through the foundation; that's what we landed on. The reason for doing this is that we have multiple groups using this Zoom account at this point, and one of the easiest ways to direct traffic and make sure that meetings don't get scheduled over other meetings is to have a single calendar. So I can update that link as well.
A: Okay, let's start, as we always do, with an introduction of all of our fine selves. We will use the order shown in the agenda file; give that a good refresh, because I've merged pull requests in the last 20 minutes and you want to make sure you've got an up-to-date look at those. I'm at the top, so I'll go first. Everybody, my name is Lee and I help lead this crazy group of GraphQL folks.
A: Excellent, welcome everybody. I would usually ask for volunteers for note-taking, but I've noticed that in our notes file there are already extremely detailed notes being taken, so it looks like we're on top of things. Benji, Allen, I assume the two of you are in there making good things happen. Do any other of you need help?
A: Let's go over our agenda really quickly for the day. We'll take a look at past action items, if there are any open, although I think we've made it through most of them. Then we're going to talk about custom scalar specifications, the overloaded "query, query, query" terminology problem, a heavy-duty discussion of the RFC for adding introspection shortcuts, and then we'll go deep on the input union RFCs and the stream and defer arguments and payload formats. Anything else we should add to the agenda for today?
A: There are only two, and they are ancient, which means that these are the two that sit around forever, so we'll probably still be working on these in the future. But the good news is that all the action items that we had set up in the last two months have been closed out. I believe they were around making sure that we had agenda documents for the rest of the year, which I set up yesterday.
D: This should be pretty quick. It got merged this morning, or the reference implementation did at least, so I believe they're going to cut a release for that, I think aligned with the graphql-js project. But yeah, I think it's in good shape, and so I don't know what that means for the specification process, whether it moves it to another stage or not yet, but I'll defer to you guys.
A: That should leave it on my plate to do a final editorial pass, and I believe that's enough to move it to the final stage for approval. So let there be one last action item for me to take a final editorial pass on that, but otherwise, awesome work. I know it's taken a while to get all the i's dotted and t's crossed, but I'm super excited about this one; it's a super meaningful improvement to how we do custom scalars.
E: Yes, so this is really just to solicit some opinions, and maybe some reasons not to do this. We seem to have an issue in GraphQL, which is that the term "query" is overloaded. It's used in many, many different ways, and that can make it particularly confusing when people are unfamiliar with GraphQL. It's also particularly hard to teach, as a GraphQL trainer, and other such things; it makes it hard to write documentation. It's just generally a problem.
E: The GraphQL spec refers to "query" quite a lot, and when it refers to "query" it can mean different things, and often you need to be already familiar with GraphQL to know what it means in those contexts. Sometimes we use things like "query operation," which normally means an operation of the query type.
E: We use the `query` keyword, which obviously is the keyword in the GraphQL language that you would type, and then there's "query execution," which may refer to the execution of a query operation, or sometimes to a mutation or subscription operation too, which can be confusing, because then you might want to refer to a "query query operation." So generally, what I would like to propose is that someone (and I'm volunteering to be that person, though I would be happy if someone else wanted to do it)...
E: ...goes through the GraphQL spec and literally clears up all of these uses of "query," and comes up with some solid conventions that we can use. We already have terms that we can use for these things: we've got terms like "operation," which can refer to a query, mutation, or subscription, and we've got terms like "document," which can refer to the actual GraphQL language that you send to the server in order to execute your query, and various other things.
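To make the overloading concrete, here is a small illustrative sketch (the operation name and field are invented for the example) of the distinct senses of "query" that this discussion covers:

```graphql
# 1. `query` the keyword, which begins an operation definition...
query GetUser {    # 2. ...defining a "query operation"...
  user(id: 4) {    # 3. ...executed against the schema's Query root type.
    name
  }
}
# 4. Colloquially, the whole document sent to the server is also
#    often called "the query".
```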
A: Yeah, this is one of the things that pains me every time I'm doing an editorial pass and I'm trying to merge something in; I restrain myself from going in deep on this problem, because we did not do a great job in the original authoring of this, way back, and it's pretty pervasive. So I acknowledge that it's a pretty big project to take on; it's a little bit more than just a find-and-replace. In terms of my recommendations for next steps...
A: Yeah, I agree; I was going to mention something similar. I think it's super worthwhile to have a process document that maps those out, like a living doc, something that's easy to provide feedback on top of. But I agree that having something that can be a source of truth in the spec would be super valuable; I don't know exactly what that should look like.
A: We have some element of that, in that algorithms and portions of the language are well defined and then show up in the appendix. But I really like the idea of being able to demarcate a term as a term of art within the GraphQL spec, one that has a definition; that way they can all be automated and collected at the bottom in the appendix. That could be pretty nice, but that might be a follow-on step to this; I wouldn't want to overload too many things on top of this one idea.
F: The issue that I was always facing is that it's difficult to query the GraphQL endpoints to get an idea of what this can actually look like when you're not really familiar with it, if you're trying to do it by command line or something like this. I'd like to add in these fields here to make it easier to do it in one request and also get the information in a more readable format. So I'm not sure what exactly I need to do here, but basically it's just a request for comments right now.
E: I'm a fan of this; I use the introspection query quite a lot. The `ofType` of `ofType` of `ofType` of `ofType` does seem quite redundant when we already have a concise language that can express the nested non-nullability and array syntax. It would be very nice to be able to just say that this field is, you know, "square brackets, exclamation points" and know exactly what it is, without having to query that depth of object. I think it's a very simple thing to add to introspection.
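For context, this is roughly what the nesting being discussed looks like today, next to the kind of shorthand being proposed (the exact shorthand field is not fixed in this discussion; the comment at the end only sketches the idea):

```graphql
# Today: each NON_NULL or LIST wrapper adds one more level of ofType.
{
  __type(name: "Query") {
    fields {
      name
      type {
        kind        # e.g. NON_NULL
        ofType {
          kind      # e.g. LIST
          ofType {
            kind    # e.g. NON_NULL
            ofType {
              name  # e.g. String, i.e. the field is [String!]!
            }
          }
        }
      }
    }
  }
}

# The proposed shortcut would return the same information as a single
# string in SDL notation, e.g. "[String!]!".
```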
A: I don't necessarily think that means that you can't do something like this; I think that just has to be strongly considered. So I actually really like Benji's sort of counter-strawman, which is: imagine an inlined type, something like an inline full type representation. How would that change how this works?
J: I think I agree with you and Benji; I think on balance this is probably the right thing to do, but we need to do a lot more. I would like to see a lot more depth and explanation in the RFC of the possible implications and how this plays out in the future before we just merge it.
C: I'm personally against bringing any SDL inside introspection, because we already have a precedent, and it's actually a big theoretical issue, not a big practical issue. We have a problem with custom scalars, because the value of the `defaultValue` field is already serialized. So if you specify a default value for a custom scalar, the client doesn't have access to the serialize and parseLiteral functions, so it cannot parse the default value. And that happened because, instead of JSON data structures, we used strings. I'm not saying this to relitigate the past; it's just an example.
C: What happens if we bring SDL into this? Another thing: it basically means that every time we change something next year, the scope of breaking changes becomes bigger; it's not isolated. So I'm totally for this change if we figure out how to do it properly, because right now we have, hard-coded inside the introspection query, seven or nine levels of deepness, and some people actually hit those limits, and it's a problem. But I would be totally against making it a string.
C: What I would suggest is to think about arrays. So, instead of a deeply nested structure, maybe we can figure out how to return it as an array, since for an array you don't need to write a deep query. So that's my position: I'm not extremely against the proposal, but think about what each change costs.
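A rough sketch of that array-shaped alternative (the field name and list encoding here are invented purely for illustration, not taken from the RFC):

```graphql
# Instead of nested ofType objects, a flat list of wrapper kinds could
# describe the same [String!]! type in a single, shallow selection.
{
  __type(name: "Query") {
    fields {
      name
      typeChain   # hypothetical, e.g. ["NON_NULL", "LIST", "NON_NULL", "String"]
    }
  }
}
```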
A: That's much more interesting to me. So, Ivan, if I'm understanding you correctly, you're worried about the idea of being able to write an introspection query that might end up... I don't know, somebody has a non-nullable list of a non-nullable list of a non-nullable list of a non-nullable list of a thing, some extreme corner case, and then some tool's introspection query just happens to not go one level deep enough to represent that, and ends up breaking in a hard manner.
A: Then, you know, I don't know if I'm as strongly averse to a stringified form as Ivan is, but I certainly understand the concern. But that's certainly interesting: how might we consider reducing the depth? That problem statement leaves us a lot of potential different directions. There's also, at times in the past, the idea we've brought up of a recursive query, of being able to spec a fragment that can refer to itself, and right now...
A: ...the only reason that that is not allowed is because we don't have good tools for blocking infinite recursion. But I've always sort of seen a viable solution to the infinite-depth problem in allowing recursive fragments in a controlled way. So that might be one way to handle this, but it's certainly not the only one. But, Ivan, I kind of like your reframing of the problem to bring it back to an actual tractable problem that has been a pain point, beyond just convenience.
A: I don't think in introspection itself there are potential problems. But imagine we introduced recursive fragments as a solve for this particular problem, so that every time you find yourself asking for the type of a thing, you could just reference "give me all the fields for this type," and that could go recursively in all the directions it needs to go. Then, great, this particular problem could be solved, and we'd know for sure that it's not going to do anything harmful.
A: But if you introduce that as a general tool, then people will use recursive fragments in other parts of their queries. So imagine you have a bidirectional relationship between two things. Say you're a music API, and you want to know the artist of a particular song, and then you want to know the albums that they published, and then you want to know the songs on that album, and then you want to recurse from there back to the authors of that particular song. You could write such a query that will never end.
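A sketch of the hazard being described (recursive fragments are not legal GraphQL today, and the schema names are invented for the music-API example):

```graphql
query SongGraph {
  song(id: 1) {
    ...SongDetails
  }
}

# Hypothetical self-referencing fragment: each expansion re-enters
# itself via the album's songs, so expansion never terminates.
fragment SongDetails on Song {
  title
  artist {
    name
    albums {
      title
      songs {
        ...SongDetails
      }
    }
  }
}
```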
A: So we could eliminate the recursive-fragments path; past precedent would be a reasonable reason to eliminate it, if we feel strongly that the previous decision not to allow it, because of this recursive-fragment DDoS problem, is too difficult to revisit. Then, cool, let's not follow that path, and if we think a separate path is more viable then we'll do that. But does that context help answer why the recursion could be a problem?
A: Cool, I'll suggest some next steps. I've marked this pull request as a strawman proposal, and so I'm glad we're talking about it. I think the next steps for me are to start to answer some of these questions around the concerns, and to tease out cases where this could break future possible additions to the introspection schema, and what might be harmful to those.
A: Hopefully we've managed to capture a couple of notes around some of those concerns already, so you can look to those to address them, and then compare this potential solution path to other potential solution paths. Our RFCs focus around a problem to be solved, and so we want to evaluate many potential solution paths. This is a good potential solution path, but we should make sure it's the right one out of the various possible future paths.
F: Real quick, regarding recursion: wouldn't a developer or a tool that's trying to query the whole schema already be doing that? By doing all these nested types, especially with the N+1 problem, they have to keep making recurring calls to the schema to get that information. So it sounds like you'd be doing more work by not doing this, no?
A: Kind of. So, you can't write an infinite recursion: that would require infinite characters written in the query. By making the length of the query related to the length of the potential query execution, that limits any potential infinite recursion. So recursion is not what we're worried about; infinite recursion is what we're worried about. But you're right about the recurrence, the fact that the query itself repeats multiple times in order to get to the full depth of these; you're certainly right that there is recursion happening there.
A: In that particular example, yes, but not in all examples. Remember that anything we add to GraphQL can be used not just for introspection but for any query. So if we were to add something that allowed us to do infinite recursion, then people would use that tool outside of introspection, and there are probably lots of very useful cases outside of introspection. So, cool, I like that, but we'd have to make sure that we protect against infinite-recursion DDoS attacks, I think.
H: Yeah, sure. So a few of us met a week ago to talk about the various solutions that have been proposed for the input union, and it was a very lively and interesting discussion. We reviewed the entirety of the RFC, and we have a couple of sets of notes about the sort of overall shape of it that I can quickly review.
H: Structural uniqueness had a whole host of complexities that kind of meant it was going to put undue burden on schema authors and put some major restrictions on what you could actually model, amongst its other issues. For all the other solutions, we didn't strike anything else off the table, but we did start thinking about a new solution number six.
H: We don't have a final formulation of that solution yet; it's sort of still being thought through, but there are aspects of all of one, two, and three that people liked that we're hoping to be able to merge together. There was also a good amount of talk about "oneof," both as potentially a solution for input union and also potentially on its own; there was an interesting discussion of using "oneof" for both an input and an output.
H: There's some follow-up that may or may not have gotten completed yet. Lee is going to write up some concepts for this merged solution number six. I am going to be doing some research (I haven't gotten to this yet) on what literal values might look like, because solution number two involves the usage of a literal value, instead of the explicit type name, for the discrimination. And then Benji is going to be doing some thinking about this "oneof" and the ways that it can be used to express unions, or its usage in the output.
H: So that's a rough overview of the discussions that happened. I would like to open it up to anyone else that was in that meeting who wants to cover something I missed, or expand on any of the topics; there's lots to talk about here. I have one particular thing to talk about with regard to literals, but I'll open it up first.
J: Sadly, I was only in probably the first two quarters of the meeting, but I just wanted to frame it in a slightly different way as well, one that I find easier to think about, which is that we basically realized that there are two broad paths forward: either a tagged union approach, similar to "oneof," or a discriminated union approach, which is kind of options one, two, and three. And so that's why we're tackling a new option six, as kind of the combined ideal approach for the general discriminated union solution.
H: One of the comments in a recent PR or issue, from a person doing some programming language research, was about how most of these solutions are essentially the same underneath the hood, and we sort of realized that in a way that's pretty true. The task in front of us is to discriminate between types, right? And there are a few mechanisms that could be used: you could use the type name itself, you could use a literal value, or you could use nothing.
H: Those are the vehicles for discriminants, and then there are mechanisms that we can use to resolve ambiguity if we don't have a match: we could use something like a default, we could use something like the order of matching, or something that gives you uniqueness, some kind of structural analysis. So that's kind of the broad set of what these solutions could contain.
H: One other aspect that we went over is: where do we put the power to discriminate? Is it in the specification, with strict validations and restrictions on what can happen? Is it in the server, where the schema author decides a lot about how the discrimination might happen? Or does it wind up in the client? So that's another sort of axis that we talked about.
A: Before that, I'd also add one thing that was really valuable: we started out by reading through all the solution criteria, rather than jumping straight to the various proposals at hand for the solutions. That was super useful, because it allowed us to add color along the way: we updated some of the objections, added new objections, and tweaked some of the criteria scores, although we didn't fan that out across all the places where those are visible in these docs.
A: We also even eliminated some of them, where they were potential solution criteria but we were unsure how important they were, and it gave us an opportunity to talk through it live. So I thought that was a really useful way to spend that time, and I think we have a follow-up action to make sure we go back; I was taking live notes in the spec itself to try to make sure we wouldn't lose all that stuff.
A: Not everything ended up landing there, but to give a little bit of color on solution number six: I think it would probably cause us to at least eliminate three in favor of six, and maybe even also eliminate one and two in favor of six. The idea is that solutions one and two require an explicit discriminator to discriminate between the various potential types, and they just differ in how they do it.
A: The first one requires that you provide a `__typename` field, which would just be a string that matches the name of the type that you expect your input value to be. The second one introduces literal types: when you specify your union, you also have to include which field contains a literal type that is the discriminator, and then presumably we'd have some schema validation that ensures that your discriminator is in fact valid.
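Roughly, the two explicit-discriminator flavors look like this (both syntaxes are hypothetical sketches of the RFC options, and the mutation and type names are invented, borrowing the animal example used later in the meeting):

```graphql
# Solution 1: the client tags the input value with __typename.
mutation {
  logAnimal(animal: { __typename: "CatInput", name: "Whiskers", lives: 9 }) {
    ok
  }
}

# Solution 2: a field with a literal type, declared in the schema,
# acts as the discriminator.
input CatInput {
  species: "CAT"   # hypothetical literal-type syntax
  name: String!
  lives: Int!
}
```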
A
Both
of
those
are
completely
viable
ways
to
do
discrimination
and
in
fact
there
were
elements
of
both
that
we
liked.
But
the
problem
was
with
both
a
dot
and
adoption
strategy.
So
for
people
who
are
already
accepting
a
input,
object
and
then
want
to
convert
that
input
object
to
an
input
Union
going
from
not
requiring
a
discriminator
to
requiring
a
discriminator
means
that
you
can't
do
that.
So
that
was
one
way
that
those
these
two
paths
were
a
little
more
restrictive
than
we
wanted
them
to
be.
A
The
other
cases
were
coming
up
with
examples
where
there
actually
was
structural
uniqueness
naturally
occurring
in
the
input
union
and
then
requiring
additional
discriminators
felt
like
an
unnecessary
burden
that
we
were
asking
schema
authors
to
take,
and
so
then
again
in
in
those
scenarios
that
felt
a
little
too
burdensome.
But
then
in
the
number
solution,
number
three:
it's
kind
of
the
opposite
solution.
A
Three
looks
only
to
the
order,
so
it
just
kind
of
walks
through
all
the
potential
input
unions
in
your
order
and
then
checks
them
one
at
a
time,
and
in
the
case
that
they
are
structurally
unique,
then
you
can
kind
of
know
which
one
of
those
is
going
to
end
up
matching.
But
in
the
cases
where
there
are
ambiguities
or
overlap,
then
the
tiebreaker
is
just
whichever
one
appears
earlier
in
the
Union
and
for
scenarios
where,
where
the
input
is
dominantly
structurally
unique
or
almost
always
structurally
unique,
then
this
works
fine.
A: Six was something that, like one and two, provided tools for a discriminant, whether that's `__typename` as a default or allowing you to explicitly suggest an in-schema discriminator field that uses literal types, but it does not require them. So they're there as a tool for schema authors, but they're not required. By making them not required, in scenarios where structural uniqueness is guaranteed, that's a viable path; and in cases where schema evolution and adoption from an existing schema is more important, that's a viable path too.
A
So
there's
downsides
to
this
approach
too,
and
not
requiring
it
because
it
offers
scenarios,
in
which
case
you
could
end
up
with
a
schema
that
can't
be
evolved.
So
I
don't
know
for
sure
that
that
solution
is
the
perfect
solution,
but
I
think
it's
a
it's
a
pretty
interesting
assembly
of
these
previous
ones
and
maybe
even
a
way
to
be
able
to
use
some
of
the
tools
of
one
and
two
in
more
places
than
just
input.
Unions.
A: Compared to one and two, it's just a slower version; it's not the fastest possible algorithm. Think of it like: check the first one; if it doesn't match, check the second one; if it doesn't match, check the third one; if it matches, you've got a win. If you have a discriminator, then it's obvious that any of the ones that won't match are going to fail, and there's only ever going to be one that matches, but that discriminator gives you a tool: you can replace an O(n) algorithm with an O(1) algorithm.
A
So
I
think
the
cool
thing
here
is
that
the
algorithm
remains
the
same.
It's
just
check
them
in
order
and
as
we
do
with
other
algorithms
in
the
spec
will
suggest.
If
there
are
our
an
implementation
specific
faster
way
to
do
this,
that's
observably
identical
then,
to
the
algorithm
that
we
specify.
Then
we
encourage
it
and
we
would
probably
also
suggest
cases
where
that
would
be
the
case.
A: So if you did provide `__typename`, then there's no reason to check them all; you should just look at the value of that `__typename` and jump straight to that type and check it. Or if there's a literal discriminator, then check the literal discriminator before you check other fields, or maybe even build a hash-map table and look it up that way. So there are plenty of ways to make the actual checking, the resolution of these input unions, fast, but the algorithm can remain quite simple.
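As a sketch of that "check them in order, with a discriminator fast path" idea (an illustrative model, not the spec algorithm: the matching rule and all names are invented, and real input coercion would also check field types, not just field names):

```python
def matches(value, fields):
    """A value structurally matches a candidate input type here if it
    supplies exactly that type's field names (ignoring __typename)."""
    names = {key for key in value if key != "__typename"}
    return names == set(fields)

def resolve_input_union(value, candidates):
    """candidates maps type name -> field names, in declaration order;
    returns the name of the first matching candidate, or None."""
    tag = value.get("__typename")
    if tag is not None:
        # O(1) fast path: the discriminator names the type directly.
        fields = candidates.get(tag)
        if fields is not None and matches(value, fields):
            return tag
        return None
    # O(n) fallback: first structural match in declaration order wins.
    for name, fields in candidates.items():
        if matches(value, fields):
            return name
    return None

animals = {
    "CatInput": {"name", "lives"},
    "DogInput": {"name", "breed"},
}
```

With `animals`, a value like `{"name": "Rex", "breed": "Lab"}` resolves structurally to `DogInput` by scanning in order, while a value carrying an explicit `__typename` skips the scan entirely.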
C: I'm worried about one thing: error messages. If we have algorithms that use `__typename`, structural uniqueness, and all these other things, the error messages will be pretty big and confusing, and if we try to make them user-friendly, we need to do something like TypeScript does, or better; in a sense, we need to explain ambiguity. So right now I'm looking at the criteria, and I'm wondering: what about adding a criterion for error messages? Because a complex algorithm creates confusing error messages, and it's not only error messages from a server.
H: So, one aspect of the literals that I have been thinking about (I haven't been able to do a ton of research on this yet): the way that we had sort of been specifying them is, say you pick a field that will be your literal discriminant; we'll go with our animal example. To adopt an input union, you would theoretically go through each input and add this field with its literal value. So, you know, the cat input would get `species: CAT` added into it.
H
Now
that
introduces
some
kind
of
strange
things
in
that
the
membership
in
an
input,
Union,
sort
of
forces
of
change
in
the
structure
of
the
input
itself,
all
right,
so
we're
adding
this
field
that
that
say,
you
have
added
an
input
to
like
three
different
input
unions,
for
whatever
reason
like
do
you
do
you
map
them?
Do
you
change
the
field
name
so
that
they
don't
overlap
and
collide?
H: Hence this thought experiment: what if the input union definition itself contained all of the structure necessary to describe it, rather than putting any information in the individual inputs themselves? So the input union could look more like a type, say, where it would be able to configure "here's the field that will be the discriminant," and then it would map the value of that field to the type. And one of the nice things about this is that you can model `__typename` in the exact same way.
H: I put a real rough sketch in the chat there. So you're defining the input union Animal, and then you map, say, a discriminator. There's an error in that, actually, but this opens up a way for us to resolve some of the problems with this: you can have different literal values, like multiple literal values, map to the same type, where we wouldn't be able to model that very efficiently, or very well without collisions, if the definition is in the input itself.
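A hedged reconstruction of what that union-side mapping might look like (the syntax is invented for illustration; the actual sketch shared in the chat is not captured in this transcript):

```graphql
# Hypothetical: the union definition owns the discriminator mapping,
# so member inputs don't need a tag field added to them.
inputUnion Animal @discriminator(field: "species") {
  CAT: CatInput
  DOG: DogInput
  PUPPY: DogInput   # multiple literal values mapping to one type
}
```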
C: So basically, when you author a schema and you're choosing a name for the discriminator field, you're limiting all the members from using that word, and if the members are generated from somewhere, it's not a really obvious restriction; even when you're reading the schema, it's not obvious that you cannot use this name, because the input union definition is somewhere else and you're just writing this type. So I think it's kind of a similar problem in reverse, but in the case of a constant being specified inside the input type, it's more explicit.
A: That's a good point from Ivan. I don't know if it's a deal-breaker for this approach; I kind of like how explicit it is, but it might need something more to limit the error path. It may be easier for us to dig through actual code examples in a GitHub issue, but yeah, this feels like a good start to the work that you were taking on to investigate literal types in more depth.
H
Yeah,
maybe
it
would,
it
would
imply
some
kind
of
screamin
scheme,
a
restriction.
It
reminds
me
of
interfaces
which
validate
like
membership
of
field
members
to
make
sure
that
they
match
the
interface.
So
it's
in
that
shape
of
validation.
That
would
have
to
be
yes,
yes,
it
would
be
a
minor
restriction
in
what
could
be
modeled.
E
Yes, so we were discussing how this one models some external data sources more directly than the ones we were looking at previously, because some of those already use this kind of tagged or wrapper approach, where you have an object that wraps the value with a key that indicates what the type of the thing is. We also discussed a bit how that may also be appropriate when matching those external sources for output types as well, and may have other uses.
E
Lee brought up some interesting history: it was actually considered early on that this might be the shape of unions in GraphQL. In the end they went in a different direction, but there may still be value in it. One of the main issues is, of course, that it now gives two ways of doing the same thing.
E
It's not necessarily the same thing; they are different, but are the differences enough to justify its existence? That's something I'm going to be digging into whilst I research this topic more. It could well be that it is a sensible solution, or it might well be that these new solutions, like solution six for example, get rid of the need for this. So yeah, I'm going to be doing a bit more research on how this might affect both inputs and outputs, and potentially even arguments as well.
A
Yeah, part of that historical context: when we were originally looking at unions, we had a very similar kind of debate to the one we're having now, though GraphQL was much younger at the time, about whether we wanted a discriminated union or a tagged structural union. We decided that the form of the output was more important than the form of the query itself, and that led us to discriminated unions using __typename to discriminate. But that certainly doesn't have to be the only way, and I know there are people who are using a structural union approach, even though the query doesn't guarantee that it will play out the way they hope. They just have a sort of contract with their clients: the clients should expect to never get more than one field back.
A
I
think
there
are
even
a
hands
small
handful
of
cases
at
Facebook
that
use
that
pattern.
It
can
sometimes
be
really
useful,
so
I
can
certainly
see
a
case
where
having
two
ways
to
do.
This
is
reasonable
as
a
final
output
from
this
discussion,
and
we
might
want
to
backtrack
that
to
output
unions
as
well,
because
the
structure
of
that
of
the
data
itself
is
actually
pretty
different.
A
H
L
Okay, so we just wanted to go over the high level of where we're at with defer and stream, what we're currently thinking, and get feedback as we work on the spec to make sure we're going in a direction everyone agrees with. I have a PR to update the RFC with the details, so I can go through it a little bit. I can put a link in the chat so you can follow along.
L
So
for
defer,
we
have
an
if
argument,
that's
a
boolean
and
that's
just
kind
of
saying:
allow
you
to
control
whether
something
should
be
deferred
or
not.
If
it's
not
there,
it's
gonna
be
deferred.
If
it's
there
and
it's
false,
it
won't
be
deferred,
and
then
we
have
a
label
which
is
a
string
argument
and
that's
going
to
be
used
by
clients
to
understand
when
the
responses
come
in.
What
was
the
trigger
about
response?
L
So
both
of
these
arguments
are
on
both
defer
and
stream,
and
the
label
will
get
passed
through
into
the
payload
for
whatever
triggered
the
deferred
string
and
then
for
stream.
There
is
an
initial
count
argument,
which
is
an
integer
which
is
just
telling
the
server
how
many
items
should
be
in
the
initial
response.
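As a sketch of how the arguments just described might look in a query (the field names are invented for illustration, and the exact argument spellings were still being settled in the RFC at this point):

```graphql
query HomePage {
  me {
    name
    # "if" controls whether deferral actually happens; "label" is
    # echoed back in the later payload so the client can match it up.
    ... @defer(if: true, label: "profileDetails") {
      bio
      avatarUrl
    }
    # "initialCount" tells the server how many list items to include
    # in the initial response; the remainder arrive as later payloads.
    friends @stream(initialCount: 2, label: "friendsStream") {
      name
    }
  }
}
```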
L
And
then,
as
far
as
the
payload
format,
the
first
one
is
going
to
match
what
is
currently
in
the
spec
for
a
graph
QL
payload,
with
the
only
fields
that
are
deferred
or
not
going
to
be
there
and
the
arrays.
Only
the
initial,
the
number
that
are
in
the
initial
count
would
be
in
the
right.
That's
returned,
then,
each
payload
after
that
is
going
to
be
an
object
with
the
label
that
corresponds
to
what
was
passed
to
the
fern
stream.
L
And
then
I
guess
one
more
note
that
we
wanted
to
bring
up.
Is
that
we're
saying
that,
like
the
fern
stream?
Are
it's
highly
recommended
that
a
server
follows
them?
But
it's
going
to
be
in
the
spec
as
a
should,
instead
of
a
must,
which,
in
the
advance
cases
of
the
server
knowing
having
like
a
better
idea
of
how
like
maybe
it
could
be
more
performant
to
not
actually
stream
or
defer?
It
will
allow
that
and
kind
of
keeping
things
flexible
in
the
future.
L
O
Just a high-level comment: this RFC-based defer matches exactly how our client worked as of about two weeks ago, and this is the reason to have RFCs being used in production before they go into the spec. We discovered at Facebook that, yes, our client works perfectly with servers that always indicate whether their payloads are deferred or streamed or not.
O
But
when
we
tried
to
point
that
client
at
a
basically
an
on
Facebook
server
that
matched
the
current
spec
or
match
to
be
like
yeah
I,
don't
actually
follow
the
you
know
the
first
stream
spec
we
heard
because
we
never
gotten.
It
is
final,
because
the
initial
payload
is
the
final
payload
and
has
no
indication
of
whether
it's
the
final
payload,
because
it's
no
base
like
a
spec
compliant
response
that.
A
O
Unfortunately, because we actually had our client, the first defer/stream client, in production as it was, that also meant that even though we could fix our client so it can now work with the current or a future version (our client as of a week ago can work with spec-compliant servers), our old clients cannot. Which means we're basically stuck always sending is_final, whether it is true or false, and we don't want the open source community to end up with that, with being required to send is_final.
L
A
We can, I think we can definitely change that, cool. There are very few cases where we have in-spec definitions for these things, but they're often camel case when we do, so it would just be good to be consistent. I also have a general question about label: that's something pretty new. I can see it being useful, but I see that it's required.
A
L
So it's not a... There could, for example, be two deferred fragments on the same object, and then they would have an identical path, but which fragment the payload came from would be different. So it's a way for clients to understand that. I guess it doesn't necessarily need to be a required argument, because maybe your client doesn't really care, so I think loosening that to not be required is a possibility, unless anyone thinks otherwise.
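To illustrate the case being discussed (hypothetical field names), two deferred fragments on the same object yield payloads with identical paths, so the label is what tells them apart:

```graphql
query Profile {
  me {
    name
    ... @defer(label: "contactInfo") {
      email
      phone
    }
    ... @defer(label: "activity") {
      lastSeen
    }
  }
}
# Both deferred payloads arrive with path ["me"]; only the label
# indicates which fragment each payload completes.
```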
O
A
Isn't the path going to do that for you, though? Because you could have two responses that are independent and therefore could have labels that overlap; you could come back with things that have the same label in different paths. Either way, you need to be able to associate payloads from one stream of responses versus another set of responses, right?
O
A
This seems like a very different thing, because you can have many streams or defers within a single operation. Each of those would have a unique label within that operation, but do we also need to have the name of the operation it came from in the payload? I mean, you could also execute the same operation twice with different arguments, or something like that, at the same time. So is that enough?
A
I
think
this
is
my
pitch
would
be
that
this
is
something
that
we
we
don't
solve
at
the
graphical
spec
level
like
that
should
be
your
network
interface
should
be
able
to
know
the
difference
between
payloads
coming
in
not
associated
with
one
request
versus
a
separate
request.
That's
just
you
know:
stream
TT,
duplexing,.
N
Q
It seems, for Rob's first use case, where within the same query, under the same scope, you could have multiple defers: I think for clients that are more fragment-oriented it seems required, but we cannot assume all clients behave this way. There are simple clients that, given a payload, as long as they know which path it merges into, will just merge it and be able to emit the data. So yeah, I'm not sure.
Q
A
I can see why label is super useful. I imagine it would make more sense to make it optional as per the spec, but then clients that want it would either automatically generate it, based on position in the query or which fragment it's found within, or ask the developer to fill it in, with the client requiring it: when you're doing your compilation step, or a linter step, that client points out that it requires label because it relies on it.
O
A
E
I may have missed something here, but I'm a little bit confused about streamed results coming through from different fragments. In normal GraphQL queries the fragments would effectively be merged on the server side, so the client would only ever see the end result of both of those fragments. I think if you're allowing two different streamed responses to come from fragments, where one only has one set of fields and the other only has the other set, that's going to be quite a divergence in behavior from what we currently have.
L
So
if
you
have,
if
you
have
two
fragments
that
are
both
deferred
on
under
the
same
object,
they
could
have
different
fields
on
those
fragments
and
you
you
would
not
want
like
the
you
want.
You
wouldn't
want
to
combine
them
together,
because
if
the
fields
on
some
on
fragment
a
are
slower
than
the
fields
at
fragment
B,
you
want
to
get
the
data
from
fragment
B
as
soon
as
you
can.
So
there
is
like
the
possibility
of
overlap
where
the
same
fields
are
there.
L
E
Yeah, that was my question. What I'm struggling with is that potentially you have non-nullable fields on both sides. If you were to do something like code generation from a current GraphQL document, you would know that the results you're going to get back would have all of those non-nullable fields, even where, say, three of them are specified by one fragment and three by the other.
E
In
this
solution,
you'd
effectively
need
to
generate
the
types
for
those
separately,
which
then
means
you've
effectively
got
that
initial
payload,
which
merges
the
two
fragments
as
one
set
of
things.
But
then
the
deferred
results
are
effectively
then
coming
in
as
two
separate
streams
which
I
think
could
get
challenging.
I.
E
O
So yes, it's very problematic for type-model-based clients, a client that provides schema types directly to the user with type-safe arguments. That type-model client would now need to ask: oh, are any of these fields being deferred? But that's already a problem for type-model-based clients, because you might simply not request a field that is non-nullable.
O
Four
fragment
based
clients.
Oh
so
there's
like
two
levels
of
fragment
based
clients,
pointers
like
operation
level,
where
you,
okay,
I
I'm,
getting
one
operation.
I
know
whether
this
field
is
going
to
be
in
the
operation
or
not,
and
there
you
again
have
to
now
update
that
client
to
be
knowledgeable
about
stream
and
deferred
fields.
It's
kind
of
messy
I
would,
which
is
why?
O
E
So I have a separate question: is there a potential issue here with too much branching going on? If you, for example, defer three fragments on the same type, and then they have, say, a relational fetch, and then there are things deferred on that:
E
previously you would effectively be returning a particular structure of objects, but now at that base level you're returning three objects where there was previously one, and at the next level, for each of those three objects, you're then potentially returning another set of branches from there. Could that be used as a denial of service, or some nefarious way of compromising a server? Not compromising, I mean, interfering with it. Yes.
O
Do
s
a
denial
service
attack
like
more
likely
with
the
query
that
you're
using,
but
we've
always
had
like
we've,
always
allowed
you
to
create
queries
that
range
a
ton
and
like
get
this
user,
get
their
friends
get
their
friends
of
friends.
Get
their
friends
of
friends
a
like
you've
always
been
able
to
create
that,
so
it
it
does
potentially
exacerbate
the
problem.
On
the
other
hand,
it
also
gives
your
server
a
way
of,
because
you
have
these
well-defined
boundaries.
D
A
Just a meta point: I think it would be worthwhile to document those concerns in the RFC doc, because even though we might be able to talk through how to address them, whether they're real problems or not, I expect this will just be a thing that comes up over and over again. So having a clear articulation of all of those in the RFC could be super useful.
A
Just
to
speak
to
this
understand,
comment
and
and
maybe
I'm
coming
up
with
something
different
from
what
Benji
was
talking
about,
but
this
was
something
that
was
always
kind
of
an
open
question
to
me
is
when
you
need
to
fur
a
single
field,
then
then
it's
pretty
obvious
what
that
means,
but
if
you
defer
a
fragment
that
contains
multiple
fields
which
might
overlap
with
non
deferred
fields,
what's
the
behavior
and
QA
answered
that
for
the
overlapping
fields,
they're
gonna
appear
in
both
the
original
payload
and
the
subsequent
deferred
payload.
That
seems
like
it's.
A
J
Q
I completely understand the concern here, and this is something we actually tried, seeing whether we are able to deduplicate. The fact is that the deduplication algorithm, in both the Facebook implementation and the open-source spec, is very complicated. The case in the example given by Lee is simple: it's a leaf field. The complicated case is a field that returns a subtree.
Q
Okay,
that
is
the
thing
I
need
to
include
in
my
different
payload,
which
is
just
wait
way
too
complicated
to
do
and
I
think
like
at
the
wrong
time,
like
basically
for
for
our
server,
you
go
to
the
database,
you
achieve
something
once
and
you
return
client
twice
it's
more
efficient
than
trying
to
do
this.
Cpu
intensive
computation
of
tree
traversal
and
comparing
and
find
out
that
they
are
different.
So
that's
the
trade-off
we
made
there.
Q
J
A
O
L
I think the argument on the other side of that, which was brought up in one of the discussion issues we had, was that as a client developer, if you're putting defer in your code, you kind of want a reasonable expectation that the server is going to follow it, because of the performance implications you're asking for. If it's not a high likelihood that it's going to be followed, you might want to write your app in a different way.
L
If
you
know
it's
not
going
to
be
supported
and
like,
if
you
have
the
option
of
doing
separate
queries
versus
deferring
some
data,
then
the
fare
is
better
if
it's
supported.
But
if
it's
not
supported,
then
it's
worse
than
separate
queries.
And
so,
if
you
don't
know
what
the
server
is
going
to
do,
it's
you
can't
really
make
that
distinction.
When
you're
developing.
A
That's
a
good
point:
yeah
I
think.
If
the
server
is
accepting
those
directives,
then
it's
reasonable
to
assume
that
it's
actually
treating
them
correctly
and
not
just
always
ignoring
them,
even
if
we
provide
the
ability
to
ignore
them
in
a
case
where
a
server
thinks
that
the
performance
will
be
best.
In
that
case,
it's
not
really
ascribing
to
the
letter
of
the
rule
there
or
the
spirit
of
the
rule,
if
it's
dropping
them
with
no
consideration
to
the
performance
just
because
it
literally
doesn't
have
the
opportunity
to
do
a
streams
response.
A
Also
I
expect
that
if
you,
if
you
include
defer
or
stream
that
implies
something
about
how
the
actual
network
response
works,
you
know
like
it's
going
to
use,
chunked,
encoding
or
not,
or
a
server
that
doesn't
support
stream,
might
not
support
chunked
encoding
either.
So
my
sense
is
that
the
we
can
communicate
this
in
the
introspection
by
you
know.
You
can
you'll
be
able
to
know
whether
these
these
directives
exist
or
not.
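Since directives are visible through standard introspection, a client could probe for support with something like the following (assuming the directives keep the names defer and stream):

```graphql
query SupportsIncrementalDelivery {
  __schema {
    directives {
      name
      # a server advertising incremental delivery would list
      # "defer" and "stream" here
    }
  }
}
```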
J
Honestly,
when
they
did
bacilli
different
note,
I'm
curious
with
the
claim
that
difference
dream
actually
and
was
work
better
for
the
client
than
splitting
the
query,
because
I
got
at
least
night,
the
split
query
would
I
would
execute
in
parallel
across.
You
know,
multiple
load
balanced
observers,
whereas
a
different
query
would
see
really
still
and
the
client.
We
actually
get
faster
responses
by
splitting
their
query,
even
when
the
server
supports
different
stream.
So.
A
J
E
Sorry Lee, to be pedantic: you mentioned that you think it should be indicated in the schema whether the service supports defer or stream. I think it actually needs to be indicated both in the schema and through the server, because it might well be that the schema supports defer and stream but the server does not, and vice versa.
A
E
To give you an example: you can write a GraphQL schema right now with the graphql-js reference implementation, you can add subscriptions to it, and you can serve it through express-graphql, and subscriptions are not supported: it will run it as a regular resolve; it won't do the actual subscribe operation. That's what I'm trying to surface, basically.
A
I don't think so, right now. I think the assumption is that if you can see subscriptions in introspection, then subscriptions are supported. It's certainly possible to screw that up, but I think that's the assumption at the moment. We could probably do better than what subscriptions does. Is that a reasonable restatement of your point?
E
A
Since chunked encoding is a sort of natural suggestion atop HTTP, and that's the dominant way people use GraphQL, hopefully this isn't quite as severe as subscriptions was. But your point is well made that we need to make it pretty clear how clients can learn about what their network protocol can support, and the schema might not be enough, since the schema could be shared in multiple places. So that does maybe escape a little bit outside of stream and defer, I suspect.
A
One other question that came to mind: right now there is, I don't know if we mention it explicitly in the spec, but it's certainly an assumption, that values appearing in more than one place in a query are cached, such that you're not getting two different values for a single field spread out over time.
A
If
you
run
a
query
right
now,
you
expect
to
come
back
with
the
consistent
values
across
those
fields.
Defer
intentionally
spreads
out
execution
over
time,
but
I
wonder
if
we
want
to
either
leave
this
undefined
or
apply
some
rules
for
how
we
expect
server-side
caching
to
work
for,
like
tearing
of
data.
Q
Yeah
I
think
we
actually
briefly
talked
about
this.
The
fundamental
problem
is
interleaving
mutations
that
happens
inside
a
query
and
the
way
I
see
it
is
that
a
different
stream
query
should
handle
interleaving
mutation
exactly
the
same
way
as
regular
query
handles
it.
Instead
of
saying
like
thinking
about
it's
like
spreading
the
execution
to
a
long
range
of
time,
we
should
just
think
about
it.
Q
It's
reordering
the
execution
time
right,
like
I'm,
prioritizing
one
and
then
delay
the
later,
but
the
server
is
still
trying
as
hard
as
it
can
to
return
all
of
the
data
within
the
smallest
amount
of
time
as
possible.
I
think
if
Jaffer's
on
the
line,
he
can
probably
also
talk
a
little
bit
more
about
I.
Think
like
in
order
to
introduce
consistency
within
playing
around
was
the
idea
of
like
strong
object.
Is
vegetable
object
at
Facebook,
which
I
think
without
those
concepts?
Q
It's
it's
gonna
be
really
hard
to
describe
like
consistency
in
graph
QR
or
like?
What's
that
means?
So
if
we
can
treat
that
like
as
a
separate
problem
by
itself,
I
think
that
would
be
great
because
it
is
a
problem.
I
think,
if
you
have
an
expensive
query,
that's
just
runs
a
pretty
long.
Time
is
gonna
happen
as
well.
D
Yep,
just
briefly
touch
on
that
question
by
Danity,
because
it
actually
is
rather
pervasive.
It
may
even
affect
the
discussion
we
were
having
earlier
around
fragmentation,
for
example
today,
when
we
do
defragmentation
on
the
server
it's
fundamentally
ization,
which
is
to
ensure
we
don't
sort
of
evaluate
the
same
field
twice,
it's
probably
the
best
way
of
doing
that.
Another
way
of
doing
that
might
be
meditation,
but
it's
not
plus
a.
It
requires
a
lot
of
memory
and
B.
D
It
might
not
really
that
effective
in
some
cases,
because
you
have
the
same
object
that
it
can
appear
in
the
graph
multiple
times
and
there's
no
way
of
knowing
that,
because
there's
no
notion
of
describing
an
identity
of
an
object
in
graph
QL,
it's
often
been
remarked
ofcourse
that
graph
QL
2
queries
a
graph,
but
it
returns
a
tree
and
we
don't.
We
don't
really
have
the
graph
structure
in
what
we
return.
So
just
as
a
want
to
give
some
context
about
that.
D
You
know
for
the
length
of
the
time
the
sessions
open.
We
have
an
in-memory
cache
of
object
by
identity,
so
do
it.
Those
are
some
kind
of
thoughts,
and
it's
really
a
question
of
whether
it
might
make
sense
to
if
not
begin,
the
standardization
process
of
identity
in
graph
QL.
You
know,
there's
a
note,
I
think,
there's
an
open
question
of
whether
it
should
proceed.
A
Support
doing
these
in
parallel,
rather
than
depending
one
or
the
other
and
I
like
the
way
of
thinking
about
it.
Is
this
using
defer,
just
like
there
you're
already
stretching
out
running
queries
over
time?
That's
that's
sort
of
natural!
You
just
get
a
payload
all
at
the
end
of
that
work
and
defer
is
an
opportunity
to
get
payloads
earlier
in
that
work.
So
it's
not
necessarily
about
wearing
the
same
value
multiple
times
over
over
time.
A
I
don't
know
if
we
have
to
be
explicit
about
that,
but
I
would
love
for
us
to
be
more
concrete
about
it,
and
whether
identity
is
a
key
piece
of
that
that
we
do
that
independently
of
field
and
stream
or
a
different
stream,
and
we
do
it
in
a
way.
That's
applies
equally
to
different
stream
queries
as
it
does
to
not
differentiated
queries.
J
M
If you can't determine it before you get to the point of sending the query, then maybe it should, I don't know, maybe it should go back to the standard behavior: an old server that doesn't know about defer or stream yet is probably just going to ignore the directive. So maybe there's an opportunity to go back to that, if somebody who really cares, as a client, had a way to determine it in advance.
M
A
Q
A
D
A
I would love to include that on an upcoming agenda. I kind of view that as a subcommittee, almost, of this group; ideally the final output there is something that the set of folks who regularly come to this meeting have a lot of visibility into, with the opportunity to get input and feedback and come to a consensus across a broader set of folks.