From YouTube: GraphQL Working Group - December 2, 2021
C: How are you doing, alex? Doing all right? How about yourself? Good, good. Thank you. Everybody's having fun with time zone changes and meeting calendar changes.

C: Yeah, it's crazy. Yes, I agree.
E: Merged. If anybody else isn't on the attendees list yet... there's a long attendees list; we've got 19 people on the call today. Part of me was thinking, last meeting of the year, it's going to be thin, but dang, I think we're going to run out of time.

E: I think so. We've got a lot of stuff to cover today. We'll do our best to stay on time and hopefully get to everything in the agenda, but if we have to bump something into next month, it might be a rare case where we have to do that. Anyhow, in the spirit of that, let's get started.
E: Welcome, everybody, to the december 2021 edition of our working group meeting. By joining, of course, we all agree to the membership agreement, participation guidelines, and code of conduct. There are links to all those in the agenda, should you ever want to remind yourself what those are.

E: As per usual, we do a round of the room of introducing ourselves. Since there's a lot of people on the call today, rather than saying what you do and what you work on, let's just do name, just a name to a face. We'll do the order that they appear in the agenda doc, so take a look at that, see your name coming, and try to be ready to unmute yourself. That way everyone can have a name to a face.
H: Hi, my name is alex. Hi.

E: I'm the one. All right, welcome, all faceless and faced. That serves as a very good sound check for everyone at least, and it was probably the fastest attendee intro we've done yet, despite having a lot of people here. So thank you all for that.
E: I see no pencil marks in our list of attendees. I know benji always has a virtual pencil mark next to his name, but we do have a lot of folks and we do have a lot to talk about. So if anyone is willing to pop open that live google doc, everyone should have edit access, and please help out with taking notes, especially as benji is live in discussion. That would be super helpful.

E: Okay, let's take a quick look over the agenda, make sure we've got everything we want to talk about. We are going to talk about previous meeting action items, I'll give a very brief update on tsc elections and where we are with that, and then we will get into the good stuff: we have shorthand for variables and arguments.

E: It's a lot of stuff. Anything else that we want to talk about today?
A: Please do, so, yeah: I'm new, I've got ideas, and I know that you have to go through this very, very heavy rfc process to get them discussed here. What's the right way to have, like, a seed idea? Just before I go marching on the rfc thing, just to throw it out and say: what do you guys think, is this crazy, should I put all my effort into it?

E: I'm new here as well. Welcome to both of you, glad you're here. Great question. So we have these rfc stages that basically identify where in the life cycle an idea is. You'll see, sort of, rfc two or three is very close to bulletproof, if not bulletproof; rfc one is "we generally agree that this is a thing we want to pursue, but it may be in a partially complete state", which is where most rfcs spend most of their life cycle; and then rfc zero is "crazy idea".
E: So if you want to explore that via a code pull request, or via just opening an issue or a discussion, now that we have the discussions tab live on github... also, in the graphql working group github we have a directory called rfcs, and it's totally acceptable to add rfc zeros there as well. You'll find a bunch of docs there that are in various states of usefulness or abandonment; it's just a shared space for us to put stuff.

E: So I would say, whatever you find most helpful to get the idea out. I find it best to try to generate some asynchronous discussion on that first, just to get a sense of whether it has legs, or whether anyone can raise, you know, "was this talked about in the past?" Are there, you know, primary points that you want to get ahead of before you bring it to discussion here? That way, when you do bring it to an agenda item for this meeting...
F: Oh, absolutely, just a quick note, because, alexander, I commented a little bit on your pr into the working group. The pr that you make to the working group rfcs can definitely be way less involved than everything that you put in; you could make an rfc zero that is literally just the link to your slideshow and the link to your description.

F: We want that to be as lightweight as possible, and if it is viewed as heavyweight, let us know what we can do to reduce that, because having the ability for people to throw out ideas is really important, and so is doing it in a way where we can generate discussion even outside this context.
E
Totally
agree:
yeah.
The
the
goal
here
is
that
the
heavy
weightedness
of
the
process
for
rfcs
scales,
up
as
the
maturity
of
the
rfc
scales
up
so
like,
as
you
get
closer
to
stage
two
and
stage
three,
like
the
we
get
very
detailed
about
making
sure
that
it's
the
right
thing
to
ship,
but
like
in
stage
zero
and
one
it's
very
exploratory
and
high
level
and
like
whatever
information
we
have
is
great.
It's
like
the
goal
is
just
to
make
sure
the
discussion
stays
on
focus
so.
H: My understanding is this is sort of a kickoff for ideas, or a place to kick them off. Where do these discussions continue? Do we wait for the next working group meeting, or do we have the means to discuss them offline, if you wish?

E: Both; either can work. It kind of depends on the time availability, what you need to do, what the next steps are. Sometimes we can make a lot of progress async between meetings and use these meetings as a way to get everybody else on the same page about what's happened, for some of our more complicated rfcs.
E: We've found that having a breakout group of a subset of this room meet, you know, maybe still monthly, but at the halfway point between these, is helpful as a check-in to make sure that progress is being made. It kind of depends, and again, it's whatever process is the right balance between what's helpful for the rfc and what people are willing to put into it.
E: Good question. I have one small thing to add that I didn't think to put on the agenda. It's just a super brief update, which is: this was the last agenda doc that we had in our folder. So I went ahead, in the hour before the meeting, and put together a bunch of them for 2022. I just wanted to let you all know that those are out, same time, same guidelines.

E: You know, we can always discuss changing that, but I figured we might as well have them up there, and then later, if we decide that we want to tweak any calendar timings, we can do that. Two small things that I did that hopefully will be helpful: I put these in a 2022 directory.
E: I might backpropagate that and try to take the existing ones and put them in directories, and figure out how to make sure I don't break old links in the process. We're four and a half years into this meeting, which is crazy, and so that list of agenda files is getting really long. Congrats to the set of folks who have been around for the long term in this room, but it's time to add a tier to our organization.
E: The other thing I did is I cleaned up the template a bit, so that when you actually see the markdown rendered, you just kind of see the good stuff and a lot less of the guidelines and all the stuff in between. I pulled all of that up into a comment block above, so that when you go to either edit the file or send a pull request, you will first see that, and it's a little bit more spelled out.

E: So if you wouldn't mind taking a look over that, making sure I didn't misspell anything, or if something's confusing, feel free to let me know. If you want to send a pull request to one file, I'll make sure I propagate it to all the other ones.
E: Thank you to benji, who I see on most of these, for filling a bunch of these in. Since we're on a bit of a tight schedule today, we're just going to look at the ones ready for review, but I do have the links up here in case anyone wants to go look at all of the open action items.

E: We've got four that are ready for review. I'm going to go through these in bottom-up order, if you're looking at that link. So: "slack to discord, assign relevant permissions". Yep, done, we are live on discord.
I: Yes, so we've applied and we've heard nothing, but there's not really anything we can do. It's like a github issue that has been filed; I've linked to it, yeah. Unless anyone knows someone at discord that can, you know, bump it up the priority list, I think we're just stuck waiting. But also, as I put in that issue, I don't think it really matters, because we've got discord.graphql.org now, so we don't really need a discord.gg/graphql as well. Totally agree.

E: Sweet, closed. "Move the rfcs to the working group repo, update all relevant readmes."
E: Did I update the readme? Yeah, I see your comment here, benji: you're not certain if that was also done. Let's go ahead and close this. I suspect that, since updating readmes is not super crisp on whether or not we know it's done, this may be a long tail of finding dead links and updating them along the way. I hope that there aren't that many, but if anyone spots a dead link, please send a pull request to fix it.
E: Since I know a handful of folks are new here, just to overview what we did: we used to have all of our working files in the graphql spec repo, and this graphql working group repo, the graphql-wg repo, was only for hosting the agenda docs and notes for this meeting. That ended up being a problem for the last round of the spec cut, when we were trying to get a clear sense of who was committing what, when, and whether they had signed the appropriate agreements.

E: Basically, we needed tighter permissions on the spec repo itself compared to this one, and that had a practical cost to people's ability to merge stuff.
E
We
wanted
to
make
sure
that
the
bar
for
merging
into
the
spike
repo
was
quite
high
and
therefore
we
wanted
to
limit
the
set
of
people
who
had
commit
rates
for
that
repo,
but
the
graphql
wg
repo.
The
goal
is
to
actually
have
relatively
broad
commit
rights.
Basically,
of
course,
the
the
tse
has
commit
rights,
but
the
hope
is
that
a
lot
of
just
long-term
members
who
show
up
to
this
meeting
regularly,
we
can
trust
to
give
commitment
rights
that
way.
E
E
The
the
goal
is
actually
to
just
get
the
information
in,
and
so
what
we
did
is
we
took
all
the
the
spec
pr
docs
or
the
rc
docs,
and
we
moved
them
into
this
repo
so
likely
a
bunch
of
stuff
broke
along
the
way.
Anyhow
issue
closed.
I: On a related topic, is it okay to start merging typos and stuff like that into the graphql spec repo again, now that the october release is out?

E: Yes, absolutely, yeah. Anything that's a plainly obvious editorial change, like a spelling mistake, we can merge. The last one on here is this nominations issue. I'm gonna leave this issue open, despite it being ready for review, just because it's a good place to capture all the information about what's happening until the entire election process is complete. So here's what I'll do: I'm going to edit it from "nominations" to "elections".
E: Now it's a little bit more clear that that's what it's about, and I'll remove this ready-for-review label. And that is a good segue into our next topic.

E: Great question. In the agenda doc, I think it is agenda item number five, for the previous meeting action items; there are two links there. "Ready for review" is the link that I had popped open and was working through; it should show zero now that we've just gone through them all. There's a second link there, which is all open action items, and you'll notice that a lot of them were opened in the last few days. Again, huge thanks to benji, who reviewed over our notes docs from the last couple meetings and opened issues for everything we said should be an action item.
E: It's a service you provide, that's for sure. Okay, good segue into our next topic, which is the update on the tsc elections; that was the last open action item that we just looked at. For folks who are new, just a super quick overview of what this is: we have our tsc. It is 11 folks, that is, me plus 10 others, and every year half of those 10 go up for election, so their seats expire and then we have an elections process to fill them.
E
We
had
a
nominations
process.
I
tweeted
about
this
a
couple
times.
I've
had
this
issue
open
since
late
october,
and
we've
collected
a
bunch
of
names
into
a
form.
I
think
at
this
point
we
have
a
little
over
ten.
Maybe
a
dozen
I'll
have
to
ask
brian
our
pm
how
many
names
we've
got.
E
Anyhow,
our
our
nominations,
officially
close
on
the
29th
brian
set
up
the
timing
here
to
have
a
couple
of
days
between
the
29th
and
the
1st
so
that
he
could
prep
some
elections
material
for
the
remainder
of
the
tsc
and
the
next
step,
which
we,
I
don't
think
we've
actually
kicked
off,
even
though
this
says
on
the
first,
we
will
kick
this
off
I'll,
actually
drop
a
note
to
brian
to
help
us
get
this
started.
E
The
next
phase
is
the
remaining
tsc,
so
the
set
of
tsc
who
seats
are
not
up
that
are
not
expiring
enough
for
election,
we'll
review
the
folks
who've
submitted
their
name
and
we'll
take
a
vote,
so
that
has
not
happened
yet,
but
that
will
happen
between
now
and
the
end
of
the
year,
hopefully
a
little
bit
sooner.
E
E
Yes,
we'll
probably
we
might
do
it
on
like
a
github
discussion
or
something
and
then
just
like
hit
every
channel
to
make
sure
people
find
it
brandon,
and
I
will
figure
out,
what's
going
to
be
the
most
helpful
place
to
put
that,
but
yeah
we'll
definitely
drop
a
note
on
the
mailing
list,
we'll
probably
ping
everyone
individually
to
make
sure
everyone
has
got
this
info
cool.
L: Am I sharing the right one? Okay. So at the guild we have found, while working with clients, that when you're writing a query, you define your query, and then it's almost always the same variable names we use. So if you need a user from an id, you do "id", and then you pass that variable name down. So this proposal is kind of like the javascript syntax: it's a shorthand for variables.
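To make the idea concrete, here is a small sketch. The exact shorthand syntax is still an open question in the RFC; the second query uses one hypothetical spelling:

```graphql
# Today the argument name and the variable name are typically identical,
# so the mapping is spelled out twice:
query GetUser($id: ID!) {
  user(id: $id) {
    name
  }
}

# With a JavaScript-style shorthand, passing $id alone would be
# understood as id: $id:
query GetUser($id: ID!) {
  user($id) {
    name
  }
}
```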
E: Yep, I just sent a link in the chat to our guiding principles, which are always a helpful lens for looking at new ideas and seeing how we want to think about them. What you just expressed, I think, is a great example of "preserve option value": is this a change that may hurt future option value?

E: I don't want to take that principle to an extreme that limits our ability to make any change, although certainly changes to the syntax itself should have a fairly high bar compared to other kinds of changes we might consider, because of this issue. So maybe what might be helpful is just doing a brainstorm of wild ideas, syntax ideas that this could potentially conflict with, just so that we have a slightly more concrete understanding of what our lost options could contain.
H: How do we indicate... this is a problem that I have as well in my proposal: how do we indicate to the client that this syntax is actually fine, that they can use it? Because that's another thing: if you're running this against older servers, they will not know this; if you're running it against new servers, they will. Federation needs to understand how to handle this.
E
Yeah
that
last
point
is
one
that
is,
I
would
say,
an
unsolved
problem
for
graphql
right
now
is
given
a
particular
server.
Do
you
know
its
total
feature
set
what
it
does
and
doesn't
support?
There
have
been
lightweight
proposals
on
how
to
solve
that
in
the
past.
I
don't
think
we've
ever
had
something
compelling
enough
that
we've
gone
through
it.
Yet
I
understand
in
practice
what
a
lot
of
people
do.
H: Yeah, I think so. These server features, or whatever they end up being called... I mean, in my presentation I have some, not recommendations, some ideas, crazy ideas, let's call them the rfc 0s, that are based on directives. But somebody has to make a decision: do we want individual directives for the individual things, or do we want one directive that is sort of parameterized and can say "this feature, that feature", what's available and what's not?

H: And then there's the question that I faced when I was thinking about these things. We have the apollo folks here, right? I'm not a user of apollo federation, but I'm thinking of becoming one later, or enabling my graphql apis to be federable.

H: And the question is: what if you need to federate a server that does have this and a server that doesn't? So it's no longer a feature... like, how does federation present itself? It's not that it has it everywhere; some features might be available only somewhere, or it has to have the ability to implement them somehow otherwise, and so on and so on. It becomes an interesting question in itself that way.
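The two directive shapes mentioned above could look roughly like the following SDL sketch. Both directive names here are hypothetical; the transcript only describes "ideas based on directives":

```graphql
# Option 1: one directive per advertised capability.
schema @argumentShorthand @clientControlledNullability {
  query: Query
}

# Option 2: a single parameterized directive that lists capabilities.
schema @features(names: ["argumentShorthand", "clientControlledNullability"]) {
  query: Query
}
```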
G: We think it's more involved, because of the printer: by default the printer should use the shorthand, because we try to use the syntax features we have. So new versions of graphql libraries would generate it by default, and the generated queries would be incompatible with older servers.
E: Yeah, just now, having reviewed these guiding principles again, a lot of these are helping us understand cost, I think. Like "favor no change": obviously we're making a change, so I think the way to interpret that is that we should have a very high bar of added value, and the burden of proof here is on the proposal. And then we also want to make sure that we are being motivated by real use cases, and I understand that we are.

E: This is a consequence of the guild working with a bunch of folks and seeing common patterns.
E
I
suspect
what
might
be
a
helpful
next
step
is
to
better
articulate
the
positive
case.
I
think
we
at
this
point
have
a
pretty
good
understanding
of
the
cost
anytime.
You
change
syntax,
there's
ripple
effects
across
the
whole
ecosystem.
There's
concerns
about
option
value.
I
have
concerns
about
you
know.
E
One
of
our
principles
is
simplicity,
is
more
important
than
expressiveness,
like
the
fact
that
you
have
to
say,
I
have
an
argument
called
x,
whose
value
is
the
variable
dollar
x
like
that
is
long,
but
it
is
self-descriptive
in
a
way
that
just
dollar
x
alone
implies
something
that
maybe
not
obvious
to
someone
new
to
graphql.
So
it's
like
there's
a
lot
of
costs
here
and
I
think
what
we
need
to
do
is
a
cost
benefit
analysis
and
the
next
step.
There
is
better
understanding
the
benefit
like
how
painful
is
not
having
this.
D
Yeah,
I
think,
that's
that
that's
a
good
point,
because
it's
the
improvement
that
this
will
give
us
a
worth
sacrificing
that
it
might
not
work
with
all
the
servers
and
stuff
like
that,
and
also,
I
think,
if,
if
we
just
put
the
id
in
there,
it's
not
clear,
is
it
a
positional
parameter?
F: Do the cost-benefit, make sure that we have a place for discussion, and make sure we clarify: hey, here are the downsides, exactly as you described, michael, and what are the upsides, what are the motivations? Just having read through it today, I'm not totally clear what the motivating use cases are yet. So yeah, I think it's clear, as an action item: this is still rfc zero, but it's worth putting down all of the details.
E: Agreed, that's a great action item to take. Cost-benefit analysis has happened on this issue thread, and I did add the rfc 0 label, so that's where this is. I think the outcome of that cost-benefit analysis will let us know whether we want to pursue this and take it to stage one, or do something else.

A: Yeah, no, I'm loving this discussion. This, like: I have an idea, I'll wait, I'll do the process, but I would love to get it in the same form for the same discussion. How do I get my rfc onto the agenda?
E: Yeah, we tend to blindly merge pull requests to the agenda, assuming that you've signed the cla, which, because you're here, you very likely have already done. So yeah, anything you want to put up for discussion here, you're welcome to. Right, I'm pretty excited about this.

E: Okay, any other thoughts on this? We have hit just a little bit over the 10-minute mark, which is what we asked for here. Any other thoughts? Should we move on?
B: Sweet. I'll try to keep the discussion brief this time. So I want to give an update on where we're at with the implementation proposal, and then a little extra... one moment.

B: Cool. So in the spec pr there was discussion about... is it cool if I share my screen? Yeah, okay. We wanted a way to mark list elements required. I think it was lee who suggested this syntax, and then someone else said they liked it, and then no one else had any objections to it. So I implemented this; this is working now, the tests are passing, it's wonderful. Ivan gave me an architectural review on the implementation.
B: That was super helpful and cleaned up a lot of stuff; my work is up for review again. There was some discussion last time on what the question mark operator should do. I don't want to get into it too deeply now, because I feel like it'll go on for a while, but just a brief overview of what was suggested.
B: The question mark operator currently takes something that is non-nullable and converts it to be nullable. So if null is returned for a field marked with the question mark operator, it won't propagate null to the parent; it'll just come back null, and there will be no error. But it was suggested that it could be used as sort of an error boundary, like the catch in a try-catch. One of the things that came up was the exclamation point: right now, if something comes back null, the null propagates, it'll come back as an error, and some smart clients will just blow out the entire response if there are any errors. So it was suggested that using a question mark could indicate that errors were handled and that we shouldn't blow out the entire thing.
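A short sketch of the two client-side modifiers under discussion, following the semantics described above; the field names are illustrative:

```graphql
query {
  user(id: 4) {
    # "!" treats the field as non-nullable: a null here raises an error
    # and propagates null upward, as if the schema declared the field
    # non-null.
    name!
    # "?" treats the field as nullable: a null here simply comes back as
    # null with no propagation; under the "error boundary" reading, the
    # error counts as handled rather than blowing out the whole response.
    bio?
  }
}
```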
B: I don't remember if it was ivan or someone else who also suggested that, if we did that, if an error was handled, we could also hold the unmodified return value in the error type, so that, you know, some clients could look at the errors and do interesting things with them: some could treat it as a non-destructive action and some could treat it as a destructive action; it would kind of be up to them. There were some other suggestions too.

B: We didn't really land on one, but I would like to figure out a way to come to some sort of consensus before the next working group meeting. I'm doing this for work, and I'm gonna have less time to work on this next quarter. So if we can get to stage two by the next working group meeting, that would be ideal for me. So, like, how? What can I do to help us come to a consensus on what we wanna do with this?
E: I'm absolutely convinced that we need it, and we have to choose one of these semantics, but I think the next step is to list out option a, option b, option c: what are they, what do they look like? Basically bifurcate, fork, and then for option a: this is how the spec would look, this is how the rules would work.

E: Here are the pros and cons. Then do that for each of the paths that we want to consider, and use it to spitball some ideas a little bit, and either conclude that some of those paths are non-viable, or whittle it down. If it whittles down to one, great: you can come back and say, hey, you know, I put a ton of energy into teasing out what the right thing to do is, and now I'm confident that this is the one, that it is the right thing to do.

E: And if it's more than one, then that's okay: then you can, you know, async, just start tagging folks who've been active on this thread before and start pulling in input, and then hopefully, by the time we come back for the january meeting, we've got a concrete discussion to have.
B: Gotcha, okay. I went through the previous working group meeting and dumped people's comments into this thread and tagged the people responsible for them, so some of them are here. I can try to clean these up and make it a little more clear what the differences between these different options are. And then, I think, lee, you had a use case around what you're doing at robinhood, and I'd like some more details on that. So I have you tagged here. And then benji had some thoughts, but he said he wanted some time to write them down, so I'd like to get those as well.
D: I just want to say, also, before we move that up: I'm also super excited about this, but I would like to implement it at least in our server. I mean, this is already implemented in graphql-js, so then we have a couple of implementations we can play with, and we can see, when playing with it, how this feels before moving it to stage two.

D: No, no, I would say I will do that maybe over christmas, but I would want the time to have an implementation running before rushing it to stage two. I'm really very, very excited about the feature, but I want to also have a feeling for where the edges are here. I mean, we also implemented defer and stream, and it's still stage one, and we're finding new things every day in the discussions. So it's worth first implementing it in at least one or two implementations.
E
Apart
from
graph
data,
michael
there's,
an
open
pr
against
graphql
js
that
alex
has
written
and
is
linked
in
the
one
of
the
line.
Links
in
the
agenda
is
his
pr,
but
take
a
look
at
that.
E
I
I've
only
given
it
a
a
quick
overview,
but
yvonne
and
sahaj
have
both
given
it
relatively
detailed
feedback
so
far,
so
it
it
it's
fully
complete
like
you,
should
be
able
to
patch
that
pr
and
have
a
functioning
version
that
completely
implements
what
alex
has
proposed
so
far
yeah
the
question
mark
operator,
including
by
the
way
his
his
recent
changes
to
the
syntax
that
he
just
talked
about.
He
already
added
those
in
there's.
E
We
got
to
work
through
that,
but
like
in
terms
of
this,
the
point
you
made
is
a
good
one
as
part
of
the
reason
why
you
know,
we
only
want
to
move
things
to
stage
two
once
we
have
at
least
one
implementation,
at
least
at
the
pr
phase,
like
mostly
complete
for
the
exactly
the
reason
you
described
like
it,
helps
us
shake
out
other
issues
by
like
actually
writing
unit
tests
against
them.
B
Yeah
yeah
we've
also
we're
working
on
a
on
integrating
with
the
guilds
code
generator.
What
what
client
do
you
use?
So
I'm
not
I'm.
D
Building
a
back
end,
I'm
working
on
the
net
implementation,
and
I,
like
I
mean
we-
we
implement
really
everything
very
early.
We
start
in
in
stage
two
in
stage
one
when
the
discussions
are
on,
but
it's
good
to
find
points
to
the
edges
that
you
might
have
in
non-javascript
servers.
F
Yeah-
and
I
don't-
I
know
that
you've
been
in
contact
with
jordan
eldridge
a
lot
on
this,
but
as
an
example
of
what
would
like
at
least
me
personally,
would
make
me
confident
that
we
can
like
push
this
through
more
quickly
is
if
it's
very
clear
you
and
he
are
presenting
like
a
unified
front
of
what
needs
to
happen
if
be
given.
He
has
the
most
like
he's.
F
F
H
Okay,
I'll
just
contribute
this
much
I've
been.
H
I
wanted
to
read
all
the
proposals
I
didn't
so
I
don't
know
enough
about
this
one,
but
it
does
tickle
my
mind
and
it
did
I'll
have
to
think
more
about
it.
What
does
this
mean
for
the
server?
What
does
this
mean?
What
do
you
want
back?
I
I
don't.
I
didn't
read
that
much
when
you
want
something
not
now,
and
it
is
now.
What
do
you
expect
server
to
do
to
exclude
it?
Is
that
the
type
of
filtering
or
is
that
what.
B
Oh
it
it
it
treats
this.
It
treats
the
field
the
same
as
if
it
was
marked
non-nullable
in
the
schema
itself,
so
it
would,
it
would
propagate
null
to
the
nearest
parent
and
return
it.
D
So
what
we
implemented
in
we,
we
just
did
the
first
preparing
steps
for
that.
So
when
we
built
the
query
execution
plan
for
the
essentially
executing
the
graphql
query,
we
have
we
have
written
the
types
actually
on
the
selection.
So
we
are
not
referring
to
the
field
and
on
the
selection.
We
can
then
rewrite
the
the
type
to
non-nullable
or
null,
and
then
you
have
the
error
behavior
as
if
it
were
the
field
type
so
that
you
either.
D
If
you,
if
you
make
a
nullable
fields,
then
actually
the
the
query
might
be
erased.
You
have
this,
there's
no
erasure
or
if
you
take
it
away,
you
don't
have
that
erasure
that
you.
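A minimal sketch of that type-rewriting step in Python (the class and function names are illustrative stand-ins, not hot chocolate's actual API): the executor stores the schema's field type on each selection, a "!" or "?" modifier on the selection overrides its nullability, and error propagation later consults the selection's type instead of the field's.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FieldType:
    name: str
    non_null: bool  # True: a null result is an error that propagates upward

def effective_type(schema_type: FieldType, modifier: Optional[str]) -> FieldType:
    """Rewrite the type stored on a selection according to a client-side
    nullability modifier: '!' forces non-null, '?' forces nullable, and
    None keeps the schema's own declaration."""
    if modifier == "!":
        return FieldType(schema_type.name, non_null=True)
    if modifier == "?":
        return FieldType(schema_type.name, non_null=False)
    return schema_type

# A nullable schema field marked '!' now participates in null propagation:
assert effective_type(FieldType("String", False), "!").non_null
# A non-null schema field marked '?' no longer does:
assert not effective_type(FieldType("String", True), "?").non_null
```

The point of putting the rewritten type on the selection, as described above, is that the rest of the executor's error handling can stay unchanged: it simply reads nullability from the selection.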
H
Can
do
it
sounds
a
little
bit
like
a
criteria
I
mean
in
my
implementation.
It
would
have
to
turn
it
into
a
criteria
because
it
might
be
too
late,
especially
if
you're,
considering
different
stream.
If
you
get
stuff
before
you
get
to
this,
and
then
this
says
oops
there's
a
null
that
I
don't
want.
I
should
not
have
sent
you
that
one,
because
that
one
needs
to
be
now
two
and
so
on,
and
so
on
gets
into
tricky
situations
possibly,
but
I
I
don't
want
to
spend
time
on
on
that
necessarily
now.
H
It's
just
something
to
to
think
about
hello.
G
Your
hand
up
yeah
actually,
like
it's
remind
me,
a
interesting
idea,
maybe
as
part
of
rfc
process.
We
need
to
cut
like
timestamp
from
previous
working
group
meetings,
so
people
can
like
watch
watch
it
and
like
we
have
faster
discussion,
people
can
be
up
to
speed
faster
if
we
attach
like
timestamps.
G
Why
I
raised
the
hand
I
have
like
actual
suggestion,
because
I
think
it's
like
with
feature
at
least
like
what
alex
currently
wants
to
explore
through
ecosystem,
and
this
feature
started
as
like
for
coin
generation,
future
for
quad
generation.
G
So
I
think,
instead
of
like
going
back
and
forth
theoretically,
we
can
implement
it
as
an
experimental
feature.
Previously
we
did
like
some
experimental
features
like
inquiry,
variables
and
fragments
and
some
other
stuff.
So
my
personal.
G
G
Like
I
choose,
I
think,
looking
back,
I
choose
like
best
strategy
of
having
separate
like
release,
but
for
this
feature
I
cannot
merge
it
on
the
experiment
of
work.
Entire
ecosystem
will
experiment
with
it.
We
will
learn
or
not.
We
will
learn
stuff
from
actual
experience
of
using
it
as
a
way
forward.
E
That
sounds
reasonable
to
me.
I
defer
to
your
judgment
and
other
folks
working
graphical
js.
You
know,
usually
the
only
reason
why
we
would
hesitate
is
if
we
were
worried
that
people
will
over
rely
on
it
or
that
it
might
cause
some
other
problem.
Basically,
we
should
assume
that
it
will
dramatically
change,
and
if
that
breaks
anyone,
then
we
should
wait,
but
if
putting
it
behind
a
flag
alleviates
that
concern,
then
great,
I
think,
merging
this
and
then
patching
in
changes
along
the
way.
G
Yeah
my
personal
criteria
for
experimental
future,
if
we
have
active
active
champion
behind
it,
because
if
we
have
active
champion,
will
either
like
match
it
or
it
will
be
rejected
and
we
can
remove
it.
The
worst
case
scenario,
if
like
we
had
something
as
experimental
and
it's
there
for
like
years
without
any
progress,
so
alex,
is
pretty
active.
So
it's
like
check
the
like
the
same
with
rope
on
stream
and
defer
a
lot
of
progress,
a
lot
of
iteration,
and
so
I
think
it
would
be
helpful
for
everybody.
D: I have another question on the stage, because... is there a reason, or is it just that you don't have time to spend on it, why you want to get it very quickly to stage two? Because, I mean, we have had stream and defer in hot chocolate for quite a long time now, and everybody can use it; you just have to switch the flag on and then the server supports it.
D
It is extremely different, because we didn't have the spec proposal; the implementations, the variations, came up before we had a spec. Essentially everything was based on that presentation at React Europe in 2016, I think, and we also did that. But since we have the spec proposal, there's a lot of push in the ecosystem.
B
Yeah, I don't want to move something to stage two that shouldn't be stage two. What I'm looking for is just whether we can speed up iteration time, whether I can get feedback a little quicker.
G
To clarify a bit why I am going for even stage one but under an experimental flag: there was a talk about stream and defer at various conferences, and it said you need to use particular releases and apply particular PRs. For people to actually try this feature, it's not flicking one switch; it's applying PRs on top of different parallel releases in different projects, which is a lot of trouble in JavaScript.
G
As an ecosystem, I want to have one switch in one place: you flick it and you can use this feature. You don't do custom builds or custom packages. Alex previously created a custom package for this feature; I want to prevent that, basically, so people will try it more easily.
D
Yeah, I do understand that, but just from experience we see these iterations going on with stream and defer. We are still finding things we didn't think about; today, for example, we have the issue with scalar lists, which can be deferrable, and stuff like that. Sometimes these discussions take time.
G
It's keeping it at stage one but merging it under this experimental flag. For example, we have variables on fragments; I think there might be a proposal about it, but it's a strawman and we discussed it only once, yet it's under the experimental flag and it stayed. So it has happened before and we didn't have any major problem with it.
F
Yeah, this has higher justification for being under an experimental flag than even fragment variables do. If you're using something in graphql-js that's under an experimental flag, the expectation is that it might break for you on the next release; you're on the bleeding edge. It's not stage-two material; it's truly experimental.
G
The experimental flag is explicit, so even a person reading the code and not seeing the top-level experimental flag will know; even inside the parser it is written that this path is experimental. So it's super clear.
B
My thinking (and I'm going to have time next quarter to work on this a little bit) was that there would be the most work to do during stage one, and then once I hit stage two it'll be, like you said, small refinements spaced further apart. So I allocated a full quarter for this this quarter, but next quarter I'll only have a few weeks.
B
One real quick thing I forgot, a choice I made: there was also the question of whether the number of dimensions in a list needs to match between the non-null designator and the type. I've made it so that it has to match; you can't go fewer than the number of dimensions. If we want to loosen that later, I think we can in a non-breaking way, so I just put that restriction in. Sorry.
E
Cool. Well, it sounds like we know the criteria for stage two: that we have identified and then resolved all the concerns and challenges. It sounds like there's the main one we just talked about, which is what to do about the question-mark operator, and then a couple of minor ones around getting feedback on this last round of syntax you've changed. It seems super viable that we can give you feedback between now and the end of the year.
E
Thanks for your hard work on this. Great progress. Sweet, thanks.
E
All right, next up: Rob, to give us an update on defer and stream.
I
Just before we do that: Ivan raised a good point just now about putting the timestamps into the notes. This is something I keep forgetting to do, so if anyone notices that I haven't put a timestamp in for a while, please either go ahead and do that or just nudge me. Thanks.
J
I set up a repo, the defer/stream working group repo, where I added a discussion topic for pretty much everything I can remember that we've discussed, just to keep things organized for myself and for other discussions that happen. I marked a lot of them as resolved, the things that I think we're all aligned on. The ones that are not are listed here, and there's a few I want to go over where I think I have a pretty good idea of what we should do and want to see if we're all aligned on that.
J
The first one is enforcing the correct delivery order of payloads. This is something that we talked about a few meetings ago.
J
So I have a bunch of examples here, and I want to go through each of them one by one, say what I've implemented, and see if we all agree on that direction. The first one is the classic example: you have a deferred fragment that has another deferred fragment inside of it. (Are you meant to be sharing your screen?)
J
Yes, yeah. So in this first example we have a deferred fragment that has some fields that are not deferred, and then another object with a deferred fragment inside of it. The issue is that if, in the example, this homeworld field takes longer to resolve than species and title, then you would receive a payload whose path references a deeply nested field that hasn't been returned in any prior payloads.
J
So I did some work on this and have it so that in this case the server will know that this other fragment is a parent of that one, and it will get held up so that this doesn't happen.
J
The next example is where the fields are nested but there isn't another child object in the hierarchy. Because of the way the collect-fields algorithm works, all these fields are treated on the same level, so in this case it would be possible for the payload for a nested fragment to be returned before the payload for the top fragment.
J
Now, this other example is: what if you have an unrelated field that has this species field object, which, since it's not deferred, would come back before the other one? In this case I don't think it's trivial for the executor to understand the whole tree of dependencies, so this nested fragment would still get held up until after the top fragment is returned.
J
The next one: the same thing could happen when you have a stream nested in a defer, and in that case we do want the stream payload to come back after the deferred fragment, because otherwise it would again be referring to a field that's not there. And the last one is just saying that items in a streamed list should come back in the same order as their index; you shouldn't get the first index before the zeroth index. So, any thoughts?
E
This last one seems plainly non-controversial. I would be extremely confused if I had an ordered stream and got the data out of order, so that seems to make sense. It's nice to have these examples, but is there a unifying principle or constraint that describes them? I might reframe this last one: rather than saying they should not be sent out of order, payload n is dependent on payload n minus one being delivered, right?
E
That's the constraint: if I were going to build a task dependency or promise dependency, I would link the next payload to the previous payload. And is there a similar constraint for the previous set of examples, around the merge path? The guarantee is that the path exists in the accumulation of previous payloads; that's the dependency.
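The constraint described here can be sketched in code. This is a purely illustrative model, not graphql-js: the `Payload` and `PayloadGate` names are made up, and the gate simply buffers an incremental payload until the path it merges into already exists in the accumulated result, releasing any held payloads once their parent arrives.

```python
# Illustrative sketch: deliver a payload only once its merge path exists
# in the accumulation of previously delivered payloads.
from dataclasses import dataclass


@dataclass
class Payload:
    path: list   # e.g. ["person", "homeworld"]
    data: dict   # fields to merge at that path


class PayloadGate:
    def __init__(self, initial):
        self.result = dict(initial)  # accumulated response so far
        self.pending = []            # payloads whose merge path is missing

    def _path_exists(self, path):
        node = self.result
        for key in path:
            try:
                node = node[key]
            except (KeyError, IndexError, TypeError):
                return False
        return True

    def _merge(self, payload):
        node = self.result
        for key in payload.path:
            node = node[key]
        node.update(payload.data)

    def receive(self, payload):
        """Return the payloads that may now be delivered, in order."""
        self.pending.append(payload)
        delivered, progress = [], True
        while progress:
            progress = False
            for p in list(self.pending):
                if self._path_exists(p.path):
                    self._merge(p)
                    delivered.append(p)
                    self.pending.remove(p)
                    progress = True
        return delivered
```

For example, a nested payload at `["person", "homeworld"]` that arrives before the parent fragment filling in `homeworld` is held, then released together with the parent.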
J
I think it fits in with the way the algorithm is written in the spec for collecting fields. I haven't done it yet, but I plan on describing it that way in the spec, because that's also how it's implemented in the code. I'll definitely do that in the actual spec edits.
H
I don't see a problem. I see this as a convenience feature, and somebody just has to decide. Let's put it this way: I am not aware of a reason why a capable client would not be able to handle out-of-order streams. However, that would bring complexity to the client. So this is a decision as to where we put the complexity.
H
Is it the server that's responsible for this, or is it not? And the only other thing I can come up with is that if we nail this down in the spec, we are preventing capable clients from perhaps squeezing a little bit more performance out of the server, because it causes delays and buffering on the server side.
F
Just to clarify this a little bit: we've already established that existing clients, meaning any client that parses responses into a graph format on the client before allowing people to access them, are essentially incapable of handling this, and there are a lot of clients like that; Relay is a very clear example. They basically cannot handle paths that don't yet exist in the response. So there is a dependency with existing clients, which was the motivating factor.
D
Yeah, they can patch. That's also essentially why we have these markers, these label markers, for Relay to know which segment it's using. But it works; Relay is the first client that really implemented it, so that's where the concept is actually coming from, right?
E
There's an aspect of performance here, which is that the order in which you put these things on the network can matter when your bandwidth is limited, and sometimes these payloads can be quite large, especially for streams. I know this was a conversation long ago: to what degree are these things guarantees versus guidelines? How far do we want to go in making these an enforced constraint versus something that is an option for a server to decide?
H
I'm predominantly a server guy, though I have been a UI guy, and I think this feels nice, this feels good as an overall thing, so I'm not opposing it at all. But if we leave it a gray area, that will leave room for confusion and misunderstanding: what if the client can't handle it but the server outputs it in whatever order, or the other way around, and you expect something that doesn't happen?
E
An example of what I mean: say we decided, counter to what Rob is proposing here, that we want to squeeze out the last bit of performance, so streams can stream out of order and deferred fields can be sent down before their parent is ready in those edge cases, and it's up to clients to handle that complexity.
E
Would it then be okay, in that case, for a server to decide: I'm not even going to bother preparing this nested field until the top one is prepared, because it has determined it can tweak performance that way? That's what I mean by defining a minimum set of constraints and then saying what is within the server's capability. If a child field can be prepared earlier than its parent...
E
...is the server required to send it down as soon as possible, or is it allowed to wait? That's all I mean.
H
I'll tell you one adjacent concern that might impact the server implementation. Before I started reading about graphql I actually hoped it addressed this; it doesn't. It's about delivering what is the same entity or object on the server side multiple times, because it's nested multiple times in different contexts.
H
If the server has it, and if it is the result or the future or whatever, the actual promise of any deferred thing, then in order to reduce server resource utilization it might just say: you need this in these hundred places, here it is, done. But it wouldn't be able to do that if it has to wait for some parents or whatever, right?
E
That's maybe why getting the constraint right here could be important. We could always revisit it later; we've had that conversation before about having an actual graph payload rather than a tree payload. One of the early contributors to graphql called it "TreeQL", because they thought it was unfair that we called it graphs when really we just get to query and return trees.
E
True, yeah, let's be honest. If we want to come back to that, then we'd want to make sure whatever constraint we describe here is generalizable. The previous ones we've done there involved having sort of a side-load of objects, a denormalized graph with links.
E
We could always come back to that again. I think even in that mode the constraint, as I described it before, might be the right one. And Rob, it sounds like you agree that's the way you'd want to phrase it: you can't send a deferred payload unless the path at which it would be merged is already realized by the client via previous payloads. You could imagine that in a world where we sent a normalized graph rather than a tree, the path would point into that normalized graph.
E
It would be, you know, some object identifier and then the field name, and if the previous bit there wasn't described, then you'd be stuck in the same position. Although a normalized graph might get around that: if there's some circuitous path by which you get to that thing that hasn't been loaded yet, then maybe that thing could be delivered.
J
Yeah, the constraint as you said it, "if the path is there", isn't exactly what it's implemented as, because I'm not saying the path can become available via a fragment that isn't a parent of the one that's deferred.
E
Okay, Rob, I know you want to talk through a couple more of these, so maybe extra feedback should go to this discussion, but yeah, continue.
J
Yeah, so the next issue was the validation for initialCount on the stream directive, which was originally brought up by Benjie: we did not have a validation rule for it. This goes into another topic that has been discussed here and there: validation rules are only applied to the query and the schema, not to the entire request including variables, because that is what lets you, for example, validate persisted queries at the time of their persistence and not on every execution.
J
The original ask is: why isn't there validation if you pass an invalid number like negative one? If you have a case where that's being passed as a variable, it's not possible to validate it with the current way we're doing validation. Benjie suggested just treating such values as zero, and that's kind of where I want to go, because it seems like addressing the issue of validation with variables is a larger concern.
E
I don't want to cross the line where validation requires the variable values to be provided, because a lot of services will do validation at least statically; they'll do it at the time of query check-in, at which point you don't know what any particular value would be. So I think the appropriate way to interpret validation is generalized across all possible input values: you have to make sure that the query is valid for all of them.
E
You could, of course, have a sort of partial solve here with a validation rule: if your initial count is a fixed value rather than a variable, then of course you know statically what it is, and you could throw a validation error. That's not unreasonable.
F
How do you deal with negatives in a field that can't take negatives for that argument value? It's fine to leave the static validation Lee is proposing as the partial solution, and leave the rest up to tooling implementers: I'm responsible for static client-side validation in my own code base.
E
I don't think it's crazy to add this as a spec validation rule. We'd want to make it clear that it is incomplete, that you also have to do runtime validation, but it seems to be a value-add to say that if I wrote out a literal negative one in initialCount, that just immediately causes an error rather than waiting for you to run the query. That's statically known, and it seems like we could do that.
E
But if you did that, then I think Benjie's suggestion here is probably the wrong one: it wouldn't make sense to fail a literal at validation but then allow a variable value through by coercing it and continuing to execute normally. You'd want consistent behavior. So my suggestion... sorry, go ahead. No, go; I didn't know that you could.
E
My suggestion, regardless of whether we add a validation rule or not, is actually a runtime error. If a negative value is kind of never the right thing, then it's best to throw. Maybe someone fat-fingered a negative sign and didn't mean to, and it's not that they've got negative one; they've got negative three and actually wanted three, or they did some math to compute...
E
...that and forgot to call absolute value on it, or something silly. If it blindly works, but blindly works by giving them zero, that might be a surprising result, and it's probably a better developer experience to throw, even though that's annoying.
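The two complementary checks being discussed could look roughly like this. It is a simplified sketch, not graphql-js: the AST shapes and error classes are made-up stand-ins. A negative literal fails static validation before execution, while a negative value arriving via a variable becomes a runtime field error instead of being coerced to zero.

```python
# Sketch: static validation for literal initialCount, runtime field
# error for values that only become known at execution time.

class GraphQLValidationError(Exception):
    """Raised while validating the document, before execution."""

class GraphQLFieldError(Exception):
    """Raised during execution; bubbles per normal field-error rules."""


def validate_stream_initial_count(argument):
    # argument is a simplified AST node, e.g.
    #   {"kind": "IntValue", "value": -1}  or
    #   {"kind": "Variable", "name": "count"}
    if argument["kind"] == "IntValue" and argument["value"] < 0:
        raise GraphQLValidationError(
            "initialCount must be a non-negative integer")
    # A variable cannot be checked statically; defer to runtime.


def coerce_initial_count_at_runtime(value):
    if value < 0:
        raise GraphQLFieldError(
            "initialCount must be a non-negative integer")
    return value
```

The point of the split is consistency: a literal `-1` and a variable bound to `-1` both end in an error, just raised at different phases.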
D
My comment on this is that it feels fishy that we don't validate that, but it's a new situation compared with the old ones. With skip and include, we can very easily validate them; if it's not a boolean, we just raise a validation error.
D
But here we would actually need to put that into the runtime to determine whether it's wrong or not.
H
I think the server has to validate one way or another; that's its business. Yeah, some servers don't, but they get into trouble, and that's a whole different ballgame. But in terms of validation, graphql needs to remain relatively simple and succinct.
H
We don't want a gajillion different things doing the same thing in different places if we don't have to. And even if we could provide information at the schema level that something is supposed to be, let's say, positive and not zero, that still doesn't mean tooling has to implement it, it doesn't mean the client has to validate it, and it doesn't even mean the server-side framework the server uses to parse graphql has to validate it.
E
You'd want a line there that says: assert that initialCount is zero or a positive integer. Then there's an error; you can define what you mean by "assert".
E
I think so. You'd probably want to explore and make sure the behavior makes sense, but my intuition is yes: this would result in an error originating at the field that was marked with that stream, and at that point typical error behavior should occur. If it's nullable, then we just knock it out with null and there's no stream there; if it's non-nullable, then it bubbles.
I
Just as a minor comment on this, and I may be incorrect, I think we only have asserts on things that occur just before actual execution, i.e. before any resolvers have been called.
I
We do it for the subscription root-level field, the one we determined we need to get the event stream from; I believe we assert something there (I forget exactly what), but that again is before any resolver is actually called. I would be hesitant about adding an assert that happens after any resolvers have been called, and here we're talking about throwing an error from the field.
H
I think the spec can specify the latest time that it has to be validated, but not necessarily the earliest. Say I'm implementing my own server, and I'm using a library that parses graphql requests but does not have support for this. It does most of the things for me, but it won't do this, so it'll have to rely on my code, which runs later, to bail out. In that sense the error will come later, even though I personally also agree with Matt.
H
I would love to fail fast, because people may actually not notice these issues. Even if we return errors, they might be ignored, and then people won't know they actually have trouble and aren't getting data. So sometimes failing fast and furiously is useful.
E
Yeah, I'm hesitant about this. Of course I agree with the general principle that the earlier you get your error the better, but I worry about an inconsistency: because this would be a new thing, the only thing that would use it would be the stream directive, and there's no way to extend it into user code. So, really, at the base...
E
What we're talking about is this initialCount argument taking a subtype of Int, call it PositiveInt, and that's not a thing our type system recognizes. If it were, all of our existing mechanisms would already work here. But what if there were some user-provided field deep in the schema that said: hey, actually, the type I want here is PositiveInt; stream's initialCount can do that, so why can't I? I want to assert that my field...
E
...is correctly supplied the right values long before I get there. Right now, that's not how it works, and that's just one case; there are others where you have to match a regular expression or whatever. Any arbitrary logic can happen within that field to say: oh, actually, there's been some incorrect input here that's beyond what the type system described.
E
Yeah, basically, if we were going to follow Benjie's proposed piece here, at some point you have to take the negative value and turn it into a zero, wherever it is that we would do that. I'm not particularly in love with the idea of taking a negative and interpreting it as zero; I'd rather just throw an...
H
But why is that an issue? I mean, doesn't...
E
There's other precedent: if you just search the spec for the word "throw", you'll find cases. The first one I came across was result coercion. Say your field resolver returns a string, but your field was typed Int: that's going to cause an error. That's not an error thrown by user code; that's an error thrown by the spec's defined behavior for the engine. So I think there are a couple of cases where we throw field-level errors, and to Benjie's point...
E
...we don't call it "assert"; we call it "throw a field error", and that field error is well defined: it is aligned to a field and it bubbles.
E
Yeah, I'm loose on the validation part; I think that would be a nice-to-have. There's also designing for the ninety-percent case: ninety percent of these are going to be top-level fields, with a subset being nested, and ninety percent are going to have an initialCount that's a literal value rather than a variable, with the long tail having a variable.
E
So if you smoosh those two together, I suspect the vast majority of these stream cases are going to be some top-level field which needs to be streamed, where it's hardcoded: I want the first three. I don't know; maybe no one will ever hardcode a negative value and this is just over-engineering, but if we want to give an early indication, doing that via a validation rule doesn't feel like a cost to me. It feels like value.
J
Okay, the next one: stream on scalar lists. The spec says that data should be a map, I believe, and the question is: if you're streaming a scalar list, is it okay that data is also a scalar?
J
I think it makes sense that it is. For me it was not clear in the spec, but that's how it has been implemented and unit-tested in the reference implementation since the beginning.
D
My remark on this was just that we haven't implemented that in Hot Chocolate yet, that data can be a scalar, because we need the spec to either state that it can be, or say that it can't be and has to be wrapped differently, or that stream is not valid on scalar lists. We kind of said we're using the same structure with data, and at the moment the spec states...
D
...that it's a map, and either we break that up, or we define that it has to be somehow put into a map in some way, or something like that.
F
I think we actually could clean up the spec itself here a bit and say that for a normal response, data is the type of the root, the query, mutation, or subscription type, which is always a map; but for these payloads, data is the type of the thing being returned. When it's an object, that is a map; when it's a scalar, it's a scalar.
E
And a nested list, yeah, would be a list. In the spirit of time, maybe we should do overviews of these, because I know we're going to run out of time. This seems right to me. I'll just give you the historical context of why we say data is a map...
E
...type: because right now that's always true, and sometimes the spec is redundant for the purpose of giving extra information about what to expect. It's always a map type because you're always querying against the top-level query type, which is always an object, but it seems like we should be able to refine that.
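As a rough illustration of the shape being discussed (not spec wording; the field name, payload keys, and helper are simplified assumptions), a streamed scalar list could produce an initial payload where `data` is the usual map, followed by per-item payloads whose `data` is the bare scalar, addressed by path plus index:

```python
# Illustrative payload shapes for @stream on a scalar list.
def stream_payloads(field_name, items, initial_count):
    # Initial payload: data is a map, as for any normal response.
    yield {
        "data": {field_name: items[:initial_count]},
        "hasNext": initial_count < len(items),
    }
    # Subsequent payloads: data is the streamed item itself, which for
    # a scalar list is a bare scalar, located by path + index.
    for i in range(initial_count, len(items)):
        yield {
            "data": items[i],                 # scalar, not a map
            "path": [field_name, i],
            "hasNext": i < len(items) - 1,
        }
```

With `stream_payloads("episodes", [4, 5, 6], 1)` the first payload carries `{"episodes": [4]}` and the remaining two carry the scalars `5` and `6` at paths `["episodes", 1]` and `["episodes", 2]`.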
D
So yeah, we just state that it can be whatever it is: it can be a scalar when we have a scalar value, and so on. I just wanted it to be stated in the spec, because at the moment we don't have that, and that's a change. That's the only agreement I want to have on that.
J
Yeah, as for the rest of them, I don't want to talk about them in as much depth as the previous ones; I still want to do more research on them. Maybe I'll just do a quick overview, and it would be great if those of you with opinions leave comments on the discussion. So, this one was talked about a couple of times: how does defer work when you have multiple mutations in an operation?
J
You can't resolve this first and send the initial payload that has mutation B; it's like a chicken-and-egg problem, which comes first? So does this defer just get ignored here? I don't think that's great, because I think it's useful. Do we make an exception so that, when it is deferred, result A could be executed after mutation B has started? And if this is a very deeply nested field or something, this actually ties to...
E
I'll just give you my two cents before we move on, so I don't forget. My opinion here is that option two is the right thing. The intent with option one is to avoid confusing results: if you had omitted any kind of defer here and you queried result A twice, you would want to get its value with each incremental mutation applied.
E
If you have an explicit defer, it seems reasonable to me to suggest that you would want to wait for it to resolve after all mutations have been applied. That seems like a reasonable thing for people to opt into, actually.
E
Well, only one operation can ever be executed at a time, regardless of whether it's a mutation or a query.
J
Yeah, the thing is, especially with Relay, the pattern that's encouraged is that you execute a mutation and then spread your fragments for the components underneath it, to refresh all your data.
F
Yeah, that's a very good point. I guess the question would be: we definitely need to resolve this, but do we need to resolve it to get to the next stage? For stage two? Stage three?
E
We could move the proposal forward by deciding this is not a problem we want to solve right now, but I think Rob's point is a salient one, and the fact that folks like Jafar have already gone down option two here should be a strong signal that consumers of graphql want that behavior. I think it would be pretty confusing if we shipped this feature and then all of a sudden you can't use it because of that.
J
...that I have open, where I'm very happy saying you can't defer the actual root mutation fields themselves. I think option B on the other one makes total sense here, because the reasons against it are way less clear to me than having fragments tied to components deep in your tree that you're including on a mutation.
E
Totally agree. Rob, I'm sorry to have to cut this conversation short; I know there's a ton of stuff. I think the remaining action here is to ask folks to dig into the repo that Rob's put together, with all these open discussions, and weigh in. I do want to make sure we save a little bit of time for Alex, at least to get through his PowerPoint, if not to have some discussion after that, since we have a lot less time.
E
I want to say that, in the mode in which you're working through this, this is by far the most complicated RFC to hit graphql since its inception; I think that's pretty clear. Maybe a close second would be subscriptions. Setting this up as a separate repo with discussions is, I think, a fantastic way to keep track of all these open issues. So thanks for being very well organized. All right, Alex, I yield the floor to you. Yeah.
H
Oh, there it is, that one. Excuse me if I use the logos inappropriately; I didn't end up removing them, but the logo is not the point, it's just there. Do you see my screen, by the way? Yeah? Okay. So far, most people seem to be developing it this way, assuming it that way, suggesting it that way, or advising against anything else, but this is not the only way that this is done.
H
There are implementations documented online that actually say we should not do mutations just at the root level, that we should allow nested mutations because they help. And I'll try to very quickly identify some problems that those of us who sit on the other side of this face. We face namespace explosion: in the product I'm developing I have well over hundreds of types.
H
They have many fields; the product is complex. This is not a question of the microservices that some people propagate; this is a box that people buy, that people put on premises, and they actually run it and need the API. So this is a massive API that would literally need tens of thousands of kinds of root mutations.
H
So there's another issue beyond the namespace explosion, and this is re-specification of the context. Whenever you specify these individual mutations that go one after another, you usually have to re-specify the context — or the parent, or whatever they apply to — even if they all apply to the same common thing.
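A sketch of what that repetition looks like in an operation document (again with hypothetical field names): every sibling root mutation has to carry the same parent identifier, because nothing in the document establishes a shared context:

```graphql
mutation {
  # "customerId: \"c-1\"" must be repeated on every sibling field,
  # even though they all operate on the same customer.
  updateCustomerName(customerId: "c-1", name: "Ada") { id }
  updateCheckingAccountLimit(customerId: "c-1", accountId: "a-1", limit: 500) { id }
  updateSavingsAccountRate(customerId: "c-1", accountId: "a-2", rate: 0.02) { id }
}
```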
H
There is a lot of validation that we need to perform, and it helps to do a lot of this validation at the right time.
H
If we only have root mutations, we can only do individual root-mutation validations, and then we have to wait until the very end, until all of these things are done, to composite them, look at the whole picture, and see how that changed — which we have to do anyway — but it doesn't give us any place in between to say: okay, this bunch of changes, or this scope, is done, and now I want to validate just that.
H
So the question is: why are we not having concurrent mutations, and what says that? It's not really the data model, because the data model can be structured such that non-root or nested mutations — or concurrent mutations, in this case — make sense. For example, here I'm showing two updates: one to a checking account, another to a savings account.
H
These are just trivial examples, but they work. I mean, these are completely independent, and there's no impact at all as to whether you do one first or the other first — the outcome will be exactly the same. And while this is not always true — and we've seen some examples of that in the previous presentations in this very meeting — it is often true, and it is often true in my case as well. So right now I can't necessarily do anything about that, because the spec says no.
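For reference, this is the spec behavior he's describing: top-level mutation fields are executed serially, in document order, so even two fields that touch completely independent data may not run concurrently (field names below are a hypothetical sketch):

```graphql
mutation {
  # Per the GraphQL spec, these two root fields must execute serially,
  # in document order, even though they touch unrelated accounts.
  updateCheckingAccount(accountId: "a-1", balanceDelta: -100) { balance }
  updateSavingsAccount(accountId: "a-2", balanceDelta: 100) { balance }
}
```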
H
I have to execute them sequentially, because the client might actually rely on that fact for whatever reason, and I'm advised against actually solving my problem — which I'll paint later — essentially by allowing what you will see coming up. So, in any case, here's what may actually happen anyway — and this is about who actually has control: the server?
H
Yes, we are telling the server to execute mutations sequentially. But the point is that the server has a tough job anyway: it has to accept many requests at the same time, coming from many sources, and has to be able to comply with them. Many of these requests can actually come from the same source, from the same user, from the same session, all at once.
H
It is quite possible, and it happens. So it's still the server that has to somehow figure out how to do these without exploding, and the client also has to know, to a reasonable extent, what it wants to accomplish and what order of things makes sense to it. Does it want to travel to city A first or city B first before it gets to C, and so on?
H
So the clients are actually aware of the ordering, and they might want a certain order for some things that are interdependent. But for things that are not, we are sort of presenting a roadblock here — not necessarily a roadblock, but we're advising people not to do this, and some are ignoring that advice.
H
So this is how I see GraphQL. I like GraphQL because it brings clarity: it makes it clear what the server can do and what it needs to be able to do, and the clients can actually be clear about what they want, so the servers can properly serve them — not do any extra stuff, not do stupid stuff. It's just there.
H
It's just right. So I accidentally dropped this "respectful API" thing — a kind of wordplay on "RESTful API" — but it is that, and that's just my own personal view. Whether somebody actually thought of that or not, that's how I feel about it, and this thing sort of flies in the face of it.
H
This is what I want to have, and this is in line with a number of implementations that I found, which essentially say: look, I'm giving the context. In this case I want to update the customer. This outer piece is actually not really doing anything on its own — it could be; in some implementations it can do something with basic properties, with scalars and whatnot — but ultimately it's providing context for the inner, nested, additional mutations, which can now say:
H
"Well, I'm working within that guy." And in that sense the namespacing happens sort of naturally: this is the updateChecking of the updateCustomer, not of something or somebody else. So the namespacing is fine. I don't have thousands of root mutations; I essentially just have an update for each type — or each context — that I have, and then inside of it I have whatever is relevant in there.
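A minimal sketch of the nested shape he's describing (field names are hypothetical, matching the spirit of the slide rather than its exact content): the outer field establishes the context, and the inner fields are the actual mutations:

```graphql
mutation {
  updateCustomer(customerId: "c-1") {
    # The outer field mostly provides context; the nested fields are the
    # elementary mutations, naturally namespaced under updateCustomer.
    updateChecking(limit: 500) { balance }
    updateSavings(rate: 0.02) { balance }
  }
}
```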
H
Each one just needs to specify the things that that particular tiny, atomic mutation can actually handle, and because of that, mutations become simple — they are elementary, and you can actually combine them. I don't have to come up with every possible combination of these. I do have some handmade, canned mutations, if you wish, that are good for a lot of things, but I can't give you canned mutations for hundreds of types that have tens of thousands of possibilities, or more.
H
So how is this actually supposed to work? Something like this. This is actually how the server would do it anyway — my experiments go this way. This is what happens even if you're not coding specifically for this, because you have layers between your code and the database anyway: you're fetching your customer anyway.
H
You need to do that whether you're executing sequentially or not. There might be some locking involved — I'm not saying database locks — and there might be memory synchronization, semaphores and whatnot, and these pieces can now happen in multiple threads, concurrently or not. If they were implemented concurrently, they would probably have some locks anyway, which would mean that they will happen in some order — whatever that order is, if they're not related to one another — and the server could know:
H
"Actually, you know what, these things do depend on one another, so maybe I'll obey the order from the client," or whatnot — there are a lot of possibilities there. Once this is done, we're coming back. This is the finalization that I was talking about: I can actually look at the whole thing, these nested bits that were done for this particular customer,
H
let's say in this case, and do some fraction of the big validation, if you wish — or actual database statements, or whatever I want to do after this — before I get back to the root level and do more mutations. So it's useful in that respect. And then, what spec changes are required? I mean, this is my comprehensive list: this empty space here. It's more about the fact that the spec says these root fields have to be sequential.
H
The spec does not say that these inside ones have to be sequential. That's fine, but very often we're seeing and hearing people say: "you know what, this is dangerous, you shouldn't do this, because GraphQL doesn't specify what this order is." And I'm saying: so what? Let it be. I mean, if this is what the server says and allows, sure — that means the client cannot rely on this particular order, and if it wants some particular order, it will have to break things up.
H
So what are we doing here? I'd like to break things up into multiple different queries — oh sorry, calls, mutations. So, to just run through this very quickly: these are desired changes.
H
These are not my official proposals — think of them as crazy brainstorming ideas, and you will see that I tried to make them a little funny, in a kind of sort of way. So there is an impact on different streams, as I noted, but I think we need certain good guidance and documentation, and not just say "don't do this" — instead saying "be careful, because this is what it means, and if you want to accomplish something in order, you have to do it in some other way," and so on.
H
And we would want to have some way for the schema to specify this behavior — it isn't required immediately — and perhaps for the clients to request an execution order, which is actually similar to some of the RFCs that already exist; I think "operation execution" or something like that is what one is called. In any case, one of the crazy ideas is to simply say that a field is able to run things sequentially inside of it. This is not a serial type, like I've seen a lot of people mention.
H
This is a serial field, because that allows a little bit more type reuse. If you have shared types across multiple federated services, you might have services that can handle this and others that cannot sequence their operations. And in this case it's sort of funny that we have to say "yes, I can do things sequentially," when in fact many servers will probably do this sequentially anyway.
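A hypothetical SDL sketch of that "serial field" marking — the directive name is invented here for illustration; nothing like it exists in the current spec:

```graphql
# Hypothetical directive: marks a field whose nested selections are
# guaranteed to execute in document order, like root mutation fields.
directive @serial on FIELD_DEFINITION

type Mutation {
  updateCustomer(customerId: ID!): CustomerOps @serial
}

type CustomerOps {
  updateChecking(limit: Int!): Account
  updateSavings(rate: Float!): Account
}
```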
H
It's because the spec is the other way around: by default it implies that nested things can be done concurrently, and this is simply telling the client, "yes, you can actually rely on this order; you don't have to break up these requests in any other way — if you just put them in here, I will execute them sequentially." If you say so, perhaps. And then the question becomes: how do we say so, again?
H
A crazy idea here: instead of using the typical braces, use square brackets — sort of an array syntax that implies order, of sorts.
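A sketch of that bracket idea — invented syntax, not valid GraphQL today:

```graphql
mutation {
  updateCustomer(customerId: "c-1") [
    # Square brackets instead of braces: the client explicitly requests
    # that these nested fields run in document order.
    updateChecking(limit: 500) { balance }
    updateSavings(rate: 0.02) { balance }
  ]
}
```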
H
It is relatively simple. It does have the syntax impact that I sort of had a problem with as well — so this is my own red flag, if you wish — but it does solve a problem that directives alone would bring, which is: what if a client requests sequential execution and the server does not actually support it, or cannot guarantee it?
H
But there is something better the other way, and this is similar to the defer work that we're talking about — except it's not really deferring any outputs; it's just deferring how things happen in there. It's just saying: well, you know what, this foo thing actually depends on... some syntax — insert your own idea here.
H
This is just one way of doing it: foo depends on bar, and when you have that, you have relinquished your reliance on the default order, at least for foo, and the ordering will happen — bar will execute first, and then foo will execute later. Here I just reused the variable syntax; there's a dot in there which would barf otherwise — I mean, we can figure out something else.
H
Maybe a dollar-dollar, maybe something else — it doesn't matter. It could be a string, though if it's not a string it could be validated better by the clients, and so on, so it might be better that way. And there's something even better than that — and again, this is not necessarily a natural progression; this is just showing you how things can perhaps evolve over time — which is saying: why not allow the output of one query, or mutation in this case, to be used as an input in another?
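A hedged sketch of both ideas together — the @depends and @export directive names, and the exported-name mechanism, are invented here purely for illustration (similar @export ideas have been floated in community proposals, but none is in the spec):

```graphql
mutation {
  # Runs first; exposes its result under a client-chosen name.
  createCustomer(name: "Ada") @export(as: "newId") {
    id
  }
  # Declares a dependency: executes only after createCustomer completes,
  # consuming its exported output as an input. Note $newId is not a
  # declared variable; it's part of the hypothetical export mechanism.
  addAccount(customerId: $newId) @depends(on: "createCustomer") {
    id
  }
}
```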
H
I've had a number of demands like that from my own clients, because I didn't expose the queries' full power everywhere, and they would like to be able to find something and then apply an annotation to it, for example — and something simple like this would actually be able to solve that.
H
So this brings us to the server functionality — or features, or abilities — question again. These are just stupid ideas; I just wanted to bring attention to them.
H
There's a markdown document that documents these same things I just spoke about — well, in text, in a little more detail than I showed, but in a more boring fashion — and I wanted to kickstart this discussion and have it progress somehow, however it progresses, with you guys in the GraphQL working group, because I believe this is very important. It was important enough for these other folks to document something related and expose things their own way, and whatnot.
H
So I'm not alone, and there are a lot of benefits to this. I understand that it can sound scary, but it's not complicated — it doesn't need to be scary. The clients can still work the exact same way that they do today, and if they want something to be done in order, they can already do it one way or another. But unblocking this nested mutation thing would be really useful for those of us who have tons and tons of types, and properties — or fields — in them.
E
Alex, I know we're way over time, so a bunch of participants have already left. I want to make sure that we have a space to continue this, so I opened up a discussion thread in the working group channel — it's not an issue but a discussion, hopefully.
E
It's easier to fork this out, because there's a lot going on here, and I want to make sure people have the opportunity to weigh in. So I dropped a link in the chat, but you can also find it there in the discussions thread. And then also make sure you dig up the historical context — as you can imagine, it's not the first time we've talked about this. I think Oleg opened one about namespaces like three or four years ago; that's probably where the bulk of this past discussion happened. But yeah, welcome the discussion on this.
E
I think it'll be really interesting, yeah. And we are 10 minutes over time, so I'll make the wrap-up brief. Thanks, everybody, for joining — we covered a lot of ground today, so thank you all for your participation. I hope everybody has great winter holidays, and we'll see you all in the new year. Thanks, everyone. Thanks, everyone.