From YouTube: GraphQL Working Group - 2023-02-02
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
A
Lots of people in this time. You know, when you merged earlier, one of the attendees... I was just clicking, and then Benji merged.
C
In the absence of Lee I'll get us started. So hello, everyone, and welcome to February's GraphQL Working Group.
C
By being here you have agreed to the membership agreement, the participation and contribution guidelines, and the code of conduct; links to all of these are in the agenda file itself, so please do have a look at those if you haven't already. Please also make sure that you are in the list of attendees in that same document. We'll go around first and do a quick introduction of attendees; that will be in the order that they appear in the attendees list. So I will get us started: hello everyone, I'm Benji.
K
And hello, all. Lee here; I'm so sorry for joining so late, and thank you, Benji, for getting us started. I've been having some Wi-Fi connectivity issues and I had to run and find an ethernet cable. But here we are.
K
And sorry, one sec while I pull my agenda file back up. Thanks everyone for doing the intros; welcome.
K
We have a very healthy agenda, and in the spirit of getting to that, actually, let's see, yeah, let's just read it over real quick. Agenda items that we have on deck: an opportunity for me to talk about the election results from the TSC.
K
We've got a topic from Benji and Ricky about managing our build and publish infrastructure; ambiguity in schema definitions; advancing argument name uniqueness; defer updates from Yvonne; and default value validation, a check-in from Yahoo. Anything that's not listed on there that we want to talk about?
K
Cool. Let's do a quick recap of prior meetings before we dig into those.
K
I would say the thing that is most exciting over the last couple of meetings has been that we officially launched the scalars.graphql.org project, so props to Donna in particular for getting a lot of that stuff put together, and Andy, of course, for getting the project up and running; that is at scalars.graphql.org. It looks pretty nice. Andy's got a first couple of them up, but the idea is that anyone who wants to share their scalar spec broadly now has a place to put it.
B
One thing I'd like to call out from the past two meetings, the non-primary meetings: because they tend to have fewer items on the agenda, it means that we've had actual substantive discussion on defer and stream and fragment arguments, and having the higher cadence has helped those proposals actually iterate and advance.
K
Yeah, we've made a huge amount of progress on defer and stream in particular over the last month or two, in no small part because of having the extra time to dig into them, but there's also a smaller subcommittee that got set up to go deep into that, I think on a weekly basis, which has been really fantastic. There's just a lot of really deep things to work through, and Rob, props to you for showing leadership on that and keeping up progress.
K
The proposal from Matt has gone through, I think, various rough and dirty phases over the last year or two, and Matt's got it cleaned up into a very compelling state to move forward, and so Matt, I'm sure you're continuing to work on that for a bit. But we got to have a really healthy discussion about that at the last secondary meeting, yeah.
K
I'm going to skip the action items for the sake of time and take us into our first agenda item. So, first: election results.
K
My apologies for running across this a little bit behind schedule, but we did in fact have a successful election, with I think ten nominees for a bunch of open slots. I think only one person was running for re-election this time; Andy, you did in fact get re-elected. But then a whole bunch of new faces, so, QA, see you here.
K
Welcome, congrats to you. Rob, I think you are on that list as well, congrats to you. And who else is here for the first time, who was added? Yuri, who's...
K
...also in; I see you there. And then who am I missing? Stephen, who's not on the call. A fantastic set of folks, very excited to have your support and leadership, and thanks to everybody who nominated themselves. The best part of the TSC is that we do the bare minimum of stuff and delegate heavily to the working group, so everyone can continue to share a fair bit of leadership, but it's always great to have a set of folks who can help run
K
these meetings, maintain our repositories and, any time we do need to take a vote on things, be able to do that. So thanks everybody for participating in that. Any questions or comments about the TSC election stuff?
C
Hi everyone. Thomas is with me today, standing in for Ricky.
C
So what we wanted to talk about here, and this might particularly be a TSC topic because it deals with the technical configuration of, you know, GitHub and GitHub Actions, npm and all those other such things: we have, up until this point, done what is necessary to make things happen rather than what necessarily would have been, you know, the best path. So, for example, quite a lot of the GraphiQL build infrastructure was dependent on personal npm tokens and things like that, which isn't ideal going forwards.
C
Ricky has outlined what they think would be the best approach going forward, but of course we can also have input on that. Thomas, as you're more familiar with what's actually going on in the build steps there, would you like to say anything?
D
Currently we're using Changesets, because there are a lot of different packages in the GraphiQL repository, not just GraphiQL but also the VS Code plugin and a lot of dependent packages, so we're using Changesets to manage all of that. All the releasing to npm is currently depending on Ricky's tokens, and currently part of that is broken, as the current token isn't able to publish some of those packages, which is basically what kick-started this.
C
Yeah, thank you. So I don't have much experience in setting up this kind of broader build infrastructure; does anyone else have that, and have a particular route that they think we should follow? Also, has anyone read through Ricky's proposal, which is quite short, and have any input on that?
G
A while ago, I forget what this repository is called, but for the web wrappers for graphql.js I discovered that nobody in the Foundation had tokens to it, and it required some help from the Meta side to obtain them. So I think, especially now, since we have multiple projects, and some of them...
G
Some of them work as a project from Meta, and some of them are stuck at some stage, and maybe in the future somebody wants to renew them, like a parser; maybe somebody wants to maintain it or at least do bug fixes, but at the same time I don't know who has access to it, or who could even share access to it.
G
So, as a minimum, I think: having one user that has access to all the repos and packages, and somebody from the Foundation having access to it and being the contact point, meaning if somebody wants to get a token for something, they can send an email to the responsible person at the Foundation, and this person can give out tokens. Maybe through 1Password; I don't know, I've never actually used it. But as a minimum I think we should have a shared user, and have either the GraphQL Foundation or the TSC members have access to those credentials; maybe both, I don't know.
B
Yeah, I wonder if it is also worth us doing something explicit: for all packages owned by the Foundation, we explicitly identify at least two, ideally three, owners who have permissions. Maybe it's through this mailing list, maybe it's through all TSC members, maybe it's whatever.
B
But then require each of those people, or at least three of those people, to do a minor version bump and release, even if it's a no-op release, just so that the path gets exercised. I remember having to bump graphql.js once, and it was a disaster, because it was the first time I'd ever tried to bump a version of anything on graphql.js or on npm. So, basically, if we don't actually exercise the path, then there's no guarantee it will work.
C
Yeah, excellent, great input. The other thing to factor in is that some of our projects, particularly GraphiQL I think, use automated publishing, and that's where the 1Password slash two-factor authentication discussion came in: having a solution there that we can be secure with, and know that, you know, it's not gonna leak those tokens to the wider internet, and things like that.
K
I certainly think the GitHub Actions auto-deploy is going to be the best balance between security and convenience, because it means any local project maintainer just needs to...
K
It's a balance, because it means anyone who is capable of changing the version field in a package.json file can therefore do a deploy for that particular project. But I think the alternative would be that the only way to actually deploy a version would be to find a TSC member who has MFA credentials, and that's more secure but more burdensome, and it may slow us down in terms of our ability to deploy stuff.
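The trade-off described here hinges on a simple gate: CI publishes exactly when the version field in package.json no longer matches what the registry already has. A minimal sketch of that decision, with a hypothetical function name and in-memory registry data standing in for the actual workflow:

```python
import json

def should_publish(package_json: str, published_versions: dict) -> bool:
    """Publish only when the version in package.json differs from the
    latest version the registry already knows about."""
    pkg = json.loads(package_json)
    return published_versions.get(pkg["name"]) != pkg["version"]

# A maintainer editing the version field is what triggers a deploy.
registry = {"graphql": "16.6.0"}
print(should_publish('{"name": "graphql", "version": "16.6.0"}', registry))  # False
print(should_publish('{"name": "graphql", "version": "16.7.0"}', registry))  # True
```

In a real workflow the registry side of the comparison would come from npm itself, and the publish step would run under a repository-scoped token rather than a personal one.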
C
So what would be the course forward from here? Is there anyone who'd be willing to own this topic? Because I seem to understand at the moment that there are GraphiQL releases that are pending, and we're kind of in a bit of a broken state where some things can be published, but not others.
K
Is there a short-term thing that we need to do, specific to GraphiQL, that unblocks its ability to deploy today? Like, can we undo whatever we did that stopped it being able to deploy, so we're not blocking? Or is that just too burdensome to figure out, and instead what we should do is use GraphiQL as the canary project to test out this new flow?
D
You would just need somebody who has access to all the relevant packages to create that. Okay.
C
Speaking of which, Lee: I think you gave me, and maybe some others, access to some of the packages, but among those didn't seem to be the packages under the @graphiql scope, so they may not have been caught under the same umbrella, and I think that's where the sort of split in permissions has gone awry. So I currently do not have sufficient access, is what I'm saying.
K
Got it. If you send me a list of the ones that I missed, I will go through and add you to those, and then I'll take it as an action on me to get this plan going.
K
Thank you. And I imagine the two or three of us may need to just get on a separate call at some point, just to make sure that it actually works the first time, because it doesn't always, and if we've missed something then it would probably be good to just have some block of time.
K
No, thank you for finding it, and for a good detailed proposal. Next up on the agenda, Benji, is also one of yours, but this is about fixing ambiguities when schema definitions are omitted.
C
I think this all came out of Roman's research from reading the spec carefully and noticing some inconsistencies, and then me noticing further inconsistencies placed around those. So this clarifies, or I say clarifies, it technically changes what the spec is actually saying, but I believe it changes it to what the intent of the spec is rather than what it actually says right now. Which is, basically: if you're using the default operation type names, like Query, Mutation, Subscription, then you don't need to put the schema keyword in your SDL; but if you are using different names for those operations, or if you're using one of those names and it's not meant to be used for the operation, for example you have a Mutation
C
that is meant to be talking about, like, a viral mutation rather than actually the mutation operation type, then you must put the schema keyword in and say query: Query, and then obviously not put the mutation in, or point mutation at some other type, something like that. So this is what this PR clarifies. It's not very many lines of code, or text I should say. So, yeah, I was just wondering if we can progress this a little bit, because I was asked about a related question in chat the other day and realized it wasn't merged yet.
K
This change seems super straightforward to me and I think we should do it, but I do think we should make sure that some of our most used implementations, especially graphql.js as the reference implementation, encode this behavior.
K
I don't feel super strongly about blocking the change to the spec based on that, but I do want to make sure that we don't lose sight of it. Like, is it useful to hold this open just as a flag that says we are waiting for a PR against graphql.js to ensure that its behavior is appropriately in place?
C
Michael, you're a maintainer of an implementation; have you had any issues with this one?
A
Sorry, I was not following; which one?
C
So this is the schema keyword in the SDL.
K
It only does the first part, which is: it checks to see if your query, mutation and/or subscription types have the default names, and then it decides whether or not it omits the schema keyword. But there's an important extra bit, which is that it needs to then also check to see if those default names exist on other types.
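That two-part check can be sketched as follows; this is a hypothetical helper for illustration, not graphql.js's actual printer logic. The schema definition may only be omitted when every root type uses its default name and no default name is taken by a type that is not serving that role.

```python
DEFAULT_ROOT_NAMES = {"query": "Query", "mutation": "Mutation", "subscription": "Subscription"}

def can_omit_schema_definition(root_types: dict, all_type_names: set) -> bool:
    """root_types maps an operation ("query", ...) to its configured type
    name; operations the schema does not support are absent. Returns True
    only if printing the SDL without a schema { ... } block is unambiguous."""
    for operation, default_name in DEFAULT_ROOT_NAMES.items():
        configured = root_types.get(operation)
        if configured is not None and configured != default_name:
            return False  # non-default root type name: schema keyword required
        if configured is None and default_name in all_type_names:
            return False  # e.g. a type named Mutation that is NOT the mutation root
    return True

print(can_omit_schema_definition({"query": "Query"}, {"Query", "User"}))      # True
print(can_omit_schema_definition({"query": "QueryRoot"}, {"QueryRoot"}))      # False
print(can_omit_schema_definition({"query": "Query"}, {"Query", "Mutation"}))  # False
```

The third case is the ambiguity discussed above: a type squatting on the name Mutation without being the mutation root forces the schema keyword to be written out.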
C
Okay, great. So I think effectively that code block that I've got in the top comment is effectively a test case: if you parse that and then print it back out again, you should get the same thing out as you put in. So if we can just basically send that to the relevant implementations and see if they have any issues... we've got the implementers team, I think, so I'll tag that on that issue and see if we get any feedback.
K
Cool. I'm taking a specific note on that thread, just so that we keep track of that.
C
By the way, I'm not taking very good notes, so if anyone can contribute to the notes, that would be beneficial.
K
Cool; sorry about the label, I cleaned that up. Thank you, Benji. I marked this as RC1 for now, just... I think the draft text is probably correct, but just knowing that we usually hold Draft for having the code elements in sequence with the spec element.
K
That's probably the right thing, but as soon as we get that change in place, this one looks good to go. I think this is going to be super straightforward.
K
Benji, you're on a roll in terms of agenda: you've got the next one, which is argument name uniqueness, which I believe is a follow-up from the last time we talked about this.
C
We seem to have an issue in the spec where we didn't state that argument names must be unique. This was brought up a while back, and I think it was generally agreed that this is the right direction to go, but I don't think we've seen the champion of this for a while at the working groups.
C
So it's not been pushed forwards. I'm not sure what the process is for this, but I think that we probably should do this. I know, for example, in graphql-js, when you look at the args they are a list, because that's what the spec says, rather than being an object, which would be more convenient, and things like that. So, yeah, what is the process here of taking those forwards?
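The validation rule being advanced is small; a sketch of it, as a hypothetical validator rather than graphql.js code, over the spec's ordered-list model of argument definitions:

```python
from collections import Counter

def duplicate_argument_names(args: list) -> list:
    """args is an ordered list of (name, type) argument definitions, as the
    spec models them. Returns every name that is defined more than once."""
    counts = Counter(name for name, _type in args)
    return [name for name, count in counts.items() if count > 1]

# Under the uniqueness rule, the second `id` below makes the definition invalid.
print(duplicate_argument_names([("id", "ID!"), ("name", "String"), ("id", "Int")]))  # ['id']
```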
K
We'll just claim it: if you want to just claim champion of this, then we'll make it so. I think we can just say that the champion timed out and you're taking over, and it looks like Yvonne did the same on the code side, thank you Yvonne, and so that got merged actually a fair bit of time ago. So this is now just a matter of the spec agreeing with where the reference implementation has been for a while.
C
Yeah, and I think it is pretty straightforward, so I think this is probably ready to advance.
K
Cool. Anyone opposed to calling this one accepted?
K
Sounds good; it is so. Thank you, Benji and Matt, for digging in and cleaning it up and getting it ready to merge. I'm gonna go ahead and squash it, squash and merge. Excellent.
K
And thanks for keeping an eye on it. All right, the Benji show is over; we're gonna move on. Do we have Yvonne here today? Let's talk about fully deduplicated responses. Yes.
G
Let me get that pulled up. Yeah, I will share a screen, because I need to show some examples, and I think examples make it easier to understand. So, you see my screen, right? Yes, okay. So what we agreed, I think, at one of the working groups before, is that we deduplicate fields inside the defer against the initial response. So if we have, like in this example... do you see my mouse pointer?
G
Yeah, I don't know how to make it bigger, but yeah. So we have a few fields inside the defer; we have a and b. Obviously we already shipped a, so we can deduplicate it and keep it with b. We all agreed on that, and it's kind of easy in a sense. The next step forward is that we also agreed to basically merge the defers on the same level into one, especially since we removed labels.
G
Now we don't have, like, groups for defer, and everything is merged, and that kind of makes sense from a GraphQL standpoint, for fragments or inline fragments.
G
Next is deduplication: if you have a defer, for example, like before, and we have everything matching up except, like, a value, you can do that, and this totally makes sense; nothing is broken, everything is kind of cool. So these are three examples of deduplication that are straightforward: you can decide on them statically and they don't create any problems.
G
Next example: I took it from a previous agenda item, and I linked to the issue. If you want to see the issue and discussion, please click it in the agenda, or somebody can post it in the chat.
G
So, in a sense, the outer defer is already shipped, so the client already got the ID, and basically, when we...
A
But there is a risk, by the way, by doing that: if you run the list twice, you might actually get a longer or shorter list.
E
In this example I thought that we said that the list items would all be merged into one defer, because both defers are under the same root object, right? They have the same path.
K
This is a nested defer, though. So you have one defer... yeah, so put your hand over the bottom half of the query first, so you're just looking at defer, list, item, ID, and you're saying: all right, I need to get the IDs of all the items in my list, and I'm deferring that. Additionally, within the scope of the deferred thing, I'm deferring the IDs and the values of each item. And so what we're saying here is, well...
K
We already got the IDs, but if this is the case, we were deferring values. The result of this in terms of payloads should be: there's the host payload, before you got to the defer, that contains whatever it contained; there's the first deferred payload, which has the list of all the item IDs; and then there's a later defer that has the list of all the item values. Did I get that right?
K
You're right, we did talk about that. I think this is something that we should be able to provide control over, because the specificity is ambiguous: it is unclear from the query author whether they intended for this to be "first get the IDs, then later defer to get the values", or whether they sort of meant this as just two separate things they would like to defer, and it is safe to batch them together.
B
I think the problem Rob has run into on this is that, in the first response, you can't disambiguate whether the list item value is part of the first payload and value is just unset, because it's the wrong kind of object or something, versus we're still waiting for this inner defer. Because the path is the same, you can't actually disambiguate the two.
B
But this, what Yvonne is explaining, would work if the defer was inside of the list, instead of just nested at the same level as the... yeah, exactly.
G
Yeah, it's good that that is a simple example; I made it quickly, without too much thinking, like an hour ago, but I learned that my understanding is not exactly the same as everybody else's. So we all agreed that stuff is merged into one defer per path, so in this case it should be one defer with id and value, right?
G
Okay, so yeah; so this one, now I get how it should be. So, the next one, the tricky one, where nothing is nested, and that's why I wanted to explain it now. So, basically, we deduplicate, and here we are deduplicating e and f; in this case nothing is embedded in anything, so they're totally parallel things.
G
So the easy rules are not working here. At least, I could not find any easy rule to rewrite the query that preserves the original intention of the defers being independent, but guarantees that we don't duplicate fields inside the response.
G
So in this case, for me, the solution is not in rewriting the query; the solution is in the response format. And, by the way, I feel strongly about fully deduplicating fields. We discussed it on several working groups, the defer-specific working groups and also on this working group, and the solution we are moving towards, since we have abandoned labels, is that we're moving toward deduplicating more and more things, at least until now.
G
We discussed solutions where we deduplicate fields, but not all of them, and leave it to the server: the server can deduplicate, or it can duplicate. And the most complex example I could find in all this discussion was this one, so I tried to find a way to make it deduplicated. By the way, we can discuss the deduplication rules separately, but first let's discuss what I'm proposing and see if it even solves the issue; maybe somebody can come up with a counter-argument or a better solution.
E
Just in case it's not clear for everyone why this is difficult to de-duplicate: it's because of the e and f fields. To optimally deduplicate, you would have to know the order in which potentially-slow-field a and potentially-slow-field b resolve.
E
If that first defer is coming in first, then you want to include it; if it's not, then...
G
Yeah. So, since you cannot solve this problem... I could not solve it without breaking anything, so I broke the rule that you have one payload per path, one incremental entry. That rule totally made sense with labels, because with labels we preserve shape: if you specify a label on something, you are guaranteed to have that response shape. But since we dropped labels, I thought: what's the downside of breaking this rule? And so, as a solution...
G
We have e and f duplicated here, and obviously, after initial response number one, we have number two, and number two delivers exactly that, and it's delivered under the most outer path, the shortest path, because, like in the previous examples, the shortest path kind of wins, since it's executed earlier. So e and f are delivered there, and now we have just two incremental payloads with the slow fields, slow field a and slow field b, and we can deliver them right away.
G
In any combination: we can deliver one as number three and the second as number four, or vice versa, and we can also batch them up, since the incremental array preserves order. We can have one payload that has e and f and slow field a at the same time, just as different items inside the incremental array. One thing new here is the completed field.
G
You apply items from incremental as patches to the initial response, in an ordered manner, until you get completed on a particular path, and then you know that this path is completed. So it matches in three steps: you get number one, you merge number two into number one, and you merge number three into the single tree, and by that point you have all the data to complete all the defers on paths a and b. And the important thing: completed here means all the defers on this level are completed.
G
So if, for example, the fourth payload is delivered first, with completed true, that's also correct, because it means the empty path is completed, and the inner path can be incomplete, because it's an inner path and it will come later.
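The client-side model being proposed, one in-memory JSON tree per query with each incremental entry folded in as a patch at a path, can be sketched like this; it is a hypothetical illustration of the idea, not a real client implementation, and it handles object paths only:

```python
def apply_patch(data: dict, path: list, patch: dict) -> None:
    """Fold one incremental entry into the single in-memory response tree.
    Keys may repeat across patches, but under this proposal no leaf value
    is ever delivered twice."""
    target = data
    for key in path:
        target = target[key]
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            apply_patch(target, [key], value)  # deep-merge nested objects
        else:
            target[key] = value

# Initial response, then two patches applied in order at the root path.
data = {"a": 1}
apply_patch(data, [], {"b": {"e": 5}})
apply_patch(data, [], {"b": {"f": 6}})
print(data)  # {'a': 1, 'b': {'e': 5, 'f': 6}}
```

The point made above is that a client must already do exactly this merging even when only one incremental per path is allowed; permitting several patches per path reuses the same code.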
B
So there's no point in giving a payload that has a defer that is not completed, and it's better, in fact, to end up overlapping with a different defer. So in this case, if I did the e, f, potentially-slow-field-a defer and I provided the entire payload as payload number two, then I can deduplicate out the a, b, e, f and just provide g, h and potentially-slow-field b, right?
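The deduplication step described here, stripping from a later payload every leaf the client has already received, can be sketched as follows; this is a hypothetical helper, and the field names are stand-ins for the example on screen:

```python
def dedupe_payload(payload: dict, already_sent: set, prefix: tuple = ()) -> dict:
    """Return a copy of `payload` with every leaf whose path is in
    `already_sent` removed; objects left empty are dropped entirely."""
    out = {}
    for key, value in payload.items():
        path = prefix + (key,)
        if isinstance(value, dict):
            nested = dedupe_payload(value, already_sent, path)
            if nested:
                out[key] = nested
        elif path not in already_sent:
            out[key] = value
    return out

# a, b.e and b.f went out in an earlier payload; only g and h remain.
sent = {("a",), ("b", "e"), ("b", "f")}
print(dedupe_payload({"a": 1, "b": {"e": 5, "f": 6, "g": 7, "h": 8}}, sent))  # {'b': {'g': 7, 'h': 8}}
```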
B
Alternatively, you could flip it, and it could be that in the first update it's a, b, e, f, g, h, potentially-slow-field b as a whole payload, and that's the second response, the first incremental response; and then the second incremental response is just potentially-slow-field a, with the path to it.
B
Either of those is fine, because at that point, when the response comes back, as a product developer your product that is waiting on this defer is actually complete at that point.
B
Alternatively, it's even okay, in my opinion, to shove it all into the first payload and just say: okay, the whole thing is ready, because, whatever, we were actually computing all of this. But maybe it's that we should actually have paths, plural, and say: oh, we were able to get potentially-slow-field a and potentially-slow-field b at the same time, and actually we can just shove all of this into one incremental response.
B
I would prefer that as a product developer: oh, we got both the defers, they're both ready, so just give them to me now. But I definitely would not want the potentially-slow-field b data to be split out of the number two response.
B
Yeah, I think it's okay. It's okay to say: oh, you already have all of the data besides this one field within this defer, so all we're showing you is this one field, and at that point this deferred path is complete. But it's not okay to say: oh well, you don't know; you have to look at this completed field to know whether I have actually given you all the data at this path.
G
Yeah. So the idea here, the first step, is that we may batch everything on a path into one big defer; even, as the previous example showed, even if it was said to be two defers, we are still batching it into one defer tied to a path.
G
We don't merge defers across parallel structures, as shown here, but we can share responses between them; and we don't send a data set that cannot complete any defer by itself.
B
Yeah, this we've discussed a lot, I feel like, and I'm a little bit less concerned about allowing deduplication explicitly, because we're treating defer as a diff, basically.
K
Remind me if I'm forgetting: I think, because part of the bid to add this layer of simplicity was making label no longer something that we would require, because that was sort of an added set of complexity, and I remember having a conversation, though I don't remember where it resolved, about whether or not we wanted to still allow label, or something like it, to be optional as a way to get back to controlling nested defers not being merged; right, like, you would not merge two defers that had separate labels.
E
Yeah, we decided to fully take it off, with the workaround that you could use to achieve that being to wrap whatever the parent field of your defer is, aliasing whatever object is above that defer; that would cause them not to be merged together. And, kind of, the real solution would be when we ultimately have aliased fragments.
G
Another solution to the same problem, which I want to enable with this change: since now you can send multiple incrementals per path, you're basically turning the response into an initial response plus a series of patches on top of it, and you guarantee that the patches don't have duplicated data. They can have duplicated keys; you can have a and b duplicated, but no leaves will ever duplicate inside patches.
G
Now, if we decide to return to labels, what we can do is just signal it: instead of just having completed, we can have an array of label names, like which resources are completed. An important thing is to explain how I came to this idea, and maybe it will help to clarify why I was proposing basically treating subsequent payloads as patches: not shapes that parallel the first one, but patches.
G
It's because I spoke with Benoit, and Benoit showed me that in Apollo Kotlin this is already done. So you get the initial response and you keep it in memory, a series of maps, an in-memory representation of JSON, and when you get an incremental you patch those maps, and you have a single map per query; you don't maintain separate maps for every defer.
G
So you have one global, kind of, per-query, not global, like a per-query JSON representation in memory, and you just patch it. And you need to do that even with the current proposal, where you are guaranteed to have only one incremental per path, because you need to merge it with the initial response. So the client already needs to do this merging; the client already needs to have an in-memory representation and code to merge JSONs. So from a client standpoint we didn't add too much additional complexity.
G
In my view, if we say that instead of merging one initial response and one patch per path, you can merge two, three or four until you get completed true, it's the same code, it's the same in-memory representation; and inside the issue I describe that the added complexity is actually lower.
G
When you don't have duplicated data, you do fewer operations of looking for a key in the map, so it's faster. So a question here... one is, I get it, how to implement that, and I tried to come up with an implementation for this working group. I'm stuck on another issue, so I'm running behind on it, but I'm working on it; I want to finish my graphql-js PR to implement that.
A
The one-patch-per-path rule was kind of there to simplify; we essentially know then: okay, this path is complete. But with the completed value there, you can signal that, so yeah.
G
And we need this field for stream anyway, so it's not a new concept; it will already be in the spec, with a little bit different meaning. In stream it means the array is finished; here it means the object is finished. But basically it's the same meaning: this path is finished. In the case of stream the path represents an array; here the path represents an object. Same meaning.
G
Yeah, actually it's even simpler: it's not all the fields in the deferred selection, it's just that completed on the initial response, or on an incremental, means all the top-level fields on that path are completed. And the meaning is the same with stream: for stream it means all the items in the array are delivered, and the array can have a sub-array, and on that level there can be another stream. So the meaning is the same: this thing is finished on this path.
G
One additional, indirect benefit (I know it's not what defer was intended to do, but we discussed it at some point): introspection query results, for example, can be huge, and for some schemas they can be gigantic and create a problem, since not every implementation can parse that much JSON. Browsers until recently had a limit of, I think, four gigabytes or so for JSON.
G
If it's already hitting some limit, you ship the data so far to the client and then start working on the other fields. So as a side benefit, it's a solution for resource constraints on the server: the server can decide it's holding enough data in memory for this client, flush what it has, and work on another batch. I just wanted to mention it here.
C
So I have one more question about this. I live in the JavaScript world, so I don't particularly care about the answer to this myself; it's more that I want to make sure it doesn't cause issues for anyone else. This is effectively moving towards the payloads, the shapes of them, being less predictable: they may or may not have fields, because those may or may not have been delivered in previous payloads.
C
G
Yeah, that's exactly the realization I had speaking with Benoit. I had this idea for a long time, but I assumed it would create problems for strongly typed clients. What I learned is that we have already created that problem for strongly typed clients by allowing things to be inlined into the initial response and by deduplicating things. For the client the result is the same: everything becomes optional.
G
Here is what Benoit showed me: you already need to keep the JSON in memory, since you need to deal with inlining, so you mark everything as optional. You still understand which fields can be tied to a particular path, but they're all optional, since everything can be inlined.
G
It's just like when you get the empty path: you're not sure, because everything is optional. The empty path has its set of fields, and each sub-path has only its own fields, but every one of them is optional, and this proposal doesn't change anything there. The shape is predictable; the nullability is unpredictable. And the unpredictable nullability is a consequence of inlining and a consequence of deduplicating.
G
So this proposal doesn't change that. Benoit, you're on the call; correct me if I said something wrong. I think you wanted to join in, yeah.
H
I'm here. No, I think you said it even better than I would have. Yeah, that's correct, what you said, in our case for Apollo Kotlin at least.
B
Yeah, I mean, the way I would solve this as a client, if we go with this structure: I would not even allow direct access. The inline defers get their own methods in the client, so you have to call something in order to access the fields under the inline spread, and I would just not allow accessing those fields until it's completed. So my access either gives me null or gives me a pending value.
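A minimal sketch of that accessor idea, with made-up names (this is not Apollo Kotlin's actual API, just the shape of the guard being described): the fields under a deferred spread are only reachable through a call that yields a pending sentinel until the path has completed.

```typescript
// Sentinel returned while the deferred fragment has not completed yet.
const PENDING = Symbol("pending");

type Deferred<T> = { completed: boolean; value?: T };

// The only way to reach the deferred fields: returns the sentinel until
// the `completed` signal for this path has been observed.
function fields<T>(d: Deferred<T>): T | typeof PENDING {
  return d.completed && d.value !== undefined ? d.value : PENDING;
}

const profile: Deferred<{ bio: string }> = { completed: false };
const before = fields(profile); // still pending

profile.value = { bio: "hi" };
profile.completed = true;       // completed signal arrived
const after = fields(profile);
```

The design choice is that partial deferred data is simply unreachable, so typed callers never observe a half-delivered object.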
E
So I think the summary is: ideally it would be really great if we could have one payload per deferred path and everything completely deduplicated, as in, every payload you received prior has all the fields you need for the next one.
E
The problem with going in that direction is how we implement it in a way that doesn't require more checks on the client. At first I thought this could be done statically if you have a server that normalizes queries ahead of time, but that's when we came up with the case where it depends on the order, on how fast or slow the resolvers are, so it can't be done
E
statically. I don't think we'll be able to come up with an algorithm that fits into the spec text and could efficiently figure out how to do that.
E
So that's why we're coming to this decision: do we want to make sure there's no duplication by relaxing the constraint of having one payload per path and adding this completed field, or is it okay to allow duplication? Does that summarize it accurately?
K
J
K
J
The
best
connection
and.
J
Okay, I think, yeah. I'm just refreshing the group on the call to action in terms of reviewing the current implementation.
F
J
On the spec PR there were some comments
J
from Matt, in terms of
J
in terms of whether, in the implementation itself, undefined inputs
F
J
are allowed, if people want to weigh in on that. Other than that, we want to get as many eyes on it as possible. Lee's work was great and complex, but I think it's still the way to go forward, despite whatever intervening changes have happened in the implementation, and I just want to get as many eyes on it as possible.
K
Sweet, let's consider that a call for action, since we don't need to get the eyes on it in the next five minutes in particular. You've got links there to the original PR, the one that Yaacov has very graciously brought up to date with the latest state of the code base, and Benji's spec PR.
J
Yeah. Matt pointed out that one of your code changes assumes that an input value cannot be serialized to undefined, so that if it is undefined, it's not present. He was questioning: is that really true?
J
The old code seemed to use a hasOwnProperty check, which seemed to imply that a scalar input value could be serialized to undefined. So I was looking through some of the other old code and our existing code, and it looks to me like it can't: we would actually throw an error somewhere else. So that was a safe, non-breaking change to make. But
J
it's curious that an internal value cannot be undefined, that it's just not allowed and is considered an error. That's maybe a bit surprising and needs to be documented, but it's not a breaking change. I don't know if that rings any bells for anyone else.
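The JavaScript distinction at issue, a key that is absent versus a key present with the value undefined, can be seen in a few lines (a plain illustration, not graphql-js code):

```typescript
// Reading `a.x` and `b.x` both yield undefined, but only one object
// actually has the key:
const a: Record<string, unknown> = {};
const b: Record<string, unknown> = { x: undefined };

const absentInA = !Object.prototype.hasOwnProperty.call(a, "x");
const presentInB = Object.prototype.hasOwnProperty.call(b, "x");

// A `value !== undefined` check collapses the two cases. That collapse is
// exactly the simplification under discussion: treating an input value
// serialized to undefined the same as one that is not present at all.
```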
G
I would say that it's not a breaking change for the spec, because it's JavaScript-specific: from the spec point of view there is no undefined, there are just no values. And it's okay to make a breaking change in graphql-js if it's released as a major version. It's a big feature anyway, so it will probably break something for somebody; somebody is using some weird tricks somewhere. So I would release it
G
as a major release of graphql-js, and I wouldn't do too much archeology to find out how breaking it is specifically for the JavaScript implementation. The intention for graphql-js has always been to be easily replicatable in other languages; graphql-python is practically a direct copy, as far as I know. Statically typed languages especially don't have a difference between null and undefined; they don't have undefined at all. So I would say that across GraphQL implementations, a missing property and a property set to undefined mean the same thing.
G
Undefined doesn't exist separately, and that makes graphql-js easy to replicate in other languages that don't have two different null-like values. So I wouldn't put too much effort into it. I think your solution is good enough; we just release it in a major version and note that it may break something if you had weird scalars that serialize undefined for some reason.
C
I'd also say, to anyone implementing this spec in their own implementations in other languages:
C
if you were implementing the previous behavior, this should be a major version bump for you as well, because it does concretely change how default values are handled. Hopefully with minimal impact, but there can be impact, especially if people have defined things in a way that was vulnerable under the old system. So definitely a major release for everyone, I think.
K
Yeah, we probably need to write a synopsis blog post that gets into the nuances in a way that makes it clear to people who are using default values, because I think they would see this and ask: this is a breaking change, why? It's fairly subtle, and that's pretty important, especially since it could cause a subtle change in behavior.
K
On the particular hasOwnProperty-versus-undefined bit: I pulled up the PR and noticed that we're using hasOwnProperty in a couple of places, so I don't know exactly which specific spot the potential missing case was in. If anyone wants to go to where they saw it, put an inline comment, and tag me, I'm happy to go take a look. But on first principles I agree with everything.
K
B
J
"Why preserve sources of variable values?" Sorry, I'm away from the keyboard.
J
B
Yeah, and the only reason I was even able to make that comment was because I was trying to preserve exact behavior for fragment arguments, so I was hitting this exact same code, and I was like: oh, I did this, I preserved this, and you changed it. But if you've done the code archeology to determine that this may just be a behavioral thing from before we were using types at all in graphql-js, then yeah, maybe the types are now accurate and it's not needed, which is a good thing.
K
Very much a good thing. Okay, Matt, if you happen to spot your comment, send it my way so I can take a look. But yeah, what you said is probably true: the internal bit is not where we would find a breaking change. I think we'd potentially find a breaking change in the aggregate of this whole stack landing, and that's what we should write against.
K
J
F
Do we have a timeline?
K
That's the action. Awesome, we are ending exactly on time; amazing how that works. Thank you all for the good discussion. Welcome again, new TSC members, and we'll talk to you all in the next meeting.