From YouTube: GraphQL Working Group (Primary) - 2023-03-02
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
A
Yeah, it's... oh, that's wrong, camera. I really want to get a couple of things over the finishing line.
A
Yeah, we should have cut in October, I think. That's why we should get things like oneOf into the spec, and there are already a lot of clarifications; we could just release it. There's anyway a question of how we do it: at the moment the spec cut is a very manual thing. We originally wanted to do it yearly, and then it should just be: okay, we cut it and release what we have. But I don't know what the process looks like.
D
The first one's pretty easy. I think we just add a git tag and that's it, because the actual specs end up getting auto-generated. For the published one, I think we generate from the most recent tag, and then we generate the draft.
D
Yeah, I think our cadence is looking like something like once every 18 months or so, which I'm okay with; I think doing it somewhere close to annually makes sense. Last time we went through a handful of things that were at sort of the last stage and needed final review, and made sure that we kicked a bunch of them in before we did the spec cut, to avoid a situation where we did the spec cut and then a bunch of stuff landed right after it.
A
Yeah, it's oneOf I would like to get in; I mean, it's sitting nearly ready, from what I see.
B
Can we get the notes doc, please? Oh.
D
Yes, I just need to add it onto my calendar, because that's my best way to set reminders for myself to do that. We'll see if we get the last few people in while I set this up; it'll only take a second.
F
So I think we're missing: Jonathan, Ricky, and Yakov.
D
Certainly hope so. I posted a link in the chat to the notes doc, and I am adding it to the agenda document imminently. For some reason I got logged out of GitHub; how did that happen?
B
Could you clear your selection for me as well? Otherwise the pink is just going to keep following me. Oh gosh, yeah.
D
We appreciate the desire for security, but I also don't really love getting logged out of things, apparently. But here we go. All right, good to go.
D
Hopefully the last handful of folks who have agenda items will trickle in in time; we're running a couple of minutes over, so we'll get things going. Welcome, everybody! It's March! Can you believe it? Three months in; it's 2023.
D
Yeah, I can't believe it. Where did all the time go? But we're all here, and that's a good thing to have. And of course, by being here we've all agreed to the spec membership agreement, participation guidelines, contribution guide, and code of conduct: fantastic documents that I didn't write, with links at the top of our agenda if you ever want to read them. Let's do a quick round of intros, names to faces, for the purposes of the recording and also just to make sure that everyone knows everybody.
D
We will go in the order listed in the agenda file; if you're not there yet, go ahead and send a pull request. I will keep tabs on those as they show up and get them merged in, and I just merged the first of those in now, which was Ivan's. Thank you. So go ahead, refresh that agenda file, and then we'll enter from the top. I'm at the top: hello, everybody.
I
No, it's my fault; I actually submitted it against February. And I blocked myself twice, so I'm basically fooling myself, which is another point for the discussion from a couple of meetings ago about having some mechanism to add yourself to the agenda instead of doing PRs. So now I fully agree that we need some simpler system.
D
I
I'm,
not
stopping
you
from
building
that
tool.
Yeah.
F
I
If
you
have
this
discussion
hi,
my
name
is
Juan
and
I'm
from
Apple.
D
Fantastic, thank you, Juan. And if the notes end up slowing down for anybody else, just shout for help. A quick look over our agenda for today: I'll do a quick recap of prior meetings for folks who weren't there. This might be short, because I think some of those were short. I don't know that we have too many previous-meeting action items, purely because we haven't had too many action items recently, but we will take a quick look.
D
Ricky wanted to float a topic about npm access, which I think unfortunately may be blocked on me, but Benji, I think you have some context in case Ricky can't show. Default value validation status checks: I know we talked about that last time, so that should be exciting. Of course, always super interesting things to talk about on defer and stream, and then GraphiQL and GraphQL over HTTP; very cool to see those on the agenda. That should keep us busy. Anything that we don't have here that we want to talk about today?
B
Meta on the agenda: the agenda says to put your name in there in alphabetical order, and literally no one does that. Should we either remove that from the agenda, or should we start actually putting our names in alphabetical order? The advantage of doing it in alphabetical order would be that the number of merge conflicts should be reduced slightly.
D
Yeah, we end up seeing them mostly anyway, but you're correct: the main reason it is in alphabetical order is the hope that people, one, don't insert themselves twice, because you'd certainly notice that when doing the alpha sort, but yeah, it does reduce the merge conflicts a little bit. I'm merging these all in right before the meeting, so I'm going to leave it there.
D
I guess let this be a nudge that folks should think about alpha-sorting the attendee list as they add themselves, but maybe it's another good vote in favor of having a tool to automate this addition for us, if anybody's ever interested in writing that.
D
I think I will get to this in maybe 50% of meetings, where the night before I will pull this down in VS Code and sort it. I usually leave myself at the front, because it's just a smoother transition, and generally I would say it's not just that I'm putting my name at the front: whoever is the meeting DRI, or representing the meeting, can have their name at the front. Occasionally that has been Benji, occasionally that has been other folks, but to the degree that it's me leading it, that's me.
F
Note too that we don't really say anything about alphabetical order when people just try to merge into the agenda; you have to click into the "joining a meeting" markdown file to see that notice about the alphabetical order. So maybe people just miss it there.
D
That's a good point. I'll look at the template; I think maybe I can put a comment below the attendee list that says "please insert yourself in alphabetical order below the host".
D
I think we had just enough people who were traveling or on vacation or whatever that the attendance for this one got really thin; Benji and I were both going to not be in attendance.
D
We made a kind of last-minute call to cancel that meeting, and I think there were only two things on the agenda, one of which was mine, which we forwarded on to this meeting. So we'll do that every once in a while; I think having three meetings a month makes it a little bit more reasonable to allow one of them to get canceled every once in a while.
D
So there's no update from that one. But the one prior to that, the APAC secondary one, had a good conversation about schema definition ambiguity from Benji, which spotted a very real problem and introduced a very good-looking solution that very quickly got to stage one. It was also part of the inspiration for me doing a bit of the rewrite.
D
That's my agenda topic today, so consider this a nudge. And then there was a pretty significant advancement of the fragment arguments work from Matt, which I don't think we have on our plate to talk about today, which is totally okay, but folks should go back and look at that if you haven't had a chance. I think we've made a significant amount of progress.
D
Cool. Let's take a quick look at open issues. Am I correct in saying... okay, yes: nothing open for review at the moment.
D
Okay, the first agenda item is mine: root operation types. So let's see if it's worth me projecting for a moment. I put up a spec RFC pull request, as well as pointing at Benji's change that fixed a handful of these things. So I think most of this is actually getting our...
D
And you'll see this... cool. So the proposal here is to do two things. One is a bunch of naming-convention and clarity changes in the spec, and then there's one actual significant syntax change. This was the follow-on from the conversation that we had in the APAC follow-up three weeks ago; Benji led that discussion, which was specifically around this idea of: is it okay to have a schema definition in the type definition language that has nothing in it? At present that is not allowed; this proposes changing that to allow it to be the case.
D
Since there are kind of two things going on, it's probably helpful to first just collect a bit of feedback: does this seem reasonable? Is this the right change? It's a bit of a trade-off. What do folks think about this?
A
Isn't it that when one would parse a schema without this in it... like, if I parse the first schema and build an incomplete schema with it, I've implicitly created the schema, and then I could also apply the extend on it. So it's not necessary that we have that rule.
D
It is necessary, because of our logic for picking up default named types. I will highlight just a few points from Benji's presentation a couple of weeks ago, which was: imagine you have a schema that's talking about viruses and mutations of viruses, and you have some type in your schema called Mutation, and it's not referring to the GraphQL mutation root type; it is referring to a viral mutation in your actual focused schema.
D
In the absence of a schema definition, GraphQL will look at that type called Mutation, assume that it is the GraphQL mutation type, and then lift it into being the mutation root type. The way that you avoid that is by explicitly defining your schema and saying that there is no mutation type, or that your mutation type is, you know, GraphQLMutation or something that is named differently. But what do you do in the mode where you're using extend schema to add things later?
A
Yeah, it's kind of making sure that we don't infer...
D
...miss that aspect, yeah. So that's exactly right: defining this defines a schema that has no root types. It's not the same as the absence of a definition, and therefore those root types are not going to be searched for with default names.
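To make the scenario concrete, here is a minimal SDL sketch (the type and field names are hypothetical, not from the meeting):

```graphql
type Query {
  virus(id: ID!): Virus
}

type Virus {
  name: String!
  knownMutations: [Mutation!]!
}

# A domain type that happens to share a default root name:
type Mutation {
  description: String!
}

# Without a schema definition, default-name inference would lift the
# Mutation type above into being the mutation root. An explicit
# definition opts out of that inference:
schema {
  query: Query
}
```

Under the proposal, an empty definition (`schema {}`) would likewise be valid, meaning "this schema defines no root types here" rather than "fall back to default names".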
D
I think we do. Do we not always...?
H
We allow that syntax when you extend; we allow you to not include the curly braces. But we do not allow it otherwise, at least as of the last time I checked. Maybe it has changed, but I'm pretty sure we do not allow an empty definition, like no fields for a type, interface, or union, when it's just type or interface without extend.
D
Oh yeah, I think I remember what you're talking about, because we allow it to be empty and you can extend it to add fields later, but the actual syntax for how we do that omits the empty block. So the equivalent here would be just saying the word schema with nothing after it, as opposed to schema with an empty set of braces. But...
H
We don't allow type with no braces; it's only extend type with no braces. And to me, what we're basically saying is that we are allowing syntax that will result in an invalid schema in the absence of extensions, and I think that's correct; I think that's exactly what we want to do. But we should do that symmetrically for types and interfaces, in my opinion.
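A sketch of the asymmetry being discussed (the names here are hypothetical):

```graphql
# Allowed today: an extension may omit the braces entirely.
extend type Widget implements Node

# Not allowed today: a definition with no braces (or an empty block).
# The symmetric change would permit these too, moving the "must define
# fields" check out of the parser and into schema validation:
# type Widget
# interface Node
```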
D
That is good feedback and I agree with that. Yeah, we should make sure that we apply a decision like this symmetrically.
I
Yeah, I actually wanted to say exactly the same, and I can throw in a little bit more context on that, actually. Last time I checked, graphql-java supported empty braces on a type, as an extension, because for the community it's logical. And when we discussed empty types, basically...
I
...we allow empty types, but we did it in a weird way, which is type and a name without anything, and this constitutes an empty type. And that created a parser ambiguity where a type is followed by a query: is it an invalid type, or a type followed by a query, or an empty type and a query?
I
So we changed our parser to be, I forget what it's called, greedy, and try to parse more things. So I'm actually for empty curly braces, since we cannot do just schema without them.
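A sketch of the parser ambiguity described, assuming a document that mixes type definitions with an executable shorthand operation (names hypothetical):

```graphql
# With brace-less type definitions permitted, a parser reaching the
# "{" below cannot know whether it opens Foo's field set or begins a
# shorthand query following an empty type Foo. graphql-java resolves
# this by parsing greedily: the braces attach to the type.
type Foo
{
  field
}
```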
I
I understand why: because there is no name there, schema without curly braces would eat the next word. So I'm for, you know, reversing the decision and allowing empty types, especially since graphql-java, one of the main implementations, supports it and people actually like it more.
I
For people, the current syntax is very confusing. Apollo has a GraphQL implementation in Rust, and I spoke with engineers working on it, and it's the most confusing thing. And another thing, maybe related: first, the syntax itself is confusing, and second, it's confusing for people to understand the difference between schema...
D
I agree with that. In particular, I think there have been decisions we've made in the past that were, in hindsight, probably not ideal, around whether the syntax is allowed to represent invalid schemas, for the purposes of printing errors or whatever, or even just presenting a schema so that it can be edited and corrected back into health. We have other tools for highlighting invalid schemas; they do not need to be syntax errors.
D
How do folks feel about doing this incrementally, sort of starting with schema as our test case and then having a fast follow to look at the rest of the SDL? Or should we do this as a holistic change?
H
Yeah, I'm pro that, but I'm also pro splitting changes into minimal pieces, even if they don't necessarily make sense without combining them all as a whole.
D
Okay, this has been super helpful. This is part one of the two parts of the change here, but this was the one that made real syntactic changes, and I had a suspicion that I would get good feedback, and I did. So thank you for that. The rest of...
B
Please. So, I think we were talking previously about where people combine the schema together from multiple schema files that have all the different parts in them.
B
So imagine that you're using none of the default root names, but you happen to have a type called Query and a type called Mutation, and they're each defined in their own file: there's a file for Query, a file for Mutation, one for a Subscription that relates to Stripe payments or something. Would you then expect to put this schema with empty braces in each of those files, to kind of indicate that that's not an operation type? Or would it be an extend schema with nothing inside the braces, which doesn't do anything? Or just that there would be a schema in a file on its own, and all these other types live on their own, and we don't care, because we know that they're partial?
D
I think it depends. Right now we don't have any kind of specified description of how to treat multiple files, and so we've left that a little bit up to the implementer, and I think there are two separate ways that would seem completely reasonable to me. One would be boring concatenation, so each file stands on its own to a degree, but they're only really evaluated by adding them all together. And in that mode...
D
...I would imagine one file somewhere has the empty schema at the top, and then all the other files have the extend. And then the other mode is something where you do some slightly more complicated merging, where you merge duplicate schemas: you allow that and you merge them together, and there are some tool-specific rules for how that works.
H
Yeah, we've actually finally sharded our schema for our tooling, and that's exactly how I would do it: put the schema with empty curly braces at the top, and then, when you have a type that is a root type, co-locate its registration with that type in the same file. I would do the extend schema, like extend schema, query: MyQueryType, then type MyQueryType, stacked right on top of each other. But yeah, this...
H
This basically helps make that file sharding more possible and reasonable, especially once you start getting into schema modularity, where a schema is composed of potentially many schemas.
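A sketch of the file layout being described (file and type names are hypothetical):

```graphql
# schema.graphql -- an explicit empty schema definition, so no default
# root-name inference happens across the sharded files:
schema {}

# query.graphql -- the root-type registration co-located with the type:
extend schema {
  query: MyQueryType
}

type MyQueryType {
  ping: String
}
```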
D
All right, on to part two. Part two here is primarily just about naming conventions and clarity.
D
In particular, there are two things going on that felt really hard to read to me that I'd like to resolve. One of them is that we intermix the words "kind" and "type", which I appreciate are synonymous, but we currently use the word "type" to refer both to GraphQL types and also to categories of things.
D
So we say an operation can be a query type, a mutation type, or a subscription type, and we're not talking about a GraphQL type; we're talking about the fact that there are three variants of an operation. So we've overloaded that word a bit. There are not too many places where that occurs; this is, I think, maybe the only one, or one of a few. But my pitch is to exclusively use the word "type" to refer to a GraphQL type.
D
To avoid that overloading, we'd use the word "kind" instead. While synonymous, it gives us a separate word for talking about non-GraphQL-type things that have variety, where you want to be able to specify which variant you're referring to in that moment. We're already using both of these two terms a bit inconsistently, and so my pitch is to actually go through anywhere where we're defining these elements and shift them to use the word "kind". And then I have some other...
D
The second category is sort of assigning clear names to how we talk about things at the root, and then how we talk about these overall definitions as a result. But maybe first, to get quick feedback on this type-versus-kind naming convention: do folks have thoughts on this?
A
I'm just looking, because at the moment, on the operation definition node, we are using "operation", and then it returns an operation kind. Is that correct?
D
The place that I'm looking at within the spec is in the syntax. There's a syntax rule called OperationType, and OperationType is one of: the keyword query, the keyword mutation, or the keyword subscription. I'm explicitly wanting to change that to OperationKind, because otherwise I find it very confusing: we also talk about the operation type as the root type for a query, the root type for a mutation, the root type for a subscription, and the goal...
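The overloading in question, sketched (the root type names are hypothetical):

```graphql
# Today's grammar rule, roughly:
#   OperationType : one of  query  mutation  subscription
# The proposal renames that rule OperationKind, reserving "operation
# type" for the actual GraphQL type that execution begins with:
schema {
  query: QueryRoot        # the operation type for the query kind
  mutation: MutationRoot  # the operation type for the mutation kind
}
```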
F
Yeah, I will say, from a community GraphQL usage and education point of view, people have for years just been used to the idea that there are three operation types, and they know exactly what those types are. So as long as we're not changing that and we're not messing with years of history, that's awesome; we don't have to re-educate people that way.
D
The second naming convention change this is suggesting is how we talk about the things at the root. There are two things that we could refer to at the root. We could be referring to the type at the root, and right now the wording that we use in the spec is "root operation type definition". Part of what I'm trying to do is simplify that a bit: we're using the word "root" in a redundant way, and I want to just make it very clear that this is the operation type.
D
So you don't need to repeat the word "root", because that is definitionally known by the fact that it is the type of the operation. You can imagine why: it shortens the name up while staying very clear that an operation has a type definition.
D
The other thing that is talked about in the sense of a root is that we will often talk about "root fields", and this to me feels like a holdover from the earliest versions of GraphQL, which actually required that you had one root field; it came from there. There wasn't really a type at the top; there was just a collection of root fields that GraphQL made available, and you chose one. In the GraphQL we all know and love, that's not the case: there's a type, and there's a selection set.
D
That selection set applies against that type, and I'm looking for a way to replace the concept of a root field, as broadly as possible, with the idea of a root selection set, which is what we actually have, and which aligns more tightly to the top-level type.
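For illustration, the modern framing looks like this (field names hypothetical):

```graphql
# The root selection set applies against the query root type; "root
# fields" are simply whichever fields it happens to select.
query {
  viewer {
    name
  }
  totalCount
}
```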
B
So, I like the history there; thanks for sharing that with us. I don't think that "root field" is necessarily going to be confusing for the majority of people who don't know that history of GraphQL. It certainly makes sense to me: a root field is one of the fields in the root selection set.
B
I am concerned, as I wrote on the PR, about the similarity of the words "type" and "kind", because they effectively mean the same thing in the English language, which you've already covered. Actually, you've convinced me a lot more with what you were saying just now, which is great, but I would still, I think, push to differentiate these a little bit more if we can, having "root type" and "operation kind".
D
The thing that's difficult, part of what I'm trying to unpack, is that there's a handful of concepts here that we somewhat sloppily use interchangeably, and I'm trying to tease them apart. One of them is that an operation is defined by the particular kind, or variant, that it is, and it is also defined by the type that execution for that operation begins with. So it kind of makes sense that we talk about its root, but it feels like we're using two separate concepts in an overlapping way, and there's also no requirement that the type is used only at the root.
D
These are separate, but the thing that I would want to avoid is making it less clear that these relate to the operation, because I considered "root type definition" as an alternative, but that feels like it gets further away from what it is, which is: it is the type definition for this particular operation.
D
I don't want to eat all of our time on the topic, and I'm seeing people noodling on the idea. If I could ask for a follow-up here, it would be to please take a look at the actual change. I tried to highlight the top levels, but I think a lot of the devil is in the details here, in terms of whether the actual spec edits make things more or less clear. So please tear this apart and add a bunch of feedback within; I'd love to get this right.
D
We do not have Jonathan here, you're right. Ricky, the floor is yours to talk about npm access.
J
Just a quick note, if you're able: the way the GraphiQL pipeline is set up... if you recall, in December we discussed that we had to remove my access until things were fixed, and they were. So, if you can: my access is restored for all of the npm packages except for the GraphiQL organization. I think I just need to be re-added to a team there. It should just be username acao.
J
Yeah, and sorry to pester about that, but that's really the only thing I need. If you have extra time, there are also instructions for adding a DNS record for the VS Code Marketplace organization, because we have an official VS Code reference extension, so to speak, but a lot of users have been asking us to get the organization verified, because then their companies can sign off on it and stuff. So yeah, that's all.
D
I appreciate that. This is sitting on my plate, and I'm sorry for letting it sit; I will get the npm thing fixed super quickly. What exactly is necessary for the Marketplace verification piece?
J
You're just adding a TXT DNS record that allows us to verify that the graphql.org domain is matched with the organization account I created on the VS Code Marketplace. And there's a whole bunch of other issues we can talk about later, but the npm thing is the most important. So if you have any time at any point in the next few days or whatever, just that, and then we'll figure out the other stuff later.
F
Oh, go ahead... oh, I was just going to say: you'll probably want to have a nice catch-all feedback email address set up for the VS Code Marketplace as well, because you'll start getting a ton of feedback from there. I don't know if that's part of the other steps we're going to talk about later, but just a...
J
...heads up, yes. Yeah, there's a bunch of other things. I have a whole plan for that, and for better group access, and having bot users publish on npm instead of my account, and other things. There's a whole plan I put together; we'll get to that. But yeah, we're getting set up for GraphiQL 3 and Monaco; we've got some exciting things coming down.
D
Awesome, yay. You should have an invite now to the GraphiQL team again. While I was in there, I saw that Matt and Benji, you guys have pending invitations for some reason, in case you ever want that access; those are sitting there, probably in your inboxes.
D
All right, well, I clicked the resend button. Okay, cool; hopefully it'll work this time around, but that one should be good. So Ricky, you're up and running there on the npm thing.
D
Can you send an email to operations at graphql.org with all the details you need for the DNS change? That, I believe, includes Jory and me, because I don't have access to the DNS records; that's a Linux Foundation tool. But Jory does, and she can just open a ticket, and that should just happen.
D
Sweet, okay, on to the next one. Where did my agenda file go this time? This is what I get for trying to open up npm at the same time as running a meeting.
D
Default value validation status checks... saying "until next meeting" gave us plenty of time. Oh.
D
Yeah, well, shoot. Maybe he had to leave early; I wish I'd known that, we would have rearranged a bit. That's okay, we can catch him next time. We'll move on, then, to defer and stream deduplication. Rob, let us know: what's the latest?
K
Yeah, so we have been meeting every week working on this. Let me start sharing.
K
So, just to give an overview of where we're at and what issues we've been running into: I think we're relatively in agreement on what we have landed on.

K
Yeah, then from there we've been discussing: what do we do with fields that are both inside and outside of a defer, or in a defer and also in a parent or adjacent defer? We've been working through this.
K
We've also had these constraints for a while now, and we've been discussing solutions within these constraints, where for every defer there's one payload per path. We decided we wouldn't split data that's all in the same defer into multiple payloads, and the payload that you get has a path in it. That path corresponds to where the defer was, meaning that we're not sending data that's further down the tree with a deeper path, because that's not where the defer was. And we were only sending the payload once all the data in it had been resolved. In the context of deduplication, that might mean that the same data was in a previous payload and is then not sent in a later payload. So there are a couple of basic options there. With no deduplication, if you have a field inside a defer and outside of a defer, you get back the result twice; one trade-off with that is that if we later decide deduplication is great, it's hard to introduce, because clients will have come to expect that the data is in both places. Then there's kind of a half measure, partial deduplication, and then full deduplication, which is hard because of this example that we talked through a few times, where the result could depend on the order of fields and on which payload comes first, and you'd have to know at runtime how to de-duplicate fields.
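A sketch of the overlap being discussed (the schema is hypothetical; the payload notes restate the constraints described above):

```graphql
query BlogPost {
  post {
    author {
      name
    }
    ... @defer {
      author {
        name  # overlaps with the selection outside the defer
        bio
      }
    }
  }
}
# Without deduplication: the initial payload contains post.author.name,
# and the incremental payload at path ["post"] repeats name next to bio.
# With deduplication: the later payload would omit name, but the server
# must know which payload was delivered first to know what to omit.
```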
K
You couldn't just go and execute both of these; you couldn't remove fields from one defer without knowing that this other one is going to be sent first. So it can be done, but it means you have to hold on to more of the data in the executor, and you're kind of removing fields after the fact. Even just doing a straight-up pass, looking at this query and deciding which fields need to be resolved, could be kind of easy if you have a step ahead of execution where you rewrite your query, but that's not something that's in the spec, because the spec goes incrementally, recursively, down object by object.
K
So these are things that make deduplication hard. I don't think that either full deduplication or partial deduplication could be done in the spec algorithm without some kind of way of looking ahead at fields before they're executed. So Ivan and Benji both had a few proposals that changed some of these constraints: basically, what if, instead of giving you back all of the fields at the same time for this defer, you traverse through the tree, find where the overlaps are, and send those fields ahead of time? That means the client would be getting payloads that maybe aren't necessarily actionable by them, because they need the whole piece, but it would result in not sending the same data more than once.
K
Then there's also this interesting option where you have a query like this, and kind of our model right now is that once you get to a defer, you're branching execution and running these fields again, and maybe doing some deduplication by seeing what other fields are there as you're going down.
K
So if we are doing deduplication, I think we definitely need to make sure that we're not calling the same resolvers twice, both inside and outside of the defer; we have to guarantee that the same results are returned. But implementing that without changing the constraints that we have is hard. It seems like you would have to either do as Benji proposed, where as you're going down the tree you start sending data that's in the defer before the rest of the fields are completed, or have some kind of cache where you hold on to the results, so that the next time you get to a field you can reuse the cached value.
K
You could do that in JavaScript, maybe with a WeakMap, but that's not a construct that's usually available in other languages, I think, and it could be hard to do. So I kind of want to get everyone's opinion: what if we went with no deduplication at all? Is that even okay, given that you could get different results here?
D
So, on the caching thing: I think we already have a handful of other cases where, if you query the same field multiple times, it's reasonable to expect that you would get a different value every time. I think most GraphQL services don't work that way, but there's nothing about the spec that demands that they cannot, and either clients will take the most recent, the newest one, or you might have a mode where you're actually aliasing them.
D
You can imagine having a field that's called, say, "roll of a die", and every time you query the field you get a different number between 1 and 20. If you don't alias them, then maybe they get merged and you only get one answer, but if you alias them, you get three separate answers, and there's no caching there.
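A sketch of that example (the field name is hypothetical):

```graphql
# Merged: identical response keys are collected into one field entry,
# resolved once, so a single answer comes back.
query Merged {
  rollDie
  rollDie
}

# Aliased: three distinct response keys, three resolver calls, and
# potentially three different numbers; no caching is implied.
query Aliased {
  a: rollDie
  b: rollDie
  c: rollDie
}
```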
D
Does that mean that there's a firm boundary here that will not be merged as part of the CollectFields algorithm? If the CollectFields algorithm looks at the defer and says, "I'm not even going to look at what's in here; I'm going to set this aside and defer the entire thing opaquely," then that implies that you are going to have two separate fetches to that most-recent-comment field, which very well could give you a different answer, in which case the payload needs to match the actual execution behavior. Or we say that it is not opaque: field collection can occur across it, and by doing that you're saying that this query, and a rewrite of this query where the defer is wrapped purely around the bio field, are actually identical. You're not going to fetch the top-level field a second time, because of the way the CollectFields algorithm works: you're actually only fetching that most-recent-comment field once, and you are somewhat intelligently deciding which field resolver calls are actually getting deferred, and it is only the call to the bio field that is deferred. So in that case it's not actually possible, even if calling the field itself could give you multiple answers, because the CollectFields algorithm would demand that you've only called that resolver once. The middle scenario we should just not allow to be true.
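The two interpretations, sketched against the kind of query under discussion (field names follow the conversation; the exact on-screen query is not in the transcript):

```graphql
query {
  mostRecentComment {
    author {
      id
      name
    }
  }
  ... @defer {
    mostRecentComment {
      author {
        bio
      }
    }
  }
}
# Opaque defer: the deferred fragment re-resolves mostRecentComment and
# author, so the deferred payload could reflect a newer comment.
# Field collection across the defer: equivalent to wrapping @defer
# around bio alone; mostRecentComment and author each resolve once.
```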
K
I don't know if it's objectively better, but assuming we do have the case where the bio is the only thing that is refetched, if that's what this means, then I think there are other trade-offs that we need to make, which is either changing the result structure, or being okay with that kind of caching, or...
I
Yeah, I want to provide context, and one thing I want to suggest: we've been in this discussion for a long time and we've discussed a bunch of solutions, so maybe instead we formulate this question as, is it a requirement? I think it is a requirement that the same path should provide the same value, and if we recognize it as a requirement, we can then evaluate solutions against it.
I
First, let me provide a little bit of context. Previously we discussed this, and Lee, you said we have other examples where values are different. I would say those are completely different cases. For example, aliases: an alias influences the path. Yes, you can alias a field, but it will result in another path. For example, you can have a random number field, alias it as randomNumberOne and randomNumberTwo, and it's logical that you have two different values. It's also logical that a value changes between queries: if you send one query and get one value, then send another query and get a different value, that's okay. What's new here, and what we're discussing, is merging everything incremental into one result: not normalized, not in a store between queries, but squashing the whole initial response and the incremental patches into one result. Is it okay to have the same path resulting in a number of different values or not? Because it's one question how many copies we have, one or multiple; it's another question whether it's okay for the same path to hold multiple different values. I would say no, it's not okay, but I'm interested in hearing what other people think: is it an issue or not?
A
The problematic thing, I think, is that we closed down the path of rewriting the query, and with query rewriting we could solve this problem, I think, quite easily. I mean, that is essentially the same process as we do with defer on mutations, right: we rewrite it in a different way and then we can traverse and execute it. But that's a path which for now we didn't take, because we didn't want to change the execution algorithm too much.
B
Michael, I have thought about this a fair amount since we discussed it last, and I actually don't think we can solve this with query rewriting. Take the query that's on the screen at the moment: if we were to, for example, take this and wrap just the bio with the defer...
B
I see there being two ways of interpreting it, if we think only in terms of defer, or incremental delivery generally. One is taking the data that you would have if you ran the entire thing and delivering it to the client in patches; i.e., there would have been a final, concrete payload, and we're just sending that through in patches, with parts of it arriving earlier than they would have before.
B
In that interpretation, it's absolutely the case that we would expect every path in it to hold a consistent value, and you would never get two different values at the same path, as Ivan was saying, and I broadly agree with that. The other way of thinking about it, especially with stream...
B
...is that it might allow us to send more data through to the client than we might actually want to represent in memory at one time. So you could imagine doing pagination through a list of, I don't know, all of the products in your inventory; when you're paginating over that...
B
...in a web page, you don't want to render too much stuff. But imagine you want to do a query against that and then write it to, I don't know, a CSV file, or upload it to a Google spreadsheet or whatever. In that case, streaming the sub-parts seems reasonable; it gives you the ability to not have to do...
B
...pagination: just take the data as it comes and keep writing it very efficiently into this other source. Those are two quite different use cases, and I think we can actually support both of them, but we have to be very careful. So I just want to put them both in people's minds when they're thinking about this.
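A sketch of that second use case (the schema is hypothetical):

```graphql
# Streaming an export: the client writes each product to a CSV as it
# arrives, instead of paginating or holding the whole list in memory.
query ExportInventory {
  products @stream(initialCount: 0) {
    sku
    name
    price
  }
}
```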
D
Right. I think you're right, and that was my point: I want to be really careful about describing anything here as a firm requirement.
D
There are of course some things that are, but ultimately, we're talking about a change, so inevitably there are things that are going to change about the way GraphQL works, and we just need to make good trade-offs between them. For what it's worth, there's a bit of a saving grace in the simpler option of treating this defer as an entire block that you just don't even look at; we just say, all right, we're not going to think too much about deduplication.
D
Anything in the deferred blocks goes through a separate pass of CollectFields, so you've got your non-deferred collection and then your deferred collection.
D
That's the two collections, which I think is sort of your original pitch before we went into this thread. The saving grace to that one is: if you are using a local caching mechanism in your executor, which I know many services do, then it is up to you to decide, based on that mechanism, whether you would like field resolvers to present different values or not. Because the other case where this can bite us today is if you have two separate paths to the same field.
D
You can imagine, you know, this example is about blog posts with comments, so you've got one thing here that's the most recent comment, and you can imagine having a parallel field which is comments, all comments. And if you were to first query the most recent comment, and then, via a race condition...
D
...you know, you got a new comment, and the list of comments has something different in it, you'd look at that and be like, what the heck just happened here? You would imagine the server, under the hood, doing some amount of caching locally so you get consistent things that make sense. Or even something simpler, which is: all right, let's just assume that we solved the race condition part, and you got that done right.
D
So you only fetch against your database to the degree that you need to, fetching additional blocks of data keyed by an ID. That implies that, in the simplest possible case of executing the particular query that you've got listed up here...
D
...you re-execute the resolver for the most recent comment, which has a cache hit; it says, I already have that. Then you hit author, which is a cache hit, because you've already got that. Then you execute id and name, which are just simple field lookups, and then you hit bio, which says, I need to fetch bio; it's either also a simple field lookup, or it's the field that ended up being expensive, whose expensive cost you've deferred. In that scenario the server implementer essentially has some control over whether they get new information, and it's certainly still possible that it yields a different result, in which case we need to make sure that it is clear that that could be the case. And so, as you're merging in the incremental delivery of a payload...
D
...in which case, you know, the name field of the author could be different, or if you refetched the content, the content could have been different. And if that's going to be delivered, you need to make sure that the newly delivered value overwrites the previously delivered value. Does that seem... is that how you've been thinking about this, Rob?
B
I think the problem there, though, Lee, is that dealing with that from the server side is one thing, but it's the client receiving it that's the issue. In all the situations in current GraphQL, even where you've got, for example, the comment field at two different paths where it's got different values: sure, it represents the same comment by ID, but it's at two different paths.
B
The issue here is: if you've got something like Relay that deeply understands your GraphQL query, it's not necessarily a big problem, because they treat the fragments as their own little containers, and that can be self-consistent here and that can be self-consistent there. But for a more simple or straightforward GraphQL client that might want to see incremental delivery as effectively a patch onto an object, we can't just overwrite the field as you describe, because the selection sets are completely different. At some point...
B
...that path needs to have the data from both of the branching trees, if you're looking at it as a final object that you can navigate through, and those could be inconsistent. One of them might have three entries and another one four, and one's got the bio field and one doesn't, and suddenly it's all inconsistent, and how do you even patch it? So you'd have to allow for that.
G
...that's requested at different paths: a client that's attempting to, you know, use a normalized cache, let's say by an ID, could end up merging that same object with the same ID, possibly overwriting, as Lee suggested, with the new...
G
...and you can get a mixture of results. So it's a tricky problem, and certainly incremental delivery makes things worse, because as you stretch things out this will happen more often; the client will potentially be requesting more. But it's not necessarily a new problem. What gets really tricky, though, is, as we mentioned, when using deduplication; that gets very tricky, because you can try to get out of some of these problems by requesting the ID.
G
But even then, when that's deduplicated, you get into worse territory. The problems still exist without defer, but one thing to also mention is that you can't necessarily assume, when we have incremental delivery, that the object that is delivered last is the latest.
G
It could just be that some of the subfields failed to complete, so a defer that arrives later, even though it claims it's the most recent comment, doesn't necessarily have the most recent comment.
H
Yeah, so one of the things that I think the defer/stream group is really hitting its head against over and over, taking a broad view, is: we have graphql.js, and we're doing our experiments on top of graphql.js. We're able to prototype server behavior and server idioms really well, and we're able to answer: oh yes, this is possible; oh yes, doing this deduplication of deferred fragments is possible up to this point.
H
It becomes really hard at that point. But we don't have the equivalent spec or canonical client implementation. Right now the reference client implementation is just the JSON responses, right, and nobody is expecting product teams to be interacting directly with the incremental payload, at least as far as I understand. If we are expecting, like, oh, you write a product, you have this @defer... if we want to showcase actually writing a product using the actual deferred payloads, do we have any examples of that?
H
Do we have any real usage, or is it always currently wrapped in Apollo or Relay or some, what we're calling, more sophisticated client? Because I think that's really what we keep circling in all these discussions.
H
Well, if you're using Relay, it can work like this; if you're just using a dumb client... that doesn't exist, because I don't know that there is one, besides maybe TypeScript GraphQL code-gen. I don't have a clear view of how we would map the incremental responses of any of these proposals onto a quote-unquote dumb client.
G
Maybe one thing we could do is release, experimentally, a graphql-js v17 with a couple of, or even all, of the different behaviors, based on arguments. Right now v17 is like an alpha, but if we released a version with experimental directives, possibly even with different behaviors, we could let clients experiment.
H
Yeah, I'd almost also argue for having a spec client in graphql-js.
A
Really? Because I think the client is where the most variety is. Server implementations, if you look at them, may be schema-first, code-first, or annotation-based, but in essence they all work the same way. But when you look at the client world, different clients work completely differently. I'd argue there cannot be a spec client.
D
I'm just laughing at Ricky's comment, because I think GraphiQL actually is closest to that. The goal is that it's kind of the dumbest possible client, in that you send it queries and it shows you the responses, but in doing so it needs to be able to support the entire breadth of everything that GraphQL can do.
F
A slight difference, though, with GraphiQL, and it's the same for Studio and all these other types of tools that have an explorer-like interface: somebody using that tool is expecting to see a query, for example, that they're firing that has a deferred fragment in it. They're going to expect to see the data they've asked for come back, and then they're going to expect to see the other data come back later, in the shape that they've identified in their query. They're not going to expect to see all of the internal payload come across.
D
Well, that's a good point, and maybe we're getting into a bit of a tangent; I've just got a quick point to make. For those tools, I certainly hope that they evolve to show, like: here are the raw payloads, showing you what's coming in raw; here's a replay of the timeline, with a scrubber; a way to engage with the fact that it's incremental. Hopefully that happens. But my quick thought on the dumb client...
D
...obviously, I don't think any truly dumb clients in GraphQL exist, really, and if they do, they're probably not using incremental delivery. But we should make sure that the floor of sophistication for such a client to take advantage of this is quite low, and I think that's part of the reason why I've been excited to see the incremental payload structures here be fairly straightforward.
I
Say you take a query from somewhere and you just want to test it in GraphiQL, and as a response you get a response with a bunch of streams and defers, and it's hard to read; for the debugging experience, you just want to see the overall thing. So it's not a long stretch to imagine a sort of toggle in GraphiQL saying: please merge it into one response for me to inspect.
I
For me, it's a real feature for GraphiQL to have: instead of having, especially in the case of stream, a bunch of payloads, each with a couple of elements, and defers on the stream, and everything arriving out of order, you just press a button and everything merges into one final shape. And that's how I'd answer the question about having different values for the same path.
I
This feature becomes impossible if the same path can have different values; you cannot represent it. In this whole discussion we've talked about normalized stores with IDs. GraphiQL doesn't have a normalized store, and GraphiQL doesn't have a concept of IDs, but it's an easy feature to just merge all the incremental payloads with the initial response into one thing, a single data object, and this becomes impossible if you have different values under the same path. So I think my thought experiment with GraphiQL is useful, in that sense at least.
B
The final state at the end of a stream and defer should be equivalent to having not done a stream or defer at all; you just end up with a final browsable payload. And at the moment, some of the solutions that we're discussing give us exactly this issue, which is that effectively they aren't mergeable; we can't reconcile them.
D
Yeah, that seems fair. And the idea of framing this as incremental delivery, rather than long-term streaming, is correct in that, even before introducing these things, fields get executed in order. That order is somewhat well defined by the spec, but still, there is a sense of time passing as you go from executing the very first field and delivering a payload to the last.
D
It's just that the final result kind of comes to you in one big lump at the end, and the goal here should be to make sure that that data is coming back to you in multiple lumps. So it's not like we're introducing a new concept with the idea that things get executed over time. That was the point I was trying to make earlier about how you can already find yourself in... yes, to be fair, that was two separate paths as opposed to the same path.
D
But you can already find yourself in a situation where you get two separate pieces of information that a normalizing client would have to figure out how to reconcile, and so I want to make sure that we're not over-complicating. I think a particular failure mode for incremental delivery is that the execution algorithms become so sophisticated and complicated, in the spirit of avoiding these corner cases, that no one can really fully understand them, and as a result no one trusts them and then doesn't want to use them. I think we should be pretty cautious to avoid that outcome.
F
I think that's nudging us towards option two at the top of your screen there, Rob, if you want to scroll up. Yeah, it's definitely been listed for a while. But yeah, I know we're almost out of time.
F
There was some talk about what should be a requirement versus a constraint here, and maybe, if we just had two more minutes, we could nail down whether we all agree on the constraints or the requirements; at least that will help set the tone for the next defer/stream working group meeting. Should we just run through those constraints and see if we all agree?
K
Yeah. So there's the one that we're saying now, the newest one, which isn't listed here, which is that you have a final shape. Which, I get it; that means that you cannot call the same field executor more than once, right? You have to either cache that result or do your algorithm in a way that it only gets called once.
K
Then the other constraints are: one payload per path; each path corresponding to where the defer is; and the client getting all of the data, you know, all the data that is specified.
D
I think the one thing that is plainly true is that anything we choose here will be a principled balancing of which constraints we keep and break against the goals that we have. My only nudge here is a nudge in the direction of trying to keep things on the simpler side of the spectrum, but push back against me if I'm wrong in that nudge. Otherwise, the constraints and requirements as listed here all resonate with me.
K
Right, but I think the problem is that we can't have these three constraints plus this new one without either adding that caching for field resolution into the execution itself, which I feel is a trade-off: it could be a significant memory issue for the server.
I
The simplest mechanism is to have matching paths, but the simplest mechanism is not the only one. We could use labels, or, as Benji proposed, the pending approach that we need to do anyway. So I would say the goal here is not matching paths as such; what matters is that the client has a mechanism to understand that a fragment landed at a certain path, or a field, at some point.
D
I'm going to wrap it up, since we're over time, but I think Benji's comment is a great kind of summary, and it's probably worth copying and pasting into the thread: the first three constraints all feel like nice-to-haves and all feel relatively breakable without a ton of pain, and the requirements all feel essential. So we've got to focus on the requirements over the constraints as written today.
D
Right, folks, that wraps us for today. I know we didn't get to everything on the list, but we do have the follow-on meetings later in the month, so hopefully see most of you there. Awesome, thanks, everyone.