From YouTube: GraphQL Working Group - 2022-07-07
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
A: The topic list, it's a busy one, but it's not over time, at least on what's scheduled.
G: Pull requests merged; just let me know if you've still got one.
G: No problem at all. I sorted attendees by alphabetical order, in case you're going to add one there.
G: Everyone here joining means that we've all agreed to the spec membership agreement, participation guidelines, contribution guide, and code of conduct; links to all of those are at the top of the agenda file. As per usual, I will drop a link to the agenda just in case, so that we make sure everybody's got the same thing.
G: Looking at the same thing: a couple of late changes went in there, so give it a quick reload if you haven't. Okay, so as we always do, first let's do just a quick round-the-room of names to faces, so that we all know who we're all talking to, and then we'll do an overview of our agenda. I'll kick us off: my name is Lee, working for the GraphQL Foundation, here representing the meeting, and then I've sorted everybody else by alphabetical order. Hopefully that helps doing the intros. So, Adron?
G: If Adron is here... she's not here yet, so Alex, if you're here?
G: Oh, Alex is literally joining this second. Alex, welcome, my friend; we literally just started intros, and I called on you the instant you joined.
I: Sweet, great, yeah. Hey, I'm Alex, I'm at Pronto AI.

A: Hey, Cohen, from Indeed.
G: I think so. Awesome. Well, for folks that don't have your name on the attendee list, please send a pull request and we'll get you merged; it's a good way to keep track of who's attending all of our meetings.
G: Of course, Benjie always does us a great service by leading our note taking, but if others could volunteer to help out, especially since some of the things being discussed today mean he's going to be doing some talking; and talking and note taking at the same time is pretty hard to do. Anyone willing to volunteer to help out Benjie with note taking?
G: We did this last time: we took a quick break halfway through. I thought that was well needed, and it went well, so we're going to do that again. As a means to help keep ourselves on time, rather than breaking at the hour and trying to come back at some arbitrary moment, what we'll do is break about five minutes before the hour; that way, at the hour, we know what time we need to be back to get things started again.
G: Hopefully it'll help keep us on time, but we have a healthy set of things to talk about today. We're going to talk about directives on extended enum values, and meta fields, which sounds like it'll be a healthy discussion; we have some counter-arguments to that as well. Then formally allowing recursion on resolving abstract values, an update from the GraphQL-over-HTTP working group, which sounds great, and a discussion on some previous topics around GraphQL definitions, order of chapters, and algorithm updates; discussion on defer and stream, and what sounds like a recursive algorithm; and perhaps something around the structs RFCs from Benjie, which sounds like fun. Anything else, not listed here, that we ought to talk about today?
G: And this one was completed, it is, okay. So this was one where we had an approved, or accepted, RFC, and between the last meeting and this meeting I went through, did a bunch of editorial changes, and got that one merged, so that one's now in the draft spec. So I'm closing that issue; that was the only one that was marked as ready for review.
G: We do have a handful of open actions from previous meetings, though, some historical. I think we don't need to go through all of these live, but especially for the most recently opened ones from May:
G: Anyone have updates on any of those? And then, Benjie, did we have actions come out of the June meeting, do you know?
G: Rob, one of these was on you: the stream and defer spec wording being accurate about index formatting. Is that something that you feel good about? That was, I think, feedback from two sessions ago.
M: Yeah, I think my spec PR is slightly behind where we're at right now, and there are a few more open questions I hope that we could resolve today. After that, I'll get everything up to date and ready for review.
G: Okay, I'm going to mark this one as closed then, since the discussion will help us, you'll do that, and we don't need an action item for you to know that the spec text needs to eventually be accurate.
A: There was one action item, but I believe that that's been acted upon. It was for Roman to file the pull requests, which he has done, so I'll file it later but then just close it straight away.
G: That reminder, yeah, and that was good: we ended up pulling on that thread and having a handful of bigger editorial changes. So that was a good spark. I think some of those are still open, but that's okay, we're making progress.
G: The other one here was about pulling feedback for Client Controlled Nullability. Alex, do you feel like you got the feedback that you needed, relative to that action?
I: So I think that one was to open up a thread so that we could get community feedback as people start using it. I haven't done that yet; we don't have a release of Client Controlled Nullability, but I can do so. Okay if I do a quick update on where we're at? (Yeah, go for it.) All right, cool. Yeah, so there's a PR open for validation, and parsing and lexing has been merged into the main branch.
I: It's not out in a release yet, but it's there. I'm no longer at Yelp, so I'm not going to be working on this full time, but I'm going to see it through; I'm going to, you know, finish out Client Controlled Nullability. It just might be a tad slower, especially right now as I get ramped up at my new job. So I'm going to do discussions on error handling, and then I think there are still some concerns around null propagation. We were talking with Benjie and some of the folks from Meta about that, so I'll do those discussions sometime in the next couple of working group meetings.
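For reference, the Client Controlled Nullability RFC being discussed adds `!` (required) and `?` (optional) designators that clients write on fields in their operations; a minimal sketch, with illustrative field names:

```graphql
query GetBusiness {
  business(id: "4") {
    # "!": the client requires this field to be non-null,
    # regardless of the schema's nullability
    name!
    # "?": the client explicitly tolerates null here
    rating?
  }
}
```

The graphql-js PR mentioned above covers the parsing, lexing, and validation of this syntax.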
G: I'll make a note on that item itself, so that when we come back to it eventually I'll remember what we just talked about. Thank you for the update. Hopefully your next thing is exciting, by the way. (Yeah.)
H: Yes, so, directives on extended enum values. I'll try to be quick, but just a bit of context: I work on Apollo Kotlin; it's a library that generates code. On the left you see the declaration of a very simple enum; on the right, that's the equivalent code that the library generates. No issue here, but of course, as soon as you use certain reserved keywords, you may generate code that doesn't compile anymore.
H: That's the syntax to escape reserved words, and so wouldn't it be nice if the user instead could choose the name to use in the generated code? We try as much as possible to actually use GraphQL as a way to configure things in the library.
H: And so that's what we came up with: you extend your enum, and you use a directive called targetName to specify the name you want to use in the generated code. So here, for instance, you get intValue instead of the escaped int, an enum with a capital E; that also fixes the problem.
H: But the issue is that that's not actually valid with the current version of the spec, or, at least, not the spec, but the parsers.
H: The spec will have the impression that you are trying to redefine a value that already exists in the original enum, whereas that's not what you're trying to do here; you're trying to add a directive on an existing value. And that's basically it: the proposal here is to actually make the spec evolve to allow this syntax, or maybe to allow this but with a different syntax.
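A minimal sketch of the pattern being described, assuming the `@targetName` directive from Apollo Kotlin (type and value names are illustrative):

```graphql
# Server-owned schema: "INT" collides with a reserved word in generated code.
enum FieldType {
  STRING
  INT
}

# Client-side extension file: today, parsers reject this, because
# re-listing INT reads as redefining an existing value
# rather than annotating it.
extend enum FieldType {
  INT @targetName(name: "IntValue")
}
```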
H
If,
if
there's
a
any
any
potential
problem
or
issue
that
that
I
haven't
foreseen,
that's
why
I'm
here
to
to
get
together
some
feedback
on
this?
H
Can
we
go
ahead
with
this,
or
is
there
any
any
potential
issue?
I
think
at
the
very
least,
we
should
keep
the
current
rule
to
not
redefine
an
existing
value,
because
if,
if
you're
trying
to
do
that,
that's
probably
that's
probably
an
issue,
and
it's
very
good
to
fail
fast.
H
Also,
if
we
go
ahead,
I
think
we
should
probably
this
allow
like
adding
a
directive-
that's
already
there
and
the
original
value,
unless
of
course,
it's
repeatable,
but
yeah
other
than
that.
That's
that's!
Basically,
it
any
feedback
is
very
welcome
on
this
and
and
yeah.
That's
it.
I
guess
thank
you.
B: I mean, we are using that also, like in schema stitching, where you can add directives to fields that exist on an existing schema. So I didn't mind: we have our own parser, so we could easily, yeah, do it.
L: This fits into, like, a deeper issue with schema extensions, where what you're really trying to express here is a schema union, right? You're trying to say we can recursively merge the schemas down, and our current schema extensions basically never merge. So, like, I can't add an argument on an existing field either.
L: So that's the same problem, and it's unclear whether, for schema extensions, we could just change the extension syntax to be: you can put an extend in front of any schema object, and it just means merge, right? That's pretty close to what this gets to, at least for enums.
B: I mean, the problem is, if we go into things like argument merging, then you have to think about how that works; I mean, we'd have to specify algorithms for that. It's getting, right, more complicated.
K: And, unlike the object types, enum extensions are normally backwards-incompatible. So when you merge multiple enums into one enum, you have the potential to break the clients who are already using that enum, because new extra values will just break clients that weren't notified of them.
G: Yeah, two thoughts came to mind as I was hearing this. One was what Matt expressed about schema extensions being essentially an intentionally crippled version of schema merging: it's like we took out a lot of the problems that happen when you try to merge schemas, by limiting what you can do with extensions. This is one of the things that I think was not necessarily intentional, but was a side effect of that limitation. And the other:
G: I think I saw in the chat from Benjie that you would see a similar issue: if we were to allow this on enums, then I would expect that behavior to be available in most other places, but I think things get a lot more complicated. Like, imagine if you wanted to apply a directive to an argument of a field.
G: Does that mean that you must also list all of the other arguments that field has? And if you don't, does that imply removal of them, or an inability to merge? Or what happens if the underlying type were to evolve on the schema in some way, such that what you were trying to say was, you know, here's the path to the thing I want to add a directive to, but an evolution of the schema caused something orthogonal to that to not be mergeable?
A: I wonder if schema coordinates are useful here somehow. I'm not quite sure exactly how, but it feels like precisely locating a thing in the schema, to then modify it in some way, might be able to use that.
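For context, the schema coordinates RFC defines a compact string form for pointing at exactly one schema element, which is the kind of precise locator being suggested here. A few examples of the syntax (type and member names are illustrative):

```graphql
# Schema coordinate syntax, per the schema coordinates RFC:
#
#   Business              the type itself
#   Business.name         a field on the type
#   Business.owner(id:)   an argument of a field
#   Size.SMALL            an enum value
#   @targetName           a directive
#   @targetName(name:)    a directive argument
```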
E: As was said in the chat, it could be more like a patch; yeah, as you just said, with specific coordinates. Then the thing would be a bit more targeted: you wouldn't need to re-specify all the context that we just mentioned here.
N: You know, one way to get around the complexities of merging is to just say that your definition has to be identical. So, like, if this was to work for, say, fields, and you have different arguments, say, you just have to define the whole field verbatim, plus your directives. For enums that's obvious, because there are no fields or arguments or anything; it's just the same.
N: You know, it's the same thing. But in any case, you still run into another issue, where it then makes it impossible to remove that thing from the original schema. Like, you add an enum, you want to add a directive to it in this extension, and then they're kind of locked to each other, in a sense. We've run into that.
N: When we do federation, we have types that are in different services, and sometimes they're in, you know, different ownership, different code bases, and it kind of then gets locked in place. So that might be a consideration for doing this with the names.
B: But, I mean, you can implement that today: you could use extensions and build a custom merge workflow around that. So I don't know if it's worth it, because that could be quite a complex area that we open up here, and then we define something that, in the end, will not meet all expectations, right? There will always be this other use case for the schema merging.
G: I think it's worthwhile to back up a second and answer, like, what's the problem we're trying to solve here? Because I think, and, well, you suggested a really good motivation for this, but that motivation is not really necessarily about extending schemas.
H: Well, I mean, in our case, the other option would be to just not use GraphQL extensions at all. Like, we always have configuration files that are specific to the tool, but usually we prefer to use extensions to the schema. But other than that...
L: It's all on the client, so it is, like, fine, whatever. But it's annoying that, like, schema extensions are sort of supposed to give you this power, and in this one case it doesn't; but that's also because the spec can't anticipate all the power everybody needs, for all compositions of said power.
G: Yeah, Matt, you mentioned in the chat, where you and Lauren are going back and forth, what I think is the interesting, zoomed-out version of this, which is, you know, schema diffing, merging, intersecting. There's actually a pretty complicated domain there, with different operations that yield different outcomes, and I think part of why extensions feel limited is that they intentionally narrow down that space.
G
But
you
know
that's
not
to
say
that
like
matt,
what
you
just
described
like
yeah,
we
just
write
a
type
and
do
some
custom
merge.
That's
not
necessarily
not
observant
of
the
spec.
The
spec
just
doesn't
have
much
of
an
opinion
there.
The
the
scheme
is
there
to
describe
the
screaming
language.
Is
there
to
describe
what
a
schema
looks
like
and
the
extension
tools
are
there
because
you
know
people
ask
for
them.
I
think
looking
back,
we
might
consider
those
you
know
less
successful
than
we
had
hoped.
G: But another angle on this, Benoit, that may be interesting to explore, is to kind of follow Matt's suggestion and see where that takes us: rather than leaning on the extension syntax and its limitations, just go to the full schema syntax with a custom merge function that's very specific to the needs that the Android client has.
G: It would be interesting if that's something we can't address, or, worse, if we make a decision that actually turns those into foot-guns; and it may be the case that you just want the flexibility.
L: There's another thing that this would be very useful for, which is: we do something called schema syncing, where we take the schema file that's on our server, put it over to the client, and then the client can start using it. It'd be very, very useful to us if we could say: here's the base.
N: Yeah, it's maybe helpful to distinguish here that what this use case calls for is merging in purely metadata on the graph. We're not actually trying to, you know, change anything; from the client standpoint, nothing's changed. And applying metadata onto the graph is a problem that we've had.
N: Also, where, like, some things you do with directives, you do directly, but often there's maybe different ownership of that metadata. And so we've built out a whole system just to hold metadata on the graph, using, you know, basically, like, schema coordinates, which I think Benjie mentioned. Maybe there is utility in a more generalized, standardized way to do that too. So it's an easier problem than having to do a true...
N: ...you know, composition, because you're just talking about metadata, and you're just saying: I just want to apply this to these types or these enums. So maybe Benjie's onto something with the schema coordinates approach.
G: I want to be sensitive to time. Benoit, do you feel like you've got good feedback and info to rev on this and make some progress? (Yeah.)
G: Awesome, yeah, really interesting topic. Okay, we'll use the rest of the time, until five till the hour, to talk about meta fields. We'll start with Ivan, and then we'll open up to discussion, which will include Roman's number-eight topic. So first, Ivan, I'll let you kick us off.
J: Yeah, so, as I was saying while muted (and I hadn't enabled the video; give me a second): the situation right now is that we have two approaches, both gaining traction.
J
First
is
exposing
sdl,
it's
used
extensively
in
a
power,
and
but
it's
not
standardized,
so
it's
really
exposed
through
pro
its
own
the
power
field.
There
is
another
proposal
that
gaining
traction
is
exposing
directly
from
introspection.
J: I will talk more about that, and basically I want to propose a fourth one. I want to note that the last time I shared slides and had it on the agenda, it was like a version one of what I'm proposing; after that...
J: I had internal discussions at Apollo and also spoke with Benjie. And, by the way, if you want to know more about the different approaches, and a more in-depth comparison of them, Benjie did a great talk at a GraphQL conference on this topic; I think it compared like five or seven different approaches.
J: I would skip SDL here, because if we go into SDL versus introspection... I think it's an important topic at some point, and maybe we actually need to discuss it, especially since Roman raised this question, I think, last time. But to keep it short, let's focus on introspection-based solutions. Currently, the dominant approach for exposing additional metadata through introspection is exposing directives, and the problem here, I think, is actually the main problem.
J: At least for me: directives were designed as a kitchen sink for new syntax. They can represent anything you want; semantically, it's just syntax without a meaning, syntax plus validation. And in the working group, and outside of the working group, people use them for a lot of things, including internal things, like schema transformation. For example, here on the slides, you just want to implement fields without repetition.
J: People generate resolvers with directives, for example. Or take Neo4j: if you're using Neo4j and want to expose this graph, you can basically attach a Neo4j query to a field. So not every directive makes sense.
J: It doesn't make sense to expose all the directives, especially if it's a transformation and not metadata itself. Directives are repeatable and chainable; we explicitly added that a few years ago, and it makes sense if a directive is a transformation: if you apply them, you need to apply the transformations in order, and you need the ability to repeat some transformation multiple times. But for metadata...
J: It totally makes sense for transformations and other stuff, but without the ability to say "every field should have that", we cannot build a strong contract. You cannot depend on something existing; basically, everything becomes nullable, and from other type systems we know that if everything is always nullable, it's a path to disaster.
J: Basically, we need to be sure that something is present. I showed how this proposal for applied directives would expose them, and I think it's the best possible way to represent directives inside introspection, with all the restrictions that I showed before. But it creates a bunch of problems, and I listed them.
J
It's
not
a.
Basically,
it's
like
introspection
itself
is
first
class
types
and
transmission
queries
first
class
inquiry:
you
can
sub
select,
you
can
duplicate
stuff,
you
can
do
a
bunch
of
stuff,
you
can
do
with
graphql
here
we're
creating
like
some
untyped
and
thing
and
we
lose
like
a
bunch
of
benefits
of
graphql
itself
yeah.
J: I tried to do that, but it's problematic, and directives are optimized for the people who write SDL, not the people who consume them, both in terms of description and overall usability.
J
So
I
wanted
to
to
think
how
I
can
solve
it
and
define
requirements.
So
requirements
is
based
on
mainly
it's
basically
like
documenting
what
we
have
for
introspection
right
now,
so
you
should
be
able
to
dump
entire
introspection.
We
have
it
right
now
it
should
be.
J: Metadata should be a first-class citizen of the type system: sub-selection, deprecation, description, and whatever. And it's important for schema evolution to track who uses what, and to deduplicate stuff, and to know where to remove something if nobody uses it. I think that's very important, especially if we speak about evolving APIs. So what I was thinking of: what if we took inspiration from programming languages? Recently in our discussions we've used programming-language prior art.
J
So
what
we
think
about
directives
as
decorators,
they
like
see
more
in
a
sense
and
what
strike
me
in
gc59
during
quite
directive
discussion.
It's
actually
like
split
it
up
and
decorators,
and
decorator
metadata
is
two
different
things
and,
as
far
as
I
can
see
in
programming
languages,
decorators
are
not
preserved,
they're,
just
a
function
that
do
transformation,
including
setting
metadata
so
setting
up
metadata,
is
like
one
of
type
of
transformation
you
can
do
and
what,
if
we
decouple
directives
from
metadata-
and
we
actually
have
it
right
now.
J: If you think about the existing directives exposed from introspection, we're already doing that: we have @deprecated, @specifiedBy, and @oneOf (one of the proposals). We represent them as fields inside introspection, and we represent them as directives in SDL. And it creates pressure: last time, we discussed an experimental directive, basically to do something like that; there is pressure to add that into the spec.
J
That
is
like
pass
as
a
as
a
field
in
introspection
and
print
it
as
directive
back,
I
like
what
workflow
and
I
think
we
need
to
generalize
it.
There
is
why
I
describe
like
a
difference
between
directive
and
metadata
directive,
as
we
have
right
now
and
metadata
is
what
I'm
trying
to
achieve
with
this
proposal.
J: So the end result: actually, I want something that looks like this. You have metadata attached to every object inside introspection (field, argument, enum value, everything we expose), and you can query it as a normal GraphQL query; you can sub-select, you can do other things.
J: So I had this idea a year ago, and the key problem was how to represent it in SDL, and this is very important: people want to dump stuff to SDL. That was an issue with my previous proposal, if anybody looked at it. Here's my current proposal; I discussed it inside the company, and I want to show it today. Basically, the idea is to create kind of a new type of directive, except it's not a directive, it's metadata, meaning no transformation, no changes to validation, nothing!
J: So basically, metadata in SDL is representable as you see here, on the right side, but in introspection it's represented as just a field, as I showed before. So nothing new, and in this case we can actually say "required": we can allow people to require it. Without "required", it's like a nullable field; with "required", it's non-nullable.
J: If, instead of "required", you specify "repeatable", you have a list output. Yeah, I forget the names; I changed them at the last minute, like version one, version two, changing my proposal, so some slides are off.
J: You can specify it in a code-first approach, just as values, and we apply the input coercion algorithm for custom arguments; we already have it. And, moreover, it's compatible, since I propose a new symbol. By the way, all credit for the plus sign goes to Benjie; Benjie suggested using a plus sign. Plus basically means it's static: it doesn't influence anything, it's just exposed for introspection. It doesn't change validation, nothing; just exposed for introspection. And, more importantly, it creates its own namespace.
J: So it doesn't clash with directives. So, yes, transformations stay: people can just define a directive alongside a new metadata thing. So it's basically a first-class citizen with everything, including the ability to detect breaking changes: if you make something nullable, if something was required and then became non-required, and so on.
J: So in a way it's similar to what Benjie proposed, because we discussed it and then split in opinion, I think. And, to save Benjie some time, I'll explain the difference: Benjie proposed to use the same type as both input and output, and to create new types.
B: Ivan, just one question: why did you decide against having this as an extension to a directive, where you can say the directive has this output type?
J: Yeah, so, in the original proposal... first, that makes sense only if the directive is exposed, only if it's metadata, but people use directives for a bunch of things. We could create a subclass of directives, but I think it's confusing, and it's especially confusing in the original issue.
J
One
of
the
points
that
people
rest
up
is
ability
to
distinguish
what
shared
and
what
not
what
doesn't
share.
So
people
propose
different
symbol
in
the
first
place.
To
address
that,
so
I
thought
instead
of
creating
like
directives
with
unrestricted
syntax
with
validation
and
instead
of
like
creating
some
subset
of
directives
that
are
more
restricted.
J: Metadata is accessible by the client, yeah. Sorry, I missed that part; I tried to squeeze things in. So, instead of discussing the whole RFC and why I need that, I focused on discussing the mechanism. Basically, the original issue provides the description of why this mechanism is needed.
F: Yeah, for example, if you think about custom mappings for clients: I don't see why this shouldn't just be a field that returns the thing and the formatted thing. I don't see why you would need to expose this as metadata, to be honest, because then the client would have to fetch all the metadata, like a huge list of emoji mappings or whatever. So to me, it seems simpler to just have the field export the value that you want to use on the front-end instead. But yeah.
G: Going to call it, just to make sure we have time for a break. We're not going to stop discussion on this, though; we'll come back after the break and keep going on it, since I know we want to have an additional five or ten minutes. So take a quick break, stretch your legs, use the bathroom; we'll come back here on the hour and pick up where we left off.
G: Awesome, add a link to the agenda file too, if you can.
G: Everyone back? Okay, where were we? Ivan, you just led us through your presentation deck. Roman, I know you wanted to reserve specifically five minutes to dig into sort of a counter-proposal, so maybe we can hand it to you to talk through that.
C: I have the floor, thank you. Can you hear me, guys? Yeah, so basically, it's not a counter-proposal; it's just counter-arguments to some of the arguments for meta fields and meta types, and against the applied-directives approach.
C: So my biggest problem, actually my biggest concern, with all this is not the features of the meta fields or meta classes by themselves, but the other concern, the background of this whole issue.
C: So if we go with something like this, in the future what we'll have is two metadata facilities, right? Because directives don't go anywhere. So users, first-time readers, when reading the spec, will immediately recognize this: oh, they have directives, which is metadata and recognized as a metadata facility, and also this meta thing.
C: I think directives are immediately associated with the annotation facilities in other languages; for Java, annotations are literally identical in syntax, not only in semantics, to directives, with the at-sign and so on.
C: So, basically, and when you look at, for example, @deprecated or @include and @skip, they're used for the same purposes. So from experience, 20 years in .NET, or a little less in Java or another language, this facility is there, and it's very successful.
C: People do all kinds of things in .NET, from simple labels to whole aspect-oriented programming, you know, doing transactions, calling some tracing or logging methods and all that stuff, and everything went very well.
C: So, on the one hand, the concept is already known and familiar to users, and it's, I would say, even for new users, very straightforward and simple. Compared to this, meta types seem a little bit more complicated; for example, I couldn't quite make sense of the screen with the example definition in the previous presentation.
C
So
basically,
my
my
concern
is
this.
So
why
didn't
we
go
with
directives
we
had
to
have
to
at
the
end,
explain
to
readers?
Why
didn't
you
go
with
the
directives
since
it
is?
It
was
very
good
for
other
platforms.
C
With
meta
types,
it
seems
to
me
some
of
the
problems
we
don't
solve.
We
just
move
them
like,
for
example,
input
and
so
on,
sorting,
organizing
them
and
so
on,
and
one
of
the
one
of
the
the
things
I
see
with
the
when
you
define
extension
and
meta
type
as
extension
of
standard
meta
type,
then
you
have
plain
list
and
imagine
the
situation
of
like
netflix
what
they
have
like
hundred
teams.
C: So each team, instead of defining their own directive, will go and add a field to this meta-field type. I see a huge contention there. That's just one of the comments; now on to arguing against applied directives.
C: Now the word "directive" comes up, you know, and whether it's used as what it should be, since a directive is a kind of call to action. I think the word is irrelevant; it's a historical accident which platform picked which word for the facility. As for the problem of input types as output types, and untyped values: this is the same with meta types. The suggested solution is to return a string, which is the same as here.
C
So
we
have
to
resolve
it
in
any
way
and
first
of
all,
structs
might
help
and
also,
I
think
we
should
look
at
untyped
things
for
any
end
map,
as
you
often
say,
show
me
the
real
world
example.
C
So
here
is
the
real
world
example
in
our
own
kind
of
spec
and
believe
me,
this
happens
in
real
world
applications
all
the
time-
and
I
know
there
are
attempts
in
specific
implementations
to
add
at
least
map
scalar.
C
Then
there
is
example
in
benji's
presentation.
The
problem.
If
there
are
too
many
directives,
he
gives
here's
the
example
labels
from
multiple
languages.
It's
not
the
case.
Usually
it's
in
based
on.net.
The
number
of
directives
is
very
limited.
If
it
is
it's
never
100,
if
you
have
a
reasonable
developer,
reasonably
competent
now
I
want
to
address
very
quickly.
C
Another counter-argument against directives is the fact that people write directives in real-world schemas that contain secrets. I think that's completely wrong, and the wrong way to address it. Here is the logic: an SDL file is a development artifact, like source code, like everything else that should live in a git repo. Can we have secrets in a git repo?
C
The answer is a definite no, and we know that. That's not some special security practice, it's absolutely accepted truth: no secrets in git repos. Git repos leak, get hacked, and so on; too many people get access. Secrets should live somewhere else — in secret vaults and so on.
C
C
So this should not be a concern for us, and in the end, even if a developer doesn't want to remove them, filtering them out of introspection is absolutely trivial in cost. So my conclusion and suggestion: we should explore applied directives to the very end, maybe in parallel with the meta fields or whatever else.
C
For the reasons I just said: it is very similar to facilities on other platforms, so for most users coming here it will be the association and the expected way to go, and very confusing if we introduce something else. And finally, there is still the hanging problem of the role of SDL, because we have a kind of catch-22, chicken-and-egg problem here: formally, where are the directives coming from, which should be shown by introspection?
C
Introspection, because it looks like, by definition, introspection is the source of truth — SDL can only be generated from introspection, right? Then where can all the directives come from at all? This relates to one of my other topics, the role of SDL, which we will need to clear up and which actually shows up everywhere, including this particular topic. That's all for me. Thank you.
G
Rather than opening it up — thank you for that, Roman. By the way, I really agree with a lot of the ideas and problem statements from both of those two presentations. Rather than opening it up for discussion, I want to be respectful of the rest of our agenda time.
G
My advice for moving this forward: there are more problems being uncovered than there is solution space being explored. I think we ought to have two things. One is a discussion thread where we can start teasing through all of these things async. Ivan, if I can tap you to open that up in the discussions board for the WG, you and Roman can share your presentations there, and we can go from there and start to explore.
G
The second thing that might be helpful is to have a breakout meeting, similar to what we've been doing for composite schemas, just so we can have much more time and dedicated space to really dig into this. I think there's a lot here, and we just don't have enough time in these overall meetings to get as much into the weeds as we need to. So that would be my advice for moving forward, but thank you both for the presentations.
G
J
I have a question — yeah, I'm concerned. Let's focus. I agree it's a bigger topic, but since there is, like —
J
I don't want to create one huge meta discussion about everything. So is it safe to say we have two separate issues that are talking to each other now? One is the introspection-versus-SDL thing, and the second is extending introspection with additional metadata. Personally, I'm interested in the second one.
G
They are connected, to Roman's point, to a degree — but I won't insist.
C
I won't insist on bringing SDL in here. Oh yes, it's different — so let's focus simply on introspection, yeah.
G
C
I have something already regarding SDL.
J
C
J
And I will put it into a list of requirements. The basic question here: can we — especially since we already have two or three proposals — applied directives, what I'm proposing, and something more toward what Benjie proposed? I think our proposals differ mostly in the way typing is handled.
G
I think that's smart, but I'll let you take that offline. I think that's a smart way to go about it, Ivan, and hopefully we'll have some progress there to talk about next time. But I want to be respectful of time; I think we're probably going to have to bump some agenda topics because of that one.
J
J
B
But it's totally different — though you could solve a couple of things in introspection with it, so it's not —
G
In that case, that's a big topic. Let's see if we can get through some of the ones that are scoped for a little bit less time first. All right, Yaacov, I'm going to hand it to you to talk about resolving abstract types.
O
Okay — can you hear me?
G
O
O
Okay, so basically we have a step within our execution algorithm in which we need to resolve an abstract type to a concrete type, and at some point — I can't remember how long ago it was now — we allowed interfaces to implement other interfaces.
O
It might be more than a year ago — could be. So this all started from — I forget exactly where, but we can find the links — a request on the graphql-js reference implementation, which currently lets interfaces provide a method to do this resolving; the spec just says each system should provide some sort of internal method.
O
The request was to allow interfaces that implement other interfaces to set up a chain, so that the method that resolves the abstract type could resolve it to another, intermediate abstract type, and then that abstract type's resolver would be called in turn. So the question — and that's why I linked to the spec text — becomes whether we think that needs to be a specific change within the specification.
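To make the chained resolution concrete, here is a minimal sketch of the behavior being requested. The type objects and the `resolveType`/`isObjectType` fields below are simplified stand-ins for illustration, not the actual graphql-js API, and the Pet → Mammal → Cat chain is the hypothetical example used later in this discussion.

```javascript
// Sketch of the proposed chained abstract-type resolution.
// Types here are plain objects standing in for graphql-js type instances;
// `resolveType` and `isObjectType` are simplified stand-ins, not the real API.
function resolveToConcreteType(abstractType, value) {
  let current = abstractType;
  while (!current.isObjectType) {
    const next = current.resolveType(value);
    // Guard discussed in the meeting: each step must resolve to a type that
    // implements the current interface, so cycles are impossible.
    if (!next || !next.interfaces.includes(current)) {
      throw new Error(`Cannot resolve ${current.name} for the given value`);
    }
    current = next;
  }
  return current;
}

// Hypothetical Pet -> Mammal -> Cat chain.
const Pet = { name: 'Pet', isObjectType: false, resolveType: () => Mammal };
const Mammal = { name: 'Mammal', isObjectType: false, interfaces: [], resolveType: () => Cat };
const Cat = { name: 'Cat', isObjectType: true, interfaces: [] };
Mammal.interfaces.push(Pet);
Cat.interfaces.push(Mammal, Pet);

console.log(resolveToConcreteType(Pet, {}).name); // "Cat"
```

The observable result is unchanged: an outside caller still sees the outermost interface resolve to a concrete type.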
O
Within our specification, that is. If you take a look at the attached pull request, it basically adds a step to the algorithm — with some very helpful edits by Benjie — so that instead of the algorithm saying we just directly call that internal method, we say that that internal —
O
We explicitly say that the internal method could resolve to another abstract type, and then we recursively call it. That's basically the question. In terms of implementing the functionality within graphql-js: yes, we can implement it without any change in the spec.
O
I'm not too familiar with how this is done in other languages. My gut feeling was that this feature wouldn't require a change in the spec text, but there were other opinions that maybe it would. So that's about it — I'll throw it out to the group, and please ask questions if I haven't explained this well enough. But it's a much narrower issue than the ones we've been discussing so far.
G
G
Your question: why does this require a change to the spec? Because if you were to run this internally as a recursive algorithm — interface A resolves to interface B, which resolves to interface C, which resolves to concrete type D — an outside observer still sees interface A resolve to concrete type D, which is the required algorithm. So what's actually the observed change here?
O
So I don't think there is an observed change. The question is sort of an additional feature: do we want it? By adding it to the spec, we'd say that every implementation has to allow it, and I'm not sure that makes sense. I just don't know, because I don't really do anything outside of TypeScript and JavaScript.
O
O
And we can certainly get the feedback offline. I don't want to take too much time away from all the other weighty things, but it is currently —
B
O
Right — so when we're executing, we have a step in which we need to resolve that abstract type. So if we have a chain in which we're resolving a couple of different levels, we want to be able to follow that chain. We can currently do it with a few code changes in the reference implementation.
O
A
So I guess, if I understand correctly, what you're saying is: you would like a feature change to graphql-js that allows this recursive behavior, and the graphql-js maintainers aren't in full agreement that it can just be added without a spec change, because it would no longer directly reflect what the spec represents. Is that accurate?
O
J
Yeah — here's my feedback on this. The spec text is very exact: it says an abstract type should have a function to resolve it to a concrete type, and the reference implementation is done like that. I don't think it's language-dependent in any sense, because the spec is not language-dependent; it just says an abstract type should have a function to resolve to a concrete type.
J
Now in the reference implementation we'd basically be saying an abstract type can resolve not to a concrete type but to another abstract type, and that abstract type will have a function to resolve in turn to some abstract type, and at the end of the chain you should have a concrete type. I don't see anything language-dependent in here. So the question for me is: either we want it in general, and then it should go into graphql-js, or not.
J
I don't see a difference between JavaScript and C# in that sense — both implementations have a function to resolve the type. The question is whether this function can return only a concrete type, or can also return another abstract type and create a chain.
B
J
Yeah, that's right. So the question here is whether the jump from an abstract type — from a field's type — to a concrete type is one step, or whether you can do a hierarchy in the type system. Right now the spec says every abstract type should have a way to resolve to a concrete type. The question is: what if you have a Pet, a Mammal, and Cat and Dog — it's our favorite example with animals —
J
J
People are asking for it, probably for modularity, but it adds complexity. If we add it to the reference implementation, we're basically saying everybody else should replicate it — and if everybody else should replicate it, that's the same as putting it in the spec.
B
I don't see the need for it. Isn't it a utility function that you want to have? Because why should we make it multiple steps? At the moment you can optimize this resolve — resolve straight to the concrete type, whatever — but then you would have to do multiple steps.
G
Yeah, I have the same thought. Any time we add recursion, we have the possibility of infinite or long-lived recursion; it becomes a reliability issue, or even worse, an attack vector. Is an alternative here to give people utilities to make this possible? The spec language is that there's a general function that does this resolution, which calls an internal method, and graphql-js represents that internal method as a method provided by each type.
G
J
My question here, as a maintainer of graphql-js — as we've discussed in graphql-js as a whole group — is that I feel everything we merge encourages other people to do the same. At this point, I don't think it's strictly necessary, so providing some utility functions that don't create recursion in execution — if it can be solved that way, I'm definitely for it. But we have an issue.
J
J
J
So the question is whether this complexity is worth adding or not. Maybe it is, maybe it's not — but I want to ask the group whether it's worth it, because I'm not sure, and I don't have objective criteria to say no. On Yaacov's PR, technically we have some disagreement about the name of the function and other refactoring, which we can sort out internally, but other than that —
O
On the recursion issue: the spec text that I have could probably be improved, but the current version of the PR doesn't allow infinite recursion, because it checks that the interface returned is an appropriate member of the chain, and since we can't have cycles within the interfaces, we won't run into that problem.
O
But I do agree that we're basically implementing a utility that could be solved in userland. So the question is: should we go forward? I don't think we have any concerns, though, about levels of recursion there. Okay.
G
G
I don't think we have to come to a conclusion right now, and maybe it's worthwhile to use this as a flag to pull feedback back into your PR and the spec text — specifically limiting it so that no type can just delegate to any other arbitrary type to recurse; it has to be an implemented interface. That limits the infinite-recursion threat, it also reduces the scope, and hopefully it gives us tools to help make the developer experience quite reasonable.
G
O
A
Folks, I'm going to keep this real short because we're running short on time. Oh great, that link's broken — that's wonderful. But effectively, the GraphQL-over-HTTP working group: we have now resurrected it. It kind of went into hiatus for a period of time, but it feels like it's time to bring it out, dust it off, and get it ready to ship — potentially quite soon, preferably with the next version of the GraphQL spec itself.
A
So with that in mind, I spent a large number of hours going through and doing some very heavy editing to the specification itself.
A
We have had a GraphQL-over-HTTP working group meeting where we discussed that. One of the main things about this change is that there was a lot of contention over the content type — media types. There's an issue with GraphQL as it currently stands: when you use JSON as the way of talking in both directions, we can't trust errors if they don't have a 200 or 2xx status code, because those errors, or that data in the response, could come from an intermediary server.
A
It could come from a proxy, or something like Cloudflare, something on the edge, whatever — and as such, we can't trust it unless it's a 200, in which case we know it hasn't been tampered with. So what we've effectively proposed is that we add this application/graphql+json content type. If you have that type on the response, then you know it's definitely GraphQL and you can trust it, even if it's a 500 or a 400 or whatever else.
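The trust rule described above can be sketched as a small client-side check. The media type string follows the proposal as stated in this meeting, and the helper name is illustrative, not part of any spec or library:

```javascript
// Sketch of the trust rule: a response body may only be parsed as a GraphQL
// result if either the status is 2xx (plain application/json), or the
// response explicitly declares the GraphQL media type — in which case even a
// 4xx/5xx body is known to come from the GraphQL server, not an intermediary.
const GRAPHQL_MEDIA_TYPE = 'application/graphql+json';

function canTrustGraphQLBody(status, contentType) {
  if (contentType && contentType.split(';')[0].trim() === GRAPHQL_MEDIA_TYPE) {
    return true; // explicit GraphQL media type: trust regardless of status
  }
  // Plain JSON: only trustworthy when the request clearly succeeded.
  return status >= 200 && status < 300;
}

console.log(canTrustGraphQLBody(200, 'application/json')); // true
console.log(canTrustGraphQLBody(500, 'application/json')); // false
console.log(canTrustGraphQLBody(500, 'application/graphql+json; charset=utf-8')); // true
```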
A
So that's one of the changes versus the status quo for this specification. Everything else is basically just us saying: here's the status quo, here's what people have been doing in the wild for the last seven years — let's actually write it down so that we know exactly what we're doing and what the strict rules are on everything.
A
So we have that. What we've proposed is that there's effectively a watershed; we didn't want to make anyone non-compliant when we first launched this. So the idea is that on the 1st of January 2025, at exactly midnight UTC, this rule would come into effect. The date can be changed, of course, but effectively I'm giving everyone two and a half years to implement this media type on responses.
A
After that time, JSON will still be supported, so it's still non-breaking, but everyone is recommended to use the application/graphql+json type, and that will then mean that error codes will be valid. So that is one of the broad changes. We've also gone through and looked at restructuring, which we've discussed with various parties.
A
A
Those will not be going into version zero, but we want to make sure that whatever we do with version zero shouldn't have to break too much if and when we do add those other things. That was the bulk of it. The other thing is: we are restarting meetings. I'm aiming to do another meeting in the next couple of weeks, because I would like to present something to you, the working group, next month — hopefully a "here is the spec, what do you think?" kind of thing.
A
So maybe wait a week and then eyes on this spec would be ideal. Basically we're asking for feedback — input now would be more useful than input later — and a reminder that the aims are to basically specify what is already the case, and that we're only dealing with query and mutation operations, not subscription operations.
G
Very exciting. What's your call to action — what do you need from us right now?
A
Well, as I say, I would like to present this next month, so for anyone who's really interested in this topic, now is the time to get re-involved. We've not done a brilliant job of bringing everyone back in — it was just anyone who was monitoring that particular repo — so get involved. And the other thing is: if people can cast their eyes over it just before the next working group, maybe a week before or something like that, that would be very valuable.
G
No, not necessarily, but historically we've tried to do them roughly once per year, so I think that's probably where we will end up this time as well. Let me remind myself — I'll tell you about the last one. Yeah, we did October. I think October is probably the right place for us to aim.
A
Okay. So from your point of view, Lee: if you could chase up all the relevant legal things that we would need to do — because I know that's associated with spec releases, and we'll need that for the over-HTTP one — that would be ideal, or at least let me know what needs to be done, if I can do it. We just want to push this forward and get a first version of it out.
G
The biggest piece is getting the contributions agreement signed by everyone who's contributed to it. I think that was the thing that took the most time when cutting the last version of the spec, and since this repo has existed for a long time, we may have to do some bookkeeping to make sure that we've got that — but I can help out.
G
Awesome, thank you for the update. I want to shift things around in the agenda a little bit, just because there's been a fair amount of stuff going on with defer and stream, and I want to make sure we don't miss that. So Rob, if you don't mind taking it from here, I'll edit the agenda to reflect the order shift.
M
Yes — just going to share, one second.
M
So, the last couple of meetings we've talked about batching in general, and this is specifically now about batching of payloads returned from defer and stream. The problem we're trying to solve: you execute a query that could potentially issue a large number of payloads for defer and/or stream, which could overwhelm your client, causing a loop of re-renders. The last time we talked about it, I think there was general consensus that we do want some sort of batching.
M
In the graphql-js types, the execution result could be a standard result, or it could be an async generator of the payload types that we have previously defined.
M
The downside with this is that if there are a lot of payloads ready, we're basically adding the delay of an event-loop cycle, which maybe would have to be compensated for with an additional debouncing delay on the client side or network, something like that. We also talked about: should we just always return an array of these payloads — whichever payloads are already ready, return an array of them?
M
There was a little bit of pushback here, because it could be considered kind of a weird API that you're yielding a list of things instead of one single thing. We also discussed: could it return either one payload or an array of payloads?
M
I think a problem with that is that if you're using an untyped language, you might not expect these arrays and write code that fails when it does see an array. Then there's another suggestion, I think from Ivan, which was basically: return an object with an `incremental` property, and have `incremental` be a list of all the payloads.
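A sketch of what consuming that shape could look like on the client — each entry in `incremental` carries a `path` and, for a deferred fragment, a `data` object to merge in. The field names follow the in-flight proposal discussed here and may change; the helper is illustrative, not part of any client library:

```javascript
// Sketch of a client applying one response chunk in the proposed shape:
// { incremental: [{ path, data }, ...], hasNext }.
function applyIncremental(result, chunk) {
  for (const patch of chunk.incremental ?? []) {
    // Walk to the object the patch targets.
    let target = result.data;
    for (const key of patch.path) {
      target = target[key];
    }
    // Deferred fragment: merge its fields into the object at `path`.
    Object.assign(target, patch.data);
  }
  return result;
}

const result = { data: { user: { id: '1' } } };
applyIncremental(result, {
  incremental: [{ path: ['user'], data: { bio: 'hello' } }],
  hasNext: false,
});
console.log(result.data.user); // { id: '1', bio: 'hello' }
```

Because `incremental` is a list, several ready payloads can be delivered and applied in a single chunk.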
M
So we talked with Matt from Meta and some other people at the GraphQL conference, and it seems like we were mostly aligning on this approach, option D.
M
M
G
M
Right — and I think another point that Ivan was going for with this is that the type definition of the response is a little bit more consistent: these defer/stream payloads are always under this property, and they're not really the same thing as an initial payload anymore.
O
B
I still would say let's keep it a "should", because we are exploring the same thing Apollo is at the moment. With Federation, they cannot always defer — they can only defer if it's done on entity sections — so having it be a must complicates things.
M
Yeah, I'll call out on must versus should: where we had landed previously was on "should", meaning that clients need to accept that the server may not put it there. I think the strongest reason for doing that is that if it's totally the wrong decision, we could change to "must" in a non-breaking way, because all the clients would still work — but we can't ever go the other way.
M
But I would like to keep the discussion not on must-versus-should for now, but on this payload response format.
O
Yeah, but I think, to focus it: if the server is choosing not to defer on this first payload, would `incremental` appear or not appear, or would all the data just appear on the first payload, delayed and sent together? How would that intersect with this new format?
M
I think the must-versus-should question is whether the defer directive is ignored entirely and not delivered incrementally, versus: if it is being honored and there are multiple payloads ready at the same time, or ready with the initial payload, they would be under —
B
I think that is a good middle ground — if we have this `incremental` property. When I read through that, I thought it was actually a good middle ground, because if your server decides not to defer, you still put it in the `incremental` property, and for the client nothing changes: you get the same patches that you would get with the proper deferral.
G
M
Look — wouldn't it be the client, if the —
I
B
Yeah, but I know what you mean, Lee — that if we introduce it now as a "should", we cannot go to a "must". Oh no, no — it's backwards-compatible: old clients have no problem because they don't send the defer, so we cannot break them from the server even if we make it a must, because it only applies once a client sends that in.
A
G
I think in that case it goes against another principle, which is that clients should not be in the business of specifying how a server should behave. They should be in the business of demanding what they need, and then the server should take that information and decide the best way to produce it.
G
So if a client says "you must defer at this boundary" — especially if the server is going to deliver it in a format that assumes the client will assemble those steps and render it in one pass, because it's delivered in a single payload — then all that's doing is tying the server's hand behind its back. Maybe that server is taking a significant performance hit by pausing there to resume something else, when it could have just been faster not to defer in the first place and leave it inline.
B
G
But anyhow — to Rob's point, we're getting off topic. He wants to talk about this format.
O
I mean, I guess with this format I'm just questioning: if the `incremental` tag does appear, does that mean — we should just be cognizant of the fact that the server currently has the prerogative of sometimes honoring the directive and sometimes not. Usually it should, but sometimes yes and sometimes no.
O
So I think we should just be clear that the existence of the `incremental` tag doesn't mean the data payload only contains what was included within the initial request — so it doesn't really help with the typings issue. That's the only point I'm making, unless we go this extra leg and move from should to must. That's why I'm bringing it up here.
J
So the client needs to know which parts of it we return and which not. Without saying "must", we basically say the client needs to go inside the data, know all possible locations, and check whether this deferred or streamed part is finished or not. I think that's extra work for the client. The whole idea of the feature is incremental rendering, meaning the client should know whether this part is finished or not — and now we're making it optional.
F
I also see use cases where the server could decide to send the execution result as a whole. For example, think about a GraphQL cache: if you execute a deferred or streamed response the first time, it might be slow, so it makes sense to stream it through. But once the gateway can cache it and has the whole execution result immediately available for the client, why would it stream those payloads if it can send the whole execution result at once? It would be way faster.
J
J
The idea here is that, from a performance point of view, the server can still decide whether it's sent in one package or in multiple — the shape is just the same. If the client says @stream or @defer, that means it should be delivered in the patch format, whatever was supported, but the shape is fixed. So we have strong types — guarantees on how the data looks and how `incremental` will work.
F
M
M
I can't say whether there's a difference between that versus them not being in the prior payload, but I think this is useful for other scenarios. Say you have a query that returns an array of objects and there's a @defer under that object: that would be one payload for every element in that array, and they may not be ready with the initial result, but maybe a lot of them are ready at the same time — so you could deliver them in one go.
M
G
B
This is essential — forcing that if you put a @defer on a fragment, it has to go in `incremental`, even if you as a server say "ignore the defer, don't do the defer". The main reason for forcing this into the shape is so the client knows what is coming. Typically, that's also what we do in the .NET client.
L
@defer actually changes the response shape that the client is expecting, and if we don't require things to end up in `incremental`, then we're saying the client needs to support both possible response shapes — whereas in everything else we do, we say there's one response shape: the client has specified the response shape and the server is guaranteed to give you that shape.
G
I'm still slightly concerned about the potential for real performance degradation from that — which, I'll be honest, is just a theoretical concern. But Matt, you probably have more experience than the rest of us, since you've been exploring this in Relay for a while against real services.
L
L
We don't even use arrays — it's just bytes coming down — but I believe we have not seen real problems with splitting up at those boundaries. I might be wrong, though. I mean, the high —
G
The high-level sense I have is that when a service decides it wants to ignore defer for performance reasons, it's when the deferred fragment heavily overlaps with fields that are not deferred — they're all either prefetched and hot in cache, or they're all derived values — so it's just trivially cheap to fetch and return. Maybe you'll regain some of that, but you know.
D
G
L
M
Yeah, before we move on, I just want to restate what I think we're saying, to make sure I'm on the same page. When we say "should" versus "must": a client sends a query that has a defer or stream, and if the server decides to ignore it — because that's allowed, since it's only a "should" recommendation and not a "must" — that means the response is not going to be an incremental —
M
— payload, so there's not going to be one of these incremental payloads delivered; it will be in whatever the parent payload is, which could be the initial payload or another defer or stream payload. And we're saying that all incremental payloads should be delivered inside of this `incremental` wrapper, and the initial payload may — using the RFC definition of "may" — also include an `incremental` wrapper for any deferred payloads that are potentially ready at that time. Is that correct?
G
M
D
G
Not settled yet on the must versus should, but this form — having a specific `incremental` key as an array — seems like the right direction, since it explicitly allows you to include it in the initial payload, whereas our previous mode did not allow that. That does seem quite useful.
G
The only nitpicky thing I have with it — which is probably not that big a deal — is that for the typical case, where an incremental payload is a single thing being added, there's just a lot of JSON junk surrounding the data, and I have a slight worry about the overhead of that.
G
M
I do think that the case of there being more than one is not the typical case, but it's more common than an edge case. And I do want to get one last word in on should versus must: I think the most important reason we should stick with "should" is that it will force clients to understand that if they defer something, it may not be deferred.
M
And if we decide that's a terrible idea, then we could have another RFC and change it to "must", and that's no problem — no clients will be broken, and new clients won't need the overhead of figuring it out. But if we start out with "must" now and need to change to "should", there are going to be clients out there that demand that if they defer something, they get an incremental payload back. So I'm a little bit on the fence too.
M
Okay — and I know we're almost out of time, so if we could get async feedback on this one: this is basically something Ivan brought up. We have these fields defined: data, errors, label, path, hasNext. For defer payloads, data is always an object or null; for stream payloads, data is always an array or null. For typing purposes, should we change the name of data to items? That would make it more clear whether it's a stream payload or a defer payload — or are we diverging these types unnecessarily?
M
I don't have a super strong opinion, but I'd like to get other thoughts. Cool, okay — these are the last two issues I have open for defer and stream as of right now, so I'm hoping that with some feedback on this, I can get the spec PR up to date and ready for review next month.
M
G
Thanks again for all of your hard work on this, Rob — this is a huge change, so yeah, a call to action for everyone to drop in on this. It's important to get this piece right too. I know we're out of time; we had two more agenda items, Roman and Benjie.
A
B
G
Well, Benjie, we'll make sure to give you ample time earlier in the agenda next time, and hopefully an extra month to bake the ideas around and pull feedback gives you a more robust presentation to the rest of the group.
C
G
C
Cool, okay — one second. So my short presentation was actually a call for commenting on discussions. I already opened the discussions and posted them. It's about the formal GraphQL definition and the order of chapters. I think they are kind of basic and foundational for all the other friction and consequences that come up with the issues we discussed today, about the role of SDL as well.
C
Another thing, Lee: you said we are done with the first portion of fixes, right? So I can start delivering the rest of them.
G
Yeah, yeah — feel free to open pull requests for anything editorial. And if you wouldn't mind adding a link to the discussion to the agenda, just so that it's easy to find.
C
Okay, yeah — I will.
G
And we'll dig into it — thank you. Cool. All right folks, thank you for sticking with us as we go a little bit over time, and for all the healthy discussion. See you all soon.