From YouTube: Create Deep Dive #3: GraphQL
Description
In this Deep Dive session, Nick Thomas, Staff Backend Engineer on the Create team at GitLab, shares his knowledge of GitLab's GraphQL API.
Download the slides: https://gitlab.com/gitlab-org/create-team/uploads/8e78ea7f326b2ef649e7d7d569c26d56/GraphQL_Deep_Dive__Create_.pdf
Learn more about Deep Dive knowledge sharing sessions: https://about.gitlab.com/handbook/communication/knowledge-sharing/#deep-dive-sessions
Find out when the next Create Deep Dive is taking place: https://gitlab.com/gitlab-org/create-team/issues/1
---
Read more about our product vision: http://bit.ly/2IyXDOX
Learn about FOSS & GitLab: http://bit.ly/2KegFjx
Get in touch with Sales: http://bit.ly/2IygR7z
Great, so today we're doing a deep dive into GraphQL, which is a new API GitLab has started developing. We initially became aware of GraphQL in around 2017, and it's been a bit of a fun story since then; I'll go into the history a little bit more later. I'm going to be using this for the presentation, simply because if I switch screens too often Zoom crashes and I can't do anything, so I want to be going into GraphiQL here.
I'll use it a little bit to play with queries and such; it's just the safest way for me to get through the whole presentation without it completely blowing up on me. It's being recorded and uploaded to YouTube afterwards, of course, and if you do have any questions at any point, just pop them into the document right here. I'm not monitoring Slack, and I'm not monitoring the Zoom chat, so here is absolutely the right place for them to go — and I would love for there to be some questions.
If you have anything you want to ask part way through that you think is better addressed immediately, just interrupt me — I don't mind that — and say, "Hey Nick, what about this?" That's absolutely fine. So, getting started: we kind of have to start by asking what GraphQL is. It's an API. APIs are things you use to programmatically interact with a web service, such as gitlab.com — you don't always want a user to be going in and clicking buttons and so on.
It's quite hard to programmatically drive websites by having a program go in and click the buttons for you. It's possible, but it's much better to have something that the computer can interact with more directly — that's an API. They've been around for a very long time. GitLab's got a REST API at present. REST APIs were initially defined in around 2000 by the great Roy Fielding himself, and in 2000 the web was a very different place; in particular, APIs weren't really considered a separate thing.
Typically you'd have your one endpoint, and you'd be trying to handle programmatic and user responses on the same endpoint using different response types. So the API would be: you POST to the page with some JSON, and that JSON is interpreted by the computer — the server — and it returns some more JSON; whereas if your user visits exactly the same page, they get a nice HTML page with a web form they can fill in and a button to click. It uses a different content type.
So REST was very much built around this idea of allowing you to use the same endpoints for different users. In recent years we've kind of moved away from that: APIs are now literally JSON-only or XML-only, on a completely separate endpoint, sometimes even on a completely separate hostname from the main one that web users connect to. That's just one of the things that's fallen out of how the web has evolved in the intervening years. There are many different types of REST API, and they kind of come in levels.
There are lots of different levels, but generally gitlab.com falls into the second-highest layer; it's called a level 4 REST API, and what this means is that it's almost completely REST, but not quite. In particular, we do a lot of good things: we version the API. This is the top one here, in a kind of levels graphic that I stole from Damian Fremont. So the resource is in the URL itself: you can see /api/v1/users — we're operating on users, and that's specified in the request path.
We've got a request version, so we can change the API over time, as you can see right here. We're also in a separate place now: we're looking at /api/&lt;version&gt;/users, so we've moved away from that initial idea that everything would be handled at the same endpoint, and that's generally considered normal for REST these days. The request parameters themselves go into the body of the request — that's normally JSON, certainly at gitlab.com — and we use HTTP methods to denote what we want to do.
So if we just want to read the contents of the users, then we GET /api/v4/users; if we want to change them, then we would PUT to /api/v4/users — and these are verbs in the HTTP standard that allow us to specify more semantics around what we want to do. There are a lot of things we don't do in our REST API, though; it's about as good as REST APIs generally come.
But the central promise of REST — what Roy Fielding was looking at when he was writing his paper, nearly 20 years ago now — is that you would have a single client and it would work for everything, and this is something GitLab has never bought into. The web generally never ended up buying into it. We've got the web browser, and that is a single client, of course, and it works for all manner of websites, as long as they're talking HTML and those kinds of simple web formats.
So we've moved quite far from the initial vision of what REST was. Now, instead of having lots of different web services all accessible from the same client — which is what we call a level 5 REST API, when it permits that — we have lots of different APIs and lots of different clients per API. The frontend is a good example of this. It's kind of a mix at present: some of it is old-style Web 2.0, "here's some HTML with some JavaScript mixed in", and a lot of the rest of it is essentially a Vue-based web application.
It talks to the API that we have, so it operates quite differently to the original idea of REST. GraphQL is also an API. It sits at level 0 of the REST levels, which is like the least powerful thing, from a REST perspective, that you can do. In particular, you've got a single endpoint that you always POST to: no matter what you want to do, it's always an HTTP POST, it's always the same path, and the body itself — the data you POST to the API — says what you want to do, what the semantics of the request are: whether you want to read some data, or whether you want to create a new object, etc. So in many ways it's kind of a step back in terms of accepted best practice. I have a wonderful book on my bookshelf called RESTful…
GraphQL does it completely differently. Anything else REST can do, GraphQL can also do — we recover some of the capabilities later. It's not like we're giving up on things with no replacement for them. You could argue that giving up on the single-client ambition means you can do these things better as well, and we'll come on to that a little bit more later too. What we're really trying to optimize for is flexible queries that can be served very efficiently by the server, so the client can always request exactly what it wants.
Question: Has there been talk about — like you said, now we always POST to the endpoint, whereas in the REST world we could optimize GETs to only read, and do optimizations based on what we think is going to happen, since a GET doesn't write anything. Are there ways of doing that in GraphQL?
Well, it's one of those things we're giving up on. Caching especially is very different in the GraphQL world, and essentially it's left up to each individual client. A whole bunch of good things REST gives you — like the ability to have a caching proxy in the middle that keeps hold of requests and serves them for as long as they seem in date — just go away. And that sounds awful; as I said, initially I thought: why would you do any of that? That sounds like a terrible idea.
Twenty years of best practice gone, twenty years of ecosystem no longer available to you. Certainly when I first encountered GraphQL I thought: oh, this is just JSON-RPC or XML-RPC done slightly differently — and there's a lot of truth to that. It does lead to the question of why we would even use it in that case, and I think I struggled with that for a while. But just yesterday, while I was writing these slides in a great hurry, I came across a merge request.
That kind of — I mean, I'd already written the slide, but it kind of encapsulated why we changed our minds. It's a wonderful feature being worked on by Mario. It tries to improve the performance of a specific area of GitLab, and in doing so it bumps up against some functionality in our REST API. Essentially, there's one attribute in the commit model which we can no longer serve from the efficient data store. The thing is, that attribute is used by basically none of our API clients.
They get it anyway. In the REST API model there are projections — it's a thing that exists in REST — but they're quite difficult to add, and it's difficult to add them in a backward-compatible way. If you want to remove a field from the response — because you've discovered it's expensive to calculate and basically nobody uses it — you can't remove it without breaking the compatibility guarantee. All you can do is add a new projection that somebody can opt into, and this might again be something like /api/v4/commits with a query-string parameter.
That's a projection: it says "leave out some fields". But it's difficult to do in a backward-compatible way, and as a result we tend not to. In this merge request it's a single field that turns out to be really expensive to calculate, and we just can't remove the field without breaking our compatibility guarantee. We also don't have much visibility into which fields are actually being used by clients, because we always just give them back the whole response.
Perhaps the commits are spread across three projects and there's 20 in total, so we could reduce it to perhaps three queries instead of 20, but it's quite difficult to monkey-patch that into existing REST APIs. We've had to in a couple of places on gitlab.com — commit authors is one example: there was an N+1 issue loading lots of different commit authors, and we had to fix that. It's a lot of code.
It's spread out, it's hard to understand, and while I was looking at this merge request I found myself really wishing that this was a GraphQL API — because in a GraphQL API, the client always tells the server exactly what fields it wants. So in the vast majority of cases, where this field isn't needed, the client is never going to ask for it in the first place, and there would be absolutely no issue removing it from a default query — because there is no default query.
GraphQL also has a built-in deprecation mechanism. You have instrumentation: you can see which fields are being used, and in which ways, so you could put out an advisory saying "we're going to remove these fields in a few months' time" and then monitor whether people have stopped using them. It really helps. And lazy evaluation is something that's just built into the frameworks from the start: anybody can add a new field to GraphQL and have it be loaded in a way that's respectful of the underlying performance possibilities.
So at GitLab, we can only query repositories one at a time. If you query three different projects, you have to have at least three queries; but if you're going for commits in each of those three different projects, you're only doing three queries, rather than perhaps twenty or forty, to get all the information. It really helps that you can build that in just as part of the framework. I think we could overcome most of these disadvantages using REST, but we'd end up with a very complex code base and a very complex API that's just difficult to use.
That's where Facebook came in: it was just difficult to use, and they did literally throw out 20 years of best practice and start again, designing something that worked for their specific use case — which is, of course, a single client for a single website, rather than one client that works with lots of different websites. What they did was essentially take the API and optimize it for that case, rather than trying to be more general. It's linked quite heavily to React.js; they were both open-sourced by Facebook. React.js was open-sourced around 2012 and was used internally at Facebook, with Relay, to actually build the web applications they were deploying to iOS, to Android, etc. GraphQL itself was open-sourced in 2015, as a specification plus a reference implementation. In 2017 we became aware of it.
I was involved in the issue where we looked at essentially starting a GraphQL API ourselves — I think there had been an earlier attempt to start one as well. There were some issues with patent grants, or the lack of them, rather; they were raised and resolved in 2017, around the time we were working on the initial merge request to add support to GitLab. As they were resolved, it is now an entirely open specification.
We still don't have full support in GraphQL for everything you can do in the REST API — that's what's missing, and it's still alpha — but we are looking to build out that support, so that everything the REST API can do, the GraphQL API can do as well. The first feature that we've got using GraphQL is actually issue suggestions.
It's really good fun: you start typing an issue title and it will find matching issue titles, and if GraphQL is enabled on the GitLab instance, it will do that using GraphQL instead of the REST API. GraphQL is also now independent from Facebook in a lot of ways: they've just started a new foundation for it, and while they are still members of the foundation, they're trying to turn it into more of a public thing than a Facebook thing.
So I just want to cover some basics of GraphQL before we dig into the query language itself. The most important aspect is that everything — literally everything — in GraphQL is a field, and throughout, if it is a field, I'll italicize it. A field is essentially a calculation: it's saying "this is going to return some data", and you can pass arguments to fields, as you can see.
On the right-hand side you've got this field, echo, which is actually a function: it takes some text as an argument and it returns some text. The exclamation mark at the end just means you will always get some text back — you'll never get nil; it will always be a string. The field encapsulates some calculation on my text. What actually happens? I'll just demonstrate over here. You can try echoing "hello world" — if you run it while it's malformed, you just get a parse error.
So you can see this is running on my local machine, and the definition of the echo field — normally we'd call this a function — is that it always returns back the same string with the username prepended. It's a little testing thing. But it is a field, rather than being a function or a type or anything else. It's a field — I'll say that a lot.
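As a sketch of that demo: assuming an instance where the top-level echo field is enabled, the query would be roughly:

```graphql
# echo takes a non-null String argument and returns a non-null String,
# i.e. echo(text: String!): String!
{
  echo(text: "Hello world")
}
```

The response mirrors the query shape as JSON — something like `{"data": {"echo": "nick says: Hello world"}}` — with the current username prepended as described above; the exact prefix depends on the signed-in user.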
Fields take arguments, fields have types, arguments have types — everything has a type. Types are also exposed as fields; I just want to make that point. In the schema there is a field that lists types, and types have fields — it's great. Some types are built in, some are user-defined, and there are some special types: right at the top is the schema type, and that always has a query type and a mutation type, which you access from the query and mutation fields.
You can probably see where this is going: types can have fields themselves, so you end up with a graph of fields. I've got the query field, the project field, the ID field — that's a graph three nodes deep — or we can have the issue here, so that's three or four, right down into the issue's IID. You're forming a graph of fields, essentially.
GraphQL comes with a schema built in. Essentially, you define the schema through code, and that determines what fields are available to users, what types are available to users, and also what directives are available to users. The whole thing is entirely queryable — I'll just demonstrate this.
From this schema — it's a bit like Swagger, except it's built in — you can introspect exactly what you can do with the GraphQL endpoint, which is pretty handy for API clients. You can also generate documentation from it, and it's used for autocompletion: you can see in this GraphiQL endpoint that if I start typing, it shows me what's available within that context, and that's all based on the schema. Unlike Swagger, the schema is automatically generated — you don't have to put any work in to get that, which is very nice.
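Introspection is itself done with ordinary queries against reserved meta-fields. This example is standard GraphQL — defined by the specification, not GitLab-specific — and lists the schema's entry points and types:

```graphql
# __schema is a reserved introspection field available on any
# spec-compliant GraphQL server; GraphiQL runs queries like this
# behind the scenes to power its autocompletion and docs panel.
{
  __schema {
    queryType { name }
    mutationType { name }
    types { name kind }
  }
}
```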
From my point of view, you want your automatic documentation, which is great. In order to use GraphQL you mostly write queries. They look like this — it's essentially a kind of SQL for graphs. If you've ever used a graph database, the syntax is quite similar to that, but that's relatively niche; I'd say most people in the world are familiar with at least a bit of SQL.
So the request is on the left and the response is on the right — it's just a screenshot of up here; it's pretty much the same. In here we have the outer block, and then we specify in there what fields we want. If you don't say "query" on line one, it's implied, so it's just not here — you can leave that out and it means exactly the same. The definition is a set of fields: you say, I want this field, I want this field, I want this field.
A field can have fields itself. This field project is of type Project. We know that it takes the argument path, which specifies the particular project to return; that's all server-side semantics the server knows about, and it's documented in the schema. But because it's of type Project, we can also say: okay, from the type Project I want the field id, and I want the field issue, which takes an argument — the IID — and from that I want the webUrl. And then down here we've got two more fields.
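Assembled, the query being walked through here looks roughly like this — the project path and issue IID are placeholders, and the field names follow GitLab's schema as described above:

```graphql
{
  project(fullPath: "group/my-project") {
    id
    issue(iid: "1") {
      webUrl
    }
  }
}
```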
The uses for this are many — obviously it's not just this kind of query you can make; you can do it for anything — and behind the scenes GraphQL works very hard to make this efficient. If this were a REST API, we'd probably make three separate GET requests: one for project one, one for project two, one for project three. Here we've made a single request, we've asked for exactly what we want, and the returned data on the right is also exactly what we want.
If this were a REST API, we'd have three large documents containing full details of all three projects. We'd have done three round trips, which is slow — and probably done them in series, because parallelizing things is quite hard on the client side — and we'd have far more data than we want. In GraphQL we've written a single query that specifies to the server exactly what we want. Since the project model is reasonably simple, this will be roughly a single query from the database point of view.
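The one-request-for-three-projects pattern relies on aliases, which let the same field appear several times in one query. A sketch, with placeholder paths:

```graphql
{
  # Each alias becomes a separate key under "data" in the response.
  one:   project(fullPath: "group/project-one")   { id name }
  two:   project(fullPath: "group/project-two")   { id name }
  three: project(fullPath: "group/project-three") { id name }
}
```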
In fact it's a few more than that, because getting the issues adds one or two. But the point is that it matches up with the request: because the client is asking for exactly what it wants, the server can serve that very efficiently. It will translate that into SQL, probably using a SELECT * FROM projects WHERE full_path IN (one, two, three), which means it needs to do one SQL query instead of three, and it lets the database parallelize that step for us. It's absolutely great.
The response, as you can see, has just a few fields. Best of all, it doesn't matter how many fields a project has, because we only ever get back what we asked for. This is a huge benefit compared to what normally happens in REST: you don't have to parse all that extra JSON, and you don't have to generate all that extra JSON. It just makes everything much easier. The downside, of course, is that the response is less generally applicable — you can't use this response in as many different contexts as you could a general-purpose REST one. So the value of caching actually goes down a lot with GraphQL. We'll talk a little bit more about caching later, but it's just not something you do as often, because you're making much more specific queries for less generally useful data.
The only other thing I'd like to note is that you could, as you can probably infer, generate a very complex query — one that asks for 10,000 projects, say — and that would kill the server. So there are ways to restrict the complexity of queries.
You can restrict the maximum depth: if you've got very complex nested types with many fields, you can say the maximum depth is 10, or 100. If you set the maximum depth to 1 here, you'd be able to get the project ID, but you wouldn't be able to get the issues. That limits how expensive it is to parse the query and to respond to it. You can also limit the overall complexity, which is usually a measure of how many fields you're asking for.
I've got a slightly more complex query here, going into pagination — especially for issues: a project might have millions of issues, and we don't want to return them all at once. In REST we usually deal with this by having an offset and a limit. We've started introducing keyset pagination into the REST API, which works a little similarly, but you specify exactly where you want to start. In GraphQL it's very different. For a start, the pagination information is included directly in the response.
In the REST API it goes into the headers, just because that's easier — not necessarily because that's the best place for it to be. We can see in this query that we're asking for the first two issues, and we're saying sort by created-at, ascending. You can also go backwards; there are a number of different options you can use. For the issues, the response returns a rather strange type: an issue connection. It's not an array of issues — it's a type which has more structure.
An issue connection, as you can see, has just two members. It's got this pageInfo, which carries the pagination information — and that's the same for essentially every connection type that exists — and it also has edges, which is not an array of issues but an array of issue edges. An issue edge contains a cursor and a node, which is actually the issue. There are some shortcuts you can use to get around this deeply nested construct; we're not using them at present. The cursor allows you to say, at any point: paginate from here.
It's easy to figure out exactly how this works — there we go. So you say "after" the cursor: maybe you decide you've processed the first five, and for some reason you throw away the next one and want to go from this one instead — you take this cursor, and it will go from there instead. But you can also just go page by page by looking at the end cursor of the connection type itself.
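A sketch of the connection shape just described — pageInfo plus edges, each edge carrying a cursor and a node. The sort argument and the selected fields are illustrative; exact enum names vary between GitLab versions:

```graphql
{
  project(fullPath: "group/my-project") {
    issues(first: 2, sort: created_asc) {
      pageInfo {
        hasNextPage
        endCursor   # pass this as `after:` to fetch the next page
      }
      edges {
        cursor      # resume pagination from this exact issue
        node { title }
      }
    }
  }
}
```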
So those are all the options you've got. As I said earlier, caching is out: there's just no way to make this generally applicable, and it completely bypasses the usual HTTP semantics for caching, because every call is a POST. We've not actually bothered handling this at GitLab yet; instead we're just waiting to see if it becomes a problem, which might not be the best idea — we'll see how it goes.
The GraphQL way to solve this is to move caching into each individual client, rather than having it at the edge of the server. Every entity gets a globally unique ID, and you use that to decide whether or not to go and fetch that object again. It's not ideal. Like I said, there are downsides as well as upsides to GraphQL: it's not an amazing addition to REST, it's a complete replacement for it.
And not everything REST does, it does better. Moving back to the query language: you can have fragments, and you can also have variables that you pass in. This is a fairly complicated example, but essentially we've pulled the definition of the fields we want out of a project into a fragment, and we've included that in all three project definitions. We've also named the query, so that we can give it a parameter here — the path — and, as you can see, in the query variables we specify the path.
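A minimal version of that pattern, with illustrative names: the fragment holds the shared field selection, and the named query takes the path as a variable:

```graphql
query ProjectDetails($path: ID!) {
  project(fullPath: $path) {
    ...projectFields
  }
}

fragment projectFields on Project {
  id
  name
  description
}
```

This is sent together with query variables such as `{"path": "gitlab-org/gitlab-ee"}`.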
What this means is that the query itself can be static: you don't have to interpolate into the query; you can have a static string in your client, which GraphQL is designed for. In this case you've got a query for a project — say GitLab EE — and you just change the variable at runtime and pass it in alongside the query string in order to change the result you get back. GraphQL is designed to help you have fewer, more general-purpose queries in that sense, and they can get quite complicated.
You can specify multiple queries and then select them by name. So if we had a named query for projects and another one alongside it, we could have the same string — send the same string to the server — and just say: run this query instead of that one. It gets even more complicated with directives. The query language allows you to say "include this field if this variable is true or false", and you can also skip a field in the same way. So queries can build up to be quite complicated and do a lot of different things with directives. I think you can specify your own custom directives as well — they show up in the schema — but I'm not really sure how to do that; this was billed as a deep dive, but I've not gone that deep. We'll see if that becomes useful in the future.
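The built-in @include and @skip directives, which the specification guarantees on every server, look like this — $withIssue is a Boolean variable the client supplies, and the field names are placeholders:

```graphql
query ProjectMaybeIssue($path: ID!, $withIssue: Boolean!) {
  project(fullPath: $path) {
    id
    # fetched only when $withIssue is true
    issue(iid: "1") @include(if: $withIssue) {
      webUrl
    }
    # the mirror image: dropped when $withIssue is true
    description @skip(if: $withIssue)
  }
}
```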
The server can specify a default variable — a default value for the variable. The point of having them evaluated server-side is that the client doesn't need to modify the query string. Interpolating into a string is dangerous — think of SQL injection — so instead you have a static string that the client always sends, along with the variables, to the server.
The server-side substitution keeps that part safe: the query is validated to be well-formed, and then the variables are inserted, essentially like in a programming language. It's just a lot safer to do it like that. It does mean that each request to the server carries a bit more text with it, but those requests are typically compressed anyway, the queries don't get that large, and there are ways to deduplicate. So it's up to you quite how far you want to take it: in theory you could have a single document.
D
You
always
send
with
every
graph
QL
request
with
a
thousand
queries
in,
and
you
just
select
one
of
the
time
and
depending
on
which
one
you
want
to
run.
Typically
I'd
expect
we'd
break
it
or
a
bit
more
than
that.
We
might
have
a
couple
of
queries
and
most
per
stats
document,
but
each
individual
document
will
be
static
and
graphic
ul
gives
you
the
ability
to
kind
of
choose
how
far
down
that
route.
You
want
to
go
some
examples.
So, we've talked about directives — that's fine. One thing we haven't talked about so far is changing things. As you might expect from a company like Facebook, most of their work is reading data: their customers' applications want to get a lot of data to look at, and they make changes relatively infrequently — and I'd argue this is mostly the case for gitlab.com as well.
It's not the read-write web that REST envisioned; in general, people are reading from it — they're consumers — rather than changing things, and this is probably why WebDAV isn't very popular these days either. Because most of the time you're reading data, GraphQL is very much optimized for that. You can still change things, though. Here's an example: on gitlab.com we have a single mutation — WIP toggling — and all it does is change the WIP status of a merge request.
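From memory of GitLab's schema at the time — the names may differ slightly in current versions — that mutation looks something like:

```graphql
mutation {
  mergeRequestSetWip(input: {
    projectPath: "group/my-project"
    iid: "1"
    wip: true
  }) {
    mergeRequest {
      title   # now carries the "WIP: " prefix
    }
    errors    # any validation errors come back in-band
  }
}
```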
Instead, the action is in the name. To do this in the REST kind of context would be like having lots and lots of new special-purpose HTTP verbs: you've got GET, PUT, PATCH, POST and DELETE, but perhaps you might also have an HTTP verb called SETWIP — and that would be very similar to the RPC interface that we have in GraphQL. You do get a lot of the same benefits, though: you can run multiple mutations in a single query, as in the example shown down below.
We don't really know what this is going to look like in the future. As mutations get more complicated, I expect how we handle them in the code base to also get more complicated. I don't have good answers yet for how we're going to handle those complicated mutations — it's quite possible that GraphQL just isn't the best choice for some of them. We'll just have to wait and see how it goes.
One note I'll make is that RPC interfaces aren't the devil. Gitaly, for instance, is an RPC interface using gRPC, and that's been a success — it's been very good. What it's not is general-purpose. I've found that to have a single client which can reasonably consume any RPC interface, you always need special-purpose code to do so; you get advantages from that, but also disadvantages in terms of a loss of generality.
One thing I've seen coming up is gRPC — which is what Google uses — in JavaScript, and I expect at some point in the future we'll see a website, a web application written in JavaScript, that talks to a gRPC API server in the background. That'll be very interesting to see. It might be better than a GraphQL mutation; it might not. One thing's for sure: it will not be general-purpose. Moving on.
We also have subscriptions in GraphQL, and this is a very simple slide because I have not got a clue. All I know is that they're very similar to existing pub/sub mechanisms like ActionCable over REST: you've got a long-lived WebSocket and you're pushing events down it, like an event stream, and you can also push requests back.
The GraphQL gem we're using on gitlab.com has support for Relay and ActionCable out of the box, and we might be able to make use of this in the future when we move to Puma as the main web server. At present, because we're using Unicorn, we don't have much scope to opt for these long-lived connections — they're just bad for the backend, because they tie up too many resources per connection. So we'll see how that moves in the future; we might be able to make something really good there.
On to authentication: you can have cookie-based authentication or token-based authentication. If you're using cookies, then you also want CSRF protection, in order to prevent you being attacked by arbitrary third-party JavaScript on different websites, and it all works more or less as you'd expect. One advantage we do have with GraphQL is that authorization is very easily inspected, via a permissions field.
You can also ask for your permissions. In the example we've got here, we ask what permissions the user has over the project, and the user can see that they are allowed to read the project but don't have the power to admin the project. That can be used in the frontend to show or hide additional controls, and that's not something we've ever really had before.
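The permissions check described can be sketched as follows — the field names follow GitLab's project permissions type as discussed, and the true/false results naturally depend on the current user:

```graphql
{
  project(fullPath: "group/my-project") {
    userPermissions {
      readProject    # e.g. true: the user can see the project
      adminProject   # e.g. false: but cannot administer it
    }
  }
}
```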
Maybe someone will be able to jump in a bit later and walk me through that, because I honestly don't know. But here I've got a list of the important files you might want to change — the places you need to look if you're adding new functionality to the GraphQL API. We do not currently have support for EE-only features, so the GraphQL API is the same in GitLab Core as it is in GitLab Ultimate.
D
D
If you're adding a new attribute, then you just go into the project type and add a field with the name of that attribute, so that clients can request that attribute from GraphQL as well as from the REST API. Sometimes you'll add a new type or a new top-level query. At present we have very few top-level queries; we've focused on the project quite a lot. For example, there's metadata, which is a top-level type, and from that you get the version and revision of the GraphQL endpoint.
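To make the "just add a field" idea concrete, here is a toy sketch in plain Ruby. This is deliberately not the real graphql-ruby DSL that GitLab uses; it only mimics the shape of declaring fields on a type and letting clients select the attributes they want per request:

```ruby
# Toy sketch (NOT the real graphql-ruby DSL): adding a new attribute to a
# type is just declaring one more field; clients then opt in per request.
class ToyType
  def self.fields
    @fields ||= {}
  end

  def self.field(name, &resolver)
    fields[name] = resolver
  end

  def resolve(object, requested)
    requested.to_h { |name| [name, self.class.fields.fetch(name).call(object)] }
  end
end

class ProjectType < ToyType
  field(:name)        { |p| p[:name] }
  field(:description) { |p| p[:description] } # the newly added attribute
end

project = { name: "gitlab", description: "DevOps platform" }
ProjectType.new.resolve(project, [:name])
# => { name: "gitlab" } (clients only get what they ask for)
```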
D
D
D
D
D
This uses a GraphQL client library, which is a thin wrapper around Apollo, but it has a static file that contains the GraphQL query, and you just put the two together in JavaScript. I don't really know how it works, but it does work; it's great. In general, what we're trying to do is to reimplement the REST API in terms of GraphQL, so the REST API itself becomes a client of the GraphQL API, alongside the REST server, the Grape API, and so on in the GitLab codebase.
D
It's just a set of queries that are executed against the GraphQL code, and this prevents us from having two completely independent implementations of the same functionality. Going back to that search issue, I did have a look at what we'd need to do to support it. Actually, it was a bad example; the Search API is just far too complicated. So I went back to a different example I had a while ago: file templates. I'll just spend five minutes going through the files that we've changed, and so on.
D
So projects have templates for different types of file; a project can have a template for its .gitignore file, for instance. When you add a new file, you can choose from different types of templates to apply to the new file that you're adding. We have support for this in the main API right here; it just uses this template finder to get a list of templates. Essentially it's very simple (templates each have a type), but there's no support in GraphQL.
D
So here's where we changed the JavaScript. At the moment we're calling Api.projectTemplates, which calls the REST API; we need to change this to use the new GraphQL client, and obviously the pagination essentially changes as well. Then, down in the project type...
D
So the GraphQL project type is included from the GraphQL query type, which is included from the GraphQL schema, forming that graph we talked about earlier, and we've added a new field to it called file templates, which is a particular type, and we've told it which resolver to use as well. We're returning these as a connection type, just like the issues were, so we get all the pagination goodies essentially for free, and we're specifying a custom resolver and a custom type.
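A minimal sketch of what a connection type provides: cursor-based pagination over any list. This toy is far simpler than graphql-ruby's real connection machinery, but it shows the "pagination for free" shape of edges, cursors, and pageInfo:

```ruby
require "base64"

# Toy sketch of what a connection type buys you: cursor-based pagination
# for free over any list. Real graphql-ruby connections are richer.
def paginate(items, first:, after: nil)
  start = after ? Base64.strict_decode64(after).to_i + 1 : 0
  page  = items[start, first] || []
  edges = page.each_with_index.map do |item, i|
    { node: item, cursor: Base64.strict_encode64((start + i).to_s) }
  end
  { edges: edges,
    pageInfo: { hasNextPage: start + page.size < items.size,
                endCursor: edges.last && edges.last[:cursor] } }
end

templates = %w[Android Rails Node Go]
page1 = paginate(templates, first: 2)
page2 = paginate(templates, first: 2, after: page1[:pageInfo][:endCursor])
# page2 holds Node and Go, and its pageInfo says there are no further pages.
```

Any field declared as a connection gets this behaviour without the resolver author writing pagination code themselves.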
D
This calls into GitLab's existing code base, and this is what we output in the GraphQL response. If we go to the file template type to start with, we can see it's got a name, a file template. This goes into the schema that we automatically generate, and the file template has a set of fields itself. It's got a type, which is an enum, so it's got four possible values, also in the schema. The REST API has this documented manually, with text, so it's not machine-parseable.
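To make the machine-parseable point concrete, here is a toy sketch: when the legal values live in the schema as an enum, they are data that tools can introspect and validate against, rather than prose kept up to date by hand. The four values reflect GitLab's template types as I recall them, so treat them as illustrative:

```ruby
# Toy sketch of why an enum in the schema beats manual docs: the set of
# legal values is data, so tools can introspect and validate against it.
# The four values reflect GitLab's template types as best I recall.
TEMPLATE_TYPES = %w[dockerfiles gitignores gitlab_ci_ymls licenses].freeze

def validate_template_type!(value)
  unless TEMPLATE_TYPES.include?(value)
    raise ArgumentError, "unknown template type #{value.inspect}"
  end
  value
end

# A generated schema document can simply serialize the enum:
SCHEMA_FRAGMENT = {
  "FileTemplateType" => { "kind" => "ENUM", "enumValues" => TEMPLATE_TYPES }
}
```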
D
That's just the distinction between the two, but this could be arbitrarily complex code, and in the case of the file template resolver it is arbitrarily complex code that's just saying how to generate that field. With this code I've got a name and content as well, and there's a bunch that's not supported yet here, which still needs to be added. We have the resolver here; it's a bit more complicated. We specify two arguments, and essentially what happens is the resolve method gets called with them.
D
D
Yes, this was pulled together in about five minutes last night, so it's not complete at present. This is vulnerable to N+1 queries, in that if we ask for three different types of file templates, it will run Gitaly three different times. In the project case, we shell out to a loader here which batches them up and executes them in parallel; we don't have that support in here yet, but it's fairly easy to add.
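The batching idea can be sketched like this: rather than fetching each template type as its field is resolved (the N+1 shape), queue the keys and fetch them all in one call when the first value is actually needed. This toy loader is illustrative only; GitLab uses a dedicated batch-loading gem:

```ruby
# Toy sketch of batch loading: instead of fetching each requested template
# type as its field resolves (the N+1 shape), queue the keys and fetch them
# in one batched call. Illustrative only; GitLab uses a batch-loading gem.
class ToyBatchLoader
  def initialize(&batch_fn)
    @batch_fn = batch_fn
    @queued   = []
  end

  # Returns a lazy value; nothing is fetched until it is forced.
  def load(key)
    @queued << key
    -> { results[key] }
  end

  private

  def results
    @results ||= @batch_fn.call(@queued.uniq) # one call for all queued keys
  end
end

calls  = 0
loader = ToyBatchLoader.new do |types|
  calls += 1 # stands in for one Gitaly/SQL round trip
  types.to_h { |t| [t, "templates for #{t}"] }
end

lazies = %w[gitignores licenses gitignores].map { |t| loader.load(t) }
values = lazies.map(&:call) # the batch function runs exactly once here
```

Three fields were requested, but the expensive lookup ran once, which is exactly the property the resolver above is missing.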
D
Finally, the API itself. What we've done here is to convert the existing REST API endpoint into a GraphQL client, so we specify the query that we want, and we've got a little helper here that executes the query with these variables. Notice the query is completely static: we never need to interpolate the full path of the project or the type of template that we want. And then it returns exactly the same data.
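The "completely static query" point can be sketched as follows; the helper name and query shape here are illustrative, not GitLab's actual code:

```ruby
require "json"

# Sketch of the "completely static query" point: the query text never
# changes, so nothing is escaped or interpolated; only variables vary.
# Names are illustrative, not GitLab's actual helper or schema.
TEMPLATES_QUERY = <<~GRAPHQL
  query($projectPath: ID!, $type: String!) {
    project(fullPath: $projectPath) {
      fileTemplates(type: $type) { nodes { name } }
    }
  }
GRAPHQL

def build_request_body(project_path, type)
  JSON.generate(query: TEMPLATES_QUERY,
                variables: { projectPath: project_path, type: type })
end

a = build_request_body("gitlab-org/gitlab", "gitignores")
b = build_request_body("some/other-project", "licenses")
# Same query text in both requests; only the variables differ.
```

Keeping the query static also means there is no injection surface: user input only ever travels in the variables payload.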
D
In the specs, we have modified the existing API specs so that we run them both with and without GraphQL, and this helps to verify that the functionality is complete and works in all cases. As it is, I think there's a bunch of failing tests, because this is not complete; by the way, if it is all passing, that's good. So I wouldn't normally show you this, but this is quite close to adding this functionality without duplicating this piece of logic, and I'm quite proud of it.
D
I'll probably finish it off in the next few days and get it merged, and the GraphQL API will be a little closer to parity, which would be very nice. So that's everything I wanted to say, and I'm really impressed that it came in on time, actually; I wasn't really sure how long it would take. Let's see if you've got any questions. Yes, we do: "efficiently authorizing arbitrarily complex queries can be a challenge; do you have any answers?"
D
E
D
D
So, I mean, I don't have any good answers for it. What we do at present is that each field is individually authorized, and if the authorization check doesn't pass, that means the field isn't returned at all. I think what this is referring to is the case where, say, someone queries a million different commits, and we have to work out whether they can view each commit, which is quite expensive.
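A toy sketch of per-field authorization, and of why it can get expensive: every requested field runs a check against every object, and unauthorized fields are simply omitted. The policy names here are illustrative, not GitLab's actual policy classes:

```ruby
# Toy sketch of per-field authorization, and why it can get expensive:
# every requested field runs a check per object, and fields failing the
# check are simply omitted from the response. Names are illustrative.
FIELD_POLICIES = {
  name:        ->(_user, _obj) { true },        # readable by anyone
  secret_path: ->(user, _obj) { user[:admin] }, # admins only
}.freeze

def authorized_fields(user, obj, requested)
  requested.each_with_object({}) do |field, out|
    next unless FIELD_POLICIES.fetch(field).call(user, obj)
    out[field] = obj.fetch(field)
  end
end

project = { name: "gitlab", secret_path: "/var/opt" }
authorized_fields({ admin: false }, project, %i[name secret_path])
# => { name: "gitlab" } (the unauthorized field is omitted)
```

For a million commits, that loop runs a check per commit, which is the cost the question is getting at.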
E
Yeah, no, that's fine; I don't expect all of these questions to necessarily have answers. So my second one, then: you gave an example where removing the fields for that Elasticsearch MR would have been able to help, but I feel like, in general, an optimization is often: we know that we never need to do this, and we can show that we never need to do this, so we won't do it, and by not doing it we save a bunch of time. But it seems like with GraphQL...
E
D
B
E
But in fact we never actually do anything with that in the API; we just load it for no reason, for instance. Whereas in GraphQL, a client might just be able to say "I can load this as well as this at the same time", so you don't have that same sort of certainty about what the clients can do with it.
D
I'm not too sure I'm getting it, but certainly if there's something we'd never want to expose, we still never expose it. The clients can request combinations of things, but because of the effort we put into avoiding N+1s in the backend, I think in general the queries are going to be more efficient.
A
A
So anyway, you showed something like: we have an edge from Project to issues, right? And I have a specific use case where I want to have issues as the top-level thing and, I don't know, give it a filter or so. Is that something that can be done easily? Because it seemed to me that if you had a lot of edges, it just gets deeply nested; the JSON gets to be madness, yeah.
D
D
There's no reason why we can't do that; the only reason we haven't is that nobody has done it yet. And maybe this speaks a little bit to what Sean was asking about as well: it's up to us to decide what fields are available, and we can do this if we want to. We're specifying here (this is another option we've got at the moment) that queries go against this project, and that's something that we can keep supporting forever. You could also have a top-level GraphQL projects field, like so, and then that would imply this project.
A
D
A
I wouldn't be interested in that, because that's boring, sorry! I would be interested in being able to make graph joins, right? So, for example, I have emails, and I want to sort them, and these emails are from CE and EE, and I want to maybe have a nice joint query, bam, like: okay, this is what CE is giving back, right? Rather than having to add some structure and do that later on, but...
D
And then we'd just go "emails". That's something we can add support for, and it would mostly be a helper around this syntax that just returns it in a nicer way. So it is possible; we haven't done it yet, but we're certainly interested in that kind of thing. It's kind of pointless to guess in advance, though; we want to add these things as people actually use them. Okay.
D
Have we got anything else in there? Um: "so, in general, we should use BatchLoader if the data has has-many relationships?" Yes, absolutely. "Will this work for nested relations?" Yes. This is where the cleverest part is, actually: how they're used in the backend at present. Say we have two projects.
B
D
So, instead of being four queries, it's two queries, and as you add projects at the same level of the query, it's still just those two queries. That's how we're using BatchLoader at present, and it does make a big difference. Obviously, without restrictions on complexity and depth, this can get up to tens of millions of projects and issues, so it raises the different kinds of challenges that we have some knobs and bounds for, and hopefully we'll be able to add more in the future. Does that make sense? Yeah.
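The "two queries instead of four" point can be sketched like this: with batching, loading the issues for every project at one level of the query costs a single query, regardless of how many projects that level contains. The data and counter are fake, purely to show the query count:

```ruby
# Toy illustration of "two queries instead of four": with two projects that
# each need their issues, batching the issue lookups turns one query per
# project into a single query for all projects at that level. Data is fake.
ISSUES_BY_PROJECT = {
  1 => %w[bug-a bug-b],
  2 => %w[feat-c],
}.freeze

def fetch_issues_batched(project_ids, counter)
  counter[:queries] += 1 # conceptually, one SQL query with an IN clause
  ISSUES_BY_PROJECT.slice(*project_ids)
end

counter  = { queries: 0 }
projects = [1, 2]
counter[:queries] += 1 # query 1: load the projects themselves
issues = fetch_issues_batched(projects, counter) # query 2: all their issues
# counter[:queries] == 2, however many projects this level contains
```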
F
D
It would just batch those lookups, like users, like so, and that does break down from time to time; but when it does, it's a bug, it's quite easy to spot, and we can fix it, so it's very nice. It gets a bit more complicated when, from here, you might go to users (we're just coming up to the end of time as well), and maybe users has projects. It doesn't, but if it did have projects, then what we've created is essentially a loop, and in that case we might see a second query for these projects at this depth.
D
But if we duplicated this over here as well, it wouldn't be two extra queries for projects; it would just be one extra query for projects at this depth. So it works out pretty well. Great, so that's the hour. Thanks a lot, everybody, and I hope that was in some way useful. I'll be around on Slack again in about five minutes if anybody has any questions they'd like to follow up on, or if they're really inspired and want to.