From YouTube: Gloo Platform GraphQL Automatic Persisted Queries
Hi, in this video we'll show you how you can use GraphQL Automatic Persisted Queries with Gloo Platform. GraphQL Automatic Persisted Queries allow us to cache GraphQL queries on the server side. If you have a GraphQL service that has a large schema, then a GraphQL query can be of arbitrary size.
Persisted queries allow us to send just the hash of the query to the server; the server then uses the cached query on the server side to execute it and return the response. So we no longer have to send the full GraphQL query string, which can get very large, over the wire, and this brings benefits in the form of reduced network latency and better performance.
Here you can see that we've got a GraphQL service: the Bookinfo GraphQL server is deployed to Gloo Platform. In our REST client, we can fetch the schema because we've enabled introspection on the GraphQL server, and we get auto-completion to build our GraphQL query. In this case we're just going to request the title of the book, and when we submit that query, we get a response.
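The query built in the REST client has roughly this shape (the operation and field names here are assumptions based on the walkthrough, not taken from the actual Bookinfo schema):

```graphql
# Hypothetical query against the Bookinfo GraphQL service:
# request only the title of each book.
query GetTitles {
  productsForHome {
    title
  }
}
```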
If we want to enable GraphQL Automatic Persisted Queries in Gloo, we can do so by applying a GraphQL persisted query cache policy to our routes. In this case we apply this policy to all routes that have the label route: graphql-bookinfo.
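A minimal sketch of such a policy is below. The API group, kind, and spec fields here are assumptions based on the Gloo Platform policy naming pattern; check the Gloo Platform API reference for the exact schema.

```yaml
apiVersion: apimanagement.gloo.solo.io/v2
kind: GraphQLPersistedQueryCachePolicy
metadata:
  name: bookinfo-apq
  namespace: bookinfo
spec:
  # Apply the policy to every route carrying this label (label value assumed).
  applyToRoutes:
    - route:
        labels:
          route: graphql-bookinfo
  config:
    # Maximum number of persisted queries kept in the server-side cache (field name assumed).
    cacheSize: 1000
```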
We can see here that we've got a GraphQL route table deployed to our platform, and when we look in the UI and inspect the GraphQL service, we can see that it is indeed a route table that exposes our Bookinfo GraphQL service.
So let's try to call our query using persisted queries. We'll be using the same query that we used earlier, and we've put it in a bookinfo-query.graphql file. We export the content of that file to an environment variable, which makes it easier to use that query string in our curl commands.
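Those two steps look roughly like this (the file name and query text are assumptions based on the walkthrough):

```shell
# Write the query we used earlier to a file (query shape assumed for illustration).
cat > bookinfo-query.graphql <<'EOF'
query { productsForHome { title } }
EOF

# Export the file contents so the query string is easy to reuse in curl commands.
export QUERY=$(cat bookinfo-query.graphql)
echo "$QUERY"
```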
Next, we need to calculate the SHA-256 hash of this specific query, and we use the following command for that: we use shasum -a 256 to calculate the checksum, and head -c 64 grabs the first 64 characters of that output, because the SHA-256 hash is 64 hex characters long (the rest of the shasum output is the file name). This is the value that we then get.
A
So
what
we
will
do
next,
we
need
to
execute
the
query
now,
but
actually
passing
in
the
query
string
combined
with
the
hash.
So
we
do
that
with
the
following
curl
command
and
when
we
run
that
command,
we
will
see
that
we
get
the
output
that
we
expect.
We
are
returned.
The
title
of
the
given
book
that
we're
we're
trying
to
fetch
and
the
URL
encode
query
here-
means
that
we
create
a
URL
encoding
of
that
specific
query.
So we can send that query in a GET query parameter in curl. When we now execute that curl command, passing in both the SHA-256 hash and the query string, we can see that we get the data back that we expect. This operation also causes the query to be cached on the GraphQL server side.
So now, when we execute the query again, but this time only provide the hash and not the query string itself, the GraphQL server will recognize that hash, execute the cached query on the server side, and return the same data. Obviously, this query is very small, so the benefits of caching it on the server side might not be that high. But imagine that you have a GraphQL service with a very large schema: then the query can be of arbitrary size, and caching it on the server side keeps those large query strings off the wire.
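The hash-only follow-up request then looks like this (same assumptions as above; the curl line is commented out because it needs a live gateway):

```shell
QUERY='query { productsForHome { title } }'   # only used locally to derive the hash
HASH=$(printf '%s' "$QUERY" | shasum -a 256 | head -c 64)
EXT="{\"persistedQuery\":{\"version\":1,\"sha256Hash\":\"$HASH\"}}"

# Follow-up request: only the extensions parameter is sent; the full query
# string stays off the wire and the server executes its cached copy.
# curl -G "http://localhost:8080/graphql" \
#   --data-urlencode "extensions=$EXT"
echo "$EXT"
```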