From YouTube: Incremental Delivery Working Group - 2023-08-28
B
Yeah, so now the PRs for pending are merged, right? And we just have to do IDs now.
B
Yeah, there's no set agenda. Okay, I'll give it another minute to see if anyone else joins, but I just wanted to check in on that, pretty much. Cool.
A
Okay, I'm wondering what comes next, where we are overall. I mean, obviously it's good to put these PRs in, just because it improves on what was on main already, but from what I recall...
A
From what I recall, the response format itself was basically, I don't know, accepted, ratified, welcomed by the main working group, but the pending question was about early versus deferred execution. I feel like we say this about all sorts of things, but that was the last main blocker that I know of, right? Is that your understanding of where we are overall?
B
Well, I feel like we don't really need to make a decision on that. What I mean is, I think we should get your changes into graphql.js and publish that.
B
We should figure it out with specific spec changes, I think, because it's hard to decide in the abstract. We said we want to cover all the cases, but we also want it to be simple, so I think we just need to iterate on how we're writing this up rather than make a decision now and declare that's what we're doing.
A
Let's see who else is here? Oh, Benji's here. Hi, Benji.
D
Hey, can you hear me? Yes? Hey, good. Yeah, sorry about that. Zoom decided it needed to be uninstalled and fully reinstalled, so I'm a little bit late joining.
B
Yeah, we were just talking about Yaacov's PRs to get the new response format merged. That's everything except for the ID at this point; he's working on that, and once we get that in place, I want to ask Iman to publish a new alpha so people can start using it. And then we were talking about how that PR does early execution; that's what's going to be there for now, so we can get feedback and then start iterating on options for how to support both types of execution.
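For readers following along, here is a minimal sketch of the kind of response stream under discussion. The field names (`pending`, `incremental`, `completed`, `hasNext`) follow the public incremental-delivery RFC drafts and are assumptions here; the exact shape in the merged PRs may differ.

```typescript
// Illustrative sketch of an incremental-delivery response stream in the
// style being discussed. Field names are assumptions based on the public
// RFC drafts, not a quotation of the merged PRs.

type PendingEntry = { id: string; path: (string | number)[]; label?: string };

// Initial payload: synchronous data plus announcements of pending defers,
// each carrying an id that later payloads refer back to.
const initial = {
  data: { user: { name: "Ada" } },
  pending: [{ id: "0", path: ["user"], label: "details" }] as PendingEntry[],
  hasNext: true,
};

// Subsequent payload: deferred data arrives keyed by the announced id
// rather than repeating the full path, and the defer is marked completed.
const subsequent = {
  incremental: [{ id: "0", data: { bio: "Early execution result" } }],
  completed: [{ id: "0" }],
  hasNext: false,
};
```

The `id` in `pending` is what lets the client correlate the later `incremental` and `completed` entries back to the announced defer without re-sending its path.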
B
So I don't really have an agenda other than that, just keeping things moving on getting the new response format into graphql-js. Once that's in place, I'm going to start focusing attention on the spec edits. My takeaway from the meeting was that we want to be flexible but we also want to keep the spec simple, so I think it makes sense, rather than deciding right now what we're going to do, to get something written down and then iterate on it.
D
That makes sense. We received some feedback recently; I'm just trying to load up the discussion.
D
Yeah, last week, and it related not just to the IDs but broadly to everything: the repetition and all sorts. Have you had enough time to look into what Jafar is proposing as the alternative solution?
D
As Jafar quite rightly points out, generally in GraphQL you write your query and you get the response back in that same shape, right? But defer and stream kind of break that. Well, not the current version; the current version that's out there doesn't break that, but it instead breaks a whole bunch of other stuff in other ways, and also gives you this...
D
...this request-multiplication issue. You can already make inefficient GraphQL queries, but with just stream and defer you can make incredibly inefficient GraphQL queries. It's a case of walking the line: do we want to provide something that is simple and straightforward...
D
...something that doesn't really need much in the way of tooling to support, much like GraphQL without stream and defer? Right now you can use that from a command line, just streaming into jq along with a bash script and curl or something, whereas with stream and defer, and certainly with what I propose with the pendings and so on and so forth, you actually do need some level of caching there. So I think it's a good point, and it's also hinting at what Matt Mahoney's been saying...
D
...all this time, which is: can I just use the data? And no, I can't, because I need this extra stuff. But equally, if we just go with the simple exposure of the data, we're potentially opening ourselves up to huge multiplication of the data in memory. So there will be a lot of serialization and deserialization cost, both on server and client, I think, for data that is broadly the same. So it is definitely...
D
...one of those issues where I think we need wide buy-in to do what we're currently proposing. So I think the right way forward is to do what we're currently proposing, then see what the feedback is and see whether it's worth it, whether the cost outweighs the benefit or not.
D
I personally think the costs of doing this are massively outweighed by the benefits. I'm very concerned that if we don't do something like this, the resulting patterns, or the vulnerabilities that could occur from not doing this more efficiently, could be quite significant, especially keeping in mind that GraphQL is designed for the client. As much as anything, that means it's designed for mobile clients, and mobile clients don't have the most RAM, so giving them ridiculously huge objects to destructure in memory seems unwise.
B
So I don't think there's a way to have it all, right? I think we would have figured that out by now if there was, and we've tried. So let's move forward with getting a working version out there and see how it's received.
D
Well, looking more into it: I'm just looking over Jafar's examples now, and broadly what they seem to be doing is just changing the ID into using the path and label, right? Which...
D
...but you could make that a responsibility of the server, right? You can make it the server's responsibility to have a unique pairing of path and label, and if a second defer happens with that same match, then the server just effectively increments a counter so that it doesn't send a duplicate through to the client. Arguably... sorry.
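The server-side bookkeeping described above could be sketched roughly like this; the helper name and shape are hypothetical, purely to illustrate the counter idea:

```typescript
// Hypothetical sketch of the idea above: the server keeps (path, label)
// pairings unique by counting duplicates and only announcing the first
// defer for each pair to the client.

const seen = new Map<string, number>();

// Returns true if this defer's (path, label) pair is new and should be
// announced; false if it duplicates an earlier defer at the same spot,
// in which case the server merges it instead of sending it through.
function shouldAnnounce(path: (string | number)[], label?: string): boolean {
  const key = JSON.stringify([path, label ?? null]);
  const count = (seen.get(key) ?? 0) + 1;
  seen.set(key, count);
  return count === 1;
}
```

A second defer at the same path with the same label would then be folded into the first announcement rather than producing a duplicate entry for the client.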
B
It's not necessarily about the IDs. I think it might even be a little confusing, the way we have the ID in the subpath, because it makes it seem like that field is part of that ID and only that ID, when it's not; it's just the one that gives you the shortest subpath.
B
Or maybe it does make sense if you put an array of labels onto the incremental entry. But then you could just read the data as it comes in and not have to know what the ID is; you don't have to read pending to figure out the ID before you can process the incremental payload. That's the difference.
D
Yeah, the risk being, of course, that the paths can get quite long quite fast, so it's still giving you this large object in memory. I think for defer that's honestly not a big deal at all, and I think over the wire it's also not a big deal, because gzip compression or whatever will handle that repetition just fine. The issue is in memory, and when it comes to something like stream, we might suddenly have a hundred nodes all repeating this path.
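To make the repetition concern concrete, here is a rough comparison of path-based and id-based payloads for a streamed list; the payload shapes are illustrative, not spec wording:

```typescript
// Rough illustration of the in-memory repetition concern for @stream:
// path-based payloads repeat the full path on every item, while id-based
// payloads announce the path once and then carry only a short id.
// Shapes are illustrative, not the exact spec format.

const basePath = ["viewer", "feed", "stories"];

// Path-based: 100 streamed items, each repeating the path plus its index.
const pathBased = Array.from({ length: 100 }, (_, i) => ({
  path: [...basePath, i],
  items: [{ title: `story ${i}` }],
}));

// Id-based: the path appears once in pending; items reference the id.
const idBased = {
  pending: [{ id: "0", path: basePath }],
  payloads: Array.from({ length: 100 }, (_, i) => ({
    id: "0",
    items: [{ title: `story ${i}` }],
  })),
};

// gzip largely hides the repetition on the wire, but the repeated path
// arrays are still materialized in memory on both sides.
const pathBytes = JSON.stringify(pathBased).length;
const idBytes = JSON.stringify(idBased).length;
```

Before compression, the path-based form is strictly larger, and the gap grows with both list length and path depth.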
B
I feel like the amount of memory for the path is all right. Okay, I was talking to Yaacov about that, and it seems like we can't really free it on the server, because we need to hold on to it to report errors while things are being executed. But on the client you probably could; I'm not sure how significant it is. The pointers into the location seem like the strongest argument for the ID, though.
B
If you're a client, you see a bunch of pendings come in, and they all have different paths. You can store a map somewhere of pointers to where those locations go in the final object, or in your normalized store, I'm assuming. But then, when the incrementals come in, if they carry the long path, you have to check at each segment of the path whether you have one of those pointers, whereas if it's the ID, you can look it up immediately.
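That client-side difference can be sketched as follows, assuming hypothetical helper names: with ids, reconciliation is a single map lookup; with paths, the client walks the response segment by segment.

```typescript
// Sketch of the client-side reconciliation difference described above.
// Helper names and shapes are hypothetical.

const result: any = { user: { posts: {} } };

// On receiving `pending`, store a pointer from each id to its target
// location in the final object (or a normalized store).
const targets = new Map<string, any>();
targets.set("0", result.user.posts);

// Id-based: one Map lookup, regardless of how deep the target is.
function applyById(id: string, data: object): void {
  Object.assign(targets.get(id), data);
}

// Path-based: walk from the root, one segment at a time.
function applyByPath(path: (string | number)[], data: object): void {
  let node = result;
  for (const segment of path) node = node[segment];
  Object.assign(node, data);
}

applyById("0", { first: "via id" });
applyByPath(["user", "posts"], { second: "via path" });
// Both payloads land in the same place in `result`.
```

The pointer map is built once when the pendings arrive, which is exactly the "pointers into the location" argument made above.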
A
One thing on that: I think it clarified for me a little bit that the IDs are not going to be useful to the client on their own without the label, unless the client is using something like the typename hack to sort of use its own labels within the response. Meaning, I think Jafar was talking about a case in which there were multiple defers at the same path, and suggesting that maybe the client would want to know when they've all completed.
A
Because the client can't actually distinguish them, you know? It sees that defer one has completed, then defer two and defer three, but without a label it doesn't know what those defers actually are.
A
Again, assuming it's not using that typename hack. So Jafar seems to be assuming that it might then be optimal not to send the completed notification until the client can actually meaningfully distinguish them, meaning until all the defers at that path have resolved. And basically, my understanding of what that means is that it's equivalent to merging defers that have no label, or, you know, the same label.
A
The same-label case is similar, but specifically this is merging defers that have no label. We initially suggested that before we had this better way of reducing the response size; we had brought up that idea, and here we have it again. So I think that suggestion makes sense, and I think the answer to it is: well...
A
...if we do merge the defers there, then anybody who is using that typename hack, or typename tool, as opposed to labels, will be blocked, because they'll be blocked until all of the defers at that path have completed. So that would be the argument against it.
D
Yeah, that's an interesting point, Yaacov. I recall, I think it was me that originally proposed merging them all together, and that was for the sake of efficiency: to make it so we could execute quicker, and also so that it wasn't an attack vector. But since we've come up with, as you say, the more efficient ways to solve this, we know that there are situations where people would not want those defers to merge, and requiring that they add labels, or different labels...
D
...in order to achieve that is one option, but I don't think it should be necessary. I think if a user explicitly states them as separate defers, then they are separate defers; that would be more natural for the user in most cases. So I think what we've landed on there is probably the right thing, but yeah, I appreciate you sharing your interpretation of Jafar's point there.
A
Once we see people's feedback, we could sort of get the teaching points, you know, in order of...
A
Okay, so while we were here, I finished off that ID-and-subpath PR, and there's also the one about the WeakMaps that was extracted. But I think it doesn't pay to really look at that until we have some benchmarking to see if it actually saves anything; then we can get into the weeds on it. But the ID-and-subpath one's ready.
D
And just from a very quick glance at the first line in it: you've marked the ID as optional. Should it be optional?
A
I think I marked it as optional only because the path itself was optional, maybe; I'm not sure. Let me take another look. I think the path was optional. Let me take a look.
A
Looking it up, though... no, I think you're right. I'm just looking at it now, and I'm wondering why the path was optional in the previous version.
A
It should not be optional here, so let me hack at it, and maybe I'll post when I've figured it out.
E
Cool, all right! Well, thanks for raising that, Yaacov, and for both of your work. I guess we'll chat again on Monday.