From YouTube: Incremental Delivery Working Group - 2023-06-12
B
Yeah, so over the past couple of weeks we've been kicking around a few different ideas to address the concerns that Matt raised in that working group meeting, which I think was a little over a month ago. I can catch you up on those, and you can let us know what you think. I'm just hoping there's a way we can get unblocked and find a way to move forward.
B
Yeah, so we had, like, this original gist with the results here that you had posted, Matt.
B
So my question on that was: if we're moving the IDs to the individual payloads, do we still have a way to track when a whole fragment has been delivered, so you know that it's okay to render it? I think Yakov iterated on that a little bit.
B
I think this might be a bit outdated from what you had, but the idea was that you'd have two separate IDs: one for the individual pieces of data and another to track the whole fragments.
B
Would it be better if we scrapped the IDs for tracking deferred fragments and when they're completed, and instead basically kept everything the same, but only changed pending into some kind of tree structure, so that you could traverse both the data and pending at the same time? I think that the—
B
So this way you don't have to deal with any issues of mixing data and metadata into the same object. I will admit that this array-tree syntax is a bit clumsy; we had a few different ideas of ways to do that. We might get slightly larger payloads under incremental, because we want to track multiple IDs on each one and have the whole path.
B
Yeah, and here's an example where an incremental payload has another defer inside it, so new pendings would be introduced at the same level as the incremental object.
B
Another idea of how to represent that structure, maybe slightly more verbose but a little less clumsy, is to have this format: basically still using objects, but nesting everything one level at a time, so you can tell the difference between where the pendings are and which fields correspond to what's in the data.
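A rough sketch of the two `pending` shapes being contrasted in this turn, as I understand them. The structures and field names below are illustrative guesses for discussion, not the actual proposal text:

```typescript
// Hypothetical shapes for the `pending` metadata discussed above.
// Shape 1: an "array tree" -- compact and positional, but admittedly clumsy.
const pendingAsArrayTree = ["user", ["friends", ["0"], ["1"]]];

// Shape 2: nested objects, slightly more verbose but self-describing:
// keys mirror the response data, so a client can traverse `data` and
// `pending` in lockstep and tell which fields a pending entry refers to.
const pendingAsObjectTree: Record<string, any> = {
  user: {
    friends: {
      "0": {},
      "1": {},
    },
  },
};

// With the object form, checking whether a path is still pending is a
// straightforward walk down the tree.
function isPending(tree: Record<string, any>, path: string[]): boolean {
  let node: any = tree;
  for (const key of path) {
    if (node == null || !(key in node)) return false;
    node = node[key];
  }
  return true;
}
```

The trade-off the speakers note applies here too: the object form costs more bytes per payload but lets the client reuse its existing data-traversal logic.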
D
So yeah, the thing I'm most worried about, having actually looked at Facebook's usage of defer and really dug in a little bit, is, one: how big of a foot gun are we giving people? Defer, it turns out, catches a lot of people out.
D
It's especially worrying given that we are solidifying a spec response format, as opposed to a "given this happens, this is the behavior that is expected, this is what needs to be accessible when"—
D
The recommendation for how to use defer, and when — there is a cost, and we, or at least I, may not have internalized that cost until recently.
E
So just to clarify, Matt: you're talking about the existing, in-the-world version of defer at the moment, which doesn't have any of the deduplication, correct?
D
We have this flag that you can pass: if it's false, then we return all of it in one payload, but you still need to annotate that this defer was fulfilled. We have some responses where that "defer fulfilled for this specific label at this specific path" shows up a few hundred, maybe a thousand times. Yes, it's on an already large payload, so maybe it just doesn't matter, but we're getting hundreds of paths added outside of the tree just in order to tell the client—
D
—"yes, you did" or "in fact, we did not defer this; it all comes back in one payload." The total payload cost went up — I don't know the exact number, but probably a 10 to 15 percent increase — and that's with only two defers explicitly listed in the query.
B
We
we
say
we're
saying
that
we're
allowing
servers
to
inline
defer,
meaning
it's
treated
the
same
way
as,
if
like,
if
was
false
and
comes
back
in
the
same
payload,
and
we
do
not
need
to
send
any
metadata
to
say
that
so
the
clients
understand
that
happening.
It's
only
that
I
feel
like
the
difference
between
what
you
guys
have.
Is
that
because
you
don't
have
like
this
pending
I
I,
don't
think
you
do.
B
That means you have to do the opposite and say "this wasn't deferred," whereas pending is only sent when something is deferred, so the lack of it being sent would be enough for clients to know that the defer was ignored or inlined. That's kind of how we got to adding this pending thing. So I think that means the cost of deferring something is a bit higher, but the cost of the server ignoring it is lower.
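The rule B describes — absence from `pending` signals an inlined or ignored defer, with no extra "was not deferred" annotation — could look roughly like this on the client. The entry shape and names here are hypothetical, not taken from the spec text:

```typescript
// Hypothetical shape of the initial payload's defer bookkeeping.
interface PendingEntry {
  id: string;
  path: (string | number)[];
}

interface InitialPayload {
  data: unknown;
  pending?: PendingEntry[]; // only defers the server actually honored
  hasNext: boolean;
}

// A defer declared in the query is "live" only if the server listed it
// in `pending`; otherwise the server inlined it and its fields are
// already present in `data`, so no negative metadata is ever sent.
function liveDefers(declared: string[], payload: InitialPayload): string[] {
  const pendingIds = new Set((payload.pending ?? []).map((p) => p.id));
  return declared.filter((id) => pendingIds.has(id));
}
```

This mirrors the trade-off stated in the turn above: deferring costs one `pending` entry, while ignoring a defer costs nothing on the wire.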
D
Yeah — yes, that's true, and I think that's probably the right trade-off. So, talking through it, that is actually a good choice for where we are, versus where the initial Facebook defer was.
C
But by "the cost is a bit higher" you mean that we have this extra pending metadata, right? Right, yeah. But — correct me if I'm wrong, Matt — the exploding cost is because of the data that is deduplicated now in Facebook, or...?
D
It's
not
it's
not
the
duplication,
because
the
things
that
are
being
deferred
are
are,
in
fact
on
separate
paths.
Okay,
so
that's
not
it's!
It's
that
yeah
I,
don't
have
a
good
example,
but
it's
it's
really
the
defer
inside
list,
possibly
inside
list
it's
it's
the
same.
It's
the
same
like
defer,
explosion
that
I
think
yeah
of
was
really
like
working
through
a
few
times
and
yeah.
B
Yeah, without the deduplication a big risk is not necessarily defers inside of a list, but defers with overlapping fields, and I think that is solved in a good way for this proposal at least — that's not going to happen, because we're only sending fields once. For lists inside of lists, I don't have good ideas of how to get around that, other than some kind of server validation where, at a certain point—
B
I think that even in the simplest case, a server should have some kind of limit: "I already deferred 20 things; I'm not going to defer anything else, I don't care." I probably didn't defer the things that would have been most optimal to defer, but—
C
Yeah, but that's really a server implementation detail, and we allow that in the spec, right? Yeah — and maybe there should be a note that servers should ensure such behavior and think about that.
D
It
might
be
the
case
that
the
compiler,
something
should
in
fact
hoist
a
defer
in
like
if
you
have
a
defer
inside
of
a
list
instead
have.
A
D
But yeah, basically there might be a way to hoist up the defer inside of the list: make a sibling path in your request with the defer at the top for the items that you are actually deferring, so that you get a single deferred payload for all the deferred list items. But that's getting more into how we recommend you use defer, as opposed to the spec implementation, yeah.
D
A validation — like an optional validation rule of: hoist your defers inside of lists to sibling paths, with a defer at the top of the path.
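The hoisting the group describes might look like the following query rewrite. The schema, field names, and exact directive placement are invented for illustration; the real rule would depend on the schema and spec text:

```typescript
// Illustrative only: the "hoist defers out of lists" idea as a rewrite.

// Before: a defer on every list item -- one deferred payload per item,
// which is the "defer explosion" discussed above.
const perItemDefer = /* GraphQL */ `
  query {
    friends(first: 100) {
      name
      ... @defer { expensiveBio }
    }
  }
`;

// After: the deferred fields hoisted to a sibling selection of the list,
// with the defer at the top -- a single deferred payload covers all items.
const hoistedDefer = /* GraphQL */ `
  query {
    friends(first: 100) { name }
    ... @defer {
      friends(first: 100) { expensiveBio }
    }
  }
`;

// A lint-style check could simply count @defer occurrences in a document;
// a real validation rule would instead walk the AST and look for @defer
// beneath a list-typed field.
const countDefers = (doc: string): number => (doc.match(/@defer/g) ?? []).length;
```

This matches D's framing: it is usage guidance (or an optional validation rule), not something the response format itself has to encode.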
E
Most of those things are also temporary, right? We shouldn't need them at the end; they're just there in the interim, yeah.
E
Yeah — so, what Matt was saying before, with the Facebook version, where it turns off the defer but still gives you the metadata: I imagine there was still that metadata at the end, in the resolved object, telling you "these things are now done" or whatever.
D
No, we do throw it away, because we end up internally having a client-side field that has, like, "is defer fulfilled."
D
That's how our client works, so the metadata basically causes that "is defer fulfilled" field to be set to true, and then it's thrown away — it's only used as soon as we get it. Yeah, it's really the network cost.
D
The parsing cost, even the server calculation and creation cost — I don't think those are really that high; it's really the network. But this would be a great example: if we have use cases where we can show, "hey, we did defer in an expensive way, and gzip, just with the way we have it formatted, eats it up, and it's only a 0.5 percent increase in actual network bytes" or whatever — that would be pretty compelling, certainly.
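D's suggested measurement is easy to prototype: build a response with and without the extra per-defer annotations, gzip both, and compare byte counts. The payloads below are fabricated stand-ins, not real Facebook responses, and the annotation shape is invented:

```typescript
import { gzipSync } from "node:zlib";

// A fake response body with 500 list items.
const item = { id: 1, name: "x".repeat(20) };
const bare = JSON.stringify({ data: { items: Array(500).fill(item) } });

// The same body plus hundreds of made-up "defer fulfilled" annotations,
// one per item, mimicking the per-path metadata described above.
const annotated = JSON.stringify({
  data: { items: Array(500).fill(item) },
  fulfilled: Array.from({ length: 500 }, (_, i) => ({
    label: "itemDefer",
    path: ["items", i],
  })),
});

// Compare raw vs gzipped overhead of the annotations.
const rawOverhead = annotated.length / bare.length - 1;
const gzipOverhead = gzipSync(annotated).length / gzipSync(bare).length - 1;
```

Whether gzip really "eats up" the metadata is exactly what this kind of experiment would settle; highly repetitive annotations often compress well, but that should be measured, not assumed.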
D
Right, and basically proving that would be a good thing for being able to respond to naysayers like me.
E
So, Matt, you wrote up that original rough idea, I guess just over a month ago — have you done a tidied-up version of that at all? — No, I have not, I'm sorry. — No, that's fine; it's just that I've been away a little bit.
B
Client-side compilers can do things, maybe, to make things more optimal, and I think server implementations can be smarter about the way that they batch groups into incremental execution.
B
It's not required that every defer has its own payload that the client parses. We built this incremental as an array, and all this stuff — completed — as an array, so we could put a lot of stuff in here, and we're definitely telling clients: read all the stuff out of incremental before you re-render.
B
That's a requirement. So, do you think there's other stuff we need to explore to address the concerns of payload explosion?
B
Yeah, and I think some part of it is that Benji has been pushing these issues a lot, so I think you're catching up a little bit. It wasn't that long ago that our version was very close to what the Facebook one is, but it's changed quite a bit over the past several months. Okay, so the next thing — Yakov, go ahead.
B
Yeah, I was just going to say: if we can dig in some more to the actual specific shape and how we can make it easier for clients to parse — was that where you were going with it too, or...?
F
Well, so there are kind of two — like I was saying at the end of the last meeting, there are kind of two issues that I think Matt raised, and obviously, since I'm paraphrasing you, feel free to edit: as I understood it, there was one about immutability and one about presenting the incremental data — you know, pending or completed — in ways that are easier for the client to parse, for example by using a tree-like structure.
F
So, putting aside how to present it for just a second: in terms of immutability, I'm starting to agree with the perspective that we need to make it easier for clients to generate immutable payloads. And my issue is mostly actually not around defer, but around stream. Meaning, when we have defer—
F
Even though we're changing an object by adding fields, we never actually change an existing entry. But with stream, on the other hand, we do change what's in that final reconcilable object, because we change the length of a list, and that could — I mean, the clients—
F
Clients could therefore get lists with, you know, unstable lengths — and I guess they get objects with an unstable number of properties too, but that doesn't seem like as much of a concern. But certainly I could see that if you render something that has a certain number of items and then all of a sudden it has more, that could behave unpredictably, because you're reading from that reconcilable object. I can see that would lead to problems, and I'm not actually sure that the changes in the response format that we're envisioning—
F
You
know
in
in
like
some
of
the
comments
on
that
the
the
the
gist
I'm,
not
sure
that
that
actually
like
I,
feel
like
that's
a
real
concern
but
I'm,
not
necessarily
quite
solidified.
How
a
client
might
manage
it.
You
know
I'm,
not
you
know
they
wouldn't
necessarily
you're
there.
It's
more
about
stream
than
about
defer,
so
it's
not
really
about
necessarily
including
in
a
completed
result.
F
You
know
the
full
list
of
payloads,
as
opposed
to
referring
to
the
initial
reconcilable
object,
although
I
think
so
I
think
I
have
to
work
through
that
a
little
bit
more
in
my
head,
but
I
am
definitely
seeing
that
there
could
be
immutability
concerns.
So
I'm
just
wondering
what
you
guys
smarter
folks
have
to.
You
know
think
about
that.
C
But
it
can
only
get
longer,
right
cannot
get
shorter
and
at
the
moment
it's
like
we're
not
inserting
at
the
top
I
mean
we
talked
about
that
there
could
be
other
ways
to
patch
in
the
future,
but
at
the
moment
it's
basically
we
append.
B
I still don't think that clients are going to want to remove things that were already rendered, but I guess that's a different piece of metadata that you probably want to handle on your client: you need to know that there are some things here, but this is not necessarily the whole list, because there was an error — and that's different from an individual item erroring out.
D
I
mean
from
building
a
client
perspective
stream,
like
we
already
sort
of
do
this
internally,
but
the
way
I
would
do
stream
is
basically
it's.
D
That element internally becomes a list of lists, right? So yes, I'm appending to the larger list — or it's an object of lists or whatever, indexed and ordered, so I know which came first — but basically so that I don't need to mutate the objects as we get them off of the response. And we're able to render differently: if there's an error at the end, we can show something different than if it's pending, versus if the whole thing is completed, but—
D
Oh, I wasn't — I may have just forgotten about those. But basically, for our newest formats: do we have those built into the end-to-end demos?
C
So if we're on our way to updating the implementation that we have — if we're in a more stable phase — then I will go ahead and implement that. I mean, it will take a week or so, but yeah.
C
But at the moment it was fluctuating so much that I stopped updating, yeah.
C
Yeah, but at some phases we had like five proposals or so, and they were also very different in parts, so it became too much. If we think there's a good one that we want to try out, I will go ahead and implement it over the weekend.
C
We
have
a
relay
implementation
for
it,
so
basically
it's
just
the
network
layer
and
you
in
in
the
network
function
you
just
implement
the
patching.
We
have
it
implemented.
For
the
last
thing
we
did
for
several
versions,
even
in
the
same
network
layer,
so
yeah
I
I
can
also
provide
the
code
and
stuff
yeah.
C
So
what
we
did
also
with
the
last
one
we
transformed
into
the
patches
that
we
can
provide
to
relay
you
could
do
the
same
thing
with
that.
It's
a
bit
more
aggregation
stuff
that
you
have
to
do,
but
it
also
when
we
went
out
from
the
first
version
to
the
to
the
later
versions
that
we
had.
It
was
already
where
we
had
to
aggregate
stuff
with
the
incrementals
right.
C
We have the GraphiQL IDE that—
C
—we have, and then you can also see all the patches. We actually have a patch analyzer, so if you go on the request you can expand it, and then you have a tree with all the patches that were applied, and things like that. So I can start implementing that if you think it's something we can rally behind at the moment. I don't mind if I have to redo it, but when we still have five different versions, then it's a lot to update.
D
So, another thing, talking through this. One: I think Michael's right — we should not ask him to implement five different versions just to see what feels a little bit better. But another thing here is: if we establish pretty strongly that you do not want defer explosion, so there should not be a thousand defers — if, on average, there are going to be one to ten defers—
D
It
doesn't
actually
matter
if
the
pending
is
in
a
list
versus
a
tree,
and
this
is
something
that
I'm
like
just
working
through
right
now
mentally
so
because,
like
yes,
you
have
to
iterate
over
every
element
in
that
pending
and
completed
list.
C
The GraphiQL IDE stuff — but updating this is always a bit of work.
C
...things, and then you had the 3,000, and then it's like: okay, now I actually made it worse. Yeah, yeah — I'm also in that camp. But that's a good point: back then we said the server shouldn't change that; now we're more at "a server can change that, and even should, if it's going to result in too many patches," and with that we can make it much more efficient. Thanks.
B
Okay, so you think that we're okay with moving forward with what's in the gist, assuming we make it very clear — Benji wrote this comment in the chat — that defer is a trade-off: it enables you to deliver critical data to users at lower latency, at the cost of slightly higher network traffic and slightly delayed request completion.
B
We
want
to
be
clear:
the
first,
not
a
Magic
Bullet,
just
like
heading
to
Furs,
isn't
like
a
shortcut
to
just
making
everything
faster
like
there.
There's
there's
costs
there,
the
servers
where
I
we,
we
I,
think
we
had
said
before.
We
want
to
recommend
to
servers
like
to
it's.
It's
fine
to
ignore
the
deferrers
for
performance
reasons,
probably
even
by
default.
B
Okay. So all of these other comments that we had with the different ideas, and the ones that I showed in here trying to get pending into a list — I just feel like they—
D
Yeah, I think it was the right thing to walk through what it could look like in all the iterations there, because it is a question, and now we have all these examples of "well, we could do it; here's kind of what it looks like" — and it kind of doesn't feel clean compared to the list. And if we're only getting ten items back—
D
—or longer. So my issue with immutability is: we have all the data where we could do things immutably, right? We could just store the payload and walk through it for the client. Right now it's just that the access and lookup mechanisms are big-O on the length of the pending and completed arrays, right? So anytime you hit a potentially deferred field—
D
You
have
to
like
look
through
those
payloads
and
make
sure
that
that
one
is
not
present
or
is
present,
therefore,
where
to
look
for
it
in
the
next
incrementals
and
you
have
to
look
for
each
incremental.
You
have
to
go
through
and
look
through
the
whole
thing
which
matters
if
we're
going
to
have
a
thousand
items
in
the
incremental,
but
because
then
you're
going
to
a
thousand
times.
D
That feels like a lot less of a problem and, in fact, might be simpler than traversing a tree.
D
How complex, and how much processing power, is it going to be necessary to keep the data structures immutable — or somewhat similar to how they came off of the response format — and how costly is that for the client in its render phase, if it's reading through the purely immutable response format? My understanding of the resolution there is: if there's one element in the incremental payload and there's one item that you've deferred, there's essentially no cost — the cost is super low.
B
I want to call out what you just wrote, Yakov: we have to make clients aware that, even on the same field, the first item might be inlined — and Benji replied. But yeah, I think that's fine; in this spec we're allowing defers to be inlined. Even if it's in a list of 20 items, with a defer in each one—
E
I should point out that one of the trade-offs of doing this — of allowing the server to choose to inline some things but not others — is that it does compromise the predictability of the shape of those response payloads.
E
It's easy, in my opinion, to just say "ignore them all," because then you just know it's going to be that final result, as if there were no defers and no streams — that's easy. Deferring and streaming everything that is specified is also straightforward and gives you these predictable payloads. But if we do allow the server to choose to inline things — and especially, as Yakov says, if we allow it to inline, say, the first 20 times an item is deferred but not the final one—
E
If we don't care about the payload shapes being predictable, then that's not a big deal, but it is something that we did talk about, and I'm not sure how much of a concern having those predictable payload shapes is — I just wanted to raise it as something to think about.
B
Yeah, that's why we have this "possible: incremental payload shape should be predictable" item, and why I put it as a question mark. But yeah, I think it should just be a "no," because we're also allowing server inlining.
E
I
think
before
we
concretely
decide
on
allowing
server
and
learning,
we
should
actually
weigh
up
what
the
actual
like
implementation,
what
the
costs
are,
because
it
would
be
a
lot
simpler
to
specify
that
either
everything
is
in
line
or
nothing
is
rather
than
having
to
specify
what
happens.
If
you
choose
to
inline
or
not
in
line
at
each
point.
E
There's
clients
like
Apollo
client,
at
least
what
a
client
polar
client
used
to
be
I'm,
not
sure
if
there's
specific
details
anymore,
but
it
used
to
not
care
what
the
request
was
at
all.
It
only
cared
about
the
data
that
it
was
reading
off
the
network.
So
so
long
as
the
data
is
sufficiently
descriptive,
it
wasn't
going
to
be
an
issue,
but
for
a
client
that
does
understand
its
requests
and
then
tries
to
synchronize
its
request
at
the
response
from
the
server.
It
may
be.
B
Why I think the inlining is important is: I feel like a server maybe just wants a simple cutoff that says "I'm not going to defer more than 20 things in the request," and you're not going to know in advance — there's a defer on a list, and you don't know how many list items there are until you've already started executing them, and maybe you've sent some defers already. So you can't decide, when you're in the validation stage, to reject the whole thing or not.
C
Also — and I was holding from the beginning that the server has a lot of control over this — we can essentially inline certain defers, because we could also decide: you're deferring stuff that we have available in the first response anyway.
D
There's
an
alternative
mindset,
which
is
that
either
it
should
always
like
an
exploding
defer,
should
actually
or
like
an
inefficient
defer,
should
actually
be
an
error
right
where,
like
the
server,
oh,
the
server
starts
deferring
and
then
hits
like
the
500th
item
and
Justice
like
I.
Don't
want
to
do
this
anymore
right,
like
this
is
potentially
dangerous
for
me.
I'm
deferring
all
these
things.
D
Yeah, I mean, we do have lists where, if the list grows too large, we'll basically either just not give you more than—
D
—all the items in the list, and just lie to you, or blow up and say: yeah, this component that was working two years ago — something on the server has changed, either more data has come in or whatever, and now it's a problem that you need to fix server-side. Basically, because you've got this deployed, two-year-old thing, you have to figure out which of those 200 items — which 50 of them — you actually care about.
E
So
I
wanted
to
just
comment
on
the
previous
discussion
for
a
moment
again
so
I'm
actually
I'm
in
favor
of
inlining
and
always
have
been
that's
where
my
specs
have
always
enabled
that,
but
I
do
want
to
make
sure
that
we
are
making
that
decision
based
on
actual
data
from
an
implementation
and
not
just
based
on
our
gut
feel,
because
my
gut
feel
says
we
should
forget
about
making
the
payloads
predictable
and
we
should
instead
trade
towards
the
idea
of
inlining,
because
it's
more
efficient,
but
what
I
would
like
to
do
is
actually
validate
that.
E
That
gut
feel
and
make
sure
that
actually
does
it
make
that
much
effort
for
the
server
to
actually
not
inline
these
things.
Is
it
going
to
be
that
much
of
a
cost
in
network,
especially
once
gzipped,
because
I
suspect
that
the
network
cost
is
going
to
be
minimal?
E
I
suspect
that
the
server
won't
actually
have
to
do
a
huge
amount,
more
work
to
do
these
things
as
deferred
I
mean
all
it's
doing
is
just
putting
them
in
a
later
Json
payload,
like
rather
than
or
even
in
the
same
one,
in
the
pending
block,
rather
than
straight
away
in
in
line
so
I.
Don't
think
that
that
cost
is
particularly
huge
either
I,
don't
think
the
cost
of
memory
usage
is
going
to
go
up
by
that
much
so
I'd
I'd
really
like
to
get
some
actual
measurements
and
say
like
oh,
no
actually,
yeah.
C
Yeah — the one thing we tried out: I'm not worried about the response size, actually, but about the — I don't know, we have a task scheduler, so every time we spawn these tasks they become an overhead in the system and can at some point lead to very bad performance, and that is where we actually look at inlining: to lower the branching that we do in the processing.
C
Yeah — no, that's correct, but if we have branching in the algorithm, then we basically do that there; or we inline it, and then it's just in the resolver and it's done and written to the response shape, right?
E
Okay,
yeah,
that's
cool
in
in
my
system.
The
the
writing
of
the
payload
like
to
the
response
is
completely
separate
from
the
calculation
of
the
data.
So
I
can
I
can
do
the
calculation
of
it
in
line
effectively,
but
then
still
write
it
out
to
a
different
location
in
the
response.
Without
Really,
any
additional
cost.
C
That's — these are two different rendered objects they live in — but yes, this implementation did that.
B
I'll also say that allowing inlining is a much less risky move for the spec: if we decide that it was a bad idea and that we really do want the consistent payloads, it's not a breaking change to remove it from the spec, and then servers that follow the spec from whatever day going forward just don't do inlining anymore. But if we start from day one saying that inlining is not allowed, we can't ever start to allow it, because it would break clients that expect things to never be inlined.
B
Yakov, your comment — what do you mean? Can you explain that?
F
Yeah, I mean, we were just talking about — well, I actually forget whether Benji was talking about this specifically or more generally about the issue of inlining — but basically the question is: if we do allow inlining, would we allow it on a particular field for something that was nested under a list? Would we allow inlining on only some of those objects?
F
So
you
know
if
it's
streamed,
then
we
have
and
we've
already
deferred
some
of
the
fields
that
we've
already
sent.
F
We
can't
you
know
we
we're
basically
forced
into
into
changing
the
response
shape
if
we
want
to
also
have
a
hard
limit
on
the
number
of
defers,
because
we
don't
we've,
we
have
to
commit.
You
know
with
the
first
couple
items.
Potentially
you
know
whether
we're
going
to
defer
or
not-
and
we
don't
know
yet
whether
we'll
reach
that
theoretical
heart
limit
I
think
Ben
yeah,
so
so,
meaning
it
and
then
Benji
was
talking
about
the
predictable
response
shape
and
how
that
might
be
desired.
F
People
not
with
respect
to
you
know,
let's
say
let's
say
we
are
going
to
allow
inlining,
so
there's
a
certain
branching
there,
but
it's
on
a
per
field
level,
but
here
we're
seeing
an
example
where
it's
even
on
a
per
object
level
and
we're
sort
of
forced
into
that
by
you
know
it's
like
a
tricky
operation
like
this.
E
If you allow any inlining, then the predictable response shape — the predictable payloads — goes out the window, whether you think of it at the field level or the object level. I was only ever thinking of it at the object level, but yeah, I think it just goes out the window by virtue of choosing to be able to inline anything.
B
Okay, we're running out of time, so let's wrap up with next steps. What do we need to do, if we're all okay with this payload shape? I think Matt was the only one who had concerns the last time we presented it, so I guess we can start moving forward with this. Yakov, you have your implementation PRs — I'll review the publisher one. Since we're okay with this payload, we can start getting those merged.
B
Now,
we'll
start
working
on
the
spec
and
I
think
probably
the
next
meeting.
We
should
dedicate
to
what
Benji
brought
up
about
early
execution
of
deferred
fields
and
talk
through
that
next
week,.
E
Yeah, it's me — and uploading all the YouTube videos is on me as well, so what I'll do is just upload this one preferentially onto YouTube. Oh—