From YouTube: Incremental Delivery Working Group - 2023-01-16
B
It was good, it was good, thanks. Yeah, it got cold here, which is great. That's what it's supposed to do this time of year. It's very, very strange when it wasn't cold here, yep.
B
Is it... it's all working? Okay, good. So I think Benji might not be able to make the call today, but he also said that he did get this calendar invite set up. Well, he got the details set up, which I was able to paste into the same calendar invite we've been using, but he's in the process of getting this meeting converted over to be in the Foundation calendar. It's just a bit of an ordeal to get that to happen.

B
So at some point we'll probably cancel this calendar invite and get it switched over. One other thing to call out: I know from the Apollo side, as you can probably guess by the fact that Benoit and I are not in the U.S. and we're on the call, I think a lot of folks have Martin Luther King Day off today, so I think it's going to be pretty quiet from the Apollo side.

B
So this might be everybody today. Where do we want to dive in? I know, Yaacov, there's been a lot of back and forth in the main discussion thread on some work, and verifying things that you've been trying out.
E
Yeah, I think so. Not sure if you can speak to it, Yaacov, or if you want me to jump in; up to you.

D
I'm a little listen-only today, just because I have a few things running around; it's about 5 p.m. here today. I've been chatting with Rob, and I think it'd be great if you could take point, Rob. Awesome.
E
All right, cool. So I'm going to share my screen. Back into this: a large issue that we've been talking about for some time now. Hi everyone.

B
Hello. Hey Michael. All right, we're just getting started, Michael; we're diving in.
E
Yeah, so we've been talking about this for a while: merging defers on the same object level, and whether that could be a replacement for what we previously had as the label. I put a bunch of examples in here, but there's one that we were talking about that I want to go over in some detail.

E
It's this example, where you have an object that has a defer with a field underneath it, then on the same object another defer, with a different deferred field inside of it. I was saying how this could lead to getting payloads with the same path, and I think that was just a bias on my part, because of how I had implemented it before we started merging these changes. Yaacov did a lot of great work on a new algorithm that would actually merge these together and keep the guarantee that you always have one payload per path. So this was what my implementation was yielding, where we would have foo and bar in separate payloads, but with Yaacov's algorithm...
E
...we now have just the one payload, with this nested object path, and you get all the fields there. The difference is in what I was doing when I encountered them.

E
I think that we were talking a lot about deduplication, and I kind of feel like, with this issue solved, the deduplication should be a separate conversation. So yeah, I want to get your thoughts on that. Yaacov did go and implement deduplication at the leaf field level.
E
I
think
that
there's
some
examples
here,
yeah!
So
that's!
That's
this
example.
Where
is
it
that
is
it
that
one?
No
it's
this
one,
sorry
yeah!
So,
even
though
the
fields
were
deduplicated
out
of
here,
the
object
is
still
sent.
E
And
yeah,
so
I
guess
kind
of
my
thoughts
are
I'm
wondering
if,
like
we
should
use
this
algorithm
in
the
spec
and
in
the
reference
implementation
and
leave.
The
deduplication
at
all
is
just
an
optional
add-on,
but
but
we
would
be
guaranteeing
that
paths
are
are
unique.
Does
that
make
sense.
E
On top of that, Yaacov does also have PRs on top of that which do some amount of deduplication. Where it gets tricky is this case, where you can't statically know how to do the deduplication: either this payload could come first or this one could come first, and you would have to...

E
Yeah, I feel like I'm not dead set on any direction here. My initial thoughts are: in the spec we could write this algorithm that doesn't do deep deduplication, but have a note that if servers want to do that, they can. So clients should expect either — it could be deduplicated or not — but if it is, any prior payloads should have all the fields that they're expecting.
E
If
they're
recreating
the
response
from
the
incremental
results,
then
by
the
time
you
get
one
pilot
in,
you
should
have
all
the
fields
whether
they
were
in
a
previous
one
or
in
the
car
and
payload.
Does
that
make
sense.
G
I have a question: I don't actually understand what's problematic in this query. What I expected it to do is: you get A, B, C, D in the initial response, and then you get E and F and G and H deferred. So what would the issue be, in particular?

E
So in the initial response you're going to get A, B, C and D. Now, what if G and H are very slow, but E and F are fast? Now you could get this payload first, and then this payload shouldn't need to include A, B, C, D, E, F, because it's already sent in the prior payload.
E
The way that I would write this query: I would likely have a UI that is rendering, that has one component with this data and another component with that data in it, and I would want to make sure that that component isn't rendered until all of the data that's in here is returned.
G
So,
like
ABCD
is
part
of
initial
response
right,
so
there
is
no
objection
to
the
left
and
it's
duplicated
in
from
second
to
four
right
with
zakos
algorithm.
So
a
ID
is
not
there
sorry.
So
it's
just
the
question
about
wrapping
types.
So.
G
Any field in the response is tied only to a path; a path uniquely identifies any field in the response. You cannot have two fields with the same path, because they're in the same JSON. We had this problem with payloads, because we had a path-plus-label combination, and that's why we could have different parts...
G
Different
company
here
we
have
like
the
same
the
same
problem.
We
have
a.
We
have
effectively
like
buff,
which
is
a
b,
see
no
a
b
e
and
f
path
inside
the
response
plus
path
where
the
four
originated.
So
we
have
a
combination
of
like,
in
both
case,
profit
said:
responseological,
profits,
the
same
a
b
e
f,
but
we
have
different
path
of
D4,
which
is
in
first
case
it's
a
B.
G
Here it's the same issue, with the semantics of the response. Previously we had only one JSON object. Now it's not one object with a series of patches; instead it's an object with a series of sub-objects tied to a particular defer, in which data can be duplicated — which is a way more complex mental model.
G
You know, I don't want to block progress, but at the same time — this is why I'm passionate about it — I think in the ideal scenario we have deterministic behavior of a server; the more deterministic the better. With the deduplication we removed the problem of duplication, and now we're facing this issue, which prevents us from... I'm okay if some stuff competes on the same level, but I'm not okay that one server can deduplicate something and another...
F
But just to hook into that: with defer and stream we will have differences between servers anyway, right? One server might optimize a defer differently, so you get a completely different defer structure because they did a naive implementation; another server will implement it more efficiently and you get fewer defers. So already with defer and stream you do not have the guarantee that you get the same deferred structure, or the same response structure, as with a different server.
G
You're
right,
it's
an
it's
in
a
sense
of
like
I,
need
to
formulate
preparation
in
a
sense
of
like
initial
response.
Plus
Deltas,
but
some
doubters
can
be
like
much
in
initially.
So
it's
optimization,
it's
a
real
reason.
It's
like
you
need
it
because
of
because
the
server
need
to
be
able
to
do
that
optimization.
But
with
question
like
with
example,
F,
it's
a
question
of
not
it's
not
requirement
of
of
a
server.
It's
not
like
a
requirement
of
like
users
of
graphql.
It's
a
requirement
of
how
we
Define
spec
and
yeah.
G
So
I'm
like
I,
want
to
explore
option
of,
and
I
taken
on
me
explore
option.
Can
we
make
it
like
Deltas
approach
more
instead
of
like
especially
since
now
we
remove
variables
I,
think
it's
quite
easier
to
implement
like
puff
algorithm
power
forward
and
then
to
create?
F
Yeah, yeah. It's also a feature, right? Okay, it depends on how you look at it, because there could be a million things going on in the server — you have concurrency issues where something gets slower or essentially behaves differently than on another server — but the...

F
But this way, I mean, if you formulate your query like that, then you get your E... no, then you get what we have there — the F, for instance — whenever one of the two payloads is ready. Yeah.
F
No, it's more... I'm more pragmatic on this one, I would say. But I agree — I agree with you on this. So in general, yes.
G
So
I
want
to
explore
with
Deltas
if
we
can
do
Deltas
and
our
server
to
just
send
like
say
like
this
path
with
data,
we
want
to
whisper
and
you
should
apply
it
previously.
We
discussed
like.
G
Json
match
I
think
it's
called
algorithm
and
he
said
like
it's
hard,
because
patching
algorithm
need
to
take
into
account
arrays
and
it
implement
it,
but
I
believe
it's
like
I
will
try
to
implement
that
I
believe
it's
like
100
points
of
code,
50
lines
of
code,
so
it's
simpler
than
to
create,
like
with
Traders
and
to
create
complete,
more
complicated
execution
semantic.
E
Yeah,
just
what
I
think
that
is
important
is
that
a
client
is
able
to
know
when
all
the
fields
that
are
inside
of
their
defer
are
are
returned
because,
like
like,
let's
say
you
did
write.
E
You
wrote,
you
wrote
some
kind
of
thing
that
deduplicated
enf
out
of
this
query
so
out
of
this
fragment,
so
this
fragment
only
returns,
G
and
H.
But
if
this
enf
is
coming
afterwards,
then
it
would
be
bad
for
this.
The
whatever
UI
component
is
tied
to
this
fragment,
to
render,
without
F
actually
being
available.
D
All right, just a quick one: how does changing the response to more of a patch get around this problem? Meaning, we would still have to sort of define what should be included within each patch.
G
Yeah,
so
how
I
looked
into
this
query
is
D
is
shipped
initially
and
e,
f
and
H
trip
into
in
subsequent
prewards,
like
they
can
be
shipped
in
one
incremental
by
what
one
additional
package,
or
they
can
be
shipped
in
two
depending
like.
If
server
things
you
can
ship
like
F
clusters
and
HOH
faster
than
F.
So
if
we
we
switch
systematics
to
to
like
initial
response,
because
Delta
there
is
no
issue
here,
send
the
an
initial
response,
but.
F
No, no. At the moment, the way we implement that: once we've shipped a tree... we rent memory in .NET, and in this piece of memory I'm creating a tree, and as soon as I've delivered this tree to my response stream, it's gone. I don't need to know what data I put there, because it's branched off. I just have to guarantee...

F
Yeah — how do you know what you've sent? So maybe I misunderstood you there, but if you want to generate the patch structure, or the data structure, you need to know against what you're generating the data.
G
And
no
it's
it's
the
same.
Like
idea
when
what
Jacob
implemented
I
wrote
like
with
things
that,
on
each
level,
it's
distinguished
what
staff
are
like
sent
and
what
stuff
is
different
and
when
you
Fork,
when
you
defer
stuff
resolver
like
code
around
resolver,
no,
it's
like
response
path.
G
Yeah
during
execution,
every
resolver
have
access
to
response
path
like
in
graphql
JS,
but
you
keep
like
your
response
path
right
when
you
go
into
execution,
so
you
just
keep
like.
There
is
no
like
G
think
or
there
is
no
like
anything
you
just
so
idea,
is
to
execute
F
once
and
only
once
and
have
a
tie
to
particular
response
path
and
ship
like
Delta.
Based
on
that
response,
power.
G
Yeah,
like
algorithm,
like
yeah
current
code
field,
algorithm
guarantee
you
that
you
get
one
you
executed
resolver
once
because
everything
is
corrected.
All
the
nodes
collected
into
array
and
you
execute
stuff
once
stuff
that
Jacob
implemented
I
didn't
check
implementation,
but.
G
No,
no
dynamically
I
like
at
least
Yakov
in
his
comment.
I
did
not
check
his
implementation
abroad
like
last
time.
Remember,
explain
you
how
you
can
do
duplicate
things
and
Jacob
wrote
that
he
Implement
see
more
thing
or
something
that
behave
like
like
a
Roser
I
didn't
check
implementation,
but
what
what
algorithm
basically
explain?
You
explain
how
you
can
in
one
pass
during
execution.
You
can
yeah
much
stuff.
Do
you
duplicate
stuff.
D
No
so
I
just
wanna,
just
in
terms
of
the
algorithm
that
that
I
used
to
make
sure
nothing
shows
up
in
a
deferred.
Payload
that
is
was
already
that
was
sent
to
the
initial
payload
is
easy,
because
there's
a
guarantee
that
the
initial
payload
will
always
be
sent.
So
so
you
can.
D
You
just
have
to
check
whether
that
leaf
was
was
present
in
the
initial
payload,
but
to
make
sure
that
to
make
sure
that
a
deferred
payload,
a
deferred
Leaf,
doesn't
show
up
in
a
late
deferred
payload
if
it
showed
up
in
an
earlier
deferred,
payload
and
I
need
to
I
need
to
do
a
cash
I
need
I
need
to
create
a
so
so
for
for
every
leaf.
F
So
what
we
do
in
in.net
is
we
compile
the
tree
everything
that
you
do
dynamically?
We
have
compiled
once
only
on
the
first
execution,
and
then
we
run
on
that
with
every
execution
the
same.
So
if,
if
you,
if
we
now
start
with
figuring
out
on
runtime
these
things,
then
you
make
all
these
servers
slow.
That
do
that.
Do
that
at
least
my
my
feeling
here.
So
so,
that's
why
I'm
I'm
a
bit
so
stay.
It
must
work
statically
or
we
will.
We
have
the.
H
So
I
I
agree,
I,
think
hi,
everyone,
sorry
I'm
a
bit
late,
I'm,
not
sure
if
you
saw
my
messages
in
the
chat,
yeah
I've
basically
said
similar
to
Michael
I
think
that
we
should
worry
about
merging
things
between
like
parent
and
child,
but
not
between
siblings
or
cousins.
H
That
kind
of
thing
and
another
reason
for
this
on
top
of
the
reasons
that
Michael
just
laid
out,
which
also
affect
graphast,
is
that
if
you
want
to
stream
large
amounts
of
data
like
gigabytes
of
data,
for
example,
any
kind
of
cache
like
that
is
going
to
grow
and
grow
and
grow
and
I.
G
Okay, so, to get feedback on my proposal: the biggest worry is the cache, and maybe I don't get something, but I believe I can try to do it without that. It can work without a cache and stuff, maybe.
D
So I implemented it with a WeakMap, and that will automatically free memory as you go, and that may help with what Benji's talking about — but, you know... But again, I think all this is optional and can be added. All the deduplication, we could say, is optional, and if somebody gets a great implementation later, you know, we can go for it. Yeah — just, Ivan...
D
What
I
would
recommend
the
case
that
I
thought
about
is
to
make
sure
is
what,
if
you
have
a
list,
that's
using
the
same
grouped
field
set
or
same
field
Group.
You
have
to
manage
that
also,
so
it's
not
it's
not
something
you
can
save
on
each
on
each
field.
Group
set.
You
know
you
need
it
for
each
path.
F
Yeah
I
also
I
also
I,
also
think
the
client
should
expect
duplicated
data
and
if
we,
if,
if
there's
a
server
that
figures
it
out
how
to
completely
de-duplicate
things.
Fine
with
that.
But
if
you
for
instance,
it
could
also
be
a
design
choice
to
duplicate
certain
data
because
you
just
can
produce
it
so
much
faster
because
you
have
no
locking.
D
So that's basically how I phrased it in the spec, except I didn't say in which cases — like, you know, that you should. Although, you know, we could work on that.

D
There — so the initial payload is the simplest one, because we always know that it's sent first, and you don't actually need a cache for that: you can just inspect, you know, what was requested first, as you go.
G
Yeah,
so
so,
basically,
the
mode
forward
I
will
try
to
to
without
like
Chris,
without
focusing
on
the
response
format.
Question
right
now
and
main
concern
right
now
can
be:
cannot
the
duplication
be
done
effective
effectively
without
working
and
without
interlocking
data
between
each
other,
so
data
ships
as
they
execute,
and
there
is
no
like
working
and
everything
can
be
figure
out
statically,
but
it
should
not
require
query
planning
in
a
spec
right,
exactly
yeah,
okay.
G
So
if,
if
this
always
think
can
be
achieved,
deduplication
is
behavior
that
we
all
want
right
in
Ideal,
World.
F
Exactly — because sometimes, as I said, it can be faster to duplicate. The essence is that we don't have duplicated data in a single response; if you look at GraphQL now, it deduplicates with field merging, but that's just one response. Now we have multiple, and that's why I find it okay — we're still doing field merging within these patches. And there's also — like, what I did with Hot Chocolate is with query planning...

F
...we get a lot of cases out of there. But, for instance, the case that Rob showed here, where you have competing defers — that's difficult, because you can only make the one wait for the other; otherwise you cannot really deduplicate it without blocking them.
G
Is
that
I
don't
get
something
or
or
it's
possible
so
I
will
try
and
outcome.
It's
possible
or
I
will
learn.
I
will
understand
why
it's
not
possible,
and
in
this
case
Michael
you
implemented
it.
So
you
have
like
multiple
knowledge.
Maybe
I
need
to
to
try
to
implement
myself
and
go
into
a
corner
understand.
G
Why
not
because
right
now,
I,
don't
understand
why
not
so
yeah,
it
would
be
exercise
for
me
and
it's
ready
for
diversion
conversation,
but
I
wanted
to
mention
it
and
see
like
if
it's
right
end
goal
is
like
we
agreed
on
the
end
goal
and
the
question
is:
is
it
possible
or
not
so
yeah
I,
I.
E
Got
my
answer,
yeah
I
think
it's
good
to
discuss
it.
Yeah
I,
so
my
my
I
mean
my.
E
My
question
is:
if
we
want
to
like
break
this
down
into
the
simplest
thing,
could
could
we
move
forward
with
a
spec
algorithm
that
guarantees
a
single
payload
per
path
and
but
does
not,
but
does
not
specify
any
deduplication
but
says
that
deduplication
is
allowed
as
long
as
you're
only
deduplicating
a
field
as
long
as
you're
you're
deduplicating
fields
that
were
sent
in
Prior
payloads
I
feel
like
that's
like
the
most
flexible
way
to
go
forward,
because
then
the
spec
could
be
could
be
updated
to
have
deduplication
if
Ivan
or
someone
else
figures
out
a
clever
good
way
to
do
it.
G
Yeah, one quick note: my attempt to do this thing is a totally parallel issue, but if it's successful I will ask a question about tweaking the response format a bit, because in its current form it's not compatible. But before that, the main question is what we discussed: is it possible or not to do it performantly, without a cache and everything else. So yeah — but I believe it should not take long to try, so we can do it.
E
Yeah, I think it would be good to settle just the merging one — just have a direction there — and then we could iterate on the pending objects. And I also want to go back to one of the previous issues about stream, where I think that, with this merging now, we could simplify, I think.

E
Yeah, I think that once we've settled on this, I also want to talk about the stream path again, and I think that the reason is that we could maybe drop this, and that would give better symmetry for pending with a stream list, and also for getting completed — we could have the same path for each one.
E
So
next
apps
Ivan
you're
gonna,
look
at
that
implementation
and
I'll
I'll
add
an
agenda
to
the
meeting
on
on
Thursday
and
I.
Guess
we
could?
Oh,
we
can
talk
about.
B
Yeah
makes
sense,
could
we
could
we
touch
on
Benoit?
You
had
a
question
about
label
going
away
down
further
in
the
discussion
thread.
Did
we
get
the
a
full
answer
there
like
implementing
the
parser
for
your
work,
for
example?
Benoit?
Is
that
going
to
cause
massive
problems
without
the
label.
C
On the other hand, the other issue I have is that — I mean, on our side, given the fact that we generate parsers — it's going to be hard if we can't really know in advance, for each path, which fields we can receive or not. I still need to wrap my head around this, but I think this will be a problem for us.
F
Okay,
but
but
what
about
the
so?
What
what?
What
can
be
done?
There
are
two
things
here.
One
thing
is
that
we
kind
of
moved
this
issue
out
a
bit
with
fragment
ideas,
and
the
second
thing
here
is
that
you
can
work
around
it.
F
Like
did
you
guys,
look
into
the
thing
that
Matt
proposed
and
we
tried
it
out
and
with
that
we
figure
out
most
things
like
you
have
a
fragment,
and
then
you
use
the
type
name
field
and
Alias
it
to
a
section
that
you
want
to
have
marked
with
it,
and
when
you
find
that
you
know
that
the
that
the
fields
that
you
have
in
this
selection
set,
where
you
have
this
marker,
are
fulfilled.
F
Give
me
a
second
so
just
grab
this
and
just
quickly
have
a
look
at
that.
So
typically
do.
F
Okay, good to know. Okay — so the problem is now also, if we drop the defer, right: the label doesn't help you in any case, because the moment the server starts optimizing the defer, you get the problem, because the server could decide this defer gets optimized, right. And to figure out...
F
If,
if
you
have
delivered
something
in
your
selection
set,
you
can
use
a
marker,
for
instance,
this
type
name,
this
type
name
would
be
merged,
but
the
workaround
that
works
really
well
is
to
have
such
a
marker
here
and,
if
I
execute
that
now
get
rid
of
that
should
yeah.
That's
it's
plugging
here
and
you
should
see
somewhere.
My
marker.
F
That's what Facebook essentially did, and so the idea going forward is to formalize it — it would actually be nicer to introduce something like this.

F
It depends really on how you build your tree, yeah, because we still have the problem, more...
F
That,
if
I
drop
a
defer,
then
and
I
wait
for
the
trigger
of
yeah,
because
you
don't
yeah,
that's
the
second
part
you
you
now
have
the
pending
and
stuff
right
and
with
the
pending.
You
know
that,
but
that
could
be
more
complex
for
clients
to
implement
Rob,
actually.
F
Because, in the client, you need to know that the component is fulfilled in a certain section of your graph, and we are announcing now what we will defer — and this is the inverse: we're not telling you what we are not deferring, or where we dropped it.
E
Go ahead... I think maybe the question is: if we have defers at the same level and we're merging them together, maybe we should say that the server, at a particular path, if there are defers, shouldn't drop some defers and not others — it should either drop all of them at that path or none of them. Yeah, right.
F
I had to understand it first. I think the last discussion we had is that we put this thing off; we said we can solve that either with the __typename approach, or in combination with the pending, or at some point we will introduce something like fragment modularity, which fully solves that, because that's what Matt is working on.
F
Yeah, yeah — okay, you're right. You have a couple of solutions here, and they have different trade-offs, but it's solvable, whatever you want to do: either you have a marker, or you check against the pendings that I announced. With the pendings you can actually see what is deferred, and then you should know that subtrees are complete — because if a subtree essentially doesn't tell you that there are more defers coming, then you know it's complete. That's a way you could also track it.
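The pending-based tracking Michael outlines might be sketched like this (an illustration against the proposed `pending`/`completed` entries, not any shipped client): keep the announced pending paths in a set, remove them as completed notices arrive, and treat a subtree as complete once no pending path falls at or below it.

```typescript
// Sketch: a subtree is complete once no announced-but-unfinished defer
// lives at or below its path.
class PendingTracker {
  private pending = new Set<string>();

  announce(path: (string | number)[]): void {
    this.pending.add(JSON.stringify(path)); // from a "pending" entry
  }

  complete(path: (string | number)[]): void {
    this.pending.delete(JSON.stringify(path)); // from a "completed" entry
  }

  subtreeComplete(path: (string | number)[]): boolean {
    // Complete if no pending path starts with this path's segments.
    for (const raw of this.pending) {
      const p: (string | number)[] = JSON.parse(raw);
      if (path.every((seg, i) => p[i] === seg)) return false;
    }
    return true;
  }
}
```

A UI component bound to a subtree would hold off rendering until `subtreeComplete` reports true for its path.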
G
Yeah, one quick question, just as an aside, if I understand the problem correctly: you generate custom parsers for the data field inside incremental, chosen based on what was previously generated — so if you saw a particular label, you use a particular parser for the stuff inside the data, right?
C
It's
not
exactly
like
that.
The
what
we
have
in
in
the
generated
code,
this
each
each
fragment
as
is
its
own
structure,
and
then
we,
when
reading
the
so
when
reading
the
the
payloads,
we
assemble
them
in
a
very
Json
way.
C
So
we
it's
like
a
patching
the
Json
and
then
we
pass
that
to
our
our
parser
and
what
it
does
is
that
it
will
try
to
read
each
fragment
or
not
depending
on
if
we
have
received
the
we
use
the
path
and-
and
we
and
we
use
the
so
the
label
to
know
if
we
should
read
each
fragment
or
not,
and
so
if
we
don't
have
a
label-
and
we
know
that
each
path
is,
we
can
uniquely
identify
a
fragment
that
also
works.
G
Okay, a question: so it's not a JSON parser — it's something that takes the fields necessary to fill a few particular fragments, right? Yeah.
G
You can populate it with stuff from the normal store, and during incremental delivery you also put stuff in the normal store, right? So you can check a fragment against the normalized store and see if all the data there is enough for the fragment, and you have code that populates a fragment from a normalized store. Why do you need anything special for incremental? You can populate a normal store after each part of the incremental delivery and work through the normal store.
G
So in case a global normalized store is shut off, we still have the code that works with that: we can create small normalized stores — just a temporary normal store until this query is finished — and dispose of it after we've got all the responses. So yeah. And we actually discussed this in the working group when we agreed on the merging — it's a known trade-off. When we agreed on merging, we said clients now need to understand what stuff is shipped or not shipped; they need to understand...
G
What's
inside
pragmat
or
and
video
hydrated
fragment,
or
you
like,
fully
or
partially
or
general
hydrated
at
all.
So
it's
like
it's
trade-off
of
margin
stuff.
We
cannot
do
the
duplication,
optional
or
like
enforced
by
spec.
We
still
like
discussing
that,
but
we
did
the
turn
toward
toward
margin,
stop
in
the
duplicates
and
stuff,
and
what
mean
you
need
to
to
do
like
stuff
is
more
Dynamic
for
you.
It's
not
like
visible,
contains
all
the
stuff,
that's
necessary
for
this
fragment.
It's
it.
G
We
make
a
turn
to
to
say,
like
clients
need
to
and
during
discussion.
My
exact
argument
was,
since
most
of
the
clients
have
normal
store
anyway,
and
they
have
a
code
to
work
with
it.
So
it's
the
same
to
the
same
functionality.
Basically.
E
Yeah — I can see getting into a situation where you're not sure if fields are being deferred or not: if the server has a bunch of defers at the same level that should be merged together, but decides to inline some of them but not the others. So I'm thinking that we shouldn't allow that — like, either you should just... does that make sense?
F
Hey,
so
if
you,
if
you
draw
a
drop,
a
defer,
it's
on
the
on
the
should.
F
No — so that if you invoke the server two times, you get two times the same defer structure; if you invoke the same server, it shouldn't be that while you're executing you decide you've now reached this threshold and now we're dropping, because this way you could get very strange behavior. Because, for all it's worth, the server should — if the server wants to drop a defer, it should decide that up front.
E
Yeah
yeah,
so
so
this
let's
say
this
example.
There's
basically
there's
going
to
be
two
payloads
here,
one
with
the
defer
path
of
the
root
and
one
with
the
Deferred
path
under
nested
object.
E
If
we
decide
if
the
server
decides
to
ignore
this
deferred,
but
not
this
one,
then
bar
gets
in
line
to
the
initial
response,
but
Foo
doesn't
or
maybe
the
other
way
around.
Then
I
think
it's
hard
for
a
client
to
know
it's
only.
It
sees
this.
It
has
a
pending
path
that
nested
object,
but
it
doesn't
know
that
that
it
has
to.
It
doesn't
need
to
wait
for
for
bar
anymore.
E
So
I
think
as
you're
going
through
collect
fields
and
now
you've
collected
bar
and
Foo,
and
you
know
that
there's
one
defer
at
this
path
with
these
two
Fields,
that's
when
you
should
be
like
I've
already
deferred
too
many
stuff,
I'm,
just
inlining
it
yeah
or
you
should.
F
Because
you're
traversing,
essentially
the
right
Prairie
tree
and
that
that
means
you're
going
from
the
top
and
further
in
and
then
you
it's
it
I
I
mean
so
I
think
it
wouldn't
make
sense
to
remove
the
first
from
the
outer,
because
you
don't
know
if
it's
too
much
well,
it's
most
likely
the
the
deeper
Nest
that
differs
that
that
get
you
into
trouble.
E
Yeah,
if,
if
this
whole
thing
was
inside
of
a
list
that
had
10
000
items
and
then
you'd
only
decide,
you're
gonna
at
Max
have
like
20
payloads.
Then
after
the
20th
item
on
the
list,
you
you're
still
you're
gonna
ignore
both
of
these
deferries.
But
you
shouldn't
decide
that
I'm
Gonna
Keep
deferring
the
foods,
but
not
the
bars
because
they're
at
the
same
path.
F
Anyways
think
very
statically
on
that
I
mean
what
what
we
would,
what
we
technically
do
is
we
analyze
the
query
and
we
look
essentially
like
the
complexity.
Algorithms
that
are
are
out
there.
We
look
for
the
maximums
where
it
can
go.
I
I
personally,
wouldn't
wouldn't
defer
things
in
a
list
and
then
at
item
20,
stop
differing
the
same
thing
in
the
list.
I
would
do
predictions
on
okay,
this
list,
the
user
asked
for
50
items
if
they
are
just
five
I,
don't
care
because
I
don't
want
to
do
these
checks
at
runtime.
F
I mean, that's how you can do it statically. So if a user asks for two items in the paging, for instance, then we can expect that there is a maximum of two items. If you don't ask for a fixed number of items, then we, for instance, look at what's the default page size that we have configured in the server — let's say it's 15 or so — so we expect that. If the user is passing in a variable and the maximum allowed is 15, then it's probably 15.
E
But whether you're doing it statically or dynamically — I don't think it matters. It's just that you should decide whether you're going to be enabling the defer at the path level, and not per individual fragment, since they're all going to be merged together.
G
So
just
to
qualify
mental
model
here
is
like
we
have
query
without,
like
any
messed
up
query,
but
any
optimization.
Imagine
that
we
do
should
result
in
a
query.
Like
you
kind
of
you
know
our
example.
We're
writing
one
query
to
like
optimize
one
and
but
you
cannot
do
like
Dynamic
optimization.
You
cannot
decide
different
for
items
inside
the
array
like
on
one
item.
I
want
the
environment
on
other
item.
I
don't
want
to
unwind
it.
You
cannot
do
that.
You
can
rewrite
a
query
or
execute
but
mental
model.
G
You
can
do
it
like
inside
your
execution
engine
depending.
Do
you
use
Query,
pointing
or
not,
but
meta
model
is
like
you're
writing
one
part
into
another.
It's
a
first
thing
and
second
thing:
it's
like
a
suggestion.
You
either
like
when
you
write
a
query
during
this
rewrite.
You
decide
if
you
remove
the
all
the
default
on
the
same
level
or
not,
is
it
right?
Is
it
I
get
your
discussion
correctly?
E
That's... at nestedObject, you have to not render the foo component, because you won't know if it's even in the response or not — you don't know if it is.
E
I was thinking yes, but just because they're different paths. But okay, yeah — I was thinking yes, but I haven't spent a ton of time thinking about it; I just thought of it during this meeting.

F
Yeah, I mean, it was a good thing. So Ivan wants to branch something out, and... but...
G
If
qualification
I,
because
I
made
the
promise
and
I
just
understand
that
it's
like
different
context
by
working
group,
I
mind
my
main
working
group
so
like
yeah
before
next
group,
main
working
group
next
month,
I
will
try
to
have
like
to
show
either
I
can
do
it
or
agree.
That's
like
not
possible
to
do
easily.
F
And
I
mean
we:
we
go
ahead
anyway
for
for
now.
E
So there's a working group meeting on Thursday this week — that's not what you're referring to; you mean the next main one, yeah, in the first week of February.
G
Yeah,
my
assumption
is
like
we're
not.
We
will
probably
not
move
to
next
stage
on
Thursday,
but
we-
and
we
aim
to
do
that
next
month
and
to
not
prevent
like
from
moving
to
next
stage
next
month,
like
online
working
group.
I
will
I
will
like
do
it
before
next,
my
working
group
and
it's
not
a
broker
for
Thursday
right,
no
ideas,
yeah,
probably
change
stages
on
Main
working
group
for
big
features,
so
I
like
that
anyway,
right.
E
Okay,
right
so
I'll,
I'm
gonna,
add
myself
to
the
agenda
on
Thursday,
yeah
and
I'll
I'll
summarize
what
we
talked
about,
how
I
think
that
there's
some
level
of
agreement
on
how
we
could
move
forward
without
deduplication,
but
guaranteeing
unique
paths
and
you're
gonna,
keep
working
on
that
and
see
if
it
could
be
optimized,
but
what
but
yeah
so
then
I'll!
That's
what
I'll
report
on
and
we'll
see
if
there's
any
other
input
on
Thursday
sound
good.