From YouTube: Incremental Delivery Working Group - 2023-01-30
B: Out of them, because I know that the snooze on my phone is nine minutes, so I set the alarm to be 10 minutes before I'm eating and then I snooze it, yeah.
E: Is still happening, yeah, yeah.
A: Different thing: it's like, oh, also, often telephone, if you have things.
D: Yeah, it's like, in Ukraine, after, like, the Russians went to the nuclear power station. That power station generated 20% of the electricity for Ukraine, and it's not even a question of, like, rocket and missile attacks on power stations: they just took the biggest nuclear power plant in Europe, and it's not generating electricity for Ukraine anymore. It's been like three or four months. So there is not enough electricity, even if manufacturing is not working.
E: How's everybody doing? Where do we want to kick things off, based on the discussions from last week? I missed some of the discussions last week, but got caught up, so yeah, where do we want to kick things off today?
F: Yeah, I guess I want to check in with Ivan: did you spend any time implementing what your proposal was?
D: And throw that in, and maybe I will. Actually, what do you think: should I add a separate agenda item about that, or will it be part of, like, the defer and stream batch, or should we have a separate one? What do you think, Rob? What would work best?
F
I
I
don't
have
anything
else
to
talk
about
right
now.
So
if
you
want
to
just
add
that
and
then,
if
we
do,
if
something
else
comes
up
in
the
meeting
or
something
that
we
also
want
to
discuss,
we
can
bash
it
together.
Okay,.
D: So I will add the agenda item, and I will try as much as possible to have a prototype for Thursday. I'm not sure about spec changes, so I will focus on JavaScript, to show that it's possible to implement; and spec text, if it's possible to implement, the spec text can be written pretty easily, yeah.
E: For me, that's awesome, so that's the game plan. By this week's primary working group call, Ivan, do you think you'll have...
D: That's awesome, yeah. We discussed it with Yaacov last time, because we cannot, like, stretch stream and defer indefinitely. So, yeah, I proposed the idea from Shape Up, you know, the methodology where you say there is a feature important enough to dedicate a week for it, but not important enough to, like, go on indefinitely and block progress.
F: Yeah, I know Matt had some opposition that he posted. He told me that he's not able to make it today, but we could probably discuss more with him on Thursday.
D: Yeah, that would be great. Is it the one in reply to my comment, or is there something else? Yeah, he replied to your...
A: Okay, yeah, because we now have multiple defers again on the same path.
D: It's like, it would be good to discuss it with Matt personally on Thursday, but one thing I understand: if you don't care about fragments, if we forget about them, in a way everything is about a fragment. But if we speak about, like, the entire query, you don't know how many payloads you need to fulfill the entire query, or a certain path; Matt's comment applied to fragments. So previously it was guaranteed, no, actually, like, per path.
D: Yeah, the done property. We use it for stream, and the idea is to use it for defer in the same way, as we discussed last time. So, done. And thank you, Benji, because you had a comment with an additional qualification on that: done actually means all the defers on this level are finished. It doesn't mean, like, all the defers are recursively finished; only this path, like, stuff on this path, is finished.
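The per-level "done" semantics described here (finished at this exact path, not recursively) can be sketched with a toy payload sequence. The payload shape below is an illustrative assumption for this discussion, not the spec's final incremental-delivery format:

```javascript
// Toy model of an incremental-delivery response stream.
// A payload marks a path "done" when every defer at that exact path
// has flushed; defers nested deeper in the tree may still be pending.
const payloads = [
  { data: { user: { name: 'A' } }, hasNext: true },
  // All defers directly at path ["user"] are finished here...
  { incremental: [{ path: ['user'], data: { bio: 'hi' } }],
    done: [['user']], hasNext: true },
  // ...but a defer nested under ["user","friends"] arrives later.
  { incremental: [{ path: ['user', 'friends'], data: { count: 3 } }],
    done: [['user', 'friends']], hasNext: false },
];

// Collect which paths were reported done, in stream order.
function donePaths(stream) {
  return stream.flatMap((p) => (p.done ?? []).map((d) => d.join('.')));
}
```

A client consuming this stream sees `["user"]` reported done before `["user","friends"]`, which is exactly the "done at this level only" behavior being discussed.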
A: But do we still tell the client in advance what is being deferred?
D: He also doesn't mention pending, so we need pending anyway to solve the original issues, like the issue about clients figuring out what was inline and what wasn't inline. So we need pending for that. And, like, heads up: I have, like, some alternative proposal for pending, but for this week I will focus on that, and I will do a pending proposal with a little bit different pending format later.
D: Yeah, it's the same: any pending format can apply to my approach or to Rob's approaches, in the sense we are talking about now. And, by the way, pending is not kind of implemented yet, as I understand, because it's relatively new; it was suggested, like, a month or two ago, so it's still raw and we didn't deeply discuss it. So I don't know if we all agreed on this specific format or not.
G: Ivan, I had a question in terms of your proposal about error boundaries. Right now, you know, we don't bubble up further than the branch. I'm just wondering if that, I mean, you know, here we're sort of merging the payloads in some sense, if you're thinking that, yeah.
F: That was what we decided on before, but I guess that if you're splitting one defer path into several different payloads, and the payloads could be at different paths, then the error boundary would also have to coincide with however the server is splitting it. Is that correct?
D: Not necessarily. So, in a sense, we have, like, a data property. So if an error happened, we can ship, like, the path with errors, and then...
A: The problem is, if you take null propagation into it, yeah, then it could be that the whole result would be erased. But since you now have multiple patches at the same parts, you're delivering the first, but the second would actually delete the whole data section that you already sent, yeah.
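The concern raised here, that a later patch at the same path can trigger null bubbling that erases data an earlier patch already delivered, can be sketched minimally. All shapes and the `bubbleToPath` field are invented for illustration; real clients and the spec format differ:

```javascript
// Sketch: two patches target the same path; the first lands, then an
// error in the second forces null-bubbling that erases the subtree.
function applyPatch(result, patch) {
  if (patch.errors && patch.bubbleToPath) {
    // Null-bubbling: replace the subtree at bubbleToPath with null.
    let node = result;
    for (const key of patch.bubbleToPath.slice(0, -1)) node = node[key];
    node[patch.bubbleToPath[patch.bubbleToPath.length - 1]] = null;
    return result;
  }
  let node = result;
  for (const key of patch.path) node = node[key];
  Object.assign(node, patch.data); // normal merge of deferred data
  return result;
}

const result = { user: {} };
applyPatch(result, { path: ['user'], data: { name: 'A' } }); // first patch lands
applyPatch(result, {
  path: ['user'],
  errors: [{ message: 'boom' }],
  bubbleToPath: ['user'], // a non-null field error bubbles to user
});
// result.user is now null: the first patch's data was erased.
```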
D: It's not much, imagine without the deduplication: I returned back to what everybody already implemented, kind of, yeah, and it's proven to work. So, with labels: we have label A and label B, and both include a field that throws, and label A already shipped, and label B...
D: Not a shared field. So label A already shipped, and now it will kind of, like, do null propagation. So, since we decided to do...
A: ...something more deterministic, like, the idea is that a deferred selection set is essentially its own error boundary, and this is, I don't say that this is a stopper or whatever, but it's changed now. This would change it, because now...
G: Right, I guess what I was thinking is that it might be possible to keep the same error boundaries. But, you know, in the mental model where we are saying that we are never sending duplicate data, and we're acknowledging, you know, that we are working off of essentially a cache, that the client is patching, you know, the original results, it's sort of, in that mental model, it may no longer make sense to have those error boundaries.
G: That's, you know, I think we could, you know, I think we can, so basically, in the implementation, I'm not sure that you're gonna, you know, that the prototype that you hopefully will get working, I'm not sure if you'll have time to deal with that, but I'm just, I guess, flagging that as, like, an open question.
D: Yeah, it's a good thing. I will write, I don't know what's better, I will probably edit my original comment, not to have, like, something important in the middle of the thread, and I will put it on top and say: this is the change, and explain why I don't think it's an issue. So, yeah, it's an important question, you're right. It doesn't make sense if the first null, it's not like...
D: ...and if a client does patching and constantly checks whether it has enough data to fulfill certain fragments, for that mental model, having the server say, like, "this patch basically nulls out this subtree" makes sense, because the client will do checking anyway; it will do matching and checking.
D: Yeah, yeah, and actually, in a sense, we return back to basic GraphQL: boundaries are based not on fragments and inline fragments, but on the response as a single tree. So delivery now is based on the response as a single tree, without, like, the influence of fragments, and errors the same. So I actually, like, think using...
G: You know, a deferred component, two different deferred components, let's say, and one of them causes an error and one of them doesn't. I'm not sure that the client is going to expect that an error in one deferred component crashes the other, or crashes the original tree, you know what I mean? So, again, I think it's important to flag it, because, yeah, this is a mental model, you know, I would suggest, that everybody, yeah.
E: You can definitely speak to Apollo Client better here, but, for example, if we had two deferred fragments and one errored and one didn't, and they were in the same kind of initial request, we would just have one error state stored that comes back, right, and...
A: It's also, like, one thing that we have to remember here: if we dropped all the defers, then this error bubbling would happen to the whole tree anyway, so it would anyway delete these sibling selection sets, yeah, yeah.
G: Yeah, but I think the problem is that the semantics are the same, and yet it actually does lead to surprising behavior, because if you did inline everything, then, sure, you would never get an initial response. But here we're dealing with the situation in which the initial response was already sent and was considered valid, and we're, you know...
G: Potentially, you know, just, you know, making the tree dirty to some extent by nulling it after, you know, after the data has already showed up. So, again, I'm not sure. It fits with the mental model that you're suggesting, to get rid of these error boundaries, but I think we can tweak the mental model a little bit, meaning, another way of looking at this is: instead of each deferred payload patching...
G: ...the original response, another way of looking at it is that you can supplement the deferred payload from the original response, meaning you can view the initial data, or whatever cache data is available, as the patch to the payload that is coming down; or, there are multiple patches that have now led to this fulfilled fragment, you know? So I'm not really sure that the preferred behavior, once we send an additional response, would be...
G: ...you know, would be to null it with the deferred payload. I'm not even sure; it's sort of, like, undefined behavior, like, how would the clients deal with that? So I'm just not sure about it. I just don't think it's so simple; I can really see both sides.
A: We can put them side by side and see what we take from which of these ideas, because I'm not sure yet if it's worth it. Yeah, okay. So the strange feeling here is: you put the defer on somewhere, and, like, the merging sat well with me, but now it feels like you have no control at all on the incremental data shapes.
A: I mean, I also argued a lot that the server has to be more in control of this thing, and it cannot be all up to the user, right, for a lot of efficiency reasons. But it feels strange that we broke this rule again, that we have multiple patches per path.
D: Removing label was a tipping point, because, together with removing the labels, we started to do merging and deduplication of the fields. And that's the sequence of events: we removed the label, we agreed on partial deduplication, and we kind of, like, moved away, because in a label scenario you had, like, full control: you have the error boundaries, and you had, like, control over the shape, and you have guarantees that stuff is delivered at the same time, even without the merging at all.
D: So the initial response was totally separate from the labeled one; you didn't even need to merge much. So, like, my position on that: there was one paradigm initially, and we explored that and found, like, a bunch of corner cases. In the process we shifted, in my view, halfway towards complete deduplication and a merging approach; we are now halfway, and I'm proposing to move it the full way, to be, like, the opposite. So, in a label scenario, the client would have, like, all the control, and the server needs to just fulfill that label.
D: Yeah, I mean, at the moment I'm doing quite another graphql-js PR. I want to finish it today, and then, like, I'll switch, and I have nothing else. More like, it will be the number one priority to do that stream and defer prototype. And since, I think, like, yeah, Yaacov has, like, a partly similar implementation, and he deduplicates, like, leaves.
D: Yeah, I will look into it and see: can I reuse part of Yaacov's work to make the thing faster?
F: Awesome, yeah. I mean, I guess I'm just curious what everyone else's feeling is on the scale of deduplication versus the amount of payloads being sent. That's really what the trade-off is, right?
A: Yeah, yeah, it depends. For me, it depends on a couple of things, but these are implementation details. What you've got, in the end, is how much, or how often, do we branch execution, because that's expensive, or can be expensive.
B: I also have a non-standard use case that I'd like to use stream for, that I think this might impact, and that is: I want to be able to use stream to stream large amounts of data, and I'm talking, like, hundreds of megabytes, gigabytes even, which stream could do now. Obviously this isn't targeted at the browser, because a browser doesn't want to receive quite that amount of data, but it can be useful in various situations. So having to sort of maintain this...
B: ...this cache, or this information, in server memory, to know what has and has not been sent, is not ideal. But I'm not sure that that's actually required in what Ivan's proposing at the moment; I'd need to see the final implementation, and certainly also then think about, you know, what if we've got multiple things that defer and then end up streaming...
B: ...you know, the same field, or all the edge cases. But ideally, I mean, my aim for this (and it was me that brought up the idea originally of reducing the amount of payloads that we're sending) is: I would like to reduce it in a reasonable way that we can do statically. So I'm not worried about fully eliminating every single bit of duplication; I just want to, like, get rid of the worst offenders, effectively. Like, if we've got multiple defers at the same level...
B: ...let's merge them together, that kind of thing. But if we've got defers that are children of other defers or streams, I'm not so worried about that; not so worried about pushing it the whole way.
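The static "merge defers at the same level" idea can be sketched over a toy representation of deferred field groups. The `path`/`fields` shape is an assumption for illustration; a real implementation would work over grouped field sets in the execution plan:

```javascript
// Toy static merge: deferred groups at the same response path are
// combined into one payload-to-be (with duplicate fields removed),
// while groups at other paths (e.g. children of other defers or
// streams) are left alone.
function mergeSiblingDefers(groups) {
  const byPath = new Map();
  for (const g of groups) {
    const key = g.path.join('.');
    const existing = byPath.get(key);
    if (existing) {
      existing.fields = [...new Set([...existing.fields, ...g.fields])];
    } else {
      byPath.set(key, { path: g.path, fields: [...g.fields] });
    }
  }
  return [...byPath.values()];
}

const merged = mergeSiblingDefers([
  { path: ['user'], fields: ['bio', 'age'] },
  { path: ['user'], fields: ['age', 'avatar'] },       // sibling: merged, deduped
  { path: ['user', 'friends'], fields: ['count'] },    // nested: untouched
]);
```

This can be done statically, before execution, which is why it does not require keeping the original response in server memory.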
E: Yeah, okay. So I guess we're waiting to see what you come up with, Ivan, and then we'll have a lot more to talk about, I guess, at this meeting next week around that, and on Thursday.
E: I don't know how much time we'll get into it. I was looking at the agenda; it looks like we have lots of room right now on it, but who knows, things can fill up quickly during the week, yeah.
G: Great. On Benji's point, I think the implementation, you know, whether that'll actually be a problem is definitely going to be implementation-specific. I'm hopeful that it wouldn't be, but...
D: Yeah, just to clarify a bunch about your comment: are you worried about the cache on a server or on a client? Only on the server? Oh, okay, yeah. So, on a server, it's easy to throw away, yeah. I plan to do it without any blocking. I think, like, in my head, it's possible to do it simply, without blocking, and kind of, like, statically, without query planning; there's a similar way...
D
How
Jacob
the
duplicate
like
leaves
so
I
want
to
do
this,
the
same
thing
and
without
like
branching
execution
and
without
maintaining
original
response,
just
just
remembering
what
what's
already
said,
and
what's
not:
okay,
without
storing
quite
original,
so
I
plan
to
implement
to
like
alleviate
your
your
control,
excellent.
F: Is there a concern, I think Yaacov brought this up in the thread, of a query written in a way that could lead to, like, a lot of payloads? Again, I guess I haven't, like, fully thought it through, but I don't know: if you have, like, 10 different defers, but they all overlap with each other by only one field or something, could that lead to, like, 100 payloads, something like that?
D: So my proposal actually doesn't have that problem, because clients don't have any control over deduplication and have no influence on deduplication. So you cannot create a big response, because what I'm proposing is, like, forced deduplication, so the number of incremental payloads, like, on the same level, the number of payloads, is controlled by the server, not by the client.
G: Would it make sense, like, for the initial proof of concept, to make sure that we are not sending additional payloads and we're only deduplicating? Meaning, we're not sending payloads as they're ready, sorry, we're not sending fields as they're ready, unless a fragment is completed, and then, when a fragment is completed, it's sent, but it's, you know, deduplicated, even though we could send, you know, fields when they're not ready. Would it make sense to, like, separate that from this initial implementation?
G: But the idea would be that, because we could send fields, you know, partial data for fragments before they're completed, you've changed the model to some extent, so that we can send incomplete fragments, but we don't need to, you know, send incomplete fragments.
G: We could just send the complete ones, use the new model, and have completely deduplicated data. I'm just wondering if that might solve for any performance issue in terms of getting too many payloads. I'm not sure if that was Matt's only concern, but that's basically, you know, you were suggesting that the server could do that, but I guess what I'm saying is: should we mandate that the server does that, at least for the initial release?
D: If at some point we decided to return payloads another way, that's another thing: like, if at some point we devise some mechanism where people can say, like, on this path there are, like, two groups of fields, and, like...
D: But what I think might be, I'll, like, personally, my mental model is, like, it's up to the server to, like... but...
A: It's not, like, for me it's not a problem for fragments, right? I was really quite happy with the initiative where we said we have one incremental payload per path, because that stops a ton of issues, and you don't need label; you don't have the problem with the fragments and stuff like that.
F: That, and still have it without deduplication.
D: I cannot support, right, yeah, I think you're right. So, basically, the question is, I need to see if it's possible, but if it's possible, you're right, because we should not batch the proposal and other changes together. So if we can do this deduplication with the current guaranteed number of payloads, we should do that, because it's what we agreed on; and if I have, like, a proposal where I want to send, like, stuff in series, like, I want the server to break it into smaller chunks...
D: Yeah, in the original proposal we do merging on different levels, between defers on the different levels. So this is the core thing here. So if you open my comment, I have, like, three points; if you scroll a little bit down, yeah. So, basically, with, like, these three points... So, so, like, we...
F: You would have to merge across the defers, and that could lead to, so, like, I mean, let's say that these two fields were on the same level, but they were in different defers at different paths. They would get merged together, and one of them being slow would hold up the other defer from being completed. Then that's...
A: Yeah, so that would, like, if we can keep that, I would have less concerns.
F
That
that
trade-off
for
me
isn't
a
good
one,
because
if
I
were
writing
this
query
as
a
client
developer.
It's
because
I
have
a
component
that
renders
three
these
three
fields
and
a
component
that
renders
these
fields
and
now
just
be
because
of
that
overlap.
They
are
completely
blocked
by
each
other.
You.
A
Can't,
oh,
no!
No!
No!
No!
No!
That's
not
what
I
meant
I
I
just
for
me.
If
we
have
a
solution
without
the
the
blocking
issues.
Also,
then,
like
I
had
less
concerns,
but
at
the
moment
like
for
me
that
we
duplicate
data
is
not
a
concern
because
it
can
be
faster
and
anyway
in
the
client.
Updating
a
cached
thing,
just
with
the
same
information
again
for
me,
is
not
a
major
concern,
but
having
like
multiple
patches
per
pass,
makes
the
client
more
difficult.
E: They're gonna chunk them up, and if they know they can use multiple defers to split things up further, and know that they're going to get that data back when it's ready, and it's not dependent on another fragment's data coming back slower, I think that's a selling feature for defer, and I think that will throw people off if we tie them together.
D: But you have multiple payloads per path; and we have, like, a third option, that Yaacov suggested, that has no deduplication. It has, like, two payloads per path (the initial one, and a second per path), but we have blocking. So we have, like, a combination of three options; each one has, like, yeah...
F: Too, yeah.
A: Yeah, to write them down and see, yeah, just to have the information, even if we know. I mean, we did the same with the union stuff, right? We already knew that some proposals will not lead to a solution, but it's good to have it written down, and maybe there are other improvements that we can take from that. So yeah, yeah.
F: So, after this meeting, I'll leave a comment on this, on here, that says that we talked about this other option where we keep the number of payloads per path, but that's the trade-off that you get, and we were mostly not comfortable with that, so we're not considering it. Yeah, that, so we have it written down.
G: Yeah, I mean, the option, I think we all agree, the option that I was suggesting also was, there's this non-blocking option, and, again, I think it's possible. The main reason I'm, like, taking the opposite tack is because I've been hesitant to try implementing it unless I'm actually sure that we want it. I'm pretty sure we can, but I think it's a lot of work. It's...
G: Yeah, yeah, I do, I think it can be done without blocking. I think, you know, you basically report out each field: you execute each deferred field and then you report it to all the fragments that have requested it, and then you need a new mechanism, at least in graphql-js, I guess, you need a new mechanism for deciding whether a deferred payload is complete, rather than, because, you know, you've already...
G: You know, you're sort of assembling the fields, assembling the whole payload from the leaves, as opposed to sort of the other way around.
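The "assemble payloads from the leaves" idea (execute each deferred field once, fan it out to every fragment that requested it, and complete a payload when all its fields have arrived) can be sketched like this. The class and method names are invented for illustration; graphql-js exposes no such API:

```javascript
// Sketch of leaf-up assembly: each deferred field resolves once and
// is reported to every fragment that requested it; a fragment's
// payload is "complete" when all of its fields have arrived.
// Whichever fragment finishes first wins; no outer/inner blocking.
class FragmentTracker {
  constructor() {
    this.fragments = new Map();   // name -> { pending: Set, data: {} }
    this.subscribers = new Map(); // field -> [fragment names]
  }
  addFragment(name, fields) {
    this.fragments.set(name, { pending: new Set(fields), data: {} });
    for (const f of fields) {
      if (!this.subscribers.has(f)) this.subscribers.set(f, []);
      this.subscribers.get(f).push(name);
    }
  }
  // Report one resolved field; returns fragments completed by it.
  reportField(field, value) {
    const completed = [];
    for (const name of this.subscribers.get(field) ?? []) {
      const frag = this.fragments.get(name);
      frag.data[field] = value;
      frag.pending.delete(field);
      if (frag.pending.size === 0) completed.push(name);
    }
    return completed;
  }
}

const tracker = new FragmentTracker();
tracker.addFragment('A', ['bio', 'age']);
tracker.addFragment('B', ['age']); // overlaps with A on "age"
const first = tracker.reportField('age', 30);   // completes B only
const second = tracker.reportField('bio', 'hi'); // now A completes
```

Note how the overlapping `age` field executes once but reaches both fragments, and the smaller fragment is never held back by the larger one.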
D: To qualify: what I call blocking is not server blocking; it's not about the server. It's blocking in the sense that, like, you have an inner defer, and the inner defer should wait for multiple outer defers to be shipped first, because stuff is deduplicated between the inner defer and the outer defer, and the outer defer wins. So...
G
Oh,
no
so
I'm,
sorry,
I
I,
don't
I,
didn't
I,
didn't
take
I,
think
I'm,
looking
at
it
that
you
can
de-duplicate
without,
like
the
outer
winning
versus
the
inner
winning
meaning
whatever
resolves
first
would
win,
is
in
my
conception
and
I
again.
I
think
that's
also
possible.
I
think
that's
what
people,
what
what
might
be
most
desirable
I
think
it
is
I,
think
it
is
possible
I'm.
Just
not
I
think
it
seems
like
a
more
work
than
I
had
to
have
on
my
plate
that
I
can
handle
right.
E: It would be great if, this week, when we're talking about defer and stream stuff, like we usually do at the working group meetings, we give a little update on where things stand. It'd be great if, in the top-level list of discussion items, if you go back to just the list of open discussion items, we could make sure the ones that really are blocked, like, really, we really need a decision to be made on, are all kind of labeled the same way.
E: I know we have three open-question ones, but is the process we've been following that, if it has an open-question label on it, those are really the most pending ones to have figured out, right? Because, if so, I'm just wondering if we could add open questions to some of these newer ones that we've been talking about as well. Yeah.
F: Yeah, definitely, so let's put...
E: If we also had the duplicate-fields one, yeah, December 19, 2022, just the fifth one down. I think that one probably needs it, because I think this is your proposal, Ivan.
D: Yeah, we just decided to, I think, like, yeah, which basically is what I'm proposing. I will close this one, yeah; I'll close it, like, after the call, and explain, like, the whole thing. It's the same idea of initial responses and patches; I just, like, figured out a way how to do that, so it's fully, fully, like, no...
D: One question for me: I think we should have, like, a separate one. If we move the duplication of fields into a separate one, I feel like we need the same about pending, to discuss, like...
D: ...our proposal for pending, because, like, after we removed label, there are two main questions: how to merge stuff, and how to know that something is inline or not inline in the initial response. So I think it's an open question, but it's hidden under, like, a huge discussion in the issue about clients not being able to rely on it.
E: Anything that is labeled with open question we're kind of considering as being the things that need to be figured out to kind of help progress through the stages, essentially, yeah. Okay, that's great.
E: I know, Benoit, you were asking earlier today about some of the internal statistics we have on defer usage that we could bring to this working group and share, just to give people a heads-up of how things have been going since we've had people using it. But, yeah, I don't know if you looked further into the stats, Benoit, or not.
E: Okay, at some point in a future call we'll share some interesting findings as we put the stats together; it'll be fun to talk about that.
F: Right, sounds good. We'll see most of you, hopefully, on Thursday and then again next Monday.