From YouTube: Incremental Delivery WG - 2022-01-06
D: Yeah, I linked to the specific comment in that discussion where I have the proposal I was talking about in the last working group. I think there are a few angles to approach this. There's the new behavior, where we merge all the defers on the same level. There's the pending entry with the path, so you know the locations where a defer or stream is in flight, and you can use that to know whether something was inlined: if you're expecting a defer and you did not get one of these things, then you'll know it was inlined. I also said we'll add this completed: true for stream, so now we can know when an individual stream ends. That seems pretty straightforward.
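The signals Rob describes (a pending entry carrying a path, and a completed: true notice for streams) can be sketched as a minimal event-stream shape. The key names and payload layout here are assumptions for illustration, not the spec's wording:

```python
# Sketch of an incremental-delivery stream: the initial payload announces
# in-flight defers/streams by path, and a later event marks a stream done.
# Key names ("pending", "path", "completed") are illustrative assumptions.

initial = {
    "data": {"user": {"name": "Ada"}},
    "pending": [
        {"path": ["user", "friends"]},   # a @stream still in flight
        {"path": ["user", "profile"]},   # a @defer still in flight
    ],
}

def was_inlined(initial_payload, path):
    """A defer was inlined if no pending entry announces its path."""
    return all(p["path"] != path for p in initial_payload["pending"])

# A stream signals its own end with completed: true.
stream_end = {"path": ["user", "friends"], "completed": True}
```

With this shape, a client expecting a defer at `["user", "avatar"]` that sees no pending entry for it knows the data arrived inlined in the initial payload.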
D: But the open question we ended with at the last meeting is how fields are merged that are part of both a deferred and a non-deferred selection set. So that's this example here: you have almost the same fields inside of here and inside there; all the top-level objects are the same, but basically f2 and cfj are the only leaf fields in the defer that are not also outside it. So we said it would be desirable for these two to be equivalent, where the stuff that's in the defer gets merged into the non-deferred selection set. We were talking about how to implement that, and that's why we didn't just immediately go for it: there are questions of how it should be implemented, and there are some edge cases that I think we need to figure out too.
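The desired equivalence, where fields inside a @defer that also appear outside it collapse into the non-deferred selection set, can be sketched over selection sets modeled as nested dicts (a deliberate simplification: real selection sets also carry arguments, aliases, and type conditions):

```python
def merge_selections(base, deferred):
    """Merge a deferred selection set into a non-deferred one.

    Fields present in both sets are merged recursively; what is
    returned is the leftover, i.e. the fields that genuinely remain
    deferred because they appear only inside the @defer.
    """
    leftover = {}
    for name, sub in deferred.items():
        if name in base:
            # Shared field: recurse so only truly-deferred leaves remain.
            sub_left = merge_selections(base[name], sub)
            if sub_left:
                leftover[name] = sub_left
        else:
            leftover[name] = sub
    return leftover
```

For the example discussed, merging leaves only the fields unique to the defer (here f2) in the deferred payload, matching the "these two queries should be equivalent" intuition.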
B: Regarding this, that was where we left off in the working group discussions. So, for anybody who was not there, that's essentially where we branched off. In the meantime I implemented that in Hot Chocolate, and we do essentially a pre-traversal: we compile the query and simplify it, and that's very easy. But the problem, and that's where we even had a discussion in the working group, is doing that with the CollectFields algorithm.
D: One reason why I think this is pretty desirable is that it simplifies the responses a lot. If you look at this example, we have this top-level object that has a defer underneath it with a leaf field, and then we have another defer that has that same object inside of it, with a different field inside of yet a third defer. And if they do all get merged together...
D: ...it's basically equivalent to this. And now these defers, because they're under the same object, should get merged, and you get one neat response with a path that points to the nested object, and you get these two fields back. If you don't, and you instead fork and execute them separately, you would end up with two payloads whose paths point to the same place with different objects on them.
D: But now you kind of have to count how many pendings came in with that same path and then know that all of them came back. But also, the way that Ivan described it had me thinking about it a little differently, and I think that maybe an entirely different approach could work. I'm not sure if it would work. Yakov also has an implementation.
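The bookkeeping described above, counting how many pendings arrived for one path and waiting until all of them have come back, might look like this on the client side (a sketch, not a prescribed client API):

```python
from collections import Counter

class PendingTracker:
    """Counts in-flight deferred payloads per response path."""

    def __init__(self):
        self.open = Counter()

    def announce(self, path):
        """Record one more pending payload for this path."""
        self.open[tuple(path)] += 1

    def complete(self, path):
        """Record one payload arriving; True once every pending
        announced for this path has returned."""
        key = tuple(path)
        self.open[key] -= 1
        return self.open[key] == 0
```

A client that receives two pendings for the same path only treats that location as settled after the second incremental result arrives.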
E: That would be great, yes. So, as we discussed last time, Michael actually asked me to write it out step by step, just to be sure it's possible, which is a nice thing to do. So the idea is: as an input, CollectFields right now receives one selection set.
E: So you just pass, initially, the entire query's selection set as an array of selections, and as a result you get the collected fields, with defer. Even in the previous proposal you get the same input, but you receive two different outputs: fields to execute right now, and fields to defer. What I'm basically proposing is to change the function. It becomes more complicated in a sense, yeah, but it's possible to do it without query planning.
E: So it accepts two sets of fields: all the fields that we currently want to grab subfields from, and also the set of fields that...
E: ...came from other defers at that point. So basically, as we go through the query, we track not only the fields that we're currently executing, but also which of those fields originated from under a defer. And when CollectFields does its job, instead of just returning one thing, it returns two things: the collected fields, and the fields to be deferred.
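Ivan's modified CollectFields can be sketched as a single pass that returns the two groupings he describes: fields to execute now, and fields that originated under a @defer. The AST shape used here (tuples of kind, name, deferred flag, sub-selections) is invented purely for illustration:

```python
def collect_fields(selections, under_defer=False):
    """Walk a selection set once, splitting fields into those to run
    now and those that originated from under a @defer.

    Each selection is a tuple (kind, name, is_deferred, sub_selections),
    where kind is "field" or "fragment". This AST shape is a stand-in
    for a real GraphQL AST.
    """
    now, deferred = [], []
    for kind, name, is_deferred, subsel in selections:
        in_defer = under_defer or is_deferred
        if kind == "field":
            (deferred if in_defer else now).append(name)
        elif kind == "fragment":
            # Fragments contribute their fields, remembering whether the
            # spread itself carried @defer.
            sub_now, sub_def = collect_fields(subsel, in_defer)
            now += sub_now
            deferred += sub_def
    return now, deferred
```

The point of the single pass is that the caller gets both groupings at once, without a separate query-planning step.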
E: I have a problem with naming, so I wanted to have something concrete to show here, and for people to look at asynchronously, so I just put it there, and I used the most complex example from the post. So if you find a problem with it, maybe I missed something, but it looks like it should work without query planning.
B: Ideally, the complexity of the CollectFields algorithm wouldn't change that dramatically, because it's very central and very complex in a lot of GraphQL engines; when you implement one, it's one of the complex things to do. And you could even do it without query planning. So there are two things we could do.
B: We could accept that we get inefficiencies with the core algorithm, that in some cases we don't figure out the most optimal query and essentially send duplicated data down, but add a note that you could avoid that, for instance, with a pre-traversal of the query or something like that, if you want to implement it. Then basically we can get away with a very simple algorithm in the spec, but more advanced GraphQL engines can implement these things in a better way.
F: You mentioned earlier, Michael, that one of the risks of doing the defers later is that there's effectively this potential delay, this inefficiency to it. But I actually think that's potentially desired. If you imagine, for example, that you are running the GraphQL query in a transaction in the database to ensure consistency, then, when you have those deferred fields, you wouldn't necessarily want to start selecting those things first, because other things need to be selected first.
F: You don't actually want to kick off that action in a transaction, for example. And I think that might not be the only case: if the processing of the selection sets, or the processing of the data itself, is expensive, you might be able to get that first payload to the client faster.
F: Thinking about that a little deeper, and combining it with the fact that we're considering getting rid of labels (and with that, we're not trying to express the identity of these things anymore; it's more just about the path again, like it was originally), I'm actually wondering whether a query like the one I wrote previously, the f2 one with the a, b, c, d, e, f, etc., is actually more equivalent to turning it into multiple defers, where each individual defer is just a very small selection set.
F: Now, this is weird, right? It's a bit out of left field, because it's taking that one defer and turning it into three, but it is now much more straightforward to understand what's going on, and from the server side we can express: there is a promise here at this path.
F: This path isn't complete, and the same for these other two paths that are deeper down. Part of the reason I'm thinking about things like this is that I'm quite interested to see how we might also be able to use this mechanism to implement live queries. With live queries we might want to be able to do patching at arbitrary locations in the GraphQL response tree, and I think this would tie in quite nicely with that, though of course it's out of scope.
B: So the thing is: if you have something like this, your server could DDoS itself if it's not done properly, because you suddenly transform one defer into many, and then maybe you have that in a list and so on. Then you're creating a lot more defers, and actually creating this multiplier that we want to get rid of.
B: But apart from that: the first iteration that we did in the beginning actually held off on the deferred tasks, and that leads to not-so-nice performance characteristics. A lot of the feedback we got from the community is that people essentially want to have the deferred results as early as possible.
B: Not that one or the other thing is wrong; that's just the feedback we got from the first iterations that we did on defer, producing Hot Chocolate bits for the community. The community is much happier with the performance of the current implementation, where we spawn off the deferred work really quickly, than with the first iterations, where we spawned it off almost at the end of the request. And, I don't know... yeah, go ahead.
G: Yeah, I do think there would definitely be a conceptual mismatch if we ended up splitting a single defer into multiple tiny defers. And in practice (I could be wrong about this) I think for most servers, given that you have already calculated the values of a, c, d, f, h, i, the cost of re-including them in the deferred payload, versus the cost of calculating up front...
G: ...what is going to be in the deferred payload: it seems like re-including the values is just a cache lookup, whereas figuring out which specific field has not yet been deferred means doing a diff algorithm. I know at least with Relay...
G: For instance, we had to stop doing query diff algorithms because the cost of that (admittedly this was client side, not server side), the cost of trying to compute the difference against the data we had already received, was much larger than just asking for all the data all over again.
B: Yeah, so what we ended up doing, and how this is easy to implement, is actually two traversals over the query tree, and then you can compute it, and the server can actually cache that simplified query. So essentially we just rewrite the query, and then we get a simplified query. It's not even a query plan.
B: It's just rewriting the query: we are stripping the defers and stuff like that and rewriting it into a simpler query. And this is easy to implement. But I think, in essence, and Benji more or less convinced me of this: we don't have to figure out the most efficient algorithm for the spec that fixes all our edge cases, because that is up to the server implementations. We just have to find a reasonable algorithm and then add notes that you can make it more efficient if you choose to.
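The rewrite Michael describes, a pre-traversal that strips redundant defers and caches the simplified query rather than building a query plan, might be sketched like this (queries modeled as a dict of undeferred fields plus a list of deferred selection sets; that shape is an assumption for illustration):

```python
def simplify(query):
    """Strip from each @defer any field already selected undeferred;
    drop defers that become empty.

    This is a pure rewrite of the query document, not a query plan,
    so a server can cache the result per document.
    """
    fields = dict(query["fields"])
    defers = []
    for d in query["defers"]:
        # Keep only the fields not already delivered in the initial payload.
        kept = {k: v for k, v in d.items() if k not in fields}
        if kept:
            defers.append(kept)
    return {"fields": fields, "defers": defers}
```

A defer that duplicates only already-selected fields disappears entirely, which is exactly the "inline it and signal nothing pending" case discussed earlier.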
G: Yeah, I think that if we have the algorithm, and we can do the two-pass or whatever, we could offer it as tooling on the client: "Here's a way the results will be functionally equivalent if you do an optimization pass before sending this query to the server." But on a personal level I'd much prefer, and I think it would be better, for the server response to be as deterministic as possible for the client, versus as optimized as possible.
D: If I were writing a query like this, it would be because I have a component that wants to render all these fields, and that's what's deferred. Then, if it does get split up into multiple defers, maybe that could be handled by the client library, but you would at least have some logic that has to know not to start rendering the stuff until all of these defers came back.
F: Yeah, that's actually a critical point against this approach, because if you imagine adding Matt's __typename hack to this, then it would just come through in that first defer, and those second and third defers wouldn't come through at the same time. So yeah, okay, that's good enough to rule it out for me. Thanks.
D: Yeah, I have a few other examples that I think maybe we should just talk through and see what we would expect to happen. So, in this example...
D: Each of these defers is under a different object, so they all have a different path. So, if we did do that type of merging, would we expect that this defer that's under here, b, isn't included here, because it was sent not in the initial payload but in a previously deferred payload? And then you also get into issues with, I guess, ordering.
E: In the rest of GraphQL, inline fragments and fragments don't affect the response shape. So in this case, however you arrange the fragments, you will receive the same response, and I believe it's the same here. Moreover, since we're now discussing merging...
E: Basically, nothing prevents us from allowing defer on the fields themselves. Maybe it's not critical, and I'm not saying "please do this", but since we merge everything, we could defer fields without a selection set. So I think in the optimized version everything should be squashed: stuff that was requested synchronously is returned synchronously, and stuff that was requested deferred, no matter on which level and how deep it is, should be returned in the deferred payload.
B: But the ordering problem doesn't exist here, Rob, because you have the first defer here, which is nested, right? And we discussed something a couple of meetings back where we said that an inner defer cannot overtake the outer defer. So the b would already be there; we could strike it from all the other defers. And the a is already there too, so we could strike it. And then we're talking about c, which should go further, but now we're in the same selection set.
B: But if it's a nested defer, it should still be guaranteed that it cannot overtake its outer defer. I mean, that's a guarantee we discussed.
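The guarantee that an inner defer cannot overtake its outer defer amounts to an ordering constraint on delivery: a payload at a nested path must not be flushed before the payload at an ancestor path. A sketch of checking that constraint over a proposed delivery order:

```python
def is_prefix(outer, inner):
    """True if `outer` is an ancestor path of (or equal to) `inner`."""
    return inner[:len(outer)] == outer

def valid_delivery_order(payload_paths):
    """True if no payload at a nested path is delivered before a
    still-undelivered payload at a strict ancestor path."""
    for i, inner in enumerate(payload_paths):
        for outer in payload_paths[i + 1:]:
            if outer != inner and is_prefix(outer, inner):
                return False  # an inner defer overtook its outer defer
    return True
```

A server scheduler that respects this invariant can, as discussed, safely strike from an inner defer any field the outer defer already guarantees to have delivered.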
D: All right, yes. Next example. Hold on one second.
C: Yeah, hi everybody. Are we tabling for now Ivan's point about allowing defer on fields? I think... yeah.
E: Yeah, my point was not about that, but about the mental model: in inline fragments there is just a group of fields, and they defer. In the mental model it shouldn't matter whether we have every field in a separate inline fragment with defer, or grouped; the results would be the same.
G: Yeah, so I guess one of the weird things here is that with stream, we've established we always know how to do the path, because it ends up at a field. If we wanted to reduce the scope of the proposal, and I know this again throws a huge wrench into it, actually only allowing defer on fields reduces the scope substantially, because we always have that anchor of the path and we always know the algorithm for merging; and then in the future we could expand it.
D: So you're saying that in this case, where a is not inside a defer and a is also inside a defer, that's a validation error?
G: We could implement it like that; that's one potential. I'm saying the mental model for what you expect as a developer is: there are two valid mental models here. One is "fragment spreads always merge", so because a is outside of the defer, I expect the inner a to just merge with it.
G: Another mental model is: with defer on fragment spreads, when a spread is in fact fulfilled, when it's on the correct type, the whole shape of that fragment spread is guaranteed to be in the response. Therefore, I've deferred this inner a and all the things below it, so I expect the response shape coming out of that defer to look exactly like what I've written in my GraphQL.
B: It's actually a good point, Matt, because if we disallowed defer on fragments and only allowed it on fields, you could basically do the same thing, because we have a field merging algorithm. So it essentially would work the same way.
B: But then we don't get rid of... so the idea, what we are trying to solve here, is to not send too much data down to the client, to not duplicate too much data.
E: For me, if I mentally replace the defer with @skip and set the flag to true, so I replace the defer with @skip and set my query variable to true, you would get everything, like a, b, c, d. So the initial response should include everything which is not under a defer. And the second thing is data duplication, and for me it's: why should stuff be duplicated? With libraries, if you do this, this kind of situation will arise.
E: If you have fragment-based frameworks, they can provide a predictable shape for every fragment; they will do it anyway. It's possible that you get a, b, c, g in the first response, and you will get the rest, like d, e, f, and h, only when they're deferred, and a framework can work with that and understand it's inside the same fragment.
E: If that was also asked for, stuff that is in the initial payload, you take it from the initial payload and pass it to the component. I don't see a problem with just sticking with the first mental model.
G: If I were implementing this just naively, the way I would do it is: my fragment model points to a JSON object, and then, if my fragment is deferred, I just point to the JSON object that we get in the defer response, and I don't do any merging. That's the most naive way of operating a client. So that's what I would expect as a product developer who has no actual client library and is custom-building their components based off of what's actually coming down the network.
D: I mean, how crazy is it to actually have the validation rule where you can't have a field that's both deferred and not deferred? It means that you have to write a bunch of aliases. You'll either have to consciously say "I'm okay with duplicated data" or write a bunch of aliases until, yeah, fragment aliases come around.
B: Yeah, but I still think the efficiency gains that we get by merging these are something we shouldn't throw away. For me, we don't have to get every edge case; it's more like a best effort. We try to merge, and if we send a bit more data down in this or that implementation, it doesn't actually matter, because you always have a transport layer in the client application, and there you have to aggregate this stuff in the end.
D: Yeah, well, just to talk to this example that Yakov posted in Discord: what's interesting about it is that you have this defer, which is nested (its path would be a.b), and this one, whose path would just be the root, and so...
D: I think this one actually does depend on the order, because if this e/f is coming first, then all of this could be removed from here; but if this one is coming first, then you definitely can't remove this, because a client is going to expect these fields to be there. And it would depend on your data which one comes first, so it can't be statically analyzed, I think.
B: But we try to make it as efficient as possible, so it's a best-effort approach. The counterexample is the one I showed, where we send tons of duplicated data down and then you have a problem otherwise; and in these cases we actually can get rid of it, yeah.
G: So I think there's a difference between trying to reduce the number of deferred responses and trying to reduce duplicated data, and I think that reducing the number of deferred responses makes sense. In this case you could end up having two defers, or just one, or zero, in fact, depending on what the server decided was most efficient; and the defer spec needs to be expressive enough that a client knows which case they're in, in any of those situations.
G: In my mind, the spec is actually not a network protocol, even though we talk in JSON and we say, "oh, if you're using JSON, this is how it should end up looking." The spec itself explicitly says you can do all kinds of things between what the server algorithm is and what the access pattern is.
G: You can do whatever optimizations you want, so you could have a protocol in between that reduces all the duplication and then reassembles it on the other side. But when I'm accessing the object that has been deferred, I expect all of the fields underneath my defer to be available on that object. How it gets there... we could have GraphQL-over-HTTP, for instance, reassemble it.
F: I don't agree with that exactly as stated, or at least I may not be 100% following you. But if you think of it in terms of just the basic JSON that the spec describes, I think what you're saying is that the JSON that is the deferred thing should have all of that stuff in it, and if you want to do a network optimization and throw it out, then sure, you do that; but from a spec point of view, that JSON object has everything. I don't think that's really the case, and, sorry, I don't think we should be pushing for that.
F: I think we should see the incremental deliveries, other than the root-level one (which is like Ivan described, with all the skips instead of defers: "yep, these have all been skipped; here's the rest of the stuff"), everything above that, I think, as a patch. So I wouldn't expect you to be able to use that patch on its own; I would only expect you to see...
E: If they can be handled in code generation or client libraries, they should be handled there, and libraries should have the freedom to innovate. Some libraries can choose to expose it as it arrives; some can wait for full blobs, where you ask for this fragment and get all the data for this fragment after everything is deferred. In some cases, maybe, you will get it in two stages.
H: Yeah, I think, and I hope I'm taking Michael's point and running with it: I don't think we have to solve all the edge cases. I think, you know, to get this moving we can just pick which model we're going with, like a lot of people have been saying, and I've always assumed (or not always, I mean in this iteration) that we're going with it.
H: From there, I've been working on an algorithm that does deduplicate completely, and it quickly gets complex; so far it's not quite even working. So I'm not sure that, in the time frame in which we want to release this, we'll get it, you know. We might. But maybe it pays to just say that we can add it.
D: For this one: if the server is pretty naive, if this algorithm is naive and you do end up just executing this, then separately executing this, then executing these defers on their own, is it okay that you get back basically two payloads with the same path?
H: I mean, I personally think that clients should expect to be able to merge anything that makes sense; at whatever path they get, they should be able to merge it, and they should just expect that things will be complete at that path.
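The client-side expectation here, merge whatever arrives at whatever path, reduces to patching the response tree at a path. A minimal sketch, assuming object (dict) nodes only (real paths also index into lists for @stream):

```python
def merge_at_path(response, path, patch):
    """Merge an incremental payload's data into the response at path.

    Later payloads for the same path simply merge in as well, so a
    server that sends duplicated data remains harmless to this client.
    """
    node = response
    for key in path:
        # Create intermediate objects as needed while walking the path.
        node = node.setdefault(key, {})
    node.update(patch)
    return response
```

With this, two payloads arriving at the same path just apply one after the other, which matches the "clients should expect duplicated data" stance voiced later in the discussion.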
H: Now that we've discussed merging, there could be extra payloads, I suppose, or even, you know, empty payloads; but now that we've said that we're merging, a path should be complete. I think everything should be ready once an individual path has been sent.
B: That's a good interjection from Matt: basically, an algorithm that always deduplicates everything is still spec-compliant, and I think that should be true. Our algorithms should tackle most of the issues, but the client should expect that data can be duplicated.
F: No worries. In the proposal that I placed, I don't know, three or four weeks ago, whenever it was, with the IDs: you may recall that in it I had imagined there would be multiple hits for the same path, like you've modeled here, Rob, so they just got different IDs.
F: That way, when you had the pending, you would say: there's a pending thing at this path, and there are two pending things at this path. Then you could resolve them individually, without them affecting each other, and without any confusion because the same path has come through twice. That would also allow you to do the implicit completeness, because at the moment, in what you have here, you've said that this path is pending, but you've not got anything that says that path is complete.
F: You have sent through some data for that path, which might implicitly mark it as complete, or it might not; and then you've got some more data coming through for it, which is either unexpected or not. So I think you either need, at some point later, a "this path is now complete", or you need to do something like I did, where you actually give them individual IDs that you can close off and write to individually. Honestly, I think we should bring that back; I think it was quite nice.
D: Yeah, either IDs, or what I do have here, which is that there are two pendings with the same path, so you have to expect two incremental responses.
E: I have a question. I think there are two issues here. One is: what is the ideal result? And the second one is: what technical limitation prevents us from the ideal result, and what compromise do we want to make to resolve it? We're discussing them in parallel, so I'm kind of confused.
E: I mean, inside incremental you can have entries with different paths, but everything related to a single path should be together: until everything on that level that is deferred is ready, you should wait, accumulate everything on that level, and send it. Everything on one level goes in one go, so you effectively need to batch.
A: We... yeah, sorry to leave it on a cliffhanger. I know we're right in the middle of the discussion, but I believe we might have to end it there. So, maybe this is something we can take to the discussion thread. Do we have clear next steps?
A: Just more to talk about, for sure. Okay, Benji, I'll be in touch with you and we'll figure out getting the meeting set up from the foundation side of things. And yeah, we can definitely talk async in Discord or in the discussion thread as well, and aim to touch base again next week.
E: One quick thing I want to ask, especially of Rob: we discussed various edge cases, and you have a bunch of examples. Can you actually cover them somewhere, the combinations of stream and defer and the examples that we discussed today? It would be great to have that.
A: Right. Rob, what...
D: I'm going to keep working on the implementation, because I think that changing it could at least make things better; but I haven't been able to spend enough time on it to have a clear idea of how it's going to shake out. Okay.
H: Yeah, I mean, my initial work is uploaded, but it's not working; it breaks on a whole bunch of edge cases. So I'm not super confident that it's actually the right direction, but luckily I think I'm pursuing a slightly different technique than Rob, so we were exploring two different avenues, and maybe one of us will get it. Or maybe we're both wrong.
A: We'd better end it, but yeah, we'll all be in touch. Thanks very much, everyone; we'll talk again soon. Thanks, take care, bye.