From YouTube: Incremental Delivery Working Group - 2023-03-20
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
E: All right, so last week we talked about a proposal that Benji put forward in the form of a spec PR. We went through that a little bit. I think since then Benji has had some ideas on how to evolve that; I also have, I think, a different idea of how to evolve that; and I know that Yaakov has been working on a graphql.js PR that also does some defers without branching.

Yeah, so basically, coming off of the last working group meeting, the decision was that we have to rule out any solution that doesn't lead to a fully reconcilable result, meaning that, because of defer, we don't want to execute the same field twice at the same path and get different results. So I kind of want to go over my proposal first, and then maybe we talk about Benji's and Yaakov's after that. I'm going to share my screen. One second.
E: Okay, so in Benji's original proposal there were fields that could be sent in this incremental entry that don't line up with where the defers are; the paths here are not going to line up with where the defers are. The path in the pending object also does not line up with where the defer is; instead it is the most common ancestor of all the deferred fields. And so here it changes, because now, in this case, it's F2 CF, because it's only this field that is not outside of the defer here; it's both the top-level type name and the J field. So it moves up higher.
C: Yeah, I think this is better, right? I'm looking at the GitHub issue, 65, with your post from two weeks ago. Yeah, perfect! This is good.

E: Okay, sorry about that, yeah.
E: So my iteration on this was: instead, we now lock into the spec that the path returned inside of pending always matches where the defer is in the path. I think that's beneficial for clients, because they could use that path to know whether a defer was inlined or not, and when they're rendering a component at this path that depends on all the fields inside, they can more easily know that all the fields are available. So inside of incremental you'll get back all of the fields that correspond. Here, in this case that we've talked about a bunch of times, where E and F overlap in two different fragments or two different paths, you're going to get back two pendings, because there are two defers at two different paths, and you'll get back these two field results. They're separate, because E can't be combined with the other ones, since it's shared by two defers. But whenever all the fields that go under one of these defers are ready, that's when it'll get sent, with the indication that it's completed, and then in a follow-up you'll get all the other ones that correspond. E and F are not sent twice, because they were previously sent; but now you know that this first empty path is completed. And so I have, for the same query, an example result both for when potentially-slow-field-A finishes first and for when potentially-slow-field-B finishes first. Basically, all of these incremental objects are exactly the same; it's just that which response they're returned in is different.
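To make that concrete, here is a minimal client-side sketch (all field names and payload shapes are hypothetical, not the actual spec format): the same incremental entries arrive in a different order depending on which slow field finishes first, but merging them yields an identical final result.

```javascript
// Client-side merge sketch: walk to each entry's path and add its fields.
// Adding is safe because each leaf is only ever delivered once.
function applyIncremental(data, entries) {
  for (const { path, items } of entries) {
    let target = data;
    for (const key of path) target = target[key];
    Object.assign(target, items);
  }
  return data;
}

// Hypothetical initial payload: the deferred object at a.b starts empty.
const initial = () => ({ a: { b: {} } });

// The same entries, grouped into payloads by which slow field won the race.
const slowAFirst = [
  [{ path: ['a', 'b'], items: { e: 1, f: 2 } },
   { path: ['a', 'b'], items: { slowA: 'A' } }],
  [{ path: ['a', 'b'], items: { g: 3, h: 4, slowB: 'B' } }],
];
const slowBFirst = [
  [{ path: ['a', 'b'], items: { e: 1, f: 2 } },
   { path: ['a', 'b'], items: { g: 3, h: 4, slowB: 'B' } }],
  [{ path: ['a', 'b'], items: { slowA: 'A' } }],
];

const mergedA = slowAFirst.reduce(applyIncremental, initial());
const mergedB = slowBFirst.reduce(applyIncremental, initial());
```

Only the grouping into responses differs between the two orderings; the entries themselves, and therefore the merged result, are identical.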
E: Yeah, and I have a few more examples that show that. One thing: right now, what I have in my spec is that basically there's only ever going to be one field inside of each data, but I think that could be fixed in a way that...
C: Yeah, so one of the things, I mean, this is still carrying through that we're only delivering each leaf once, which is great. And what that means as well is that the merging is straightforward: you're never overwriting a property or a key on an object, you're only ever adding it; similarly for stream, I would assume, you're only ever appending.
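That merge property can be sketched roughly like this (a hypothetical helper, not a graphql.js API): because each leaf arrives exactly once, a merge only ever adds object keys or, for @stream, appends list items.

```javascript
// Minimal sketch of the merge described above: it can only add keys,
// append to lists, or descend into objects; a leaf arriving twice would
// break reconcilability, so it is treated as an error here.
function mergeInto(target, patch) {
  for (const [key, value] of Object.entries(patch)) {
    if (!(key in target)) {
      target[key] = value; // new key: just add it
    } else if (Array.isArray(target[key])) {
      target[key].push(...value); // @stream case: only ever appending
    } else if (typeof target[key] === 'object' && target[key] !== null) {
      mergeInto(target[key], value); // descend; leaves are never rewritten
    } else {
      throw new Error(`leaf ${key} delivered twice`);
    }
  }
  return target;
}
```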
C: So this is great from a simplicity point of view. I hear what you were just saying, which is interesting: you're effectively saying we would never deliver the potentially-slow-field-A and the E in the same incremental entry, like what you have highlighted, because you might have delivered the G, H and potentially-slow-field-B first, which would want to come with the E and F. So, in order to make the object predictable, which is the critical thing that I think you were saying there, you need to keep them separate. The rest of the fragments, like the G, H and potentially-slow-field-B, could all be in the same thing, because they're not referenced anywhere else, so that could easily have just been one incremental, if you were to make the relevant spec edit. I think that's what you're saying, right? Yeah.
C: The main advantage, Yaakov, is for strongly typed clients that expect payloads to be validatable via some kind of, you know, JSON parsing library or whatever. They need to be able to work out all the possible object types that they might receive, so they can then coerce them. And I'm not talking TypeScript here, because it's incredibly flexible; I'm talking other, fixed, static languages that want to take JSON, which is a very fluid format, and turn it into a struct that they have in their programming language, and that's hard to do if we allow you to just have a wishy-washy "sometimes it has this thing, sometimes it doesn't" kind of type definition.
F: I'm not sure that's... I mean, the incremental will be wishy-washy, but the merged result will never... I mean, once something has been completed, the data will always match.
B: Would it work that, if you don't get any more incrementals for a path, there is nothing more coming, because that would be an implicit completion? I mean, we talked about that in a meeting, in effect: essentially, if you defer a part of the tree, then if you don't get new increments for that part of the tree, it's completed.

E: I do think that you would have to have this "completed", because there could be more stuff that is lower than the path, but it's not necessarily... yeah. For this example, this path is referring to the deferred path. Maybe we even want to just call this field "deferredPath" and not "path", to make that clear. And when this one is completed here, that doesn't mean that there's nothing underneath the root that's still pending.
E: It just means that everything that was grouped under the defer at this path is completed. But also, maybe if I explain how the algorithm works, it makes sense. What's happening is: as we collect fields, I'm building up a map of the paths for all the defers, and a list of all of the field sets that need to be executed for those defers. So for this example, there's a defer with a path a.b, and there's a defer with a path at the root. a.b is going to have one execution record for E and F and another execution record for slow-field-A, and then the one at the root is going to point to that same instance of the same object that has E and F, and it's also going to have another one for G, H and potentially-slow-field-B. And now, when it's time to send the results, we can say: for each path, which one has all of its deferred records completed? Then send those results, and we'll put a marker on the ones that were sent, so that when we go through the next time, we know that E and F were already sent, and we can ignore that and send only the one here.
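A rough sketch of that bookkeeping, with hypothetical names: each defer path owns a list of execution records, a record shared by two defers is the same object instance, and a "sent" marker prevents it from being delivered again on a later flush.

```javascript
// One record shared by both defers: marking it sent once is enough.
const efRecord = { fields: ['e', 'f'], completed: false, sent: false };

// Map of defer path -> execution records that must complete for that path.
const defersByPath = new Map([
  ['a.b', [efRecord, { fields: ['slowA'], completed: false, sent: false }]],
  ['', [efRecord, { fields: ['g', 'h', 'slowB'], completed: false, sent: false }]],
]);

// Flush every path whose records are all completed; skip records already
// sent under another path, and mark the rest as sent.
function flush(defers) {
  const payload = [];
  for (const [path, records] of defers) {
    if (!records.every((r) => r.completed)) continue;
    for (const r of records) {
      if (r.sent) continue;
      r.sent = true;
      payload.push({ path, fields: r.fields });
    }
    defers.delete(path);
  }
  return payload;
}

// The a.b defer finishes first: its two records go out together.
efRecord.completed = true;
defersByPath.get('a.b')[1].completed = true;
const first = flush(defersByPath);

// Later the root defer finishes; E and F were already sent, so only the
// remaining record goes out.
defersByPath.get('')[1].completed = true;
const second = flush(defersByPath);
```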
E: So I do think that the way these are combined also has implications for null propagation: with these fields that are separated, a null that bubbles up can't affect the ones that are outside of its incremental entry. So I do think that one weird thing, which I think is true for both my proposal and Benji's, and maybe also Benji's iteration, is that you would be changing how the null bubbling works if you added more fields outside of the defers, because...
C: Yeah, I think we have that problem in general anyway, and what would happen there is that the F would bubble up to the E, which would then bubble up to the entire defer and invalidate that entire defer. I think that's what we've discussed previously anyway, even when we were talking about the branching form, where E would be executed twice; in that case the second one would effectively invalidate the entire defer payload.

That is a major issue, actually. Okay, yeah, because that breaks fragment consistency. I think, if that top one, potentially-slow-field-A, was a __typename saying that, you know, this fragment is complete, and then that E and F couldn't come through because of an error, you wouldn't have a field there.
C: Yeah, and that's even worse, because if you were to query the E field outside, with some __typename inside of it, something like that, whatever, then you have E, and then you pull in your potentially-slow-field-A, which is your fragment marker, which says: if this is present, then this entire fragment has been pulled down. But that F field wouldn't be present, right? Your fragment should either be fully true or not be true at all, and this, I think, would break that.
F: I mean, are we only shipping the piecemeal parts of, you know, potentially-slow-field-A once that entire deferred fragment completes? Meaning, even if it's being shipped in separate payloads that are deduplicated, are we only shipping it if it completes entirely, and therefore we should still observe the same null bubbling?
C: Yeah, we'd have to. We are, I believe, because Rob based his on my pull request, so I believe that this is still done in an atomic way. So, even though there are multiple incremental entries, they are treated as an atomic whole, as in, all of them must be applied for consistency to be maintained. So I think what you're saying, Yaakov, is that we could detect that this issue with F happened, and thus invalidate the potentially-slow-field-A, and yes, that would be a way of solving this problem.
F: Oh sorry, go ahead. Well, if we get to my proposal later, which, I'm not sure, you know, I sort of worked differently, from the implementation rather than the spec, so I'm not sure how different it is. I can sort of show where we might be able to...
E: Yeah, so is this one clear enough to everyone? Do we want to talk about it more, or do we want to move on to Yaakov or Benji?
G: Yeah, I have a question about the differences. So the biggest difference, you said, is that it makes the types of responses predictable, and the path is matching the defer path, right?
C: Right, I think there are two main differences. One is that, for the pendings, the path in them relates to the defer path, whereas in my previous proposal the path was the deepest point at which something was pending. And the other difference is that the incrementals are being broken up a bit more: they're broken up on a per-field basis, whereas mine was more like, I guess, a subset of a selection set.
G: Yeah, just to clarify: I think pending and completed can be changed independently, but I agree the biggest difference is... with Benji's proposal, for me, it's more logical; it's more of an incremental model.

But then in that proposal everything is combined down into one big block of things, and here... one key difference for me is that in that proposal the fragment is a thing, whereas here it's similar to how, in GraphQL today, fragments are just a convenience thing: they don't affect the response in any shape or form.
E: I wouldn't say that is totally true. To be clear, in mine the grouping is not per fragment, but at a path where a defer exists, so you would have exactly the same results if this was one defer with only E and F inside, and another defer at the same level with only the potentially-slow field inside; because these defers are at the same path, they're treated as one.
G: Okay, yeah, you're right! It's not that it's tied to a particular defer, like... okay.

So it's path-based: it would pass the test if we applied defer with the directive on the fields, without fragments, and the response would be the same. So yeah, you're right; my comment was incorrect.
E: Yeah, and it's guaranteeing that all the results for a given deferred path are returned.
G: Yeah, so from one point of view, the biggest difference now is in how execution works: we can have some fragments satisfied earlier and other fragments on the same level satisfied later.

Yeah, it's a big advantage. Yeah, thanks; now I understand the difference between the proposals. Okay.
E: The same thing could, in theory, be achieved with fragment analysis in the future.
F: Awesome. So basically, like I said, I sort of worked from the different direction, and I'm honestly not yet sure of all the differences between them, so I'm just going to try to work backwards from the implementation to spec edits, and also try to highlight the differences that I've seen. Basically, I worked from the existing graphql.js main implementation, which is before we dropped label, and before we merged defers at the same path, and I tried to work from there to achieve some of what we were talking about. The main first thing that I tried to do was to see if I could get a non-branching executor.

If we could basically work from Rob's, or I guess the original, implementation, and see what we could do. And basically, in terms of tests passing, in terms of everything seeming to work, I think that's been achieved. So we can sort of weigh the changes from these new spec edits against an actual implementation and see that everything works out.

Okay, so how did I do that? I don't have spec edits for you, but at a high level I can discuss what it is. Basically, the format of the payloads doesn't change; that hasn't changed yet, I should be clear. So there's no pending, there's no completed; there are still labels, which are optional. One change is that we merge all defers at the same path, as long as they either have no label or they have an identical label. And so we're still able to achieve separate components at the same path getting responses when they're ready, like by a subset, even before we have fragment modularity. So again, I'm sorry that I don't yet have spec edits, and there aren't many examples of different payloads I can show, because again the payload is still the same as of yet, but I can point out the high-level work, as I was saying. So that's over here.

So the first thing is that, similar to a different PR that I had, I separated out a publisher. The original implementation had a list of promises that were raced.
F: What I do now is create a publisher that is passed down within the execution context, graphql.js's execution context, and then it's made available throughout the execution tree. When something is ready, you complete it, and then it's pushed... well, it's not pushed, it's set to a pending status, because we have to make sure that its parent has been completely finished and that its parent has been sent. And then, once a payload is actually sent, all of its descendants that are waiting can be sent automatically. In my mind, that sort of made it a little bit easier to work with the events as they're happening.
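A minimal sketch of that gating, with hypothetical names (not the actual code in the PR): completed payloads are held as pending until every parent has been sent; sending a payload can then release waiting descendants.

```javascript
// Publisher sketch: records are held until all of their parents have
// been sent, then released, which may in turn release their children.
class Publisher {
  constructor() {
    this.sent = []; // ids of payloads already delivered, in order
    this.pending = []; // completed payloads waiting on their parent
  }
  complete(record) {
    this.pending.push(record);
    this._drain();
  }
  _drain() {
    let progress = true;
    while (progress) {
      progress = false;
      for (let i = 0; i < this.pending.length; i++) {
        const r = this.pending[i];
        // A record may be sent once every parent has already gone out.
        if (r.parents.every((p) => this.sent.includes(p))) {
          this.pending.splice(i, 1);
          this.sent.push(r.id);
          progress = true;
          break;
        }
      }
    }
  }
}

const pub = new Publisher();
pub.complete({ id: 'child', parents: ['root'] }); // held: root not sent yet
const heldCount = pub.pending.length;
pub.complete({ id: 'root', parents: [] }); // sending root releases the child
```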
F: So, I'll come back to that later. I think that publisher construct is sort of like passing the stream down, passing that publish event down to every single level of execution, whereas it looks like Benji's, and then Rob's, iteration is doing a similar thing, but, you know, I have to look at it more and understand it better. There, you're sort of receiving the results of the pending defers and streams and then working with that result, so I think there may be large areas of equivalence.

So then we introduced the FieldGroup type, to store information about a merged set of fields at a particular depth within the operation. Formerly, the grouped field set was a map of response keys to their field nodes, and now it's a map of response keys to a FieldGroup. I think that corresponds to an entity in the spec edits, I think it was called FieldDetails, which is a similar sort of thing. And then we pass that field group into collectFields every time we run collectFields, so that the resulting grouped field set can have information about which defers, at every path, which deferred payloads the fields should go to. Maybe they go to the initial payload; they may go to different payloads at the same path with different labels, or to different payloads at different paths.
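The shape change can be sketched as follows (hypothetical structures, not the actual graphql.js types): each response key now maps to a group that records which deferred fragment, if any, each field node came from.

```javascript
// Hypothetical deferred-fragment record shared by all fields it owns.
const rootDefer = { label: 'big', path: [] };

// Before: a grouped field set was roughly { user: [nodeA, nodeB] }.
// After: each node is tagged with its owning deferred fragment,
// or null for a field selected outside every defer.
const groupedFieldSet = new Map([
  ['user', {
    fields: [
      { node: 'userFromQuery', deferredFragment: null },
      { node: 'userFromDeferredFragment', deferredFragment: rootDefer },
    ],
  }],
]);

// A field belongs in the initial payload if any of its occurrences is
// outside every defer.
function inInitialPayload(fieldGroup) {
  return fieldGroup.fields.some((f) => f.deferredFragment === null);
}
```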
F: And then, finally, the result of collectFields is going to include that grouped field set, with all that extra enriched information, and also the set of identified deferred fragment references at that level, I should say, within the operation. The basic highlight is that it works similarly with executeFields.

I'm going to just highlight how it works with executeFields. As usual, we begin by getting the results of collectFields for that level of the operation, and we now have the deferred fragment references within that operation, so we create any new deferred fragment record objects as needed. Some of those may have already been created from a higher level of the tree, so that means we have to pass them down. Also, any previously created deferred fragment objects from higher in the tree need to be passed down, and we also have to pass down the set of parent async payload records for these new records. A parent, in this context, is anything that, when it's completed, lets a deferred record be sent.

Meaning, an object has to be completed, sent down as the empty object or an object with some initial fields, and then, once that's sent, the deferred payloads can be sent. But there could be numerous different parents, from different places in the tree, as opposed to a single one, because we're no longer doing branching. So now, instead of having a parent async payload record, we have a set of parent async payload records, but it's basically similar to how it was in Rob's initial implementation.

So, after we create those new deferred fragment records, for each deferred field within the field group we have to tell that async payload record that it needs to wait for that field result. We basically loop through all of those field groups and we tell them: you're going to be considered complete when all of these items come out. I didn't yet read through Rob's batching proposal, but that's basically where batching is done: we tell the async payload record that it can only be completed when all of its records complete. Okay.

So, once we get through steps two and three, most of the work has been done. Then we kick off field execution as usual, with the only change, in the reference implementation at least, being that we delay the fields whose field groups should be deferred by a single tick. That would be equivalent, I think, to the execute-deferred-fields step in the proposal and in some of the spec changes that I've seen.

Here's where I'm not sure if it works by exactly the same mechanism: during field completion, meaning during the complete-value step, we notify the async payload record that the field is complete. If it's a list, we just tell it that it's going to be an empty array; if it's an object, we just tell it that it's going to be an empty object; and if it's a field, sorry, a leaf, then we get the value.

Okay, so when no remaining fields are left, then we publish. And then, obviously, number six is that we have to handle errors. So this is kind of like the minimal set of changes that I saw to add non-branching functionality.

To achieve deduplication: basically, what's happening is that the initial result is returned, but we also have async payload records that are set up, notified of what they're awaiting, and then informed as each object and leaf value completes, and then we publish that full async payload record. But we could easily change this: we can separate out when the async payload record is finished from how exactly it is published. For instance, we could add that, when it's published, it simply says that it's completed, and the individual leaves are published separately.

That would be a sort of easily altered behavior; it's very easy now that we don't have branching, and it's sort of built in. We could do it without WeakMaps or without caching: we can simply annotate the paths. Right now, in the graphql.js implementation, the paths are annotated with the type names, and we pass them down through the tree, but we can annotate them with whatever we want, very easily.

Here we go. So one wrinkle is that we currently expose the path to resolvers, and so, in order to, I guess, avoid circular dependencies, even in our TypeScript definitions, since I'm now annotating the path with the field group... that's just a curiosity over here. But here is basically the information that's being saved in the collectFields results: we're sending down the parent type and the fields, which are a map of which deferred fragments they belong to, and here the deferred fragments are unique based on their label.

Sorry, they're just based on their object identity, but we are careful to only create new ones if they have different labels. And then we have a structure called a tagged field node. The tagged field node is not a regular field node; it's a field node plus which deferred fragment it belongs to, and it could even belong to a deferred fragment way up in the parent hierarchy.

That helps us determine whether we need to initiate a new defer. I'm trying to see if I can decrease the number of entries over here, but basically the key one is whether we should initiate a defer, and we should initiate a defer if one hasn't yet been initiated for this group of fields, because even for deferred subfields, if their parent field has already been deferred, we don't necessarily need to wait that initial tick. So that's what this Boolean is doing, and we use these priority values basically to determine that, but we shouldn't really need to use them during execution. The depth we're saving basically just to figure out how stream fields interact with all of this, because every time you have a new stream field, you can basically sort of ignore defers from earlier on. But that's something I need to write out, actually, because I'm not sure I can explain it. Yeah.

That's just to get a sense of the number of different pieces of information we're saving in the field group. I'm not going to walk you through every single code change, but I'm just going to show you the executeFields algorithm, and to my eye, at least, it looks pretty similar, some of which I'm happy about; it's how some of the spec changes are adding up.

F: Well, so we could deliver leaves multiple times; this naive implementation does deliver leaves multiple times, it has a lot of duplication. But I think it should be fairly straightforward to move from this working implementation to one in which we have complete deduplication, because basically we have events that are working as expected, in terms of whether a payload is complete, and once it's complete, we ship it. But what do we ship? So, right now, we ship everything that we initially declared as pending, which would be complete duplication. But what we could ship instead is just a little entry that says that deferred path, with that label, is completed, plus any leaves that have not yet been shipped.
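That deduplication strategy can be sketched like this (hypothetical shapes, not the actual payload format): on completion, ship a completion marker for the deferred fragment plus only the leaves not yet shipped.

```javascript
// Track which leaf paths have already gone out in any earlier payload.
const shipped = new Set();

// On completion of a deferred fragment, ship a completion marker plus
// only the leaves that no earlier payload has already delivered.
function completePayload(label, leaves) {
  const fresh = leaves.filter((leaf) => !shipped.has(leaf.path));
  for (const leaf of fresh) shipped.add(leaf.path);
  return { completed: { label }, incremental: fresh };
}

// Two overlapping defers: the second ships only what the first did not.
const first = completePayload('part1', [
  { path: 'a.b.e', value: 1 },
  { path: 'a.b.f', value: 2 },
]);
const second = completePayload('part2', [
  { path: 'a.b.e', value: 1 }, // already shipped by part1
  { path: 'a.b.g', value: 3 },
]);
```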
F: And so, you know, this is pretty straightforward. There's still a lot of discussion as to what exactly we should ship, but right now we have the capability, from here, to achieve complete deduplication: to set up a pending entry every time we have new pending payloads to ship, and a completed entry every time we have new completed payloads to ship.

That actually brings me back to the last bit from the PR that I wanted to share, from that PR comment, and then I'm basically done. That's over here, at the bottom. So, okay, I think, with your question, we've covered that.

For deduplication, we can alter what happens when async payload records are completed, to only include the difference. You're right, okay, yeah, that's basically what we said. And then we can similarly also easily include pending and completed arrays using the above flow. What we can't do, which is what I think Rob suggests, if I understood correctly what Rob suggested: we're not actually breaking apart the deferred fragments into smaller bits and pieces. I guess we could do that, but we're not yet doing it. So what we can't yet easily do is take out the parts that are common to two different deferred fragments and separate them out, and so, depending on the order in which things resolve, the payload shapes wouldn't be completely predictable, unless we chose to ship them leaf by leaf.
G: Just to clarify, and correct me if I'm wrong, but my assumption is that your refactor didn't change external behavior, right? So it's a pure refactoring, and it can be used as a basis for future proposals. But by itself, what's currently done in the code is not a proposal; it's just a refactoring of the existing state of things in the graphql.js main branch, right?
F: That's pretty much exactly right. I mean, we will also have corresponding spec changes. It's basically a proposal to say, you know, a baseline, I think, for any additional changes that we might make, in that we no longer have branching, so we no longer have that troublesome issue that we may have different values within the result. And it shows that we can retain labels; we can do deduplication, or we could not do deduplication; we could send completed fields, etc., etc.

It's possible to do all of that without necessarily changing the spec into completely new spec text, but there are going to be spec changes. So I'm not sure, again: I'm going to work on spec text that matches this, but I'm not sure whether it pays to work on which version, you know, with a lot of things in motion. So I'm not sure if the spec text that I create to match this will necessarily be easier to follow than what Benji has already, but I just wanted to share that progress, good news.
E: Yeah, I did implement an extremely rough version of what I wrote out, basically just to validate that it does what I was thinking it would be doing while I was working on the spec text. It's not in any state that it's ready to share or anything, but yeah.
G
G
Yeah
yeah
so
proposal
can
we,
if
we
think,
is
that
the
Jacob
created
is
not
proposal
by
itself,
but
a
baseline
for
like
future
proposal,
and
we
can
because
right
now,
if
I
open,
Avengers
pairs,
he
took
overcome
man.
It's
all
quite
a
bunch
of
change
everything
it's
like
huge
change,
but
in
reality
it's
have
two
things:
it's
how
like
a
description
of
algorithms
and
it's
have
like
a
new
thing
so
like
what?
What
the
what
as
far
as
understand,
what
yaakov
did
you
extracted
like
you
extracted
the
algorithm
that
allow
us
to.
G
To
understand
what
has
come
to
run,
everything
simultaneously
and
understand,
what's
already
shipped
and
what's
not
and
send
them
it
batches
without
like
without
like
execution,
which
is
useful
in
any
I,
would
say
it's
like
this
change
makes
sense
in
any
proposal
about
like
the
duplication
or
without
the
duplication
with
proposal
make
makes
a
lot
of
sense,
because
staff
will
executed
once
right.
You
you're
not
doing
it's
like
alternative
to
a
question.
G
It's
like
caching
build
in
into
algorithms,
so
even
if
we
decide
not
to
change
anything
like
if
we
decide
to
stick
with
the
labels
and
if
we
decide
to
like
hold
the
like
everything
that
we've
done
in
this
working
group,
it's
not
useful
in
return
back
to
the
state
like
half
a
year
ago,
this
pair
is
useful
and
related.
Spec
changes
is
useful
and
what
I
advocate
for,
if
a
magic
it
will
allow
us
to
clearly
see
which
purpose
of
changes?
G
What has changed by which proposal. Because right now, if I decided to implement Benji's proposal, it would be a bunch of code, and if I decided to implement my own proposal, it would also be a bunch of code. But 80 percent of that code is the same, and that's what Yaakov's PR is: it's like 80 percent of the code that we need for Benji's proposal. And if we do the same in the spec, it will allow us to have compact PRs that propose the actual changes.
G
Because this PR is a refactor; it's not proposing actual changes. Then we would have per-proposal changes in the spec and in graphql.js, where only the stuff related to that proposal is changed. So I'm pro using that as the baseline: merging it into graphql.js, and merging something like it into the spec.
E
Yeah, that's fair. We're running out of time, so I want to give Benji some time to talk about the latest iteration that he's thinking of.
C
Yeah, I'm not sure if there is enough time, to be honest, but let's give it a go.
C
All right, so I'm just going to quickly scan over this presentation. So, incremental delivery: fragment consistency is one of the things that I care about. There's two ways of achieving that. One is to deliver the fragment all at once; another is to deliver the two parts, like Rob was doing, but make sure they're in the same incremental payload.
C
We need to have a reconcilable final object, so every path must settle on one value, just like Yaakov's approach ensures. We want to know which fragments have actually been delivered, so that it's useful for the client. We need to be careful about the propagation of nulls and about result amplification. And then there's a bunch of options that we've gone through. This is my previous proposal, which I am not happy with; I think one of the problems is that it prevents independent execution of siblings.
C
Another is, you know, there's various issues in it, and also Rob's pointed out there's some oversights in the edits that I made themselves, so they aren't actually self-consistent currently. Which is fine; I'm probably not going to fix that, because I have an alternative solution.
C
So in this alternative solution, we build up incremental paths, which are a little bit similar, actually, to what Yaakov was saying with his incremental delivery approach, which uses the object as the key in a map. Except, rather than this object with a label in it being the key, the key would actually be the selection: so it would be a field, or an inline fragment spread, or a named fragment spread. So for defer, it's going to be one of the fragment spreads; for stream,
C
it's going to be a field, but one that is an incremental selection. And through those selections you can effectively form a path that describes the path to that field through all of its deferred selections. So then you start by looking at the root selection set: you execute everything that's not deferred, and then you look at the next layer, at everything that has been deferred once. I would talk through this, but we don't have enough time, so I'm going to skip over it and go to some examples.
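(The layering idea just described could be sketched roughly like this in Python. The flat path-plus-defer-depth representation of selections and the field names are illustrative assumptions for this sketch, not the actual spec edits.)

```python
# Sketch: group fields by how many @defer boundaries enclose them.
# Depth 0 is executed for the initial payload; each deeper layer is
# delivered in a subsequent incremental batch.

def layer_fields(fields):
    """fields: list of (response_path, defer_depth) tuples."""
    layers = {}
    for path, depth in fields:
        layers.setdefault(depth, []).append(path)
    # Return layers in delivery order: initial payload first.
    return [layers[d] for d in sorted(layers)]

fields = [
    ("f.h", 0),  # not deferred: part of the initial payload
    ("f.j", 1),  # inside one @defer
    ("f.k", 1),
    ("f.l", 2),  # @defer nested inside another @defer
]
layers = layer_fields(fields)
```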
C
So let me turn on my little laser pointer. Okay. So here, this is a query we've seen a number of times. Here we have a root-level defer, we've labeled it D2, and we've added in the J and K. So our initial payload has all of this stuff, and then we say the D1 is going to come later, and here comes the D1 with the J and K. So this is similar to what I was saying last week, but I've added in this ability for you to have labels.
C
Now, the important thing about these labels is they're entirely optional. They do not change the way that the thing executes. If you don't put the label in here, it will have no effect on the payloads in the response, other than this additional bit of metadata won't be there anymore.
C
So we can keep this improvement in the path. And then we've got this second defer. Now, this is actually in the same selection set, so we've got the F with the H and I, the J and K, and the L and M all in the same one, but this will actually be deferred to a later point.
C
So that would come then in a second incremental, and then we're complete. Labels are optional; labels don't affect the grouping. If we look at this original query that I raised, I think back in October: because all of these share the same bio field, they would all be merged together. So we would treat this as a list, okay, so there's going to be three entries, or however many there are in your list. But when we have this pending, this pending satisfies all of those labels.
C
All eight of these labels are satisfied by this same pending. We only need to deliver the data once, and then we say that it is done. This "done: true" replaces what we had before with the completed list; it doesn't really matter whether we do it through completed or through done. It just saves us having to list the ID twice, because we know it's here, so we can just say it's done over here. And that is it for this one, optionally.
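(A rough sketch of the payload sequence being described, in Python. The exact field names — pending, incremental, done, hasNext — and shapes are reconstructed from the discussion and slides, not final spec text.)

```python
# One pending entry can satisfy many labels, the data is delivered once,
# and "done": True on the incremental entry replaces a separate completed
# list. A naive client merges each incremental entry at the pending path.

initial = {
    "data": {"me": {"name": "Ada"}},
    "pending": [{"id": "0", "path": ["me"], "labels": ["D1", "D2"]}],
    "hasNext": True,
}

followup = {
    "incremental": [{"id": "0", "data": {"bio": "..."}, "done": True}],
    "hasNext": False,
}

def merge(result, payload):
    """Naive client-side merge of incremental entries into the result."""
    for entry in payload.get("incremental", []):
        pending = next(p for p in result["pending"] if p["id"] == entry["id"])
        target = result["data"]
        for key in pending["path"]:
            target = target[key]
        target.update(entry["data"])
    return result

final = merge(initial, followup)
```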
C
Here's an interesting one. This one has a defer inside of me, and a defer outside of the me that includes the me. So the difference is: here we've got this ID in here, and we've got this value. And this was Yaakov's issue: if the list returned here and the list returned here gave us different values, then we get problems.
C
What my currently proposed solution says is that because these defers overlap — they both have this, oops, sorry, they both have this list field here —
C
they therefore overlap, and thus they must be delivered together. And so they come through together and everything's fine, just by virtue of how field collection works.
C
So that's all fine. You can deliver them in whatever order you like.
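(The overlap rule being described could be sketched like this; representing each defer as a set of response paths is an illustration of the idea, not the spec wording.)

```python
# Sketch: two @defer fragments "overlap" when field collection gives them
# a shared response key at the same path; overlapping defers are merged
# into one delivery group, so the shared field is executed and sent once.

def group_defers(defers):
    """defers: dict label -> set of response paths it selects."""
    groups = []  # each group: (set_of_labels, set_of_paths)
    for label, paths in defers.items():
        merged_labels, merged_paths = {label}, set(paths)
        rest = []
        for labels, gpaths in groups:
            if gpaths & merged_paths:  # shared field: must merge groups
                merged_labels |= labels
                merged_paths |= gpaths
            else:
                rest.append((labels, gpaths))
        groups = rest + [(merged_labels, merged_paths)]
    return groups

groups = group_defers({
    "D1": {"me.list", "me.list.item"},
    "D2": {"me.list", "me.list.item", "me.list.value"},
    "D3": {"me.name"},  # no overlap: stays a separate group
})
```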
C
So in the spec edits, actually, we don't even look at the deferred fields until after the initial payload has executed. And what that means is: if there's a thing there that causes something to be null, then those fields would never even be added as a deferred thing. So we don't have to worry about invalidating fields that are never going to come because a null was raised, because we literally do not hit them; we never get that far through the, yeah.
B
Yeah, ExecuteSelectionSet. I know in the other case you have to do that later on, and essentially have something processed that you'll never be able to send down.
B
Okay, but that's totally fine. It's just what the spec says; it's not like it's recommended to do it this way, it's just how the spec does it.
C
So in this one, we've taken the same query that we just discussed, and I've added this ID field into both of them. Now, that means that these defers, the D1 and the D2, now overlap, because they both have the ID. And so we now only get these two pendings, each of which satisfies both, and here they come with all three bits of data.
C
Which, I mean, in your proposal you automatically batch them together anyway, so it's a subtly different approach to that. So here's another one. Imagine you've got a widgets page: you've got the project widget and the billing widget, and the billing one, you know, is slow, so you defer it.
C
The billing widget itself knows that getting the previous invoices is also very slow, so it does its reasonably fast stuff first and then it defers the previous invoices till later. So this allows you to defer multiple times at the same level, and that will work. So you get the initial data; we then say billing is going to come later.
C
Billing then comes through with your latest invoice total or whatever, and then we say your previous invoices are going to come later, and then they come through with, you know, a list of your 20 most recent invoices or whatever.
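(The widgets example might play out as a payload stream like this; the shapes and field names are illustrative assumptions, not final spec text.)

```python
# Sketch: the billing widget is deferred, and it in turn defers its slow
# previousInvoices field. A nested pending is only announced once its
# parent defer has been delivered, so every incremental entry refers to
# an id the client has already seen.

stream = [
    {"data": {"project": {"name": "x"}},
     "pending": [{"id": "0", "path": ["billing"]}],
     "hasNext": True},
    {"incremental": [{"id": "0",
                      "data": {"latestInvoiceTotal": 100},
                      "done": True}],
     "pending": [{"id": "1", "path": ["billing", "previousInvoices"]}],
     "hasNext": True},
    {"incremental": [{"id": "1",
                      "data": {"items": ["inv1", "inv2"]},
                      "done": True}],
     "hasNext": False},
]

announced = set()
for payload in stream:
    for p in payload.get("pending", []):
        announced.add(p["id"])
    for entry in payload.get("incremental", []):
        # Incremental entries may only reference already-announced ids.
        assert entry["id"] in announced
```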
C
So this also allows defers within defers at the same level, and I think that's actually an important feature that my previous proposal broke. It is, in fact, something that we opted against when we were thinking about it in terms of response explosion; but response explosion isn't an issue with this approach, so we can start looking at that in a different way again. Much in the same way, labels in this proposal are now just a little bit of metadata; they don't actually change the grouping of the defers.
C
So again, we can think about them from another point of view. Yeah, I didn't think there was going to be enough time, and I've rushed through that.
G
Yeah, can I ask a quick question? Can you go to the previous slide? Yeah. So I have a question: I understand how the IDs overlap, and I understand why the ID is attached to it, but why do we batch the ID and the value into one payload? So, like, the first incremental, I understand why; but the second one, I would argue, is questionable. Why do we group these fields together into one payload, and specifically, why should label D1 wait for D2?
C
Okay, so D1 and D2 are doing exactly the same thing, right? Assuming you've already resolved me, then they're getting the list and the items, the list and the items; no selection set inside, right, because it's deferred, it's fully deferred. So they are doing exactly the same thing: just list and item. So that's what they do here, list and item. That's the same result for both of them, so it's satisfied by the same labels, that's D1, D2. Does that make sense?
G
That's, like, the biggest problem with this proposal right now, for me. Imagine I wrote this fragment in a code base and it worked great, as expected, and then suddenly someone from another team, or another part of the application, adds another fragment, and they end up in the same group. And now my first fragment, which was fast, has suddenly become entangled with another one's problem. So fragment modularity, it's like...
E
I don't think it would happen with my proposal. I don't... I'll write an example, but...
C
Right, yeah, yeah, go back one slide. So I wanted to get onto this, actually. So what Rob was proposing earlier, where the ID, because it's shared, can come through in its own payload: I think that's a really great innovation, and I'd like to put that into this, because I didn't know about it before. So I think I can put that into this proposal and still keep all of the benefits, and that's actually one of the things that I didn't like, so it's fantastic that we can solve that as well.
E
Yeah, can you back up one slide? One more, yeah. So, this example: for me, in my proposal, because D1 and D2 are at different paths, there would be one incremental object that has the list and the item (the me, list, item) inside of it, and then there would be separate incremental objects for the ID and the value, depending on when those are resolved.
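(The alternative being described here might look something like this; the specific paths and field names are my assumptions for illustration, since the slide isn't visible in the transcript.)

```python
# Sketch: D1 and D2 sit at different paths, so list/item under "me"
# arrive in one incremental object while the shared id/value fields
# arrive in their own incremental objects, whenever they resolve.
# A naive client merges each entry at its own path.

incrementals = [
    {"path": ["me"], "data": {"list": [{"item": 1}]}},
    {"path": [], "data": {"id": "42"}},
    {"path": [], "data": {"value": 7}},
]

result = {"me": {}}
for entry in incrementals:
    target = result
    for key in entry["path"]:
        target = target[key]
    target.update(entry["data"])
```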
C
Well, actually, I'm not entirely sure that we can do that. One of the motivations for doing it this way is that the deduplication and, you know, only executing things once, comes out as an automatic property of the algorithm, without any effort: there isn't a cache or tracking that needs to go on. This is just, you know, it's like field merging.
F
Deduplication has to, you know, emerge from the algorithm itself, right, without any tracking, even if there's no cache, no WeakMap? I mean, is that an addition? Because that would certainly be a problem for my approach.
B
Yeah, I would like to ponder it after. But can you share the slides, maybe? Yeah.
B
No worries, just so I can start thinking on that.
B
Yes, yeah. And your stuff, Rob: do you already have spec edits in the PR that we have, or...?
B
Okay, I will work through that and see also what the impact is of updating the implementation on our side. Yeah, awesome, yeah.
F
So I'm also going to work on spec edits for my proposal, or sort of a baseline proposal. You know, we'll see, I guess, where...
E
Yeah. And Yaakov, you had opened a PR a while back to update CollectFields into CollectSubfields in the spec, and I based mine on top of yours, so I think that's worth pursuing and getting merged, because Benji also mentioned that he was thinking of doing the same thing in his edits. But I think that's a generally useful update.
C
I'd like to bikeshed the terminology in that pull request a little bit. I haven't come up with better terminology; I'm just not a fan of the CollectSubfields name for the algorithm. But yeah, definitely, I'm interested in pushing that forwards again as a base.
E
All right, yeah. Thank you, everyone. See you all next week.