From YouTube: GraphQL Working Group - 2022-12-01
Description
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Get Started Here: https://graphql.org/
B
A
Me too, I mean we. We already talked about... I mean, we've been doing execution plans for, I think, three years now. Indeed, it really is.
C
D
E
F
A
I saw you, you proposed a oneOf topic, right? Oh.
A
E
So it'll be an interesting conversation. Yeah, just.
E
A
It's, so it's 20 minutes in the ups, and so I'm just waiting a couple more days to have a bit more, and then, yeah. But my snowboard is ready. Nice, freshly waxed.
H
Followed by, yeah.
H
That would be a good time. Yes, do we have close to quorum here? Let's see how many folks we've got: 13. Pulling up our agenda file.
C
There was a little bit of confusion, Lee; quite a few people thought it was on the 12th, I think, possibly because of the 12-Dec numbering of the folder. Oh wow. Which...
H
Good to know; I was not aware of that confusion. I'm actually in the middle of putting together a PR to add the 2023 agenda file templates, and have adopted, I think, Michael, maybe it was your suggestion, to add the day into the file name. I'm doing that for 2023 and onwards, so hopefully that makes it less confusing. But yes, December is in fact the 12th month of the year, that's the 12, but I can certainly see how that could get confusing.
H
But we've got 14 out of 16 of the folks who are listed as attendees; I think that's pretty good, so we'll get started. Welcome, everybody, to the last working group, or primary working group,
H
I should say, of the year, because we've got the EU and the APAC ones set up later, which will be exciting too. Of course, as always, by joining we're agreeing to the spec membership agreement, participation guidelines, contribution guide, and code of conduct; links are in the agenda file if you ever want to refresh yourself on what's in there, what you're agreeing to by joining. As per usual, let's do a quick round of intros, names to faces; that way anyone who happens to be new can hear from everyone.
H
Also a great AV check. I took the liberty of sorting the agenda file by first name in alphabetical order, with the exception of myself, since I knew I'd be talking first. So hello, everybody, my name is Lee; we'll go down the list. Benji, you're up.
J
Hi, I'm Yaakov, individual contributor.
H
B
Yeah, yeah, so I submitted a PR last minute.
H
Welcome, welcome, folks, whether you're joining for the first time or for the first time in a while. And yeah, please feel free to send a quick PR so that we can get your names on there; I'll get them merged as we dig into the content.
H
I know I see Benji already in the notes doc as per usual. I see a couple of people; they're all anonymous, so I assume they're... Can we just get a quick confirmation of who's helping us take notes? Especially in the cases where Benji is chiming in for a discussion, it'd be helpful to have a designated backup note taker.
H
I heard multiple; I don't know exactly who was saying what where, but thank you for the enthusiastic multiple voices chiming in to help out with notes. It's a hugely valuable thing that we have high-quality notes as the result of these. Thank you.
H
Let's take a quick look over the agenda for the day; if there's anything missing, we can add it in. As per usual, we'll take a quick look over any action items that may happen to be open from previous meetings and see if there are any updates on those. I would like to talk a little bit about the technical steering committee elections that are now underway.
H
Oh, I had thought I had added an agenda item; maybe I missed something. Another thing that I wanted to talk about that's not listed here is the scheduling of these meetings heading into next year. Mostly I just want to get a gut check that we can keep doing what we've been doing, but I think it's worth the discussion.
H
The defer and stream discussion with Rob, and next steps for oneOf from Hugh. I might flip those orders, just so that if that's a short discussion we can have it first. Any other things that we want to talk about, or changes to the agenda we want to have?
H
I'm editing now to apply the changes I just suggested. I will take silence as confirmation that we are talking about the right stuff.
H
All right, I have updated the agenda, if everyone wouldn't mind taking a quick refresh.
H
Okay, let's do a quick look at open action items; I'm just going to pop open these first few links myself. I see nothing that is explicitly marked as ready for review. That's okay, we can just look at the things that are actively open. They are sorted by last change, and many of these are from roughly a month ago, so I have a feeling that there's probably not a lot for us to close. Yeah, I think those are the notes that we took from last time.
H
Yep, all right: issues.
H
I think this is an open action, Ivan. I know it's been sitting on your plate, but please feel free to delegate it to someone. This is one of those things that's important to investigate but certainly not urgent. So if anyone feels excited about looking into relaxing this field selection merging rule, or just wants to learn more about it, then drop Ivan a note and I'm sure he'll tell you all about it.
H
It shouldn't continue yet; this is an exploration task. To add a little bit of clarity here: the discussion that led us here was relaxing a constraint on field selection merging. The spec text change is relatively straightforward, but the ramifications of that were less straightforward, and we realized the right next action was to write some code that actually implemented this, and have that as a testing ground to better understand the implications. That was the, yeah.
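For readers following along, the rule under discussion is the OverlappingFieldsCanBeMerged validation ("Field Selection Merging" in the spec). A minimal sketch of a query it rejects today (the `dog` and `nickname` fields are hypothetical):

```graphql
# Rejected by today's field selection merging rule:
# the response key "name" would map to two different fields
# (nickname and name), so validation fails before execution.
{
  dog {
    name: nickname
    name
  }
}
```

The spec-text change to relax cases like this is small, which is why the real work, as noted above, is an implementation to explore the ramifications.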
H
Of course, one of these other open ones is on you; I'm totally okay for this one to remain open.
F
Yeah, let's leave it, because, first, I was busy, sorry, and I'm still rethinking the strategy around this.
H
This one, maybe we can wrap into the discussion that Hugh wanted to have, but I think you've continued to have some irons in the fire to talk about input unions in particular, and then the broader problem domain that struct is set against. Benji, you've got nothing written here, so I take it you have nothing to report. Nothing.
H
Did this end up happening? There's this conversation about meta fields.
H
With the idea to open a public discussion about this. I guess some of the discussion is happening live here. Okay, I'm going to leave this one open.
H
There, okay: there are two "everyone" tasks, which are both from mid-year, and I'm going to bias us in the direction of closing these. Rob, one of these was yours to have eyes on, which was about making sure that the discussion you had opened yielded a useful outcome and got you the appropriate feedback. I know we're going to talk more about the current status of defer and stream.
L
Yeah, it has, and my related discussion topic we came to consensus on, and it's marked as resolved, so we can close this. Awesome.
H
C
Yeah, I think we can go ahead and close that. Obviously Denis has been doing the work on the test suite for GraphQL over HTTP, and there's also been a lot of input from Apollo and various other places.
C
So this is definitely underway, and I think it's got enough eyes on it. So, yeah.
H
That one is on me; that one's open, I know about it. Okay, I'm going to stop there; that gets us a year back, and we don't need to go through all of these. Cool. Thank you all for the discussion; I'll stop sharing here.
H
We will move forward; good to close some open actions from before.
H
Next up: I wanted to make sure everybody here was aware of the technical steering committee elections that are coming up. I have some links in the agenda; I posted these a couple of weeks ago. I realized that the nomination opening date had kind of slipped
H
past me. What was supposed to happen, per what we have written in the charter, is that we open nominations and announce that at the beginning of November, and so the last of these primary meetings is when we probably should have been talking about this. That's okay; I've shifted the whole schedule for these back by one month.
H
So please check out that first link, which is the issue thread that I have open. It's essentially tracking the entire elections process and has a ton of information. The most important thing that most folks here need to keep in mind is that nominations are open, and they will be open through the end of this Monday. There's a link to the form both in the agenda and in that issue, so if you are interested in becoming a member of the TSC, then please apply there.
H
I'll preempt one question that may be in the back of some folks' minds, which is: what does it mean to be on the TSC? What's the difference between that and just joining these calls?
H
The answer is: not a ton. There are a handful of things that explicitly require votes, where you need a sort of fixed denominator to run those votes, and the TSC runs those voting processes. These elections are one of them, but there are a couple of others. Actually, one of the more exciting ones is how we decide on issuing technical grants: the foundation board gives us a source of funding, we dole it out via grants, and the TSC is actually the one who makes those decisions.
H
The other thing that the TSC is responsible for is administration of GitHub. Many folks here have write permissions for various repos, but TSC members have write permissions across the entire set of repositories, the goal there being to scale myself and Elisa, to make sure everything remains well maintained.
H
We ran this last year, and I think it was pretty smooth; we got a handful of new folks on the TSC as a result. I did note, in this comment thread, that Nick, Dan, and Rob mentioned up front that they were not planning to renominate themselves.
H
I'll check with Andy; I believe Andy probably will want to renominate himself, and I'll check with Sasha. So it seems like, at a minimum... I mean, all the seats are open, so there'll be a vote for all of them, but there's a healthy appetite for folks to nominate themselves.
H
All right, cool. Well, I hope to see a handful of you nominate yourselves to be part of the TSC, and we will run the voting process starting at the beginning of next year. That'll probably happen fast: I have us earmarked to have a month open to run the vote, but I think last year we actually managed to complete the voting process in less than a week. So it's likely the case that sometime in early January we will know what that looks like and can roll out the change.
M
H
Cool, that was that one. The next one is also on me. This is likely a very quick conversation: I just want to gut-check the meeting cadence for these working group meetings heading into next year. As I was mentioning before, I've got a pull request in flight that is adding the agenda file templates for 2023, and if there are any changes that we want to talk about, whether it's dates, times, frequencies, or the structure of the templates, any feedback anybody has, I'd love to hear it before I get that up and going.
F
F
H
Yeah, it's a mixed bag. At least there's the GitHub UI: you can hit the edit icon and it's like editing a text file live in the browser, which is slightly less bad than having to locally clone and push a PR and stuff. No, I don't know; what do people feel about this? Is the benefit of having this as flat files in a repo worth this kind of editing?
A
E
G
E
F
J
Maybe somebody could come up with a bot where we could, you know, automate some of that for members who have already joined.
A
But that's all integrated: the bot that checks if you signed the agreement, yeah.
H
That's actually pretty important, because it is in fact a requirement of attending these meetings that you have done that, which we're not super aggressive at policing. But there's an important reason for it, which is that this is an open conversation, and therefore anything anyone says about anything could be construed as a contribution, and the last thing we want is for someone to go
H
grumpy rogue, in the legal sense, and litigiously cause this group to grind to a halt; so that's the primary reason for that. Yeah, putting that in the PR path is useful; there are just potentially other paths. It's kind of interesting; I like this idea of an automated way. There's certainly a manual-ish cleanup that has to happen where, you know,
H
if people put these up, they wait for them to get merged, and when multiple people put them up at the same time you've got to deal with merge conflicts; and on the back end, getting these merged is actually kind of an equally annoying process. Maybe there's some set of things we can investigate in terms of auto-PR; like, if you've signed the COC, there's zero reason not to merge, since we've never rejected a pull request to one of these files.
F
C
K
We could also automate issues: you open an issue and basically describe what your agenda item would be, and the bot would push that to a PR, a single PR, of course, which is a collection of everybody that had opened an issue. That would be one way, but of course that would spam the issues tab, and there would be a lot of issues for each individual wanting to join.
G
H
If anyone feels particularly nerd-sniped by this and wants to write a bot, or do something else to make administering the agenda, like adding to the agenda and managing the agendas, easier, that would be quite useful to see. But, okay, this is a useful thread to pull on.
H
So thank you for the prompt, Roman. I also heard kind of mostly radio silence about cadence, which maybe is not super surprising, since it was only a couple of months ago that we talked about extending this to three a month, but I thought it was worth a gut check: are those still good?
H
I know a handful of folks are attending multiple of those a month. Yeah.
E
Yeah, cool, I do like that. You know, this is usually the most popular meeting, and it is nice that the other two have room for longer-form discussion sometimes; I think that's been a nice result of splitting things up a bit.
F
H
Oh, but that is reminding me: one thing I think I mentioned a meeting or two ago is that we would add a standing agenda item to sort of report back on major things that had been discussed in those other meetings, so that we don't just rely on everyone here going back and reading those notes. Certainly almost all of them are always interesting.
H
It's useful, at a minimum, to take a look at the agenda and, if anything is interesting, follow through to the notes. But that's something I can add to the standard agenda template.
C
It would also be good if we can automate the uploading to YouTube as well, so that they go out a bit quicker.
C
H
A
A
F
H
That's a reasonable flag. Certainly, you know, at a minimum we'd want to make sure everyone is comfortable with that, and it's a good thing to default to assuming no.
A
M
A
H
Okay, we'll table that. I think the main idea here is not necessarily to make it go live; it's just to make the mechanics of uploading the videos after the fact easier. So we'll follow up on that later. Okay, thank you for the useful conversation, everyone; I've got good feedback on how to make sure the 2023 meeting cadence goes well.
E
Yeah, sure thing. This should just be a super quick discussion, I'm thinking, but we've definitely heard from a lot of folks asking about oneOf support who are seeing that the proposal made it to stage two, but people are a little bit confused by the struct RFC and how it looks like it might be superseding oneOf. I was just wondering if we have any clarification: what are we thinking?
C
So I'll take this one. So, yeah, sorry for letting this one sit for a little while; I've been incredibly busy doing other things, but I definitely want to get back to this. It's a big motivation for me at the moment. So struct and oneOf are not strictly mutually exclusive; we can definitely have both, but it's not clear whether we need both, and I am quite keen on the struct idea.
C
So I need to sit down. I've already, you know, obviously written up that document that already exists, written out a whole number of the various use cases that it could be used to address, which isn't necessarily what it's intended to address, but it is things that it can address or things that it impacts. So people can look into that, read those, understand, and see whether there are any red flags there, or whether, in fact, it's actually a good solution for a few of those problems.
C
I didn't really deliberately create this, or at least I didn't jump onto this, because of the input unions problem. This was actually: I wanted structs, and we've wanted structs for a while; we've discussed them quite a lot. For me it was related to a few problems that I've seen with various client projects, such as tables of contents.
C
But they can also address the input unions problem, and that could be done either via a oneOf pattern, in which case it would be a oneOf pattern that can be replicated on output and input, which would be nice; or it could also be done, potentially, with the union-style approach that I've laid out in the struct doc. Now, I wouldn't recommend that we have both of those, which is kind of the reason why struct is on ice. Sorry, why oneOf is on ice.
C
Thank you. Because I want to see, if we are to go with struct, and that's a longer discussion, which of those unions would be better to do in there: whether it would be the oneOf or whether it would be the more natural union sort of approach. Then, based on that, that would influence oneOf. So they are tied together, and that is the reason for the delay right now. Got it.
H
I will flag that we had a counter-proposal to oneOf that was very similar to struct earlier on, and one of the reasons why we decided not to pursue it in favor of oneOf was the realization that many existing schemas already had a structure that mirrored what oneOf was doing, but just didn't have the tools to guarantee it, and therefore didn't have the ability to leverage the constraint. And there was the realization that, in order to roll out input unions for those existing schemas, a new kind of thing like struct would actually not really be viable.
H
The migration path from what was already there to something like a struct would be pretty painful, and most people probably wouldn't go through it; but the adoption of oneOf, since that's essentially already the modeling that they've been trying to express, would be much easier for existing schemas. I think that's something to keep in mind, versus assuming future uses of GraphQL that haven't happened yet, where they can start their schemas from scratch.
H
There's a steeper curve for adopting a new kind of thing than there is for adopting the directive. You know, earlier on I also favored a struct-shaped approach; I got excited about that, and then this sort of line of thinking led me to... I think, Benji, you were actually making the case that the directive was much easier to adopt. So, to your point, that doesn't necessarily...
H
These aren't mutually exclusive, but maybe this is a reason to continue the enthusiasm and momentum around oneOf, instead of, you know, worrying that if the struct proposal were to go forward then maybe oneOf is less exciting. I think what we learned before is that that's probably not quite right.
A
The impact of oneOf on the spec is very small, so it doesn't really matter: if we were to say we take it into the spec, people would already have immediate value from it, with the new validation rules, and then a struct could come at a later point, because it will take a lot more time.
C
So, one of the other reasons, I mean, one of the issues with oneOf, right, is: should it apply on output as well, or should it only apply on input? But, more importantly, we've also been having discussions about metadata, about exposing things like directives with our output schema, and with that it could be that we don't actually need oneOf to be a spec-specified thing.
C
It could be its own thing, much like a custom scalar, especially if that would reduce the problem of having too many different forms of polymorphism in GraphQL. I'm a little bit concerned: we already have interfaces and unions, and then we'd add oneOf, and then, if we maybe want to do something with struct that's maybe more of a union-style thing as well, I'm a little bit worried that, you know, we're going to not want to do that. So, giving people...
C
If we can solve the metadata problem, people can start using oneOf through introspection and everything like that, without having to go too deep into the spec right now, and then we can fully specify it later, if that is the right direction. But, yeah, actually I think struct and oneOf work quite nicely together, so I am open to the idea of pushing oneOf through, but I certainly wouldn't want to do it to the exclusion of struct.
E
H
Yeah, it certainly wouldn't be. I mean, the, I guess now somewhat controversial, take that we have both inputs... or, sorry, both interfaces and unions: Nick Schrock always pressured me on whether that was the right call or not, early on, so you can give him some credit.
H
Benji, my hunch is that having it in the primary spec will allow it to be most useful, because, rather than having to cross-reference things... and there was enough subtlety to the behavior that it is worth having a definitive definition for how it works. If someone comes across it, it doesn't feel foreign; there's a single source of truth where we document what these things mean. And it may take some wind out of the sails of the motivation for a struct type.
H
But, honestly, if it did, then that's probably a reason not to pursue struct. I think struct actually does different and unique things beyond simply oneOf, and if those other things aren't useful enough, then that's maybe a sign that it doesn't quite strike the right balance of value added versus complexity introduced. And so that's probably actually a very useful pressure test for us to apply.
A
Another thing that Benji mentioned that we have to think about is whether we want to solve only the input case or also the output case. That's also something we have to think about. At the moment, I think the last state was that we just apply it on the input side, because that's the problem area, and that's also where we already have the tagged union pattern in GraphQL; it essentially just adds the validation around that. So it is, anyway, what people do.
F
Can I say something? So, basically, although it's a nice idea, I don't feel it needs to be in the spec, and here is why. Let's say I'm an application developer; I have a choice. An application, not a framework, but a particular GraphQL API, and I'm building a model, so I can use a union. The problem with returning a union is that it returns
F
something, and the client doesn't exactly know what; it has to guess what it is, if it needs to. Instead, I can create a type which contains a field variation for every version of the oneOf type, essentially, and make all fields nullable; and then for each variation I have a specific tag, the field name. So it can be either this thing or this thing, but I have prefixed it with the field name, and that makes them clear, and it's essentially equivalent, right?
F
The only thing I'm missing here with the validation rule is that, oh, I have to make all fields nullable, and essentially it means that I can return a thing with all fields set. Now, that's an inconvenience, and oneOf can add the additional validation on top of this: okay, it's only one field, and skip all others. This is essentially a kind of primitive thing, more or less a simple attribute that can be implemented as a custom attribute, and the question is: should we standardize this or not? I don't feel like it should be standardized.
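The workaround described here, and its oneOf counterpart, can be sketched in SDL (the type and field names are made up for illustration):

```graphql
# Tagged-union workaround: one nullable field per variant;
# "exactly one field is set" is only enforced by the server
# at runtime.
input PetFilter {
  byName: String
  byId: ID
}

# With the oneOf RFC, the same shape gains a static rule:
# a literal providing zero fields, two fields, or an explicit
# null is rejected during validation.
input PetFilterOneOf @oneOf {
  byName: String
  byId: ID
}
```

Whether that extra rule belongs in the spec, or stays a server-side convention, is exactly the question being debated.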
A
So, the benefit, and that's why... were you talking about the output side, where we already have the union, or were you talking about the input side? Because for the input side you have query validation, and that is something that takes this validation out of the runtime. Typically, with the tagged union pattern that many people use in their GraphQL schemas at the moment, this validation happens at runtime.
A
F
Should it be standardized or not, and required from everybody? Because, essentially, I don't think it's that essential that it be required. Again, if on the input I'm using a type which behaves as oneOf, we can omit the fields on the input, and that's all right, right? It's still compliant if it's nullable. So, essentially, in normal behavior, in normal functioning of the API, it's always only one field, with a specific type and so on, and it works nicely.
F
J
H
...required all of them. It should be standardized, because this is one of the most demanded changes to GraphQL that we've been hearing about. It'd be one thing if it were just amongst this crew going "oh, that would be a nice thing to have"; the whole reason we've been working on this and trying to get it right is feedback from the greater GraphQL community that this is a critical missing piece, and we see it in schemas: people trying to emulate this and having a really hard time.
H
That's why I feel pretty strongly that solving this root problem is quite important, and whatever form it takes, oneOf or struct or whatever, we need to make sure that it's in the main specification. Otherwise there's enough subtlety to applying these rules that it's very easy for things to get out of sync, or for the behavior to be unclear, because there is no one place where you go to find the definition of how it works.
F
Then, in this case, I would vote for oneOf instead of input unions or anything like that. Okay, if we want to standardize something, then let's standardize oneOf rather than the other options; struct, I mean, is a much more advanced case. This is an absolutely clear, straightforward case, so I don't mind; then probably most implementations should have it. I was questioning whether it should be enforced by the spec, and if you say yes, then that's my vote, for example.
E
H
L
Yeah, it was opened by Ivan, and it's that there are cases where it is ambiguous to clients whether a defer was basically inlined, or ignored, or not. So here's one case with defer; it could also happen with stream. Let's say you have a stream with initial count zero: you get back an empty array.
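Concretely, the ambiguity: with `@stream(initialCount: 0)` the initial response might look like the sketch below, and `hasNext: true` alone cannot tell the client whether `films` will receive more items or is genuinely empty, since `hasNext` may be true because of other pending defers or streams (the `films` field is from this example, not a fixed format):

```json
{
  "data": { "films": [] },
  "hasNext": true
}
```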
L
If there are other defers or streams in there, you don't know: are you getting more items in this array, or is the array actually empty?
L
One idea was that perhaps there's some kind of metadata we could send to give the state of the server, so the client will know what's pending. Is it working on this defer? Is it not intending to send it? Is it already in the response?
L
I put up a quick example of one potential solution. I'm not claiming this is fully baked; it's just a starting point for discussion. So I have an example here with both a defer and a stream, and the idea would be that there's a new top-level field, pendingPayloads, which is kind of a snapshot of what the GraphQL execution is currently working on.
L
So, in this case, it's working on the zeroth index of the item that's in films; it's also working on this defer, so we have both of these here. Just as a reminder: label, when it's provided, plus path is a unique key that can identify any kind of defer or stream payload.
L
You need both to do that, yeah. So, in this first example, we're working on that first item that's in the film stream, and on the defer. Now, in the second payload, the second response, we're sending the payload for that first item in the array. So now we're still working on the defer, and we're also working on the second item in that array. Now, in the third response, we're sending back the defer, so that's no longer included in pendingPayloads; we're working on, and we are...
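As a rough sketch of the shape being walked through (this is the strawman from the discussion, not a settled format; the `pendingPayloads` field and the example paths are hypothetical), an intermediate response might look like:

```json
{
  "incremental": [
    { "items": ["A New Hope"], "path": ["films", 0] }
  ],
  "pendingPayloads": [
    { "path": ["films", 1] },
    { "path": ["film"], "label": "filmDetails" }
  ],
  "hasNext": true
}
```

Each entry identifies an in-flight defer or stream payload by its path (plus label, when provided); once a payload is delivered, its entry disappears from the list, so an absent entry means nothing more is coming for that path.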
L
Yeah, this is, this should be... this is a typo, that should be one, because we still haven't sent the second item at that index. But when it is there, then it would be removed.
A
Yeah, this sounds really... so I read over it already, but it sounds very complicated, and also it doesn't solve the problem fully. I mean, the easiest way is really to put metadata into the query itself; essentially, proper modularity would solve that. And the more I think about the label, I think it is really not needed, and we're trying to architect around that. I think, overall, this makes it much more complex and error-prone; that's my take on it.
I
My take is almost the opposite: what we really want is a UUID for each deferred... each subtree that is deferred or streaming that has not been fulfilled, and an ability to say that this object either was already fulfilled, so it's in the existing payload, and maybe we say that by just not including it in pending at all, or, like, yeah, you're going to have to wait for this.
I
A
What I meant, the thing, why I think it's actually making it more error-prone, is: we are experimenting a lot with the implementations of defer at the moment, and if you work with Relay it can very easily happen that you actually degrade the server performance by just putting another stream on a list, and then you suddenly go from 20 patches to over 3,000 patches by accident. And if we, as the server, fix that error by just ignoring some of the streams that make the performance bad, you don't get these pending payloads or these labels anymore. The way we found best to track that is actually with metadata in the query.
I
What that hack really is, is placing in the actual response itself a location that can say, "hey, yes, we did this, so don't even look at pendingPayloads, don't even look at the incremental key." And I think that, having some way of... right, the server should basically be able to take that potentially three-thousand-deferred-items response and collapse it down into, like, three deferred items. That should be acceptable.
I
Rather than that, I'd argue, basically, to have this fulfilled-defer field exist, but where the value it produces is not "true" but is instead the identifier for the pending payload, or possibly a tuple of the identifier and whether it's already fulfilled, or something like that.
H
Does it make sense for something like this to be opt-in? Or maybe another way to frame this: is this a generalized problem, where the absence of this information across all the defers is problematic, or is this a localized problem, where there can be a case where, for a particular defer, you really care about knowing whether it's here or not, but for most of the others you don't care?
I
Yeah, so, basically: if we always put deferred payloads in incremental, then this is not necessarily a problem. If we know ahead of time that you always get the deferred payload, or that all of the deferred payloads are in fact in the initial response, again, not really a problem. It's when you have a mix, where some of the deferred payloads are fulfilled.
A
It's either metadata, or the fragment aliasing would also work; that's actually what I like the most at the moment. With fragment aliasing you lose a bit on the merging optimizations, but I think that is actually worth it. And, just so... I don't know if everybody gets the problem, because there are a lot of connected things here. Maybe, can I take the screen for a second?
A
So, because I'm actually experimenting on this a bit. When we have a query structure like this, and you can have that in Relay or Apollo or whatever, it doesn't matter, but think about the fragment structures here: you essentially stream this and defer this thing. And if we execute that, we can see that we get around 16, maybe more, patches that resolve here. Okay, so this is where the performance stays okay. And by just putting another stream here...
A
This will now explode to 3000 patches, right, and that is where you're now dead, performance-wise, and you also put a lot of load on your back end. So what we do in the back end, and that's why I always argue so hard that the server actually can, is allowed to, optimize this, is: we can actually identify that this happens, on the server, because we can essentially estimate it, essentially do a complexity analysis on this.
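The scenario described above, a streamed list whose items defer a fragment that itself streams a nested list, can be sketched as follows. The query shape and field names (`friends`, `posts`) are invented for illustration, and the patch-count estimate is only the rough multiplication a complexity analysis would do, not anything the spec defines.

```javascript
// Hypothetical query of the shape discussed: stream an outer list, defer a
// fragment per item, and stream another list inside that fragment.
const query = `
  query {
    friends @stream(initialCount: 0) {
      name
      ... @defer {
        posts @stream(initialCount: 0) {
          title
        }
      }
    }
  }
`;

// Rough estimate: each streamed item and each deferred fragment becomes its
// own incremental patch, so nested stream/defer counts multiply.
function estimatePatches(outerItems, innerItemsPerOuter) {
  const outerPatches = outerItems;                      // one per streamed friend
  const deferPatches = outerItems;                      // one per deferred fragment
  const innerPatches = outerItems * innerItemsPerOuter; // one per streamed post
  return outerPatches + deferPatches + innerPatches;
}

console.log(estimatePatches(20, 149)); // → 3020: one extra nested stream, thousands of patches
```

This is the "one more stream turns 20 patches into 3000" effect: the inner stream count is multiplied by the outer list length.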
A
But if we get rid of this, and if this is labeled here, then we essentially lose our context of: is this coming later, or is it completely missing from the payloads? The client cannot identify if certain things are here, and that's where we essentially need metadata or aliased fragments; something like this, where we essentially can say...
A
This fragment, for instance, is now called Foo, so we can identify it, things like that. And why I think this is actually good: with the metadata, we still have the problem that the structure of the client-generated code is different from the GraphQL structure, right? And with this aliased-fragment approach, which Yakov, I think, introduced, you would have an alignment of the client-generated structure with the GraphQL query. Just to catch everybody up on that.
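A sketch of what the aliased-fragment idea buys the client: the deferred selection lands under a key the client knows, so presence or absence is directly checkable in the response itself. The aliasing syntax is only an RFC idea, not part of the GraphQL spec; the key name `foo` and the response shapes below are invented.

```javascript
// If the deferred fragment were aliased as "foo", the client could check the
// response object directly instead of needing out-of-band metadata.
function deferStatus(parent, aliasKey) {
  return aliasKey in parent ? "fulfilled" : "pending";
}

// Server chose to defer: "foo" is absent from the initial payload.
console.log(deferStatus({ id: 1 }, "foo"));                     // → "pending"
// Server inlined it (or the patch arrived): "foo" is present.
console.log(deferStatus({ id: 1, foo: { bio: "hi" } }, "foo")); // → "fulfilled"
```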
I
Collapsing those thousands of responses doesn't actually change the fact that you're going to have all those responses as separate subtrees, in some sense. So there is an argument, even with fragment aliases, that what goes over the wire for our default JSON response format should be collapsed and merged; but that's adding complexity on top of complexity.
H
Joseph owner will be thrilled; he's been pitching that one for, like, forever. That's sort of what I was getting at; that's a particular implementation of that. But my suggestion here is: if the default mode is one where you do not get this sort of tracking of what is deferred and not deferred, you just get the sort of global sense that, overall, in this query, there's something that got deferred and therefore another payload will come, is that a reasonable default state?
H
Like, if it's the case that every single person who's using Apollo Client, and every single person who's using the Relay client, is going to be frustrated by that default behavior, then we're in bad shape. But if the vast majority of them will be totally okay with it, with the exception of specific instances where they need to have more control over it, and if the answer there is: well, you know, don't use defer on an anonymous fragment, or use fragment aliasing, or use, like, some other tool that we can provide...
H
...that can produce this specific behavior that we want; that might be an interesting way to essentially put the control back in the client's hands, to decide whether or not it needs this tracking information.
A
It doesn't matter; you can't produce it with POSTs. I just made it that way because it was the quickest way to get to that error. There are other examples; Benji also posted a couple of them, where you can produce similar things with defer, where you get into the situation that you essentially make it inefficient for the back end, and then, if the back end optimizes this, you have the problem that the client doesn't know if something actually arrived, because you cannot infer that from the data that you get.
I
The key thing is: when you're deciding whether to render a component that is based off of some fragment, the key thing you need to know is, at the point in the tree where we may or may not have deferred, or be streaming, or whatever, where we might have incremental responses coming in at that point, am I in one of three states? Am I in the "we've made the request, we're waiting for the deferred data" state...
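The speaker names only the first of the three client states before being cut off; the other two below are a plausible completion based on the surrounding discussion (the data arrived, or the server silently inlined/dropped the defer), not something the working group agreed on.

```javascript
// Hypothetical per-fragment record a client might keep. `data` holds the
// fragment's fields once they exist; `announced` means the server promised
// a later patch for this fragment.
function fragmentRenderState(entry) {
  if (entry.data !== undefined) return "arrived"; // patch landed, or was inlined
  if (entry.announced) return "waiting";          // request made, data still coming
  return "unknown";                               // server may have dropped the defer
}

console.log(fragmentRenderState({ announced: true }));     // → "waiting"
console.log(fragmentRenderState({ data: { bio: "hi" } })); // → "arrived"
console.log(fragmentRenderState({}));                      // → "unknown"
```

The "unknown" branch is exactly the gap being debated: without metadata or aliasing, a client can't distinguish "never coming" from "not yet announced".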
F
I agree, and I actually agree with Benji's comment that it's better to explicitly say it's over for a certain stream, instead of repeating in every response that we're still pending a whole bunch of stuff. So, basically, by default it's continuing: if there is no other meta-information, there is the arrival of a certain fragment; by default everything is continuing, and then eventually you get a signal: this is done. And your component has this signal: okay, I can render it, if it needs the entire set, right?
A
That's essentially, I mean, that's essentially what the metadata does; it says it's done. But I'm not advocating just for the metadata. I think... yeah, I'm still thinking.
A
Right, but there is... so it's sent as one whole batch, but you can still end up with, if you do incremental, I have a couple of experiments, you can really get the server into trouble with that, if you do not let the server optimize in some cases, because these incremental batches can become huge, so you're building up huge responses in memory, and...
A
There are a lot of... yeah, it's a problem domain that we have to fix. The second thing is that, at the moment, we discourage people from optimizing in the server, and I think we should have an idea and give it to them, because if you now go and do a naive implementation of the spec, you actually introduce a security risk into your GraphQL server. And it shouldn't be the case that, if I just go like this against your GraphQL server, you essentially get into these thousands and thousands of patches. We have to describe a way to avoid getting into these cases; it could be complexity analysis, or something.
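One way a server could act on such a complexity analysis is a budget check before execution: reject (or silently inline) the incremental work once the estimated patch count is too high. This is a minimal sketch of the idea only; the budget value and return shape are assumptions, not spec behavior.

```javascript
// Guard a request using a pre-computed patch estimate. A real server would
// derive the estimate from the query shape plus list-size hints.
function guardIncremental(estimatedPatches, budget = 100) {
  if (estimatedPatches > budget) {
    // Alternative to rejecting: drop the offending @defer/@stream and inline
    // the data, which is exactly what makes client-side tracking hard.
    return { ok: false, reason: "incremental patch budget exceeded" };
  }
  return { ok: true };
}

console.log(guardIncremental(16).ok);   // → true: the benign query passes
console.log(guardIncremental(3000).ok); // → false: the exploding query is stopped
```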
F
But then, maybe, I already kind of expressed this idea: that defer is actually permission. It's not forcing the server; the client explicitly says: there are things I need as soon as possible, and there are things that I know, from certain history, will take time, and I give you permission to deliver them when they're available. Essentially it doesn't force you into a certain way of delivering; the server has the freedom to batch it in ten, one thousand, ten thousand, whatever a reasonable server would do.
A
It's also about, like, even creating the deferred tasks in the back end, so you're branching the execution. So if you cause the execution to branch 3000 times...
A
Depends; I just threw a number in here. It's about the spec at the moment: it just describes how we branch it, but it's actually more efficient, in some cases, to just drop the stream or the defer directive and then just fold it in, instead of again creating this context, branching, and then starting processing something. I mean, that's closely related to the spec text that we have at the moment, right?
L
Yeah, we say in the spec that you should do it, and then, if you have an advanced use case, then you don't have to. That's what leads to this whole discussion where the client doesn't know if the server actually did the defer or not. I definitely think that language should be updated, and we should have clearer guidelines on when a server should or should not be doing it, just like general recommendations. Benji brought this up in the last meeting.
L
Yeah, yeah; it does mean that doing this, and having these recommendations, makes it important that we solve this problem, because it means it's very likely that clients can run into this type of situation, where the server has potentially ignored some, but not all, of your defers or streams.
E
Did we explore Ivan's option, the one that I think was added afterwards? Rob, I don't know if we've talked through that.
B
Yes. So, the idea: as Benji pointed out, we kind of have, like, two similar problems. One is if stuff gets inlined, and the second problem is whether a stream is finished or not, because, like, if you have two streams in a query, you don't know if one is finished or if the whole query is finished. It's a separate problem, but this one kind of solves both, and doesn't introduce a new concept. So the idea is: instead of pending payloads, which is not a Boolean, it's like a list of things, we reuse `hasNext` on individual items inside the `incremental` list.
B
So, if stuff gets inlined, you don't get anything in `incremental`; things stay as is. If stuff is not inlined and cannot be sent in the first payload, you get, like, a label, a path, and `hasNext: true`, meaning: the data isn't here yet, but we will ship it to you; you don't repeat it in the next payloads. You just send the data when you finally get it, and send `hasNext: false`, meaning: that's it, I already shipped everything. It works for defer, it works for stream, and it also has, like, two benefits.
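The payload shapes being proposed might look like the following. This mirrors the description above (announce once with a per-item `hasNext` instead of a separate pending list), but it is a discussion sketch, not the published incremental-delivery spec text; all field values are invented.

```javascript
// First response: data plus a one-time announcement that "bio" will follow.
const firstPayload = {
  data: { user: { id: 1 } },
  incremental: [
    // No data yet: label + path + hasNext: true announces the pending subtree
    // once, without repeating the announcement in every later message.
    { label: "bio", path: ["user"], hasNext: true },
  ],
  hasNext: true,
};

// Later response: the data arrives; hasNext: false closes that subtree.
const laterPayload = {
  incremental: [
    { label: "bio", path: ["user"], data: { bio: "hello" }, hasNext: false },
  ],
  hasNext: false,
};
```

If the server instead inlines the deferred data, the announcement simply never appears, which is how this shape also covers the "stuff got inlined" case.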
B
The first benefit is it also solves the issue with stream, which is a totally different issue, but still worth solving: we know the stuff is finished. And it also removes, like, the denial-of-service thing, or the performance thing, when we have, like, a huge list of pending payloads and we need to repeat the pending payloads between messages, because we need to, like...
B
So, instead of, like, having this thing where you have a Boolean state by having something in an array and then removing it from the array, now it's like a Boolean per path, and it's more explicit. So if something gets rejected, you just send `hasNext: false`, meaning, quite simply: you should not wait for it.
L
You would still have the case that Benji put in the chat, right? Yeah.
A
But, I mean, it has `hasNext`, which already solves the issue that we don't... so essentially you're saying: okay, this subtree is complete, and that already solves a couple of problems.
F
Sure. As a mostly server-side programmer, I have a kind of problem with this one. So, what you say is: when you send an additional batch of data, you explicitly say, at this moment, `hasNext` or not. The problem on the server side is, if the server actually feeds it from some other source, a database or an external API, then at the moment when I get the payload to send, I don't know: will there be more or not?
B
Yeah, yeah; it's like Benji's idea from our previous group. Initially, when I opened the issue, I proposed to just, like, put stuff inside `incremental`, but the weak point was: let's not create the denial of service. If you apply three modifiers to an item, you will have a huge amount of stuff inside `incremental`.
A
And there we really have to be careful, because we tried it now: I created a couple of tests where it's easy to DoS a server that implements the spec.
A
And that is why Benji's query is actually good: because it shows how to do it with just two lists and defer. Mm-hmm.
L
Do we think that the solution for the denial-of-service problem is having better guidelines around how and when the server should ignore or not ignore defer and stream? Or do you think that we need, like...
A
That is one thing: we should at least not discourage people from dropping the defer, because at the moment we are discouraging them with the note. For the advanced use case we say you can do it, but essentially you should honor it; and that way most people will then try to honor it, and then they run exactly into the issue that you can DoS them, and then we need to figure out good ways.
C
Well, one of the things, as I mentioned in the chat just a moment ago, that I quite like about this is that it effectively applies on a per-item basis as well, right? So the path that you have there might be index three, and then index five in the second list, and then your deferred fragment.
C
So you could actually say: oh, you know, I've got half of this data, I've already got all of it, so I can just inline that, and the other half I don't. Whereas with a defer with a label, you might say: oh, this entire label is deferred, or it isn't. This allows you to actually be more granular, which I quite like. And I think, as you add these individual items... I mean, this is effectively like a list of promises, right?
C
This is saying: I'm putting these promises in, instead of giving you the data. And once your list of promises gets above a certain threshold, let's say, you know, 200, something like that, then maybe you just inline everything from there onwards, and that's fine. You can do that with this system, and you can do it on a discoverable basis, as you walk through resolving all of the data, which is interesting.
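The threshold idea just described could be sketched as a small policy object: count outstanding deferred entries like promises, and past a limit stop deferring and inline everything from there onwards. The limit of 200 is the speaker's example number; nothing here is normative.

```javascript
// Decide per @defer/@stream encounter whether to honor it, bounded by a
// limit on how many incremental entries are outstanding at once.
function makeDeferPolicy(limit = 200) {
  let outstanding = 0;
  return {
    shouldDefer() {
      if (outstanding >= limit) return false; // over budget: inline instead
      outstanding += 1;
      return true;
    },
    settle() { outstanding = Math.max(0, outstanding - 1); }, // a patch shipped
  };
}

const policy = makeDeferPolicy(2);
console.log(policy.shouldDefer()); // → true
console.log(policy.shouldDefer()); // → true
console.log(policy.shouldDefer()); // → false: the third defer gets inlined
```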
A
Yeah, one of the main issues we have to analyze is if you have something in a list that is deferred. That's already an exclamation mark, somehow. Especially: one list could be okay, but, like, in this case we have two list structures, and you could already have this multiplication.
H
Okay, we're coming up to the hour. This is a tough one, Rob. I feel like there's some agreement that there's a real problem that has to get solved here; I think that's confirmed. I'd be very curious to know if there's, like, a missing primitive here. I'm resonating with Michael's earliest thought, which was: if we had fragment aliases, would this just straight-up not be a problem at all? And I...
H
I don't want to block stream and defer on doing that, necessarily, but I am curious whether the ideal solution here is something that speaks to a missing piece, as opposed to working around what we have; that's worth investigating. So, I don't know; hopefully this has been helpful. It feels like we haven't really hit on a resolution forward, but hopefully it's enough input for you to continue jamming on this.
A
A lot. Maybe we should also, like, have a list of the problems that really are around this, and then we can organize a meeting around that, because then everybody can also gather their thoughts at home. Because, like, this discussion already yielded a couple of problems and also solutions, so maybe we should first all come together and produce a little, small document. I can write down what we have tested with this, and maybe Benji adds his thoughts to it.
L
Yeah, definitely, and I want to keep it... I think it makes sense to keep it separated between the two issues: giving the server good heuristics for when to ignore and not ignore defer, and what information the server needs to give the client so that it has the necessary information to handle those responses. They're separate but related issues that we need to figure out, both. Yeah, so I won't take up any more of your time, but yeah.
H
We'll do that, cool. Thank you for bringing attention to this one; this is a doozy. Okay, folks, thank you all for joining, and for all the healthy discussion across all of our agenda items. We'll see you all in the next one.