From YouTube: Incremental Delivery Working Group - 2023-07-10
A: Yeah, good. I don't know whether we're having a call today, or it was canceled, or we're waiting for Rob to join.
C: I'm not actually aware. I know last week's was canceled officially, but I'm not aware whether there have been any changes this week or not.
A: Because I'm a little bit out of date on Discord; it hasn't really worked for me. I need to figure out how to use it. Yeah.
C: No, as far as I know nothing has changed. For the Discord, one of the good ways of dealing with it is: if there are channels that you're just not interested in, mute them, and then they won't mess up your unreads.
A: Thank you. Can I unsubscribe somehow? Because I see a huge number of channels, with some discussions added automatically, and I can't remove them anyway.
C: You didn't used to be able to, but nowadays I think you can. However, as a TSC member I'd recommend that you don't, because you might need to pop into them to do any moderation or anything like that. So instead, and I know it's a bit of a manual process, you just right-click each one and do "mute until I unmute".
C: What I do is I actually just collapse all of the folders, and that way it only shows me the chats that I've got unread messages in.
C: Yeah, it definitely feels like it could be improved for communities like the GraphQL Discord, but that's the way that I'm dealing with it at the moment.
C: Do you also collapse the categories?
C: I find that works quite well; it just hides everything away unless there's an unread in there. And then you can use, I think it's Ctrl+K, yeah, Ctrl+K, if you want to skip to a particular channel. The rest of the time I don't really see much unless there are actually unread messages, so it's a lot less overwhelming that way.
B: Yeah, going from the last meeting: I want to spend more time thinking about how we're going to write this spec, and I feel like we can simplify it and maybe get to the point where...
B: ...where we do cover both early and delayed execution, maybe just by putting that algorithm for filtering payloads in an appendix. I feel like we can do it in a way that's not overly complicated and doesn't rely on mutation, but I need to spend time thinking about it, and I haven't had the time to do that yet.
A: One quick question. For me the working group discussion was similar to discussions we've already had, except for one new thing I heard there. It was interesting to hear the history about "deferrable" versus "deferred"; it's a bit of trivia, but it was interesting. And the second thing, which is more practical, is the idea of why deferred execution makes sense in single-threaded applications: because the initial part of a resolver is usually CPU-bound, and...
A: I had the same idea in my mind without having formulated it, and it's stated so clearly there. What impressed me is that it's not only my opinion. So what do you think about this argument? Because the previous discussion was structured as if early execution is always kind of better, and we only choose delayed execution because of its consequences.
B: I wonder how much... delayed execution still feels like a very broad way to go at it. I'm pretty convinced that in a single-threaded environment you don't want to just synchronously execute the deferred stuff. But if everything waited, say, until the next tick of the event loop, that would at least put it behind other synchronous work. But I wonder, maybe...
B: Maybe you want a slightly bigger delay than that, but without going as far as deferred, that is, delaying until everything else is completely done. If you only need to get out of the way of the other CPU-blocking work, and if it's the case that most resolvers are CPU-blocking at the beginning and then IO-blocking at the end, then it should be fine to just get past the CPU-blocking part, right, without waiting for all the IO work to finish. But I...
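The idea above can be sketched in Node.js. This is a minimal illustration, not graphql-js internals: `runQuery`, the log entries, and the use of `setImmediate` are all assumptions for the sake of the example; the point is only that work queued for a later tick runs after the current synchronous, CPU-bound pass.

```javascript
// Hypothetical sketch of "early execution delayed by one tick": the deferred
// fragment is queued behind the current synchronous pass with setImmediate,
// so CPU-bound resolver work finishes first, without waiting for all I/O.
function runQuery(log) {
  return new Promise((resolve) => {
    // Queue the deferred fragment for a later tick of the event loop.
    setImmediate(() => {
      log.push('deferred fragment: start');
      resolve(log);
    });

    // Synchronous, CPU-bound portion of the ordinary resolvers runs now.
    log.push('resolver A: CPU-bound part');
    log.push('resolver B: CPU-bound part');
  });
}

runQuery([]).then((log) => console.log(log));
// The CPU-bound entries always appear before 'deferred fragment: start'.
```

With plain early execution the deferred work could interleave with the synchronous pass; the one-tick delay guarantees the ordering without postponing the deferred work until the whole response is done.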
A: Yeah, I just want to point out a funny moment for me. When you initially wrote it up, you outlined three possible solutions: early execution, early execution on the next tick, and delayed execution. And you said the second one, early execution delayed by one tick, doesn't make sense. But now we're thinking that early execution delayed by one tick is actually universally better than plain early execution, right? So I...
C: That is effectively my opinion; options one and two, where we wait a tick, are essentially the same. That said, what Rob has said here is true, right: if the synchronous computational logic is a big factor, then just waiting that tick could be a nice little save. But it still doesn't solve the various other issues that I've raised. And also, the moment that any of those resolvers has a network fetch in it, or some asynchronous work, there's still going to be processing of the received data.
A: Yeah, and because of the nature of GraphQL there is realistically zero chance of fine-tuning per query. You can fine-tune particular resolvers, but even in very complex environments you don't know how much time a particular resolver will take, and you cannot optimize particular queries; you basically just get a random set of fields from the client. So we cannot do that optimization.
C: Yeah, you also have the issue that some of the proposed solutions at the moment, like awaiting a promise in the resolve info, which I suggested, have their own performance overheads. Even just awaiting a promise is expensive in JavaScript; it's ten times as expensive as just doing something synchronously.
A: Resumables: resuming a generator is a single-threaded thing, but you capture its stack. It would be the same in any language, so maybe something else is faster, but you cannot beat computer science. If you restore a stack and switch execution, the processor cache has to switch to execute another piece of code, and every time you do that switching you spend resources. So I would say it's not JavaScript-specific; it's specific to resumables in a single-threaded environment.
C: And my concern is that I would want to make sure that a query that doesn't use stream and defer is not slower, now that stream and defer exist, than it was before, or at least not significantly slower. We can accept, you know, the cost of an if branch; that's not so bad. But introducing more tick-jumping and stuff like that would not be ideal, which isn't what's being proposed, to be clear.
C: But if you did this, if you put this promise there, I guess we could just set the promise to null and then you'd be awaiting null, which I assume V8 would optimize away, but I'm not entirely sure, and I don't know exactly what the deal is there.
C: Yeah, well, I was talking about, actually, both: full request latency, because any time that you await, there's generally an increase in latency.
C: So that's the total length of the request time, right? But there is also computational overhead in managing that, which will reduce throughput as well. And when we're talking about stream and defer, we have a third thing that we care about, which is the latency to the arrival of that initial payload, and that is what stream and defer is meant to be optimizing for, right: to give you that data as early as possible.
A: Yeah, I think we have a disagreement on that; we discussed it last time, right? I'm in the same camp as you: if you specify defer, you want your initial part to be as fast as possible and you're okay to wait for the rest. But there is another understanding of defer out there: I have a query I want to run, I want to get the initial response quickly, but I still want the whole response and I don't want to wait longer overall. You understand, right?
C: Yeah, definitely. So there are two competing things here: either we optimize for that initial response payload, or we optimize for not increasing the total request latency, and they are kind of competing. We can either go fully one way, fully the other way, or we can have a hybrid approach that tries to achieve the best of both worlds, trading off a little of each.
A: Compare it with sending a result, like IDs, from a first query and pushing it into a second query, which then executes. Compared to that solution, delayed execution is a clear win: you get the initial response roughly...
A: ...in the same execution time as the first query alone. In the two-query solution, the second query is slower than the deferred part, because you need to use the IDs, query the database to get those entities, and start going through the root fields again. So, thinking of it as an evolutionary path from what you can do right now, delayed execution is a clear win without losing anything.
A: Potentially. Obviously, for some people, for some use cases, it would not be as clear a win.
B: Oh, so just to take a step back: my takeaway from the working group was that, and I think we agree on this, servers should have the choice of implementing both, right? Especially multi-threaded, multi-CPU servers; they are probably going to want to do early execution. So the question is to come back to what we want to do in the spec, right? And...
A: There would be pressure from the community, right, if people really wanted it and understood why it's important. I thought about what I suggested last time, and yeah, I agree with you: it's a bad idea right now, because people don't really understand it, and we don't fully understand it either. Maybe we'll actually understand it better once people use a first release and we get some feedback. So I think it's worth waiting for feedback from the community. And on the point about multi-threaded, multi-CPU, yeah, I...
A: I think so, because some interlocking can be an issue, but if you organize your system in a way that it's not an issue for you, then it works. If you organize your system that way, it's a clear win to do early execution in a multi-CPU environment. We could actually have that as another suggestion.
B: Yeah, and I'm hoping that the way we can implement it is just that we describe it as early execution but say: here's the delay, here's where you delay for some amount of time if you want delayed execution, or the other way around. And then there's this one other function that's just its own algorithm.
B: That has a note that says: skip this if you're doing delayed execution; if you're doing early execution, follow this to make sure that error bubbling works fine. It's going to be tricky to figure out how to do that without functions that mutate variables, that is, how to do it with pure functions, but I feel like it's possible and I want to spend some time thinking about that.
B: If we come up with something along those lines, could we come up with something that follows that general direction that you guys would agree on?
A: If you have the resources to do that. And especially regarding the current proposal: do we have algorithms written up for the current proposal from somebody, right?
B: It's early; there's kind of... I...
B: His does have quite a bit of changes, I think, because it describes in very high detail exactly how the payloads have to be handled, which I think might be more detail than we need in the spec. And because of that, there's some mutation happening between different functions. So I'm wondering if we can simplify that and get something that everyone agrees on, yeah.
C: But one of the more interesting things that came out of the discussion last Thursday was Lee's statement that he was surprised it was easier to specify the delayed execution than the early execution. And it might be that Lee is still thinking of defer and stream in terms of the existing algorithm, i.e. re-evaluating the entire selection sets, which would make sense, right, because then it would be simpler to just use the existing algorithm.
C: You could just run the selection set and you're done. But since that's effectively not what we're proposing anymore, I would be very interested to see an algorithm for early execution that doesn't have the mutation that we currently see in Yaacov's proposal.
C: I had a go at writing up the deferred execution solution as spec edits, so I've just posted a compare of that into the chat now. I should point out that this is not finished; it's not ready yet. Also, I've based this on a branch that I've called incremental-common, because it turns out that a bunch of my different goes at this all need similar core algorithm changes to the spec, which are very similar to the ones that Yaacov has already proposed in the collect...
C: Was it CollectSubfields, or something like that? He did it as a pull request. I commented on that pull request recently, just a few days ago, with my changes, which Yaacov has looked over and thinks would be good, so I'll look at raising that as a separate PR as well. But yeah, assuming that we make those changes and we effectively move over from ExecuteSelectionSet to ExecuteMergedFieldSet, or whatever it's called now, then the deferred execution is actually pretty straightforward.
B: Yeah, and I just want to call out, maybe you're aware of this, but I see you have path as an argument in CollectFields, and in graphql-js collectFields is memoized in a way that passing path to it would break the memoization. I think that's also where a fair amount of the additional complexity in Yaacov's changes comes in, because he was avoiding that.
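To make the concern concrete, here is a toy memoization in the same spirit (the cache shape and names are made up, not the real graphql-js `collectFields` signature): keyed only on the runtime type and selection set, one collection is reused for every element of a list, whereas a per-element path key would turn every call into a cache miss.

```javascript
// Toy memoized field collection. graphql-js memoizes on the actual selection
// set objects rather than string keys; this only models the effect.
const cache = new Map();
let computations = 0;

function collectFields(runtimeType, selectionSetId) {
  const key = `${runtimeType}:${selectionSetId}`;
  if (!cache.has(key)) {
    computations += 1; // stands in for the actual field-collection work
    cache.set(key, [`fields for ${key}`]);
  }
  return cache.get(key);
}

// Completing 1000 list elements reuses a single cached collection.
for (let i = 0; i < 1000; i += 1) {
  collectFields('Widget', 'widgetSelectionSet');
}
console.log(computations); // → 1
// Keying on a per-element path (e.g. `${key}:${i}`) would give 1000 instead.
```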
C: That is a good point, and it was something I was already aware of; it might actually be why I haven't yet proposed these. I haven't raised any pull requests for these yet; they're just branches that I'm working on. So it might be that that's one of the things I was planning to revisit. I don't have my notes open, so I'm not sure, but that would make sense.
B: It's not that path is immutable or not; it's that when you have an array and you are completing the fields for every element in the array, because there's no path in collectFields, it doesn't rerun collectFields for every array element.
B: Even if it's several layers deep inside the array: collectFields has the same arguments for an object five levels down, so it's memoized there too. It's not specifically the array element; it's every field under the array, too.
A: Okay, interesting. A thing I noticed, for a totally different, unrelated reason, is that we kind of have a static path except for arrays. So I'm playing with the idea of actually having two paths: one is a path inside the query, and obviously the query doesn't have a syntax for arrays; there is no difference between an object and an array of objects.
A: And what to put there, basically. So it's a big change for graphql-js, but having two separate paths, one for the position in the query, which collectFields could use: yeah, you're right, it doesn't depend on arrays. Even if arrays happen along the way, everything stays the same.
C: So I have the same need, and I call it the operation path.
C: For me it was based on the operation expressions RFC that I raised a while back, which is based on schema coordinates. It effectively takes the path that you browse through the operation to reach the field, but one of the important things there is that it captures the type as well. So for polymorphism, if it could be, say, a cat or a dog, then that part of the path is actually Cat.name or Dog.name within the operation path.
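A sketch of what such an operation path might look like (the format and the helper below are illustrative guesses based on the description, not the actual RFC syntax): each step pairs the concrete type with the field, so polymorphic branches get distinct paths and list indices never appear.

```javascript
// Hypothetical operation-path builder: every step is `ConcreteType.field`,
// so Cat.name and Dog.name yield different paths for the same selection.
function operationPath(steps) {
  return steps.map(({ type, field }) => `${type}.${field}`).join('>');
}

const catPath = operationPath([
  { type: 'Query', field: 'pet' },
  { type: 'Cat', field: 'name' },
]);
const dogPath = operationPath([
  { type: 'Query', field: 'pet' },
  { type: 'Dog', field: 'name' },
]);
console.log(catPath); // → 'Query.pet>Cat.name'
console.log(dogPath); // → 'Query.pet>Dog.name'
```

Because the path never contains array indices, it stays static per query shape, which is exactly what makes it usable as a cache key.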
A: Okay, so in your case, if the path goes through a polymorphic field, the collectFields result fully depends on which type you put in: there is a response type and an actual, concrete type, yeah, the concrete type in the path. So the whole caching depends on this path.
C: Yeah, and as for this issue with the caching of it, I'm wondering whether the collectFields that I've put into this current proposal could actually be split into two parts: a core bit that actually does the collection, and then a bit over the top of it that factors in what I'm calling the fragment delivery groups, which is the stream and defer accommodation.
A: Okay, yeah. That way you don't need to split collectFields; you don't need to mess with fragments after you do collectFields. But yeah, I agree. Maybe in that case fragment names would clash; in a proposal where everything is controlled by fragments, we get an enforced uniqueness, which is a good thing. So yeah.
A: I was just brainstorming on how to avoid splitting collectFields.
A: Anyway, it's just to inform the client where something is available, when it is delivered, right? In the current proposal you need to preserve two things; you need to preserve the grouping, yeah. Maybe it's the same thing eventually; you just name the group.
A: I will think about this, yeah. It's not related to early versus delayed execution.
C: So, to circle back to the beginning of the conversation: I still find myself of the opinion that just deferred execution is probably sufficient for most people, and that early execution actually...
C: ...optimizes some things at the cost of others, and it would only be suitable in certain circumstances where you fully understand exactly what your infrastructure is doing, because the queues and things that could be involved could come from anywhere. They could come from your kernel, from the programming language itself, whether it's single- or multi-threaded, from mutexes, from remote services, from databases, from the file system.
C: There are so many places where queues like that could cause that initial payload to be delayed. I think the predictability of having a deferred execution is just pretty straightforward: it simply says this stuff isn't going to be executed until after the other stuff has been, as simple as that, basically. So I would still push for that. But if we're pushing for that, then I think we should put some guidance in the spec to say: once you're, you know, two or three levels deep...
C: ...you probably just want to inline everything at that point, because having, you know, fifteen levels of defer, if you're doing delayed execution, could potentially lead to a very significant increase in the total query time, the total request time, sorry. And I have not seen a use case for defer nested more than really two levels deep.
C: Originally I didn't really think there was one for more than one level, but I think I came up with one for the widgets page. Honestly, though, I've not seen any where more than two levels of defer is actually a valuable thing that justifies the complexity or the performance cost.
A: A small use case: today I was explaining to my girlfriend, who is doing manual testing and learning how to do test automation, what a single-page application is and how they work. So we did some experimentation with Gmail, with throttled internet, and what's happening in Gmail is: at the first level it shows you your basic info, then it renders the email list, and at the third level it renders the attachments per email, the attached files, for every email in the email list. So in Gmail you can open the list of emails, and if somebody sent you a PDF you can click it. So Gmail does aggressive loading, like in three waves. But yeah, I think it's one of the extreme cases.
A: So I would say three, and maybe up to five, is normal. I also don't see big use cases beyond that, but out in the wild there is Gmail doing it.
C: Yeah, and I'm definitely not saying that this would be a hard preset limit that we would write into the spec. It would be more of a recommendation: make this a configuration parameter for your implementation. So Google's Gmail would, for example, just set it to three, because it thinks that's a reasonable trade-off there.
B: I think the spec should just say that any defer can be inlined, and then, you know, even if we don't know what the best practices are right away, different scenarios would come up. We could always change the best practices, but you're not going to have any clients depending on any specific behavior.
A: The spec already ignores quite a lot of complexity; things are possible but deliberately left out. In the same sense, nothing is said about the number of fields you should support, or about other ways you can overload a server. So maybe it's not worth mentioning it at all, and we see how critical it turns out to be.
C: Yeah, I agree in general. It actually needs mentioning somewhere, but maybe the spec is not the right place for it.
A: So if we speak just about stream and defer: if we put this into the spec, well, the spec doesn't have anything like that for anything else. There is no security-considerations appendix or anything, and if we suddenly add this for one feature, it creates the illusion that this particular feature is problematic.
A: I think there is a need to write some appendix explaining that these are not problems with GraphQL; they are challenges of implementing it for public APIs, where you don't have control over queries. But we should not attach it to one particular feature, because that's weird. Without context, if I just saw such an appendix, I would assume the feature is like a goto kind of mechanism: it's there, but it's problematic, and I should think hard before using it.
A: So that's why I wouldn't mention security as a problem for one particular feature. That said, I like the way you formulated it: it's not free. Defer makes your overall query longer either way; with early execution or delayed execution it will always be longer overall because of the additional computation. How much longer is a question, but you are always paying a price for specifying defer. In that formulation there is no security angle.
A: Yeah, that feels great, and in the best-case scenario we will have something to present at the working group. But honestly, compared to having your first child, it's all right; you know what you're doing, but it's probably still stressful, so compared to that this isn't that important. And you'll only cancel like this once, right, so everybody will understand, yeah.
B: All right, anything else that we wanted to talk about today?
B: Yeah, all right. I'll let you know; I'll cancel those next three meetings and I'll see you guys in August, then.