From YouTube: GraphQL.js Working Group - 2022-01-26
A
So, I think we have everybody in — everyone who is on the call is in our agenda. If you didn't add yourself, please add yourself to the agenda, and let's start. First thing: if you want to participate, you need to sign the specification membership agreement and the participation guidelines, and accept the code of conduct. Yes, yeah — I will merge them afterwards, only if you have, like, agenda items or something. Okay, yeah, yeah. Thanks for submitting that — they will be merged afterwards. So you need to send these documents, and it's the same documents for every working group. So if you're watching on YouTube and want to participate in any of the working groups under the GraphQL Foundation, please sign these documents.
C
Hi, I'm Laurin. I'm from The Guild, and I'm also interested in talking about defer and stream today.
G
My name isn't in the agenda yet, but I'm Alex. I'm from Yelp, and I'm working on client-controlled nullability.
A
Next item is, yeah, review previous meeting action items. I think we did most of them — basically, like, all the recent ones, except that I promised to explain something in, like, some sort of remark; I think it's even better if I write some document or something, and I still need to do that. Other than that, all other action items from last time, like canary releases and the documentation website — everything is in the agenda today. So let's start with the agenda items. So Rob, you go first, with defer and stream.
A
Well, sorry, yeah — before we switch, because it's related to all the agenda items: right before this meeting — sorry about doing it that way — I did some stuff related to the previous action items. So I merged, like, a PR for the documentation website, and I released a new version of graphql-js, 16.3.0, and I switched main to 17.0.0-alpha.0. So it's the stuff I promised last time, and I wanted to mention it before we start the agenda, because it's related to half of the agenda items, basically. So sorry, Rob, about interrupting, yeah. But let's switch to your agenda item about defer and stream.
B
Also, I think there may be use cases going forward where some servers may not ever want to do it, if they're not using a network layer that is compatible with that type of thing. So what I have implemented right now is just a flag on the GraphQLSchema constructor that both enables the execution and inserts the two directives into the passed directives. Yeah, and I'm going to post the discussion issue.
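A minimal sketch of the kind of constructor flag being described — the names (`enableDeferStream`, the directive shapes) are hypothetical stand-ins, not graphql-js's actual API:

```javascript
// Hypothetical sketch of a schema-level feature flag for defer/stream.
// Names (enableDeferStream, deferDirective, streamDirective) are
// illustrative only; graphql-js's real implementation may differ.

const deferDirective = { name: 'defer', locations: ['FRAGMENT_SPREAD', 'INLINE_FRAGMENT'] };
const streamDirective = { name: 'stream', locations: ['FIELD'] };

class Schema {
  constructor({ directives = [], enableDeferStream = false } = {}) {
    this.directives = [...directives];
    if (enableDeferStream) {
      // The single flag both enables incremental execution and injects
      // the two directives into the directives passed to the constructor.
      this.directives.push(deferDirective, streamDirective);
    }
    this.deferStreamEnabled = enableDeferStream;
  }
  getDirective(name) {
    return this.directives.find((d) => d.name === name) ?? null;
  }
}

const schema = new Schema({ enableDeferStream: true });
console.log(schema.getDirective('defer') !== null); // true
```

This is the "one flag does two things" shape that the later discussion pushes back on, since the flag silently mutates the separately supplied `directives` option.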
D
To enable defer and stream — I agree that we need the option to enable or disable it. I think we just don't fully agree on how it should be done and how it should be propagated to other libraries later. I mean, if we do a small change now that will make it easier for Apollo Server but will require changing 20 other libraries — or, I guess, the other way around, eventually.
A
Yeah, actually, it's a little bit unrelated, but I also thought about that. So I have an objection — I don't remember if I proposed it, but I think it's the consensus at The Guild to do it explicitly: you need to say, like, "please support defer" beforehand. My objection to that was —
C
I think that already, like, depends, because not every server must actually allow deferring and streaming results. I think it's not really clear in the defer and stream spec right now, and also in the GraphQL-over-HTTP spec, because, like, you could have the defer and stream directives in your schema, but that does not really imply that the result must actually be deferred or streamed. For example, if you have a client that does not understand defer and stream and sends a request, but only accepts application/json or something, and not, like, a multipart response, then it wouldn't be able to, like, interpret a streamed result, right?
C
So in that case the server would have to opt out of defer and stream anyway. So I think it's not, like, "always support defer and stream"; it's more like, on a request basis, we need a way to determine whether we want to use defer and stream results or, like, event streams.
D
To add to that — it feels like a server feature more than a schema feature, yeah, you see what I mean? Because if you support defer and stream, it's a matter of how you implemented the server, and it shouldn't be part of the schema itself, at least from my point of view. Maybe I'm missing something here.
B
Yeah, I mean — so there's two layers to it, I guess. In the spec, I have it in that defer and stream is not mandatory, so a server could not support it at all, and in those cases it should not be in its introspection. But then, as far as honoring the individual defer and stream directives, it's a "should" recommendation and not a "must", where I, like, for —
B
Like, the RFC defines what the definitions of "should", "must", and "optional" are, so it's three different levels. I don't think that it should really be designed around a server honoring it or not — it's just that we're using "should", and that means that a server can skip it if it has, like, a good reason, but it should understand the full implications of ignoring it. Just to clarify, it's a "should", so I don't think it's likely that ignoring it would be, like, the normal behavior.
C
Okay, that does make sense, yeah. I'm just thinking about use cases where it couldn't be like that. The only thing that I have in mind right now would be, like, if you're using persisted operations or queries — maybe there the client doesn't actually know the actual operation — but even then it doesn't make too much sense. Another thing would be, like, opting out of it if there's too much traffic, but that doesn't really make sense either.
B
That's kind of what we heard from Facebook: they had some way where they could override it on the server, and they didn't want their server to be out of spec compliance. So that's why they had asked for it to be a "should" and not a "must". Also, it just keeps things open — I don't know, if, like, we find out in the future that some specific pattern of defer is always less performant than ignoring it, then a server could make that an optimization.
B
But otherwise — you had mentioned, Laurin, a case where a client sends a query that contains defer or stream, but it doesn't actually support the response type. What would that use case be?
B
Yeah, that definitely sounds like an equivalent case. I'm not sure, yeah.
D
And the way that you opt in is by, let's say, adding the subscription type to the schema — this is the way you opt into subscriptions, and then you are in charge of implementing the transport and everything. So that's, like, the reasoning for making the schema in charge of stream and defer.
D
The schema defines the directives, I mean. I understand that, like, a feature flag might be, like, a nice API, but eventually we'll need to remove it, probably. So we can just say: we have the option internally to have stream and defer, but if you want to use it, just add the definitions of the directives to your schema — and this is the way you opt in, without feature flags.
C
That's also one of the solutions that I proposed in my sum-up of all the discussions we had — in the GitHub issue, the discussion, and another issue. The issue I see right now with having a feature flag that also adds directives to the directives list on the schema: we already have a directives option, and now we'd add another option to the constructor which modifies the directives option. To me it kind of seems weird and redundant that we have something that mutates another option, yeah.
H
Hi, Yaacov here, how's it going. I just want to suggest also that, you know, there may be custom implementations of the defer and stream directives, and it's important, you know, to make sure that we still provide support for those as well. So if we have, like, a flag on the schema, there should be some way to, you know, substitute your own directive for that.
H
So I'm not sure how — you know, currently that's a little complex, because you might want to not get the default directive, but still enable defer and stream.
H
I mean, I'm not disagreeing that it's experimental — I'm just, I mean, I'll point out the use case in particular.
H
You
know
for
right
now,
according
to
the
spec
there's
no
way
just
just
you
know,
for
one
example,
I
don't
think
there
we
have
implemented
yet
when
an
individual
stream
might
complete,
you
know
we
send
the
has
next.
No,
if
the
entire
stream
is
over,
but
let's
say
individual
stream
completes,
we
we
don't
yet
notify
the
clients
and
we
may
never.
You
know,
within
the
spec
choose
to
notify.
H
— clients of that. But right now, you know, one of the many things that I wish I had time to do is to set that up within graphql-executor, and so that would be a custom, you know, argument, probably, for whether you would want that additional information. And so you'd need to modify the, you know, currently experimental included directive for stream within graphql-js — or, you know, much more simply, probably, to roll your own directive.
H
You know, that's just an example. I mean, even though it's experimental, I don't see how that changes anything — you may still want to provide custom arguments. I mean, there's extensions — the ability to add extensions to the current payload format — so there's, you know, cool things that we could, you know, enable on the custom side.
B
I would think that if you're doing that, you would also need changes to the execution algorithm, or just the execution library. So I think — I can't see how we could make the default execution accept any kind of arbitrary defer or stream if it is intended to behave differently, right?
H
I'm
talking
about
is
within
graphql
executor,
which
is
obviously
is
a
custom.
You
know
it's
a
package
separate
than
graphql.js
that
does
you
know
a
few
custom
things
differently
during
it
during
execution.
So
so
yeah
you
would
you
need
to
give
your
own
execution.
H
I mean, I guess that flag would only be true for the native graphql.js and would just be ignored by my custom executor, and I suppose that might work. But it's just a little strange.
D
Yeah, but I guess, like, the defaults matter, right? So if by default the directives are not there, and the end user adds the directives, I guess we can infer that the user wants to use defer and stream. It's fine not to put them in by default, for sure, as long as it's experimental. I just feel like a very explicit feature flag is a bit too much, because we can infer, or assume, what the user wants based on the schema.
A
— a defer or stream directive in their SDL. So it's much like Yaacov's case: a person wants to opt into defer and stream via the server library. In that case the library can check whether each schema has the default directives, and either say, like, "we're not supporting defer and stream", or say "we will ignore it". So we're discussing the default. It's a question of: if a person doesn't write anything — and basically, like, most users don't care about defer and stream, they don't write anything, don't specify anything — in this case, I think, servers should decide.
A
Basically, whether the server supports it or not. After thinking a lot about this, I think the problem here is that graphql-js decides, when it should be the server that decides what it is.
A
Is defer or stream supported by default or not? If the user, like, explicitly specified it, the server needs to do error reporting or something. But the problem is that graphql-js is injecting the default directives at all, like, at every stage. So if you do, like, schema transformations, at every stage the four directives are injected. I think it worked for really simple use cases previously, but now we have, like, a whole ecosystem: one library creates the schema, another library exposes it over HTTP.
A
So right now it's like: TypeGraphQL — or graphql-js configured by TypeGraphQL — basically decides about stream and defer, and the server just gets a schema with the directives inside. So my proposal was a little bit more related to item number — I think we have it, yeah, item number 10 on the agenda. For number 10 it's very similar: it's a problem with default directives.
H
Ivan, I'm not sure I got all of that, but I think I got some of it. I think that graphql-js does have to decide, to some extent, about defer and stream, because, you know, as mentioned, once we have these namespaced directives that are in the spec —
H
I
don't
think
you
can
add
custom
functionality
to
defer
and
stream
and
still
be
spec
compliant,
but
you
can't
you
can
no
longer
use
defer
and
stream
for
any
other
custom.
But
you
know
you
know
for
any
other
custom,
meaning
meaning
that
the
the
vern
stream
director's
happen.
You
know
have
to
be
honored.
You
know
strongly
recommended,
except
you
know
the
you
know
the
one
situation
in
which
you
know
the
server
thinks
there's
better
performance.
Otherwise,
and
if
you're
not
you
know,
you
won't
be
spec
compliant.
H
If
you
have
custom
directives
called
defer
and
stream
that
do
something
else.
So
if
deferring
stream
are
in
the
schema,
you
know
that
you
know,
then
it
is
graphql.js
to
some
extent
that
has
to
decide
rather
than
the
server
and
then
there's.
H
Maybe
I
misunderstood
that
part
of
what
you
were
saying,
but
I
think
to
you
know
to
that.
To
that
point
I
think
to
some
extent
the
graphql
gs.
It's
not
really
graphql
gs.
So
to
speak.
It's
the
schema
generation
library
that
decides.
You
know
on
whether
it's
enabled
or
not.
A
I
will
share
screen
for
five
seconds,
so
my
proposal
is
right.
Now
the
whole
directives
are
injected
here
so
like
actually
like
not
build
schema.
But
if
you
do
another
example
like
here
so
build
schema,
inject
directives
extend
schema
return
like
another
schema,
also
with
default
directives.
So,
like
every
stage
of
schema
processing,
the
four
directories
are
added.
A
Which
my
proposal
is
and
for
also
for
other
reason
here,
is
like
a
list
of
two
other
reasons
like
here:
two
other
reasons
why
we
need
that
and
yaco
is
partly
related
to
issue
that
you
open
about
validation
errors.
I
responded
there,
but
basically,
my
proposal
is
that
execute
and
validate
another
function
that
should
work
on
complete
schema.
A
You
can
create
schemas
without
like
root
query
type.
It's
a
world,
it's
like
a
schema,
the
final
one,
the
schema
that
you
pass
to
execute
should
have
like
query
type
by
spike,
so
my
proposal
is
to
make
it
more
explicit,
create
basically
new
class
and
what
class
will
be
responsible
for
in
constructor,
for
validating
and
for
injecting
the
four
directives,
and
here
you
can
configure
it.
We
can
add,
like
another
option,
basically
default
directives,
and
it's
not
confusing
since
like
directives
coming
here
and
default
directives.
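A rough sketch of what this proposal could look like — the class name, option names, and validation rule shown here are illustrative guesses at the shape being described, not an actual graphql-js API:

```javascript
// Hypothetical sketch of the proposal: a separate "executable schema"
// class that validates and injects default directives exactly once,
// instead of every buildSchema/extendSchema stage re-adding them.
// Names (ExecutableSchema, defaultDirectives) are illustrative only.

const specifiedDirectives = [
  { name: 'include' }, { name: 'skip' },
  { name: 'deprecated' }, { name: 'specifiedBy' },
];

class ExecutableSchema {
  constructor({ queryType = null, directives = [], defaultDirectives = specifiedDirectives } = {}) {
    // Validation happens here, on the final, complete schema only;
    // intermediate schemas (e.g. without a query type) never reach it.
    if (queryType === null) {
      throw new Error('An executable schema must define a query root type.');
    }
    this.queryType = queryType;
    // Default directives are injected once, and are configurable,
    // rather than being re-added at every schema-processing stage.
    this.directives = [...defaultDirectives, ...directives];
  }
}

const schema = new ExecutableSchema({ queryType: 'Query' });
console.log(schema.directives.length); // 4
```

With a `defaultDirectives` option like this, opting out (or opting into defer/stream) becomes an explicit configuration of the final schema instead of a side effect of every construction step.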
D
I have to just — I'm not 100% sure, Ivan; maybe I can explain. I mean, I think we're all aligned on the fact that, as long as stream and defer are experimental, we shouldn't add the directives by default — or did I miss anything here? So this specific topic of the stream and defer directives doesn't — I mean, this could be, like, a very, let's say, nice change, but I'm not sure how it's related to the topic.
A
So basically, I agree with Laurin's idea of, like, let's not do flags at all — because at some point, when stream and defer become part of the spec, what's the difference between include and stream and defer? They'd both be spec-mandated directives.
A
So why — we should either include all the directives, or not include any standard directives by default. Like, what's the difference, once stream and defer are in the spec?
D
That's a more broad discussion, I feel. I mean, I agree with you — this is a discussion we should all have at some point: how schema building should be constructed, and whether it's, let's say, something that could be step by step, and it's fine to do validations after composing the entire schema — and I agree. But I'm just not 100% sure how this helps, I mean, with the concrete issue of the stream and defer feature flags, and with moving on with that topic.
D
I
feel
like
we
need,
like
a
let's
say,
an
intermediate
solution,
something
now
that
will
address
this
issue
and
will
let
us
to
move
forward,
because
otherwise
we
can't
wait
for
like
changes
from
like
a
year
from
now.
I'm
not
saying
it
will
take
a
year,
but
you
know
broader
changes,
especially
around
schema
building
and
changing
classes
takes
time
and
it
feel
like.
We
need
something
when
we
need
like
a
decision.
A
Okay, in that case we can agree on the simplest solution — basically the solution that — and sorry if I mispronounce your name — Laurin proposed. Basically, you're proposing not adding them, like, at all — just passing the directives. Since we're in the alpha stage, we can do that, because it's, like, zero code, basically zero changes, and during the alpha stage we can discuss a way to enable it. My issue with that is that it's not the most ergonomic option currently, but it's fair that ergonomics should not block adoption.
B
So we're saying that the defer and stream directives get exported from graphql-js, they're not included in specifiedDirectives, and the execution will check if those directives are in the schema to decide whether —
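A minimal sketch of that check — the schema's own directives, rather than a flag, drive the dispatch. This simulates the shape of the decision with plain objects and hypothetical names, not graphql-js's actual implementation:

```javascript
// Hypothetical sketch: execution enables incremental delivery only if
// the schema itself contains the defer/stream directive definitions.
// All names here are illustrative, not graphql-js's actual API.

function hasDirective(schema, name) {
  return schema.directives.some((d) => d.name === name);
}

function runToCompletion(document) {
  return `executed:${document}`;
}

function execute(schema, document) {
  const incrementalEnabled =
    hasDirective(schema, 'defer') || hasDirective(schema, 'stream');
  if (!incrementalEnabled) {
    // Without the directives in the schema, @defer/@stream in the
    // document would already fail validation (unknown directive),
    // so plain, single-result execution is safe here.
    return { data: runToCompletion(document) };
  }
  // With the directives present, execution may produce an initial
  // result plus a stream of incremental payloads.
  return { initialResult: runToCompletion(document), hasNext: true };
}

const bareSchema = { directives: [] };
const incrementalSchema = { directives: [{ name: 'defer' }, { name: 'stream' }] };
console.log('hasNext' in execute(bareSchema, 'q')); // false
console.log(execute(incrementalSchema, 'q').hasNext); // true
```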
A
My thing is, I don't like the ergonomics of, like, injecting the directives manually, but it's alpha. So if we find a better solution before the release, we'll implement it; if not, not. Like, my proposal — I sent it in the chat; it's, like, that issue — is basically changing schema construction a bit, in a non-breaking way, so —
D
The execute function will adjust based on the existence of these directives. And to address the issues that we just talked about with, like, Apollo Server or other libraries that might not support stream and defer yet — it's part of their responsibility to make sure that the schema they're going to execute has, like, all the capabilities that the server has.
D
I think, by the way, that today, if people just provide, like, a schema with a subscription to Apollo Server — let's say Apollo Server 3, which doesn't have, like, a transport by default — nothing will happen, right? There is no, like, warning or exception. I'm fine with having a warning or an exception, because stream and defer is still in alpha and it's still experimental.
D
So it should be more, let's say, descriptive — that you have something in your schema that is not supported. But yeah, in the future, I guess it's the same as subscriptions: if you don't have a transport, that's the user's responsibility.
A
Yes, one thing I want to clarify — I think I need to check, but I think, with the current behavior of execute, you can create a schema without the default directives right now: through the constructor you can specify directives as an empty array and do whatever, and it will skip including them.
D
So the execution actually supports it, but if you don't have them specified on the schema, it will just fail in validation before getting to execution. That makes sense, yeah, sure.
B
Yeah — what are everyone's thoughts on this? I had actually built something similar to this early on. There was a lot of code duplication, because you need two versions of execute, two versions of the graphql function — but yeah, I just wanna hear everyone's thoughts.
H
I'm not sure how this differs from an execute flag — an execute-level flag, meaning it —
H
Safety,
but
you
know
when
you
can
just
have
a
wrapper
that
asserts
like,
if
you
know
that,
if
it
gives
you
the
wrong
response
with
the
flag,
then
there's
been
some
sort
of
error.
Just
like
we
have
with
execute
sync.
I
think.
H
We
may
eventually
need,
as
lauren
pointed
out
a
flag
on
on
execute
as
well.
Let's
say
the
client
specifies
that
they
can't
get
like
a
multi-part
response.
I
think
the
case
that
he
mentioned,
but
you
know
I
I
think
you
know
right
now.
It's
pretty
easy
to
set
up
execute
flag.
We
have
that
you
know,
and
this
could
be
easy
as
well.
Technically
speaking,.
C
I think one of the benefits — somebody, I don't know his name; "marisa" is just his GitHub name — he's super into this Cloudflare Workers stuff, and he wants to achieve, like, super small bundles wherever possible, and by not bundling, like, the functionality you don't need, you can achieve a smaller bundle size, and things could be a bit smaller and therefore faster. But yeah.
B
I don't think there would be any code-size differences, because regular execute would basically have all the logic for everything, but just throw an error if it gets back an async iterable — the same way that the sync function does, where if it gets back a promise, it throws an error. So by not including "execute incremental" you're not really saving anything, because execute would just be a wrapper that calls execute incremental and throws an error if it gets back an async iterable. Yeah — there wouldn't be any code savings.
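A sketch of the wrapper pattern being described, modeled on how graphql-js's executeSync rejects promises — the function names below (other than that general shape) are illustrative, not the library's actual API:

```javascript
// Sketch of the wrapper pattern: the non-incremental entry point
// shares all the logic and merely guards the return type, which is
// why splitting it out saves no bundle size. Names are illustrative.

function isAsyncIterable(value) {
  return value != null && typeof value[Symbol.asyncIterator] === 'function';
}

// Stand-in for the full implementation, which may return either a
// single result or an async iterable of incremental payloads.
function executeIncremental({ incremental }) {
  if (incremental) {
    return (async function* () { yield { data: {}, hasNext: false }; })();
  }
  return { data: {} };
}

// Analogous to executeSync throwing when it gets back a promise:
// plain execute throws when it gets back an async iterable.
function execute(args) {
  const result = executeIncremental(args);
  if (isAsyncIterable(result)) {
    throw new Error(
      'Operation used @defer/@stream; use the incremental entry point.');
  }
  return result;
}

console.log('data' in execute({ incremental: false })); // true
```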
C
Now that you mention it — there's also, like, subscribe. To be honest, when I first started out working with graphql-js, I was even, like, a bit confused: why do we have execute and subscribe and not just execute? Why do I have to determine the operation type first, and then either call subscribe or execute? And now, with this, we would also have an additional case where you need to, like, first detect: is it a subscription? Is it a mutation?
C
Is it a query, or, like, a mutation or query — and then you also have subscriptions, which could be, like, using defer and stream too, right? Or — am I getting it wrong — subscriptions can also be deferred and streamed, right?
H
I mean, I think, about the structure in the long term: right now in graphql-executor we've completely integrated execute and subscribe. I think in the long term it might make sense to have three functions. One would be regular execute, that has everything; one would be executeSync, that, you know, can only have values and not promises or async iterables; and then maybe something like execute-but-not-incremental, which throws an error —
H
You
know
if
it
does
get
an
incremental
but
allows
a
promise
or
sync
I
mean
I
think
those
might
be
the
three
that
might
be
most
useful
in
the
long
term.
Right
now,
I
think
you
know
like
like
ivan
was
saying
like
we
can.
We
just
want
to
get
something
going
on
the
feature
branch
and
I
don't
know
that
we
need
to
finalize
it
right
now,
but
that
would
be
the
long-term
proposition
that
I
think,
would
make
the
most
sense.
A
So if somebody at some point has a valid use case for why, like, the iterable is problematic for them, we can always add it. I think we can add an execute-async-style function — because we have executeSync, we can add a variant that returns a promise but doesn't error, because you can convert, I think, an iterable to a single promise; it's possible. But only if somebody has use cases and is willing to actually contribute it — we can always add that afterwards; that's my proposal.
H
I mean, in general, I guess I'm in favor of pulling off the band-aid on the breaking changes in the types, you know, to get to that point more explicitly.
A
Yeah, yeah, also great. I think there is, like, broad consensus that stream and defer are coming — we're all discussing the details, but in every shape and form, stream and defer require, I think, an iterable to be the return value. So I also agree that we need to pull the trigger.
H
But I'm not sure — Laurin, did you have strong feelings about whether we also need an execute-level flag, in terms of that use case where the client says that it can't support —
A
Yeah — and, by the way, one can do their own validation: if a client — Apollo Client, or, like, Relay — doesn't support, like, a streaming response, they can check themselves, because, like, most of the clients parse the query anyway; even, like, simple clients — I think even Relay does persisted stuff. So they can look and see whether it has defer in it, because they parse it anyway.
B
Okay, so I think we're all on the same page now. I'll make that change where we remove the flag and the directives aren't added by default, and then, I think, I'll rebase, and then maybe we could publish another version.
A
For folks who post tutorials and stuff: it would be a good idea to add, like, release instructions for 17.0.0-alpha, if you use a more recent graphql-js — so, like, if you're reading this in the future, please check the current instructions.
A
Yeah, yeah — we can have a tutorial on stream and defer, by the way; it's a good idea, because it's a new thing. And sorry, Alex, you have your hand raised.
A
Like, yeah, it's a little bit different, because, like, stream and defer don't introduce any new syntax, okay? And, kind of, the consensus is a little bit bigger, I would say — it's a pretty big consensus on client-controlled nullability also, but yeah, comparatively. So the thinking here is mostly about, like, how to do it, yeah. So in your case, since we agreed — and technically, I think, it's even at another stage; stream and defer is, like, stage two, and your proposal is stage one — it doesn't block you; like, having it behind a flag in the parser makes sense.
A
Okay, so keep the flags, yeah. And especially — I think there was still some discussion about — and I promised to write up my opinion, and I will do it, I just remembered — about the syntax for lists, yeah. So, yeah.
H
I mean, a radical idea, though, would be to — I mean, I guess, because we are talking about enabling it on the experimental branch, right, meaning the unstable main — the radical idea would be to just drop the feature flag for client-controlled nullability as well, and assume that if the client is using, you know, the new operators in the document, then they are basically opting in. Yeah, question.
A
The question here is: like, Prettier and other tools are using the graphql.js parser.
A
So if we drop the flag — and I'm okay with dropping it in alpha — but if we didn't — and, like, I'm not sure we will standardize in time for, like, 17; maybe it happens — in that case we'll drop it.
A
— offline, yeah; we can discuss this offline. One quick thing: it's like — Prettier doesn't care about your schema, and some tools, like Apollo Client, don't have any idea — graphql-tag doesn't know anything about the schema. So a bunch of tools work with query syntax and they don't have access to the schema. So for them, actually, this flag on the parse function is basically the only way — for those working only with queries, without a schema.
A
Yeah, I need to, like — I need to do it offline; I think I exposed the wrong one, yeah, yeah, but I will fix that offline. But basically — sorry, I forgot that I'm not sharing my screen — so this is the issue, and it's about this function, and people said, like, it's already used and is in fact a public API, so it needs to be exposed, and —
A
We should not gate it away behind a feature flag, because we don't have, like, an ideal solution — especially since it's, like, not connected to the GraphQL spec. This thing is not connected to the GraphQL spec; people use it, so there is no real reason not to expose it. If we want to create a better way to do something, that will be in the future, and we can deprecate this function then. So, yeah.
A
It's
explanation.
Why
and
right
now,
I'm
kind
of
rethinking
very
thinking
how
such
issue
resolved
so
yeah
yeah
and
since
we
switched
to
unstable
mind
is
basically
mean
like.
A
Yeah, so I think we can switch to number eight, the docs update — and the same thing happened here, so yeah. Just to give a little bit of context: someone created a documentation website; it's, like, a preview version. Right now we have docs here on graphql.org/graphql-js, yeah — but it's not updated, not maintained, and it's not generated.
A
If you want, you can help by writing to Ricky — I think he is the person who did it last time; he configured Netlify and other things, so we can talk with him. Basically, I don't have any idea how graphql.org is set up, and I think he's the right person to ask about it. It would be great if I could do it myself.
A
So, my requirement for PRs is: content-wise it's pretty open. The only thing is, yeah, CI should pass, and there shouldn't be any warnings or, like, other CI-related stuff. But the threshold for contributing is lower, and it's not tied to any particular content, basically.
A
So, let's switch to the next one. The next one is canary releases, yeah — and client-controlled nullability is part of that, so yeah, I did —
A
I switched to 17.0.0-alpha on main. About canary releases: I need to check, like, the changesets approach suggested by Dotan.
C
So I can actually explain this, because we're using, like, changesets on a large scale for, like, very big monorepos. The way it works is that if you're, like, developing a bug fix or, like, a new feature in your own isolated branch, you can, like, execute a command locally, which is `yarn changeset` or `npx changeset`.
C
Then it gives you, like, a dropdown where you select whether it's a patch, a minor, or a major change, and then it generates a markdown file, and there you can write the changelog entry for the feature or bug fix that you're writing. And then — obviously you will commit this also to the branch you're working on — once you merge that to the main branch...
C
...the changeset bot will automatically create a pull request against the main branch, which removes the changesets that have accumulated on the main branch and then updates all the changelogs. The changelogs can be a CHANGELOG.md file in each package, or it could also be, like, a GitHub release or something like that. So releasing a new version is as easy as just, like, merging the pull request, and then it will take, like, all these changes that individuals wrote into the markdown files and make one big release log out of it.
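As a rough illustration of what that aggregation step does — a simplified sketch of the idea, not the actual changesets implementation:

```javascript
// Simplified sketch: collect the individual changeset entries (in
// reality, markdown files in .changeset/), pick the highest bump,
// and fold the summaries into one release log. This mimics the idea
// only; the real changesets tool does much more.

const changesets = [
  { bump: 'minor', summary: 'Add @defer support behind a flag' },
  { bump: 'patch', summary: 'Fix introspection of custom directives' },
];

function nextVersion(current, sets) {
  const [major, minor, patch] = current.split('.').map(Number);
  if (sets.some((s) => s.bump === 'major')) return `${major + 1}.0.0`;
  if (sets.some((s) => s.bump === 'minor')) return `${major}.${minor + 1}.0`;
  return `${major}.${minor}.${patch + 1}`;
}

function releaseLog(current, sets) {
  const version = nextVersion(current, sets);
  const lines = sets.map((s) => `- ${s.summary} (${s.bump})`);
  return [`## ${version}`, ...lines].join('\n');
}

console.log(releaseLog('16.3.0', changesets));
// ## 16.4.0
// - Add @defer support behind a flag (minor)
// - Fix introspection of custom directives (patch)
```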
C
So it kind of solves the issue where you have to, like, go through all the commits and pick the relevant changes when you're doing a new release. Instead, you only have to, like, merge one pull request that is automatically created and rebased once new commits are merged or added to main. And another thing on top of that that is useful: you can also, like, set up a bot so that once there is a changeset in a certain branch, it will automatically publish that to npm.
C
This could have the benefit that once Rob pushes a new commit, a new alpha version is automatically built and also published to npm, and then people can immediately start testing it, and we don't need to wait for someone to do all that stuff manually, yeah. So that's basically how we're using changesets right now, and GraphiQL is also adopting them, and we all think that it might be a huge improvement for graphql-js as well.
A
Yeah, I need to check it, as we discussed, like, previously. My main requirement — and it's related to, like, automatic dependency updates — my main requirement is that this stuff isn't polluting the commit history; the commit history should be —
C
Basically, like, on the individual branches you don't really have, like, commits that clutter it, aside from, like, "add changeset" or something — but then, when we merge a pull request, it's squashed anyway, so that disappears. And the only time there will ever be, like, a commit created by changesets on master — on the main branches — is actually only when you do a release, where you bump the version. So that will be the only commit, and within that single commit there will be, like, the version bumps of the packages —
C
In our case it's only a single package, plus the updates of the changelogs if we store the changelog within the repository; as I said before, the changelog can also just be a GitHub release. So I don't think there will be much noise generated by changesets in terms of commits.
A
I'm okay with that. So, what we discussed previously: there are two models, one for the existing release branches, 15.x and 16.x, and one for the main branch. The model for the release branches is:
A
Only fixes; or, if somebody volunteers to backport a feature, it gets released too. So basically the idea was to release with every commit, and we discussed the same about canary releases: either once per day or after every commit, in some form, so people don't have to wait for releases. And I'm okay if something is squashed.
A
What I'm worried about is having one actual commit and then another commit to release it, so we double the number of commits. But I think we can discuss that asynchronously in that particular issue, the pros and cons and how to manage it better.
C
I mean, you can just take a look at the GraphiQL repository, for example, and look at the commit history; there you should see that the clutter is not really that much, and you can also see what such a pull request, or such a versioning commit, actually looks like.
A
I'm just worried about time; technically we have three hours, but it would be ideal to stay within an hour and a half or so, and I don't want to prolong this topic too much. Sorry, my headphones!
A
Okay, so I think we can discuss changesets asynchronously, and I will take a look at GraphiQL, especially since it's under the GraphQL Foundation and it's a similar project, not similar in some aspects but similar in others. I'll take a look at how it's done in GraphiQL.
A
Yeah, great, thanks. Saihaj, you opened this one; do you have any other questions or background you want to provide for the canary releases?

I
Yeah, canary releases.
One thing: we're just going around in circles trying to optimize for not having a commit or something. Today we are not even publishing anything, and it takes forever to get a release out. We can optimize later; that's not the point. The point is to get canary releases. When can we get the canary releases? That is the question.
A
If by the next working group there is no alternative, we just merge it.
I
The thing is, I can set it up right now and send a pull request in the next five minutes; it's not a difficult job. But now we are pushing it out again by a month, and then after that month it goes to March. And if you think about it, Alex needs it; we're just slowing things down, and contributors are waiting to get a release.
If I want a change on the defer/stream branch, I have to ping you a couple of times to get a release so I can test it on our projects, and it just doesn't seem ideal to me to wait for a month. If I open a pull request, it should just be reviewed and merged within a week or something. I mean, if you have an alternative, you can open a pull request and we can replace this.
A
So what I do personally, even for GraphiQL: I open the main branch history; I open that exact link.
A
And I do it for every repository I use: when I update a dependency, I open the repo and see what changed. So I manually view and click through and see what changed; GitHub allows you to diff between releases.
J
Hello, Camille here. What Saihaj is trying to explain is that we would do basically what you do here, and I sent a link which shows one commit per release, which is basically the same, right? We squash, we merge the pull request that actually releases a new version, and it becomes a single commit. So there you have it.
C
I think there's a misunderstanding about what a canary release actually is here, because what Saihaj is thinking about is that, for every commit pushed to a specific branch, an alpha version is automatically published to npm, with no changelog or GitHub release or anything required. It's just so we have a version of this commit for people to install, and that way we don't need a fancy changelog for it.
C
The version would just be something like graphql at some commit hash, under the alpha tag, so it's not tied to a specific version; it's just something arbitrary, and people can install that version of that commit. So they can test the branch, or the feature that's being developed in that specific branch.
C
So they don't have to go the extra mile of checking out graphql-js locally, building everything, and then yarn-linking or npm-linking it into their projects. If there's a bug or something, they can just see that someone opened a pull request for it, go into it, and the pull request says "hey, we need feedback on whether this works for everyone"; then people can just copy that version, install it locally, and verify that it solves the problem.
A
In that case it can work, so in that case I'm okay with what we're doing. I have no trouble with having some bot, or a button, or a pull request to do releases manually once a month, or once per some period, or after a big feature or a critical bug. I'm totally okay with what we have right now in the commit history, and I'm for doing it way faster; I'm just against creating a release commit for every commit.
C
I will also link a GraphiQL pull request in the issue that showcases it: you create a pull request and put in your changes; in this case it's Rikki, and he added a changeset, and then the bot comments on the pull request. The bot works like this:
C
If there's a changeset, it will publish a canary version (or alpha, whatever) to npm under a canary tag, and then it comments on that specific pull request, and people can install it. Once a new commit is pushed to the pull request, the bot will automatically update the message, so people will always install the latest if they try out the pull request.
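A per-pull-request canary publish like this can be sketched with the changesets CLI's snapshot mode; this is a hypothetical CI job, not the actual GraphiQL setup (the workflow name, tag name, and secret name are assumptions):

```yaml
# Hypothetical GitHub Actions job: publish a snapshot ("canary") version
# for each push to a pull request that contains a changeset.
name: canary
on: pull_request

jobs:
  canary:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      # Rewrite package versions to a 0.0.0-canary-<timestamp> snapshot
      # based on the pending changesets, without committing anything.
      - run: npx changeset version --snapshot canary
      # Publish under the "canary" dist-tag so "latest" is untouched.
      - run: npx changeset publish --tag canary --no-git-tag
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

A separate bot (or an extra step) would then comment the published version on the pull request, as described above.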
C
We are confusing two things here. One thing is alpha releases from main, and the other is canary releases. Canary releases work this way: once you have a pull request branch and start pushing to it, the bot will automatically build and publish each commit in that pull request and comment on it, so people can try it out. Alpha releases on main are something completely different and are not related to this at all.
E
I think what they're trying to say is that if you use the automation tool, it has nothing to do with the git history. You can get the exact history that you want and still use that automation tool. So what I think is: why won't we just start using it? And if you see that the git history is wrong, then we could revert it, we could change it, we could use the tool in another way. The tool has nothing to do with how you do git history.
E
So I don't understand your argument of pushing this tool away because of git history.
A
No, no. My idea was about what we discussed here as canary releases. Usually, for big projects, like Chrome and other things (maybe not quite as big as us), canary releases are cut from the development branch: main, or develop, or master, or some such. So I always assumed that by canary releases we mean canary from main. If the question is to release something from a PR, I'm totally okay with it, if this tool is not creating PRs to...
C
As I already posted, those are actually not canary releases. Those are...
I
Yes, they are two separate things. We have canary releases on the pull request, which means: if I'm working on a branch, let's say defer/stream, and I push something to the defer/stream branch, I expect a release of defer/stream for that particular commit, so I can test it. And let's say you merge defer/stream to main, which is unstable; then I expect an alpha release for defer/stream, which could also be published automatically, the same way we publish canary releases.
E
Yeah, and that has nothing to do with what you're talking about on GraphiQL. That is a third thing, automated dependency updates; that's a completely different thing. The commits that you're seeing and have issues with on GraphiQL are something completely different that has nothing to do with changesets.
A
Basically I thought that, because every release commit is touching something in changesets, removing some files in the directory called .changeset...
I
That would be how you publish to npm today. If you publish 16.2 or 16.3 to npm, that's the same thing changesets is doing today in GraphiQL. That's the manual work you do, as was just said, done automatically, so you don't have to do anything. That's what you get.
A
Okay, so to be clear: if something is releasing from a branch and not creating commits in the git history of main, I'm totally on board; let's try it.
A
Okay, and if you need commit rights, or admin rights on the repo, or something to enable it, just ping me. So, moving on, we all agree on this topic: we enable canary releases on branches through bot automation, and those get published to npm.
A
I'm okay. One thing is security around the npm keys: there should be no way for me to type in a plain key; I have two-factor authentication enabled, and I expect the tooling to handle that. Okay, switching to the next topic: inconsistent handling of directives in buildSchema and buildClientSchema. I see I responded to that issue.
A
They were designed for different use cases. buildClientSchema was meant to build a client schema: assuming you want to check, for example, whether something supports @defer or not, you get the introspection result, you call buildClientSchema, you get a schema, and you check if it has @stream, @defer, @include, @skip. buildSchema was designed for another case: you have SDL and you don't want to write or print the standard directives, so it's assumed the standard directives are injected.
A
But in a sense we have a discrepancy, and I proposed a solution for that. I think we need to get responses on the proposal; I'm on board with it.
A
Yeah, sorry about posting it only a couple of hours before; I'm just glad you're on board with it. Next, automatic dependency updates. I think Yaacov proposed a solution; let's do it. I spent some time on this before, and I actually felt Dependabot was better because it's owned by GitHub, but all the features Yaacov described are not supported in Dependabot.
A
I agree with Yaacov's proposal of starting with some restrictions. I'm okay with automatically updating all non-direct dependencies, I'm okay with automatically updating the patch version of all dependencies, and I'm for batching them up, especially since we mostly don't need urgent security updates here because it's all development tools. So I think Renovate provides all the options.
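As a sketch of what those restrictions could look like in Renovate (the rules are illustrative, not an agreed-on config; lockfile maintenance is what covers the indirect dependencies):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:base"],
  "packageRules": [
    {
      "description": "Auto-update only patch releases, batched on a schedule",
      "matchUpdateTypes": ["patch"],
      "groupName": "patch dependencies",
      "schedule": ["before 4am on monday"]
    },
    {
      "description": "Leave minor/major bumps for a human to approve",
      "matchUpdateTypes": ["minor", "major"],
      "dependencyDashboardApproval": true
    }
  ],
  "lockFileMaintenance": { "enabled": true }
}
```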
A
Don't do automatic updates for stuff that matters. Even with the new documentation site, we don't have too many dependencies, and they don't release too frequently. I agree about indirect dependencies and I agree about patch versions, but for minor and major versions I...
A
Okay, good. There was a misunderstanding about canary releases; it was good and productive to clear up, because we had different vocabularies, and by canary releases I meant something different. So, great, a bunch of things to work on. Hopefully we will resolve most of these agenda items before the next call and make progress. Yeah.