From YouTube: API Vision working group #12
B: It was good, yeah. I got out and did some hiking and some fly fishing, and got a little bit of a break from work. So yeah, it was good.
A: Thank you, yeah. That's my first time attending this meeting.
B: Yeah, welcome to the group, I'm glad that you're joining. We definitely need the support when it comes to the technical writing. I know we were talking about that one on one before my break, so glad to have you on.
A: Oh, thank you, yeah. I just checked the agenda pretty quickly, and I'm looking forward to discussing these points.
C: Yeah, I'm good. I mean, two weeks off, so yes, I'm catching up with things. But you see, if you miss one or two weeks, then you need almost a couple of days to catch up with all the things.
B: Yeah, I was off the two weeks prior to the last one, and I feel like it took all of last week to catch up, because there was a lot of coming back into the milestone planning and working on that. I was pretty involved.
B: Cool, well, let's jump into the agenda. I had the first two points, so I'll kick it off. We've had some discussion around our GraphQL PoC; I think that's been the most active issue, and it will impact a lot of the other work that we're working on. The last comment I captured was from Alex. I was trying to understand what additional scope we might need.
B: Maybe we should account for more within that issue, and his suggestion was that all the remaining work on it falls under performance testing. So that's where I've pulled in this additional issue here, and I can also share my screen to show what I'm looking at.
B: Okay, yeah, so this is the performance analysis issue, and I think on that one, you know, we're working with Mattias, but I think he's out for a few days, maybe longer. From the discussion there, it seems like GQL scenario tests are one of the items that we need here. Oh, there's another comment here. So my hope is that we can break this down and understand what the next steps are to move this analysis forward.
B: There's a comment in the discussion here about an example, a manual test that we did around the issues endpoint, and how performance for that query was subpar. I think we're in need of a more comprehensive test there, so I want to open it up for the sync discussion here.
B: I know we've had some back and forth in Slack and in the issue or issues here, but is there anything else we can identify or clarify around these issues today?
C: I mean, assuming that it's ready from our side, it's just waiting on them, maybe, but we can double check if we still have to do something else before we go for it. I guess it's the Memory team, or Application Performance is now the name of the team. So we can double check if there is anything else from our side.
B: Great, yeah. I might need to catch up on a couple of comments I hadn't seen that were added, but yeah.
B: It wasn't clear to me how this was scoped, and whether all we need to do is add additional scenario tests, or whether we're clear on what we're taking away from this, like how the test will be set up so that we can get a clearer picture. The outcome I'm imagining is that, for each API, or maybe for some of the key APIs across multiple stages, we can get a sense of whether GraphQL is performant or subpar in every case, and then we check some boxes and see.
B: Is this a universal problem that clearly points to needing to focus on GQL improvements first, as part of the revision, and defining issues around that? Or is it case by case, where we're just getting a better characterization of it? That's what I'm anticipating; I don't know.
B: If there's more we can add to scope this issue, or if this work as defined, once it's done, is that it? Or is there more that we're not accounting for? So all in all, I'm a little bit fuzzy on where this is, and on whether there's more that we need to be doing. But in any case, this is a great start.
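B's imagined outcome, checking each key API across stages and tallying performant versus subpar, could be sketched roughly as below. The endpoint names, latency numbers, and the 1.5x threshold are all invented for illustration; nothing here comes from the actual analysis.

```ruby
# Hypothetical p95 latencies (ms) per endpoint, REST vs. GraphQL.
TIMINGS = {
  "issues"         => { rest: 120, graphql: 480 },
  "merge_requests" => { rest: 150, graphql: 170 },
  "projects"       => { rest: 90,  graphql: 85  },
}

# Call GraphQL "subpar" for an endpoint when it is more than 1.5x slower than REST.
def subpar?(timing)
  timing[:graphql] > timing[:rest] * 1.5
end

subpar = TIMINGS.select { |_, t| subpar?(t) }.keys

verdict =
  if subpar.size == TIMINGS.size
    "universal problem: focus on GQL improvements first"
  elsif subpar.empty?
    "no GraphQL performance problem observed"
  else
    "case by case: #{subpar.join(', ')}"
  end

puts verdict
```

With the invented numbers above, only the issues endpoint trips the threshold, so the script prints the case-by-case verdict rather than the universal one.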
E: Yeah, I think a different perspective on the problem would be that we need a clear understanding of how GraphQL is performing today, observability-wise: have all the metrics in place, and try to understand why it can sometimes be slow, and what can cause it to be slow, like with REST.
E: With REST we have a lot of observability today, and we can pick some endpoints, dig into those endpoints, and understand whether we have N+1 problems; at least we have some tooling today that allows us to dig more into the performance issues. And what is the current state of GraphQL from this perspective? So it's good that this is coming up now, because it means that before we even think about a conversion between GraphQL and REST, we might need to first get the tooling in place and everything.
E: So we can really have a GraphQL-first approach, where everything, at least in terms of tooling and observability, should be at least at the same level as REST, if we want to make the switch, right?
E: And this will allow us to have better testing and better comparison, because otherwise, the tests we've done to date are very high-level manual tests, which still give a lot of insight into the problem; but we will need better tooling in place to be able to do this kind of comparison. And another aspect would maybe be to dig further.
E: From this analysis, try to understand: are there cases where GraphQL is actually faster than REST? I don't know, are mutations faster for some reason? Depending on the logic it should be the same, but the data returned might be the differentiator there. With REST, we might return a larger response object for certain endpoints; with GraphQL, given that the query drives what is returned, it could be less, and in certain scenarios it might be more performant.
E: This is something to be answered, and at least we'd know what aspects of GraphQL the hotspots are: the query side, or also the limitations, and things like that.
B: Yeah, good points there. When you talk about tooling, is it all in that observability category? Because I'm anticipating that on one side there's just the ability to run a performance test and do a comparison, and then the observability is something that allows us, as a result, to dig deeper into why something is more or less performant. Is that how you would put it, or would you put all this tooling under the observability category?
E: I'm not familiar with the GraphQL observability we have today, but ultimately, the question we want to answer is: is this endpoint performant, and what do we have there?
E: Like N+1 queries. It's true that in the GraphQL-to-REST conversion we will have more static response objects, so we will know at least what is going to be serialized. But we need to know at least whether it's the framework that is generating some N+1 queries, which we'd need to solve with batch loading or different methodologies, or whether we have something else.
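The N+1 shape and the batch-loading fix E mentions can be sketched in plain Ruby, with a hash standing in for the database and a counter standing in for query tracking; none of this is GitLab's actual resolver code.

```ruby
AUTHORS = { 1 => "alice", 2 => "bob", 3 => "carol" }  # fake "users" table

$queries = 0

# Per-record lookup: one "query" per call (the N in N+1).
def fetch_author(id)
  $queries += 1
  AUTHORS[id]
end

# Batched lookup: one "query" for the whole id list.
def fetch_authors(ids)
  $queries += 1
  AUTHORS.slice(*ids)
end

issues = [{ author_id: 1 }, { author_id: 2 }, { author_id: 3 }]

# Naive field resolution issues one query per issue record.
$queries = 0
issues.each { |issue| fetch_author(issue[:author_id]) }
naive_queries = $queries

# Batch loading collects the ids first and issues a single query.
$queries = 0
fetch_authors(issues.map { |issue| issue[:author_id] })
batched_queries = $queries

puts "naive: #{naive_queries} queries, batched: #{batched_queries} query"
```

This is roughly what batch-loading helpers in graphql-ruby do under the hood: defer each field's lookup, then resolve all collected keys in one query.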
B: Awesome. So, thinking through that and the issues that we have defined, is there anything we feel needs to be added to the performance analysis issue? I think we do have an issue around observability, as far as understanding the tools we have in place, but is there something we need to either identify among existing issues to pull in, or define?
B: Yeah, I think so; I think that makes sense. What we need should be the driver, and I think I put this down; I'll just put it down as an action, then.
B: Cool, okay, next item. I just wanted to point this out. I haven't had a chance to look on my end yet, but I know Arturo put this together before each of our PTOs, so I think this is just an open item for us to go in and review.
C: Yeah, so I think, for transparency and better organization, what we can do is map every exit criterion that we have to an epic, and then create those epics and issues belonging to them. That way we can go to any criterion and see what the progress is, and keep track of it.
C
We
can
better
coordinate
each
other
and
we
know
how
far
we
are
to
achieving
something,
because
at
the
end,
is
go
to
that
epic
and
check
all
the
issues
are
a
complete.
So
this
is
done
and
also,
I
think,
adding
a
dri,
I
think
is
going
to
help
just
you
know
to
have
someone
that
is
more
like
helping
you
know
to
to
the
progress
of
one
specific
epic.
I
think
it's
going
to
also
help
us
to
organize
better
all
the
work
that
we
are
doing
so
at
the
moment.
C
I
think
that
maybe
we
have
to
create
a
move
some
issues
around.
Maybe
there
are
issues
that
they
don't
belong
to
any
epic.
So
maybe
we
cannot
to
the
epic,
then.
Maybe
some
issues
have
to
be
promoted
into
an
epic
and
I
don't
know
I
is
I'm
going
to
be
working
this
week,
but
the
next
week,
and
also
some,
maybe
I'm
going
to
do
all
of
this
movement.
I
know
see
if
I'm
going
to
have
time
this
week.
B: And one thing I would note: I recall from earlier on in our working group that there is an epic out there, floating around under the blueprint, around GraphQL, and that was something that growers had owned, and he was open to us bringing all of these together. I think that's still the case, so maybe consider those issues that might be out there; maybe we can again reorganize all these epics so that there's a bit of structure. I think that's a really good idea.
B: Cool. If there are no other comments or questions there: Andy, you've got the next point.
D: Yeah, so I looked into how to automate the REST API documentation. I've been assigned to this issue for quite a while, and I finally had time to look into it. The problem is that the markdown documentation is manually maintained, and it's partially out of sync. So I was looking for ways to automate this, and came up with the plan in the linked issue. If you have any feedback, that would be nice; I think we can discuss it async, but maybe I can give a short introduction.
D: I looked into the grape-swagger gem, and this basically generates OpenAPI documentation from all the things that are already defined in our API endpoint classes. We can use that OpenAPI documentation to generate the markdown documentation. So I thought the first step would be to introduce grape-swagger, and I have a kind of automation that generates the API documentation for every merge request that changes an endpoint. After that, we can build something that can parse this and build the markdown documentation.
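The "parse this and build the markdown" step Andy describes might start out like the following sketch. The inline spec is a tiny hand-written stand-in for grape-swagger output (which is Swagger/OpenAPI v2 and far richer), and the paths and summaries are invented:

```ruby
require 'yaml'

# Miniature stand-in for a generated OpenAPI v2 document.
spec = YAML.safe_load(<<~SPEC)
  swagger: "2.0"
  paths:
    /projects:
      get:
        summary: List projects
    /projects/{id}:
      get:
        summary: Get a single project
SPEC

# One markdown section per path+verb, using just the operation summary.
markdown = spec["paths"].flat_map do |path, operations|
  operations.map do |verb, op|
    "## `#{verb.upcase} #{path}`\n\n#{op['summary']}\n"
  end
end.join("\n")

puts markdown
```

A real generator would also render parameters, responses, and examples, and would write into the existing markdown doc layout rather than printing.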
D: And ideally, we wouldn't do this all at once; we would do it step by step, endpoint by endpoint, so we don't end up with a very huge merge request. If you want to know more details, I laid this out in the comments, so feel free to add any feedback. I hope we can start working on this at some point.
A: So I have a question: our REST API documentation is written manually, but our GraphQL documentation is generated automatically. Is that correct?
D: Yes, that's correct, and I think that's also one point: we thought it would be useful to have automated REST API documentation because it works very well for GraphQL, and for us it's kind of a pain to keep this in sync.
A: And for GraphQL, I guess we use a rake task, right? What do we use to generate the documentation?
A: Okay, cool; it's just good to know.
B: In case I missed it, we didn't have an answer on this call, right, for what we use to generate the GQL docs? Was that a rake task?
B: Okay. And I added a note here as well; this is a link to a conversation I shared in Slack. The API security group is also looking into generating API docs, and they seem to be in a position to start work on this pretty soon. So I was hoping to invite folks from that team to our group as well.
B: They seem really interested in this, so I'm actually going to be speaking to Derek, the product manager on that team, right after this. So, Andy, it might be good to just be aware of them, and maybe, if you're not familiar with those folks, we can do some introductions and see how we can collaborate. I would expect they might be interested in joining future calls.
D: Yeah, that would be nice, and maybe you can also point them to the comment I left on this issue, because there are some merge request links there that I used for experimenting, and I have a merge request that already adds the grape-swagger gem and generates the OpenAPI documentation. So if they want to start working on this, maybe that would be useful as a starting point to look into.
B: Great, yeah, I'll point them to this, and they may want to reach out, or connect with you on MRs and things, so we'll see where this goes. But I think it's great that we have another group that's really interested in what we're working on; we need more firepower, so that's exciting.
A: Andy, I think you touched on this a bit: can't we also use a rake task to generate the documentation for the REST API?
D: Yes, yes, that's what I'm planning to do. The rake task is basically the thing that starts the generation, but under the hood it would use the grape-swagger gem to build the OpenAPI document.
D: In terms of starting the generation, maybe we can even use the same rake task, so we don't have two rake tasks that we have to run; one rake task that covers both would be nice.
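The single entry point Andy suggests could be shaped like this Rake sketch. The task names and output paths are hypothetical, and the task bodies only record what they would generate; a real REST task would invoke grape-swagger and the GraphQL one the existing docs generator:

```ruby
require 'rake'
include Rake::DSL

GENERATED = []  # stands in for files written to disk

namespace :docs do
  task :openapi do
    GENERATED << 'doc/api/openapi.yaml'         # would run grape-swagger here
  end

  task :graphql do
    GENERATED << 'doc/api/graphql/reference.md' # would run the GraphQL docs generator
  end

  desc 'Generate REST and GraphQL API docs from one entry point'
  task all: [:openapi, :graphql]
end

# Invoking the umbrella task runs both generators in order.
Rake::Task['docs:all'].invoke
puts GENERATED.inspect
```

Rake resolves the prerequisite names inside the namespace, so `docs:all` pulls in `docs:openapi` and `docs:graphql` without qualifying them.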
E: Do we then need to create another conversion layer, from the YAML format that I see in the merge request, to a static site?
D: So this is the merge request where I tried grape-swagger, and this is the YAML it creates. We actually have a built-in thing in GitLab that can show this; I think it somehow uses Swagger UI. It parses this YAML file and builds this view from it. So this is pretty cool: we already have a built-in viewer for this, and we just need to link it from the documentation.
B: Question, Andy: don't we already have this view? Or maybe you're pointing out something additional. If we go to our API documentation, there's a link to an OpenAPI spec; it's just not very robust today.
D: We already have one, so this is actually right next to it; this is the one we already have. The difference is that it's using OpenAPI version 3, and the grape-swagger gem can only generate version 2. And this one is, I think, built manually; someone went there and just built all these YAML files by hand. And I think there's also a way to...
B: So, just to clarify: the one we have today is manually generated, and is OpenAPI v3?
D: I think maybe we can keep the version 3 documents that we already have, and add version 2 next to them. I found that we have an open issue that's actually requesting version 2 documentation.
D
I
was
wondering
why
this
is,
but
it
seems
like
there
are
some
customers
or
users
they're
interested
in
especially
version
two
okay
yeah,
I
attacked
you.
I
actually
attacked
you
on
this
issue
just
before
so
maybe
I
can
try
to
find
it
again.
D
B: Right, yeah, let's look into that. I know I was looking into it a couple of weeks back, but I don't recall the specifics now. I thought v3 just added additional support, so I'd be kind of surprised if you couldn't do everything in v3 that you can in v2, but we can explore that. The only other thing I wanted to add is a note that the security team was exploring potential issues in how we render OpenAPI specs on a site.
B: Okay. I only have two minutes before I have to hop to a call, and I know we're a little bit over time anyway, so I'll just hit this last item real quick: the API gap analysis has been a bit stale. The next step we've been talking about is creating a survey, so I've started that; I just put down a couple of ideas this morning, as far as questions to ask, and we'll try to work on that.
B: This week we'll build it out, but if there are questions that you have in mind to extend this API gap analysis, let me know. I think we're ultimately deciding against going team by team, asking every team to go through a spreadsheet and pull down all their APIs; it's a bit of effort. I think what we started with was really helpful to get a picture of what it looks like in terms of gaps between REST and GraphQL, and the effort involved.
B: So it was a good start, and hopefully we can build on it and go to a survey where we can get a little broader reach and feedback. If there are any other approaches I'm not thinking of, to get a more holistic picture across the org, let me know; but let's just chat in that issue.
B: Cool, all right; otherwise, check that out. I'm going to hop; I don't know if that'll kill the meeting, if you guys have any other points you want to continue on.
B: Okay; otherwise, thanks everyone for joining.