From YouTube: GitLab Terraform provider community hour - 2022-04-27
A: Okay, and Mark, you missed the introduction round. But if you don't mind, would you introduce yourself quickly?
B: I'm sorry, I missed that. My name is Mark Nesson; I'm a software architect at Kingon Systems. I've been using Terraform for a number of years and started using GitLab last year, and I've been really interested in making good progress with the Terraform provider and using that to help manage our lab infrastructure.
A: Yeah, cool. I actually prepared a couple of topics to discuss. One would be the 100% coverage, or moving to GraphQL; I'm adding it to the docs.
A: What are your ideas around this? I've seen that, Patrick, you did a bit of research, or was it you? Yeah.
C: Yeah, it was me, yep. I put the issue link up there in the notes as well. I think the movement here is: we had heard from one of the PMs at GitLab (apologies, the name is escaping me right now) that GraphQL is really the direction GitLab is moving for those APIs. As a result, we're already seeing features around things like vulnerability management and some of the compliance policies where, in order to configure those appropriately, we need to be using the GraphQL APIs.
C
So
the
question
is
really
what's
the
strategy
here.
I
think
for
migrating
right
now
we
use
go,
get
lab
and
primarily
well.
We
only
use
the
rest
apis
to
my
knowledge.
Adam
keep
me
honest,
but
but
I
think
we
need
to
start
implementing
some
of
these
graphql
apis,
and
so
the
question
is
really.
What
does
that
mean?
C
I've
kind
of
asserted
in
the
issue
that
we
oh
and
I'm
I'm
a
couple
days
behind
on
some
of
the
comments
here.
So
I
need
to
read
some
of
the
notes
that
have
been
added
since
then,
but
I've
kind
of
asserted
that
we
need
to
look
at
look
at
the
best
way
to
create
our
own
graphql
client,
because
there
is
no
git
lab
graphql
client
out
there
right
now,
which
means
how
do
we?
Actually?
C
How
do
we
actually
store
that?
So
do
we
create
a
new
project,
a
new
repository
and
store
it
there?
Do
we
try
and
generate
some
of
the
code
and
adam,
like
you
said
you
know,
even
with
like
giant
client
there,
we're
gonna
essentially
maintain
our
own
copy
of
the
api
spec,
which
I
don't
really
love.
So
the
the
question
here
is:
what
do
we
do,
because
we
have
to
do
something,
because
this
is
the
direction
that
gitlab
is
going?
D: Do we have a resource in mind that we can use as a candidate, one that right now is only supported by GraphQL?
D: Yeah, it helps if you have a concrete example, because I was browsing the APIs a bit. I didn't get too far with that, but it wasn't clear to me how they map to the exact resources that we want in the provider. So yeah, if anyone has any ideas for resources, keep them in mind.
F: There is metadata on GraphQL, so it's there, but I've talked recently to someone, and I think we'll go in the direction of creating...
C: And there definitely are some inconsistencies between the REST API and the GraphQL API. The vulnerability API that I mentioned: the GraphQL query returns information that the REST GET operation does not. So, for example, if you want to write something that automatically resolves vulnerabilities that are no longer found on the main branch, right now you have to use GraphQL for that. That's not something we would do inside the Terraform provider, but it's just an example of where there are differences between those APIs.
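A query along these lines illustrates the kind of data that is reachable over GraphQL but not over the REST GET. The exact field and argument names below are assumptions and should be checked against the GitLab GraphQL reference:

```graphql
# Illustrative only: field and argument names assumed, not verified
# against a specific GitLab release.
query {
  project(fullPath: "my-group/my-project") {
    vulnerabilities(state: [DETECTED]) {
      nodes {
        id
        title
        state
      }
    }
  }
}
```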
G: Sorry, and it would normally return more info based on how you chain types; it's much easier to add more info at low cost, sometimes, if performance is taken into consideration. So that's why you will normally find more info on the newer GraphQL endpoint.
A: I can give more info here, because I'm actually participating in the working group that discusses these topics, and this much I can say for sure: basically, the decision is that GraphQL will be the primary one. New features might appear first in GraphQL, and REST is very likely to become a layer on top of GraphQL.
E: So, going forward, one of the questions I would ask is: are we going to roll a client of our own? Do we want to? And Sander, who's one of the maintainers of the go-gitlab library, mentioned in a ticket that just got closed out that he does not use it anymore, and so he was looking for potential ownership, and who's going to maintain it going forward. So, Victor, I'm not sure if you know anything about that, but I'd probably be looking at GitLab too.
A: So I don't know if GitLab will be interested or not, but based on discussion we agreed that there are many other possibilities as well.
A
One
is
that
he
might
allow
us
to
to
to
all
like,
like
all
the
maintainers
on
the
traffic
provider
might
become
maintainers
on
the
gold,
git
lab
library
or
something
on
those
lines.
A
He
said
that
he's
a
bit
worried
about
these
things
because
he
tries
to
be
very
prudent
and
so
on,
but
if
there
is
no
better
option
that
that
then
he
might
be
open
to
this
as
well.
So
we
are
not
vote
on
the
go,
get
lab
library.
E: Yeah, the one thing with that, then, is: if we start building out the GraphQL clients, do we want to build them alongside? I imagine we're probably going to encapsulate the mutations and the queries, probably not one-to-one with the API like the REST go-gitlab library does, but grouped together to give create, read, update, and delete functionality for an entire resource, so to say.
E: Yeah, that seems good to me. I was just thinking about whether anybody else would want to use it. Even though the clients would be very specific to the provider, I was wondering if it could be something like the go-gitlab client, where somebody could use that library directly instead of using the provider. That would have been my only thought for keeping it separate.
G: Yeah, no, I meant third-party modules or something similar. I don't have direct experience; I was just thinking that it's something that exists in other languages, so I expected it to be there.
C: Yeah, and reading through some of the comments since I last looked at the issue, it looks like we're kind of saying there's a client called genqlient (I'll put a link in the Zoom meeting here) that does essentially that: it uses the introspection queries in GraphQL to help build out some type safety, which, I think, gets to Adam's point.
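For reference, genqlient is driven by a small YAML config; a minimal sketch might look like this, where the file names and package name are assumptions:

```yaml
# genqlient.yaml (illustrative layout)
schema: schema.graphql        # fetched via an introspection query
operations:
  - queries/*.graphql         # hand-written queries and mutations
generated: generated.go       # typed Go functions generated from the above
package: graphql
```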
D: Okay, yeah. The only caveat with that is, I don't know if other languages can do this, whether they can generate the full types based on what you get from those introspection queries. At least with genqlient, you still have to write your own wrappers around the queries and mutations in the GraphQL language. So yeah, I don't know if there's a way to fully generate the types, because that would be pretty nice, you know.
E
That's
adam,
where
I
was
responding
to
you
on
the
one
I
take
it.
I
thought
it
was
more
of
that.
Graphql
is
closer
to
soap,
with
wizzles
and,
and
things
like
that,
where
you
can
fully
forward
generate
like
all
your
queries
and
your
classes
and
everything,
but
that's
not
the.
I
couldn't
find
anything
if
anybody
else
could
find
something
like
that
where
you
can
forward
generate.
C: Yeah, I think so. I think picking that and just building out a prototype on it makes sense. And honestly, I think we could potentially build out a prototype even without picking a third-party tool, just to see how bad it would be to do it natively in Go, to keep our dependencies low. Let's see how bad that is. I've got a couple of PRs that I need to finish up before I can take that on.
D: I guess, just to get people's opinion on this: I think, as long as GitLab is going to support REST, we should predominantly use REST in the provider, because it maps more directly to the CRUD operations that you would see in Terraform. But yeah, for these features that are not going to get into the REST API, I think it makes sense; we need something.
E
Yeah
for
features-
and
I
would
also
say,
if
there's
any
extra
information
that
you
would
get
out
of
the
graphql
endpoint,
that
you
wouldn't
be
in
the
rest,
endpoint
would
probably
that
would
be
another
another
thing
where
we
choose
graphql
or
rest
if
they
both
existed.
A: I guess we have to look into it, so there's not much to be discussed about that right now. There are two topics that Timo actually wanted to discuss with all of you here; he cannot join us today.
D: Can you share the link to the doc again? Because I rejoined. (Yeah, sure.) Thanks. I'm not sure about the breaking changes; that wasn't the item I was looking at. I am familiar with the testing strategy, though, and I can speak to that.
D: Yeah, Timo and I were discussing our tests over Discord, combing through them a bit. There are a lot of inconsistencies across them, and some of them have a lot of redundant checks that are just confusing. I think, especially for new contributors trying to find something to base their features off of, to find a good test to use as a model, it can be kind of difficult. So I was at least hoping to have a model for what we consider good tests, and maybe point to an existing one in our contributing doc and say, hey:
D
This
is
kind
of
how
we
imagine
you
know
you
should
be
testing
your
your
new
providers.
You
should
have
a
crate,
you
should
have
an
update.
This
is
the
types
of
checks
you
need,
so
that
was
the
idea.
I
kind
of
made
that
a
bit
more
concrete
in
that
issue.
It
was
the
rfc
testing
something
what
was
it
testing
strategy,
yeah
kind
of
some
ideas
of
what
we
think
like?
Maybe
we
don't
need
to
do
a
get
on
the
upstream
resource.
D
Maybe
we
can
just
rely
on
the
way
that
the
terraform
framework
does
the
import
testing
as
a
way
to
verify
that
it
was
created
upstream.
That's
kind
of
I
think
that
was
the
most
contentious
change,
because
that
would
be
like
the
the
biggest
reduction
in
lines
of
code
out
of
everything.
So
I'm
not
sure
if
timo
is
looking
for
closure
on
that
it
seemed
like
we
kind
of
were
converging
on
an
agreement
on
that
issue.
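The import-based check being discussed could look roughly like the following non-runnable sketch against the terraform-plugin-sdk test helpers; the resource name and the helper functions are hypothetical:

```go
// Sketch only: assumes the provider's existing test helpers and the
// terraform-plugin-sdk helper/resource package.
func TestAccGitlabFoo_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		ProviderFactories: providerFactories,
		CheckDestroy:      testAccCheckGitlabFooDestroy,
		Steps: []resource.TestStep{
			// The create step only asserts via the plan/state...
			{Config: testAccGitlabFooConfig("example")},
			// ...while the import step reads the upstream object and
			// verifies it matches state, replacing a hand-written GET check.
			{
				ResourceName:      "gitlab_foo.this",
				ImportState:       true,
				ImportStateVerify: true,
			},
		},
	})
}
```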
C: Yeah, I guess I sort of viewed this decision as made, because we've been executing on it. So, unless anyone disagrees.
H: Sorry. All right, I think it would be a great idea, because I ran into testing issues. The biggest thing for me, as someone that's new, was where to go; being new to testing Terraform providers, or building Terraform providers, that was a hurdle for me. So I think having something consistent is a great idea.
D: We do have that first-message welcome-comment bot; I guess we just need to add to it. It does say to check out the contributing doc. Maybe we could also mention...
E: Yeah, do we have anything right now? Because then it would just be speeding up; it would just be an update to the CONTRIBUTING.md. Or we can take a resource that maybe needs a little polishing, fix that up first, and then do the contributing change at the same time. If anything, because, at least with a few of the resources we've been following, I think the TestMain might be the only construct that we haven't actually started using yet that would need to go into an example.
D
Yeah,
I
think
I
think
we
do
the
test
main
change.
I
don't
remember
who
suggested
that
that
was
a
really
good
idea
and
then,
maybe
after
that,
we
update
the
contributing
with
what
we
think
is
is
the
ideal
style
for
that.
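For context, the TestMain construct mentioned here is a small piece of boilerplate. A sketch, not runnable on its own since it assumes the SDK's `helper/resource` package, might be:

```go
// Sketch: a package-level TestMain, as used by other providers, so that
// `go test -sweep=...` runs the registered sweepers to clean up dangling
// test resources, and otherwise runs the tests as normal.
func TestMain(m *testing.M) {
	resource.TestMain(m)
}
```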
E: Okay. So, assuming we get past adding a model example to the contributing doc: what do we want to do about going through all of the resources and aligning the testing, which is what the issue is about? How do we want to break it up? Individual PRs? Do we want one mega PR that everybody contributes to? Probably not the best idea, but how do we want to approach it?
D: I think they should be batched in groups, maybe starting with a couple. If anyone's interested in doing that work, maybe just post in the thread, like: hey, I'm going to work on these, so we don't override each other's work, right?
D: Yeah, I don't know if I want to open a separate issue for each resource; that could get a bit noisy. I could add a checklist to the existing issue, maybe, with each resource, and then I can maintain that and check them off as we merge stuff. As long as the work mentions this issue, I should be able to track that.
E
One
thing
going
forward,
so
I
have
a
bit
of
a
more
of
a
release
engineering
background.
I'd
say:
do
we
want
to
figure
out
a
custom
linting
solution,
potentially
as
we
go
forward
to
make
sure
that
some
of
these
are
being
are
being
met?
E
It'd
be
a
fun
it'd,
be
a
fun
thing
to
do,
but
it
would
also
yell
at
you-
and
you
were
running
your
your
acceptance
test
to
say
hey.
This
does
like
the
linting
test
and
say:
hey
you're,
not
following
these.
These
specific
guidelines.
D
Yeah,
I
I'm
for
any
linting
strategy.
That
is,
you,
know,
justified
and
gives
you
a
way
to
to
fix
your
issues.
Instead
of
just
yelling
at
you,
yeah
I've
played
a
bit
with
go
analyzers
and
go
static
checks
before
I
had
an
old
pr
to
add
some
like
docs
checks
to
it.
Since
then,
we've
done
automated
dock
generations.
We
don't
need
that,
but
I
could
yeah.
That
sounds
reasonable.
D: I think we kind of covered everything. I'll add a checklist to the issue, and I'll probably start on some refactoring to get the existing tests in line. I'll put a message in the thread if I start working on this, but I could also work on the TestMain idea; that was Patrick.
A: Okay, the final topic we have is around the 15.0 deprecations. Patrick, you might be up to date with that, based on your...
C
Yeah,
I
think
I've
got
there's
really
only
one
that
is
going
to
be
a
big
breaking
change
for
us,
which
is
the
the
project
code
coverage
regex
and
I've
got
a
pr
open
for
that
right.
Now,
I'm
getting
a
weird
go
error
on
it,
but
that's
the
last.
C
I
think
the
last
thing
that
I
have
to
do
to
merge
that
pr,
the
other
15.0
deprecation,
was
just
around
the
gitlab
runner
api
returning
stale
instead
of
offline
or
not
connected,
I
mean
it's
just
going
to
show
up
as
a
planned
change
the
first
time
somebody
runs
in
15.0.
So
I
don't
think
it's
really
a
big
problem
for
us
at
all.
C
I'm
not
sure
that
there's
actually
any
change
we
need
to
make.
I
was
going
to
mess
around
with
the
api
a
little
bit,
but
it's
kind
of
difficult
to
do
that
when
the
api
isn't
actually
returning
that
data.
Yet
so
I
I
think
that
the
big
one
here
is
covered.
I
think
what
what
teemo
was
looking
for
with
the
conversation
is
actually
in
issue
984,
one
sec.
C
I
added
that
to
the
docs
real
quick,
which
is
what
what
is
our
official
policy
on
supporting
previous
releases
right,
which
does
come
into
play
with
the
15.0
breaking
changes.
Conversation
right,
so
it
when
I
when
I
did
the
pr
team
and
I
actually
had
a
back
and
forth
in
the
pr
as
well
about
you
know
how
do
we?
What
does
it
look
like
to
to
support
previous
versions?
C
And
in
this
particular
case
you
know
we're
adding
what
I
ended
up
doing
on
the
pr
was
actually
not
changing
anything
about
the
the
project
resource
itself,
other
than
adding
a
deprecation
notice
on
the
the
particular
schema,
but
I
had
to
refactor
some
of
the
tests
because
one
of
the
tests
that
tested
the
build
coverage
needed
to
have
a
version
application
added
to
it.
C
But
I
think
the
question
is:
is
how
long
then
do
we
maintain
that
effectively,
deprecated
deprecated
code
and
the
obvious
answer
to
that
is
until
our
next
major
release
right
until
14.0
right
and
so
what
does
our
policy
look
like
for
major
releases
right
because
get
lab
seems
to
do
a
major
release
about
once
a
year?
C
I
don't
know
if
there's
a
official
policy
around
that,
but
you
know
if
we
target
a
major
release,
that
always
trails
three
months
after
the
get
lab
major
release,
we
basically
give
our
users
like
a
three-month
upgrade
window
and
and
victor,
I
think
you
had
shared
that
most
of
like
even
the
self-hosted
gitlab
users
were
on
about
a
quarterly
upgrade
cycle,
so
we're
sort
of
aligning
to
those
usage
metrics
from
from
get
lab.
Then.
A: Actually, GitLab itself supports three versions back; with backports and security fixes they support up to three versions, if I'm not mistaken.
D
Yeah,
I
think
two
main
things
here.
I
think
one
we
should
start
to
include
in
our
release
notes
what
version
we
tested
against
just
so,
if
anyone's
kind
of
going
through
and
trying
to
find
a
version
that
would
work
if
they're
on,
like
an
old
version
or
if
they're,
about
to
upgrade
to
a
new
minor
version
of
gitlab,
they
can
first
check
that
we've
tested
against
it
before
they
take
our
version.
D
You
know
and
then
yeah
we
were
suggesting
patrick
to
test
against,
like
our
the
latest
version
and
then
also
the
one
from
three
months
ago.
C
Yeah,
which
means
we
need
to
probably
update
our
actions
pipeline
as
well,
because
right
now,
we're
literally
only
testing
against
the
most
recent
version.
C
Correct
yeah
yeah,
so
we
should.
What
that
probably
means,
then
is.
We
should
pin
those
docker
image
versions
as
well.
We
should
probably
pin
them
in
the
docker
compose
that
we
use
for
even
when
we're
testing
in
git
pod
does
anyone
know.
Does
the
pandabot
help
if
we
pin
those
versions
or
is
it
going
to
help
us
keep
those
versions
up
to
date,
because
otherwise
we're
going
to
have
a
pr
every
single
month
to
update,
update
those
right
which
is
going
to
be
kind
of
problematic
over
time.
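Pinning could look like the following docker-compose excerpt; the service layout and the image tag are illustrative, not taken from the repository:

```yaml
# Illustrative excerpt: pin the GitLab image instead of tracking :latest,
# so local development and CI test against a known release.
services:
  gitlab:
    image: gitlab/gitlab-ee:14.10.0-ee.0   # hypothetical pinned tag
    ports:
      - "8080:80"
```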
A: It might happen that, if we add the renovate.json to the repo, it will be picked up automatically. I'm not 100% sure about this, but I know that they...
B: I know you can limit versions, to say, you know, don't go past this, or keep it within this range. I don't know if you can express that variable "n minus one"; we'd have to dig into it a little bit more.
D
Yeah
do
do
we
need
to
pin
versions?
I
I
know
it's
not
a
huge
deal
to
have
to
like
merge
and
depend
upon
mr
every
now
and
then,
but
even
if
we
could
avoid
it,
could
we
have
like
the
job
say
you
know
get
whatever
version
is
released,
get
whatever
version
was
current
three
months
ago
and
then
just
use
those
two
versions
to
test
against.
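One way to express "latest plus the release from three months ago" in a GitHub Actions workflow is a version matrix; the job name and tags here are assumptions, and a scheduled job or small script could resolve the two tags dynamically instead of hard-coding them:

```yaml
# Illustrative excerpt of an acceptance-test workflow.
jobs:
  acceptance:
    strategy:
      matrix:
        # Current release and the release from roughly three months earlier.
        gitlab-version: ["14.10.0-ee.0", "14.7.7-ee.0"]
    runs-on: ubuntu-latest
    env:
      GITLAB_IMAGE: gitlab/gitlab-ee:${{ matrix.gitlab-version }}
```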
D: I'm not sure. It might be okay most of the time to just run against latest when you're developing locally. Sorry, it's hard to tell how much of a problem that's going to be.
A: I'm trying to go through what action items we have around GraphQL. The basic idea now is to ask Nico to teach us how to use Google, and then find more of these GraphQL tools.
A
Oh
there,
it
is
only
surface
one,
we
should
add
three
reasons:
what
version
we
are
testing
against,
and
there
are
many
things
in
the
issue
like
the
going
to
test
main
idea
and
contribution
and
the.
A: Okay, what else came up? The GraphQL...
A: Yeah, cool. I don't want to assign responsibility for these action items, because this is an open-source project and that doesn't seem to be the right approach. It's more like a summary of where we want to move, and, as anyone's time permits, we can push this forward.
A: We have five more minutes, but if there is nothing more to discuss, I just want to thank you for contributing to the provider and making it better. And actually, one more thing I wanted to share is that I plan to migrate the provider to GitLab in the next quarter, which will mean that it will have way more exposure within GitLab (not necessarily within my team, but within GitLab as a whole), and it will be easier to reach out to PMs, connect to other issues, and so on.