From YouTube: Lightweight Service Registry with GitLab Repository
Description
This lightweight service registry idea leverages Auto DevOps while also providing dynamically defined environment URLs
https://gitlab.com/poffey21/service-registry
For anything that you need to be somewhat secret or environment-scoped, GitLab CI variables are pretty useful. What you can do is, in the GitLab settings, store secrets, and if you prefix a variable name with `K8S_SECRET_`, everything after the `K8S_SECRET_` prefix will get propagated into the Auto DevOps environment.
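As a concrete sketch (the variable name and URL here are made up, not from the video), a CI/CD variable such as `K8S_SECRET_PACT_DEMO_SERVICE_URL` reaches the deployed pods with the prefix stripped, roughly like this:

```shell
# Hypothetical variable, as it would be defined in GitLab CI/CD settings:
K8S_SECRET_PACT_DEMO_SERVICE_URL="https://pact-demo.example.com"

# Auto DevOps collects every K8S_SECRET_* variable into a Kubernetes
# secret and exposes it to the application with the prefix removed;
# simulated here in bash:
for var in $(compgen -v | grep '^K8S_SECRET_'); do
  export "${var#K8S_SECRET_}=${!var}"
done

echo "$PACT_DEMO_SERVICE_URL"   # the app simply reads PACT_DEMO_SERVICE_URL
```

The application itself never sees the `K8S_SECRET_` prefix; it just reads an ordinary environment variable.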
So if you ever want to pass environment variables into your running applications on Kubernetes, this is a really simple way to do it. One of the nice things is that you can scope these by environment, you can protect them, and there are a few different options there as well.
So this one is for all review apps: let's go ahead and point a particular URL endpoint to it. This is manually defined, though, so any time you want to update it, you'd have to update it within the GitLab CI variables.
Another option: we're building out (or continuing to build out) integrations with HashiCorp Vault, and there's nothing preventing you from storing secrets or environment URLs within HashiCorp Vault. Then, based on the JWT token as well as the location of what you're attempting to get, Vault could return those things for you as well.
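As one hedged sketch of that integration, GitLab's CI syntax has a `secrets:vault` keyword; the path, mount, and variable name below are my own invention, and this assumes JWT auth has already been configured between GitLab and Vault:

```yaml
# Hypothetical job: fetch a dependency's URL from Vault via the job's JWT.
fetch-dependency-url:
  secrets:
    PACT_DEMO_SERVICE_URL:
      vault: service-registry/pact-demo/url@kv   # path and mount are assumptions
```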
GitLab also provides you the ability to have Helm charts. By default, your Helm chart runs your Docker container inside of a Kubernetes cluster, and there's also a PostgreSQL database that's provisioned alongside your application as well.
So that way there are external services, but there's nothing preventing you from defining a Helm chart that has all of the containers from your other projects referenced within that Helm chart as well.
So upon a review app being provisioned, you could deploy containers from multiple projects into a single namespace and have them interconnected and talking to each other. We have quite a few environment variables that you can override, but for the purpose of overriding a Helm chart, these are probably the most important ones available.
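One way to sketch that multi-project chart (every name and repository URL here is hypothetical) is an umbrella chart whose dependencies pull in the other projects' published charts:

```yaml
# Chart.yaml of a hypothetical umbrella chart deploying several
# projects' containers into one review namespace.
apiVersion: v2
name: review-bundle
version: 0.1.0
dependencies:
  - name: project-a
    version: ">=0.1.0"
    repository: https://gitlab.example.com/api/v4/projects/1/packages/helm/stable
  - name: project-b
    version: ">=0.1.0"
    repository: https://gitlab.example.com/api/v4/projects/2/packages/helm/stable
```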
Additionally, you can always roll your own kubectl commands; there's nothing preventing you from utilizing kubectl rather than Auto DevOps to set up your environment however you see fit. One of the nice things is that you're not tied to a particular namespace, and you can get creative there as well.
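A minimal sketch of such a hand-rolled deploy job (the image, manifests path, and namespace scheme are assumptions, not from the video):

```yaml
# Hypothetical job replacing the Auto DevOps deploy stage with raw kubectl.
deploy-review:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl create namespace "review-$CI_COMMIT_REF_SLUG" --dry-run=client -o yaml | kubectl apply -f -
    - kubectl -n "review-$CI_COMMIT_REF_SLUG" apply -f k8s/
  environment:
    name: review/$CI_COMMIT_REF_SLUG
```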
The idea behind this lightweight service registry is really this: say, for example, you have multiple project repositories, and some of those project repositories have long-lived services.
So we have project A, project B, and project C, and there are dependencies between these projects. Project A relies on project B and project C. Project C is a fairly long-lived project: it doesn't see a whole lot of changes, so it doesn't make sense to continuously be deploying it. But project A and project B have some interdependencies between them, so oftentimes what the development teams do is create a feature branch called feature-123, and that naming convention will be consistent across all the different projects, so project A's branch feature-123 is synonymous with project B's feature-123. Ideally, what would happen is that this branch would automatically provision a review environment, and GitLab by default utilizes the branch name as the environment name. So you have project A, and there's a review environment deployed out for feature-123.
So this is a simple project: it's a Django web application, written on the Django framework, and you can see that this project just has Auto DevOps enabled, which makes things really simple. Another thing that I'm doing here is storing a `K8S_SECRET_PACT_DEMO_SERVICE_URL` key and value, and the intent is that when Auto DevOps runs, it's automatically going to provision that variable inside of my review app, and all subsequent applications as well, as you can see here by default.
This is the Auto DevOps pipeline. I have a Kubernetes cluster wired up to GitLab, so it's going to build, it's going to do all of its testing, and it's going to deploy a review application. If I click on this, you can actually see that it's being deployed out to review/verifying-service-registry, which is the branch name. This is what it looks like, and you can see that this `PACT_DEMO_SERVICE_URL` is the same.
It's the URL that's manually defined in the CI/CD variable, which is nice, but sometimes (again, back to our use case) it would be nice to be able to dynamically support ephemeral environments. So I had this idea: what if we had this service-registry repository that was kind of a source of truth? So let's pretend that we had this registry YAML file, and this registry YAML file had an index of all the projects.
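The registry file itself isn't spelled out word for word, but a sketch of what such an index might look like (all URLs here are invented):

```yaml
# Hypothetical registry.yaml: one entry per project, recording the
# environment variable its consumers expect plus each deployed
# environment's URL.
projects:
  django-autodevops:
    variable: DJANGO_AUTODEVOPS_SERVICE_URL
    environments:
      production: https://django-autodevops.apps.example.com
      review/verifying-service-registry: https://review-django.apps.example.com
```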
It should be stored as an environment variable called `DJANGO_AUTODEVOPS_SERVICE_URL`, so that there's consistency across all applications and the environment variable is always the same. This particular django-autodevops application has been deployed into two locations: the Auto DevOps location for production, and then this review environment has also been deployed. And there are two other applications.
We have this pact-demo, and pact-demo is always going to be `PACT_DEMO_SERVICE_URL`; there's a production environment and a review environment, and you can actually see that there's this review/verifying-service-registry environment and a staging environment. Ideally, this verifying-service-registry environment would be utilized to fulfill the dependency between django-autodevops and pact-demo, whereas the pact-provider application has no review environment, so we would always just rely on the staging environment instead.
And down here we see that there is a defined strategy. The first order of precedence would be `CI_ENVIRONMENT_NAME` (this is an environment variable provided by GitLab): if the environment names match, it would go ahead and utilize that URL, if it's been deployed out; otherwise we're going to rely on staging, and if staging isn't available, we'll rely on production.
One other thing, and I'll come back to that, but here's what this looks like: rather than having the template Auto-DevOps.gitlab-ci.yml in my django-autodevops project, I'm going to point this project at demo-sys-users/microservices/service-registry and rely on the Auto DevOps file there.
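In configuration terms, that presumably means setting the consuming project's CI/CD configuration file path to the service-registry project, whose shared file would then start from the template, roughly like this (a sketch, not the actual repository contents):

```yaml
# Shared CI file in the service-registry project, reused by consumers.
include:
  - template: Auto-DevOps.gitlab-ci.yml
```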
That file is fairly simple: it's including the Auto DevOps template, and then it's also adding a little bit of logic before the script. What's going to happen is that this logic is going to utilize the service registry to determine which URLs to use. It's really using two primary things, bash logic as well as jq (which is provided by default in our auto-deploy image), to parse out and determine which URLs to utilize.
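A sketch of what that bash-plus-jq lookup could look like; the JSON shape, file name, and function name are my assumptions, not the actual script from the repository:

```shell
# Resolve the URL for a dependency, preferring an environment whose name
# matches CI_ENVIRONMENT_NAME, then staging, then production.
resolve_url() {
  local registry="$1" project="$2" env_name="$3"
  jq -r --arg p "$project" --arg e "$env_name" '
    .projects[$p].environments as $envs
    | $envs[$e] // $envs["staging"] // $envs["production"]
  ' "$registry"
}
```

A call such as `resolve_url registry.json pact-demo "$CI_ENVIRONMENT_NAME"` would then feed the value into a `K8S_SECRET_`-style export before the deploy script runs.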
There are some improvements you could make. Right now, this logic is dependent upon a few things. We have these two `needs` statements: one, it needs the build to happen before it, and additionally, it's relying on the register-service job to generate an artifact on the master branch, and that artifact is what's being used. There's a JSON file that's being parsed within this logic, which is then generating the `K8S_SECRET_` variables.
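The two `needs` entries described above might look roughly like this in the consuming job (job and project names are assumptions):

```yaml
review:
  needs:
    - job: build
    - project: demo-sys-users/microservices/service-registry
      job: register-service
      ref: master
      artifacts: true
```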
You could also utilize a GitLab Pages page to host a JSON file that is then parsed; I don't know the best solution there. You could also tie this in with deprovisioning of an environment, so that upon deprovisioning of the environment it automatically gets removed from the service registry. And then you could also provide consumers the ability to state what they need, rather than requiring management of what they need in the service-registry project.