From YouTube: Sync on Consul configuration for DB Provisioning
A: Okay, I just wanted to sync up with you real quick on the discussion about the Consul cluster. Maybe there were a few misunderstandings or some bad assumptions on my part. Initially I was thinking we were going to have one — well, I wasn't really sure, but I was thinking we would have one Consul cluster per shard. That's how we have it set up now, but thinking about this more and more, it sounds like it's probably not the best approach.
A: So, given that we have a single Consul cluster, we need to decide whether it's going to be — well, I guess it's going to be the main Consul cluster, the one we're already using for production and staging.
A: Then it's a matter of making sure we don't have duplicate service names, and this is where I think it gets a little bit tricky. Number two on the agenda: I just wanted to enumerate all the service names we have now and what we're going to need to do. I think for Patroni—
B: You know, I use the Consul that's provided by Omnibus.
A: And when you register services — okay, then I misunderstood. So when you register services against that Consul, how do you — are you dropping service files into the Consul config.d directory, or are you doing something else?
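For reference, a service file dropped into the agent's configuration directory is just a small JSON definition. This is an illustrative sketch — the service name, port, and check are made up for this example, not taken from the actual setup:

```json
{
  "service": {
    "name": "postgresql-ci",
    "port": 5432,
    "tags": ["replica"],
    "check": {
      "tcp": "localhost:5432",
      "interval": "10s"
    }
  }
}
```

The Consul agent loads such files from its `-config-dir` at startup, and picks up changes on `consul reload`.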
B: [inaudible]
A: I guess my concern here is what happens when we do a gitlab-ctl reconfigure. I guess it won't delete anything or cause any problems if it tries to register a service using that directory, because there are no common services. I'm just trying to think about how this is going to work, especially with the exporters, because the exporters also need to be namespaced by shard name. And unfortunately, once you have Consul enabled, it creates the server.
B: Yeah, having the same name for the exporters could be conflicting across shards. But if I understand it correctly, we only care about that because we want Prometheus to scrape from the correct exporter.
B: So can't we utilize tag names? We just register the services for the exporters as usual, but we add a tag name that is unique to the shard. So we just add a bunch of tag names and use those.
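A sketch of what that could look like for an exporter — one shared service name, with the shard carried as a tag. All names here are hypothetical; 9187 is the conventional postgres_exporter port:

```json
{
  "service": {
    "name": "postgres-exporter",
    "port": 9187,
    "tags": ["ci"]
  }
}
```

Prometheus's `consul_sd_configs` exposes tags as `__meta_consul_tags`, so a relabel rule can route each shard's targets to the right scrape job.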
B: Yeah, okay — I don't — yeah, it's just this, yeah.
A: Yeah, that should be fairly straightforward.
A: I guess what I'm struggling with is having a consistent approach to using the shard name. I was thinking about this earlier and taking a look — we have this service called postgresql-ha, and that uses tags, but it uses the tags for master and replica.
B: Yeah, we actually use tags, except that these are different ones. Right now in production we explicitly say: use the tag master if a certain check is passing, and the tag replica if another one is passing. The same thing is done here, but that service is registered by Consul itself — it's just a coincidence that it's using the same tags we're using. This one is not done by Omnibus itself; Omnibus just passes a configuration to Patroni, and that registers the Consul services.
A: Right, so I was looking at this, and you can set the Patroni scope, and that will change the name of postgresql-ha. So we could create — like, I was thinking, patroni scope—
A: You know, equals postgresql-ci, right — correct — and that will create the service postgresql-ci with the two tags master and replica. But I think then we're kind of establishing a convention where we have the shard in the service name.
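The idea here, sketched as a fragment of a Patroni configuration — assuming Patroni's Consul integration, with the shard name baked into the scope (all values are illustrative):

```yaml
# patroni.yml (fragment, illustrative values)
scope: postgresql-ci          # service name Patroni registers in Consul
consul:
  host: 127.0.0.1:8500
  register_service: true      # creates "postgresql-ci" with role tags
                              # such as master / replica
```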
B: Yeah. Another thing is that Consul seems to offer multiple ways of DNS discovery, so we don't need to stick to the whole tag.service.consul form and so on. Maybe for the exporters we do, because we're constrained by what Prometheus can search with.
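For reference, Consul's DNS interface supports a couple of lookup shapes; with the hypothetical names from above, queries could look like:

```
master.postgresql-ci.service.consul     ; tag-filtered A/AAAA lookup
replica.postgresql-ci.service.consul
postgresql-ci.service.consul            ; all passing instances
_postgresql-ci._master.service.consul   ; RFC 2782 SRV form
```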
B: Only tags, and not the meta attributes. But for Postgres it's flexible, because it's on our side, and we can maybe utilize something more sophisticated. I was thinking maybe we could look into the prepared queries offered by Consul, but that could maybe complicate things. It's just that we have two problems, because we look things up for two different reasons: the first is Rails, which is mainly the PgBouncers for HA, and the other one is the exporters, which is a kind of different thing.
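A prepared query, for comparison, is registered once through the HTTP API and then resolvable via DNS as `<name>.query.consul`. A minimal sketch, assuming the hypothetical service name used above:

```json
{
  "Name": "ci-db-replica",
  "Service": {
    "Service": "postgresql-ci",
    "Tags": ["replica"],
    "OnlyPassing": true
  }
}
```

POSTed to Consul's `/v1/query` endpoint; clients could then resolve `ci-db-replica.query.consul`.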
A: So for each shard there'll be a db_host, which will be the ILB endpoint, and then we'll have db load balancing for each shard, and that is going to be a Consul record, so it'll be like—
B: ci-db-replica.service.consul.
B: No — this service is pointing to Patroni itself, but for db load balancing we want to point to the PgBouncers in front of them. It's going to be different.
B: It will be something like db-replica-ci.service.consul.
B: Now we could — yeah, we can.
B: There was no need for it in our current setup, so we just went directly with changing the service name, but for our use here we can definitely prefix it with a tag.
A: Maybe it would be better just to put it in the service name, I don't know. Okay, so — you're not anticipating any problems with dropping these service definitions into the Omnibus Consul? I'm just worried that it's going to — you know, I can talk to the Distribution team to see what they think. Another option could be to add an Omnibus configuration pointing Consul at a different directory, so that it's maintained outside of Omnibus.
B: Maybe. But yeah, we can make sure with the Distribution team.
A: Okay, let's just open up an issue or have a conversation with Distribution to see what they think, and we'll go from there. When we configure multiple PgBouncers — so we're using the Omnibus Consul — and so that's number three. Number four—
A: Yeah, so this one we don't need, because we're not disabling it. For number four: how do I make it so that I have multiple PgBouncers on my Patroni node right now?
B: [inaudible]
A: Okay, I'll double-check. Cool. So number five was the last point. We haven't really settled yet how this is going to work, but currently I have two database projects, for staging and production, and I thought we would just VPC-peer them. That in itself may change — we may decide just to deploy directly into the production or staging projects. What's your opinion on that? Do you think there's any value in having a separate project for all this database stuff?
B: [inaudible]
A: What I was thinking was — what I'm really worried about is what happened when we were building out the registry databases in staging, where we overlooked the service name. Suddenly the new databases were showing up in the replica list. My fear is that if we start deploying to the production project and registering with the production Consul server, we'll have the same sort of mistakes.
A: So what I was thinking is we have a separate project that is completely isolated from prod. We deploy the Patroni cluster, we deploy a separate Consul service, we register all the services, and then we can validate and make sure all the services look good. Then we can peer the project, point it at the existing Consul service, and register everything there. But maybe that's being overly cautious, I don't know. You kind of see what I'm saying?
B: I'm a bit hesitant about it, because peering introduces a whole set of problems, like overlapping IPs and so on. You need to be extra careful with the IP allocations and — what's that?
A: Yeah, I tend to agree. Maybe we could decide later. I think it actually doesn't matter too much, because we'll probably create a DNS entry anyway for the internal LB. I think we do already, correct?
B: [inaudible]
A: So it's not like — and for VPC peering, I have made sure that these new projects are not overlapping, so we'll have that option if we want to take it. I guess if we decide not to do it that way, then we'll just deploy right into the production project.
A: What I really wanted to avoid, as we're validating this, was deploying into the production project, especially with Terraform and Ansible. I feel much more comfortable having this separate project that's completely isolated, with its own permissions. We can validate everything in the separate project and then just point it at GitLab production — there's no reason why we can't do that.
A: Okay, all right. So I think the main thing is we need to—