From YouTube: Repeatable DB Creation Demo (2021-09-01)
A: I don't think there's any complication here, so maybe we can get this done soon. On the Omnibus side there are a few blockers. One is the PgBouncer one, which we've worked around for now. The other two are related to setting custom service names in Consul, which is something we're going to need, but they're actively being worked on. So I don't foresee any problems there. On to number two.
A: These are just the big items we have left. The first one is the Prometheus server, and we're still thinking about how we're going to do this. Now that we have the environment machines (we have a framework for deploying machines that belong to the environment and not to a shard), we can maybe deploy a Kubernetes cluster, install Prometheus on it, and use Consul discovery. I'm just hoping all of that will work well, so we'll give it a try. Next is VPC networking, and we need to figure out logging.
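The Prometheus-plus-Consul-discovery idea mentioned above would look roughly like this in `prometheus.yml`. This is only a sketch: the Consul server address and service names are illustrative assumptions, not values from the demo.

```yaml
# Sketch: Prometheus scrape config using Consul service discovery.
# Server address and service names below are assumed, not real values.
scrape_configs:
  - job_name: 'consul-discovered'
    consul_sd_configs:
      - server: 'consul.service.internal:8500'  # assumed Consul agent address
        services: ['patroni', 'pgbouncer']      # assumed registered service names
    relabel_configs:
      # Use the Consul service name as the Prometheus job label
      - source_labels: [__meta_consul_service]
        target_label: job
```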
A: We had a discussion with the sharding team about how we can support them. I created a new epic that's linked to the main epic. What I'm thinking right now is that we'll just have a separate environment for them to use as a sandbox, with two Patroni clusters installed, and we'll deploy an Omnibus VM into the project.
A: And for the demo, I just wanted to create database shards with iobs.
A: I think this should already be configured. Let me just check the GCP console.
A: So how about we do this from one of the... We can't do it from the bastion, because the bastion doesn't have Omnibus installed. You could do it either from the Consul, Postgres, or PgBouncer nodes. I'll just do it from the Postgres node. So I'll do it from this guy.
A: And then the user... we haven't done anything with these databases yet, or did we? Because I don't think the user is automatically created by Omnibus.
A: Okay, so we don't think we have a password set for the gitlab user, right?
A: Isn't there also a gitlab-ctl command for this? Something like gitlab-ctl pg-something, for Patroni, right?
A: All right, and we are on 56. All right, we're the leader. That's great. So now...
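For checking which node is the Patroni leader, as done here, Omnibus ships a gitlab-ctl subcommand (available on Patroni nodes in recent GitLab versions; the exact output format varies by release):

```shell
# On a Patroni/Postgres node: list cluster members, their roles and state
sudo gitlab-ctl patroni members
```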
B: Yes, it's ALTER ROLE, then the name, within double quotes I guess.
B: Like that, I guess? No, okay, change the double quotes to single quotes. I always fall for that. It's single quotes. Yes.
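The quoting confusion here comes down to standard PostgreSQL rules: double quotes delimit identifiers (such as a role name), while string literals (such as a password) take single quotes. A minimal sketch, with a placeholder password:

```sql
-- Double quotes for the identifier, single quotes for the string literal.
ALTER ROLE "gitlab" WITH PASSWORD 'placeholder-password';
```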
A: Okay, that was a good demo. Does anyone have anything else before we end?
E: I know the GCP network stuff is high on the list, and it's high on our list as well. It's currently set for 1.3, but we're working at pace, and if we can carve out the time in 1.2 we can look at it. But if you guys wanted to contribute, you're more than welcome, and we're more than happy to help with that. As I've kind of called out in the issues, we've got the design pretty much locked down with the AWS VPC setup we've done, so it's essentially a copy-paste job onto GCP, with, obviously, the modifications required for GCP's little differences.
E: So I'm happy to take contributions. If not, we will try to make this a priority; we're just finishing up some project car stuff, and then, obviously, trying to balance between different projects out there. I know this is like the top thing for you, so it's very high on the list, and we'll try to tackle it as soon as possible ourselves as well, if there are no contributions, sure.
A: And while you're here, Grant, as well as Alejandro: I just submitted an MR about the NAT. The problem we have right now is that when we added that support, we sort of assumed there would be one NAT per GCP project, but because we're deploying multiple reference architectures in a single GCP project, we really only want one NAT. So I submitted an MR to optionally disable creating the NAT: still disabling public IPs, but disabling the NAT so we can create it outside of GET. What I don't know is whether deploying multiple reference architectures within a single GCP project is something we really want to worry about. That's only when we have multiple... in order to do that.
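In Terraform terms, the optional-NAT change described above might be gated by a variable along these lines. This is purely illustrative, not the actual MR: the variable name, resource names, and region handling are assumptions.

```terraform
# Hypothetical sketch: create the Cloud NAT only when requested, so a
# single shared NAT can be managed outside of GET.
variable "create_nat" {
  type    = bool
  default = true
}

resource "google_compute_router_nat" "nat" {
  count  = var.create_nat ? 1 : 0
  name   = "env-nat"                              # illustrative name
  router = google_compute_router.router.name      # assumed router resource
  region = var.region

  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
```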
E: Certainly, in terms of best practice, what we say to people is that you should be doing it per project. But we know in some cases people do want to deploy different architectures to the same project, and there are various considerations in there, say VPCs and, most importantly, the resource quotas each project has, but there's no strict requirement. So I did see the MR open. I know you've asked Alejandro to look at it first; when you're ready for me to have a look...
E: Give me a shout. I had a quick look and everything looks fine to me, but yeah. We're happy to continue to support deploying multiple architectures to one project for people that need to do it, but in terms of best practice I'd say you should be doing one per project.
A: Yeah, it's not really that we're deploying multiple GitLabs per project; it's that we're deploying multiple Patroni clusters per project, right? This is where, since GET doesn't explicitly support sharding yet, we have multiple Patroni clusters associated with one GitLab instance.
E: Yeah, sharding, that's a big old thing to tackle that we know is coming, and we will tackle it when it comes into Omnibus, because obviously the last I heard about that project, it was still very early. It's still early days on actually deciding how the team will do sharding in GitLab itself, because obviously that's a very complicated, heavy thing to get done. But what you're describing is kind of the reason why we still support deploying to the same project in GCP.
E: There are other forms of encapsulation, but generally most accounts in, like, AWS usually play in the same pool, so we're happy to support that, and once we get the network stuff in, that will further that, I suppose. So I guess that's where we come from, from the GET standpoint.
E: Yeah, we'll try and get it in, and if you guys want to tackle it earlier you're more than welcome, but if not, we will try and get it in ASAP for you. We hear you loud and clear; we know it's top priority, we're just trying to get it through. All cool.