From YouTube: Repeatable DB creation Demo (2021-11-03)
A: All right, welcome everyone to the repeatable database creation demo. The first thing I have on the agenda is that we are using an alternative, swimling; Jarv is out, so that we can record this. I'll see how to do the upload-to-YouTube thing later. So for today's demo, we wanted to demo the work that was done by Ahmad for the virtual private cloud (VPC) support.
A: And this is an important change, because when we are going to create database shards in the staging project or in the production project, we have to connect to existing VPCs so that they are able to communicate with other hosts on that project. And we have to make sure that connectivity works and that we don't disrupt the existing configurations in those GCP projects. So what I did is prepare this draft merge request that we're not going to merge, and in the changes that Ahmad created, this is in the shards JSON file: there's this existing VPC name. If you leave this blank, then the db-provisioning project will create a VPC, and it will create firewall rules and all that stuff. And I think... whoops, I applied this project in beta.
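As a hypothetical sketch of the per-shard entry being discussed (the actual key names in the shards JSON file may differ; `existing_vpc_name` and `internal_subnet_cidr` are illustrative assumptions, not confirmed field names), it might look like:

```json
{
  "demo": {
    "existing_vpc_name": "demo",
    "internal_subnet_cidr": "10.176.0.0/16"
  }
}
```

Per the discussion, leaving the existing-VPC field blank would make db-provisioning create the VPC and its firewall rules itself.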
A
So
I
was
going
to
apply
this
using
ci,
but
we
discuss
with
ahmad
that
it
doesn't
work
if
you
have
an
environment
and
then
change
the
ppc,
because
this
is
a
very
fundamental
change.
So
we
will
have
to
scrap
the
environment
and
create
it
from
scratch,
which
is
not
a
project,
because
this
not.
B
A
A
problem
for
real
world
usage,
because
this
is
what
we
plan
to
do,
but
it
was
going
to
be
a
little
bit
tedious
for
the
demo,
so
I
went
ahead
and
applied
this
on
the
beta
testing
environment.
A: So the beta testing environment now doesn't have the VPC created by db-provisioning. Instead, you have to create a VPC on your own, and in the case of the staging and production environments, for example, this corresponds to already-existing VPCs. So I think I have this staging one here. For staging, for example, we will target this VPC, which is where all the existing components are, and we will just target a subnetwork that is not currently in use. So on the beta, we applied this draft change of using an existing VPC named demo.
A: If you look at the firewall rules for a project where we create the VPC, there's a default allow-internal rule that allows each host to communicate. When I was testing this change, the hosts couldn't talk among themselves, and that's because we don't create a firewall rule, since we're not creating the VPC. So I had to create a manual firewall rule myself with the subnetwork IP range so that communication works.
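A manual rule like the one described could be created with `gcloud`; this is a sketch, assuming the VPC is named `demo` and using the /16 range mentioned later in the discussion:

```shell
# Hypothetical names and range; substitute the actual VPC name and subnet CIDR.
gcloud compute firewall-rules create allow-internal-demo \
  --network=demo \
  --allow=tcp,udp,icmp \
  --source-ranges=10.176.0.0/16
```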
B: Yes, correct. So basically, the expectation of this implementation is that we are hooking to a VPC that has been created by our VPC module, which is hosted under our Terraform modules group. So this module, the VPC module, just creates this firewall rule, allow-internal-demo. So if you had created demo using the VPC Terraform module, it would have created this firewall rule out of the box, but I guess you created demo manually, so yeah.
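For comparison, the allow-internal rule that the VPC Terraform module is said to create out of the box can be approximated with a plain `google_compute_firewall` resource; this is a sketch under assumed names, not the module's actual code:

```hcl
# Sketch of an allow-internal rule; the real VPC module's resource may differ.
resource "google_compute_firewall" "allow_internal_demo" {
  name          = "allow-internal-demo"
  network       = "demo"              # hypothetical VPC name
  source_ranges = ["10.176.0.0/16"]   # the subnetwork's internal range

  allow {
    protocol = "icmp"
  }
  allow {
    protocol = "tcp"
  }
  allow {
    protocol = "udp"
  }
}
```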
B: This particular rule is created if we are creating the... yeah, this one, right. So if we are creating the VPC ourselves, like in the example of our sandboxes, it will be created using this module, and subsequently we will have the...
A: So here in the shards file, you specify an internal subnet, right? And this is different from the config-management way we do things, which is that in config management we have a subnetwork per type of component, right? So there's a subnetwork for the Postgres hosts, a subnetwork for the PgBouncers. In db-provisioning, we have a single subnetwork for all the VMs that are going to be created, right?
B: Yeah, that's kind of a downside; it's not really conforming to how we do things right now in staging. So we supply a single subnet that's huge, it's a /16, so we have a lot of room to create a lot of nodes, but all of them will be in this subnet. So as you...
B: Consul servers, Postgres and PgBouncers, right, and of all the different shards. So if we have, like, a CI shard and a main shard, all of them will be...
A: Okay, yeah, I mean, I guess that shouldn't bring any practical problems. It's different from what we do, but it should work. I'll be starting to test this on the db benchmark, and here are some changes that I haven't committed, to the db benchmarking environment, to see how it works on an existing environment.
A
But
one
of
the
things
I
was
wondering
is
okay,
so
you
you
put
an
internal
subnet.
So,
for
example,
here
we
said
10
7,
10,
176,
0,
0
16.
Should
we
put
that
in
in
this
variable
on
conflict
management,
even
if
it's
not
going
to
be
used
for
for
anything
just
so
that
people
don't
grab
that
subnet
work
when
they
create
something
in
that
in
this
project,.
B: Well, it's not really set in stone. I guess when Jarv chose this particular subnet, he chose it with the plan of deploying the whole db-provisioning in a separate project and using peering to connect staging and the new project. However we're going to deploy this, I guess we're not really going to use a dedicated project for db, so yeah, we can just use a new block of subnets, change it in the shards JSON file, and just reserve it in the...
A: Yeah, yeah, right, so we'll do this. Okay, yeah, I wonder... I mean, in that case, maybe it will make sense, what I said, of just adding it to this subnetwork variable, just so that the value we choose here is not used for something else on config management. Which is, I guess, not ideal, because it's not really a way to enforce that, but...
B: Documenting that this subnet is... it's kind of in use. I guess the way we do it is just: we look at this huge list of subnets and see what exists.
A: All right, I had a small question that's not really related to the demo, but I wanted to take the chance to ask it. So the current way we're doing things is that we are creating the CI cluster just as a standby cluster of the main cluster, so that it has the same data and the sharding team can test access to different databases. And I wanted to ask Ahmad if you knew whether we can do that standby-cluster configuration using db-provisioning, or do we have to do it...?
B: Well, I'm not quite sure how this setup you're referencing works. Like, are you saying that we have an existing working Patroni cluster, and a separate cluster is replicating from it as the standby leader and so on?
A: Right, which is what we're doing in the CI cluster that we created using Chef and Terraform.
B: ...do it in the project configuration; we can specify it, of course, as long as Omnibus Patroni allows, but...
A: Right, yeah. So I was checking how we do it on Chef: we do it with the gitlab-patroni cookbook, and this goes into the DCS configuration. Yeah, I'll check how I can do this, because probably when we deploy this on db benchmarking or on staging, we will have to do it this way, even though this is not the way we're going to do it once the sharding project is finished; it's just an intermediate state. So I'll look at that and ask you if I get lost.
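For reference, the standby-cluster setup described here ends up in Patroni's DCS configuration via the cookbook; a minimal sketch of that section (host name and values are placeholders, not the actual environment's settings) might look like:

```yaml
# Sketch of Patroni's standby_cluster DCS section; names are placeholders.
bootstrap:
  dcs:
    standby_cluster:
      host: primary.example.internal   # leader endpoint of the source (main) cluster
      port: 5432
      create_replica_methods:
        - basebackup
```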
A: Okay, well, that's everything I had, so I think we can end the meeting.