From YouTube: Repeatable DB creation Demo (2021-08-25)
A
Hey Jose, we haven't started yet.
A
I think we have enough people; let's go ahead and start. First item is on blockers, and I just wanted to give you a link to all of the blockers across all projects. I moved these to a group label so that you can see that they span both Omnibus and GET.
A
Those are the priority one blockers, and I just wanted to highlight a few of them. One of the highlights here is that since last week, we decided we're going to have one Consul server that's going to service all the database shards, so because of that, we need to make sure all the service names are unique.
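As a minimal sketch of what unique service names could look like when one Consul server fronts every shard: the `{shard}-db-{role}` pattern, the shard names, and the role names below are all illustrative assumptions, not the actual naming scheme discussed in the meeting.

```python
# Hypothetical naming scheme: combine the shard name and the Patroni role
# into a Consul service name that is unique across all shards.

def consul_service_name(shard: str, role: str) -> str:
    """Return a Consul service name unique to this shard/role pair."""
    return f"{shard}-db-{role}"

shards = ["main", "ci"]        # assumed shard names
roles = ["master", "replica"]  # assumed roles registered with Consul

services = [consul_service_name(s, r) for s in shards for r in roles]

# With a single Consul server serving all shards, names must not collide.
assert len(services) == len(set(services))
print(services)  # ['main-db-master', 'main-db-replica', 'ci-db-master', 'ci-db-replica']
```

Any scheme works as long as the shard identifier is part of the name, which is what removing the hard-coded service names in Omnibus enables.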
A
There's some hard-coding that happens for service names, so we're trying to get those changes through Omnibus, and it's going quickly, so I don't think it's going to be a problem.
A
Item number two, just to give you an idea of what's being worked on now. Right now, Ahmad is working on the ILB. We don't have support for this in GET, so we're probably just going to work around that by creating our own. Ahmad, if you're here, could you just give a quick update on the ILB stuff?
D
Yeah, sure. As you said, we don't have the necessary components in GET right now, so my plan was to reuse some of the stuff we already use for provisioning our load balancer in production and staging.
D
But we still need to extend the GET branch we're using right now because, for example, all the nodes are provisioned in a single zone, and the GCP instance groups expect that you have them across all three zones, and so on. So we still need to change a few bits in GET itself.
A
Okay, what I'm working on right now is exporters and monitoring, just making sure we have all the Prometheus exporters that we need. This is going well. I expected to have this wrapped up sooner, but I think we'll probably finish this week, and then it's going to be a matter of getting Prometheus deployed into Kubernetes and figuring out how we're going to hook it into Thanos. That's probably the biggest risk right now, because there are just some unknowns around that. Craig, you have item number three?
B
Yep, so I'll just verbalize it: I'm assuming the sharding group will be able to use this repeatable database creation for their testing and staging environments, and we were just discussing when not having an environment will become a blocker. So that leads to a couple of questions.
A
Yeah, so I think there are two aspects of this. One is having multiple database shards, or probably just one shard to start. In addition to the main shard, we'll probably have a CI shard or something; we provision that and we hook it up to staging. What we'll provide is a PgBouncer endpoint and a Consul service for the replicas, and we'll be able to configure staging using those. And then there are some questions that remain.
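As a hedged sketch of what configuring an environment against those two endpoints might look like: writes go through the shard's PgBouncer endpoint (PgBouncer's default port is 6432), and reads resolve the replicas through Consul's DNS interface (`<service>.service.consul`). The hostnames, service names, and database name below are assumptions for illustration only.

```python
# Illustrative DSN construction only; hosts and names are made up.

def primary_dsn(pgbouncer_host: str, dbname: str, port: int = 6432) -> str:
    """Writes go through the shard's PgBouncer endpoint (default port 6432)."""
    return f"postgresql://{pgbouncer_host}:{port}/{dbname}"

def replica_dsn(consul_service: str, dbname: str, port: int = 5432) -> str:
    """Reads resolve the replicas via Consul DNS (<service>.service.consul)."""
    return f"postgresql://{consul_service}.service.consul:{port}/{dbname}"

print(primary_dsn("pgbouncer-ci.internal", "gitlabhq_production"))
print(replica_dsn("ci-db-replica", "gitlabhq_production"))
```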
A
I'm not sure how migrations are going to work for the new database shards; I mean, is that something we have to add? Is this going to be a disruption to the current staging environment? I think that's why we wanted a separate environment to hook up initially. So the other aspect is that we don't have anyone actively working on creating a separate environment for sharding.
A
We have the reference architecture for staging, like a staging environment using the reference architecture, and we were thinking about borrowing that for this, but that's still unclear.
B
Yeah, so in any environment it's going to be streaming replication, so effectively we'll just have another copy of the database, and then we'll drop, or just not use, the tables that we're not going to use in the first iteration. So our focus doesn't need to be entirely on staging; we just need another environment where we can start testing the implementation, start running tests against it, etc.
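The approach described above, starting from a full streaming replica and then dropping the tables the shard won't serve, could be sketched as generating a drop script. The table names here are purely illustrative assumptions, not the real first-iteration list.

```python
# Hedged sketch: once the shard has a full copy of the database via
# streaming replication, drop the tables it won't serve.
# The table list below is an assumption for illustration.

unused_tables = ["merge_requests", "notes"]  # tables this shard won't own

def drop_statements(tables: list[str]) -> list[str]:
    """Build DROP statements for tables the new shard will not use."""
    return [f'DROP TABLE IF EXISTS "{t}" CASCADE;' for t in tables]

for stmt in drop_statements(unused_tables):
    print(stmt)
```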
A
Yeah, this tooling is around creating repeatable databases, not creating a full GitLab installation that can hook up to it, so that may be part of the confusion. I don't think it would be difficult to bring up a reference architecture that we hook into this.
A
When I was talking to Camille about this, he was thinking that the first iteration could be just treating this shard as a replica, where all of the tables will be replicated to it, and then we'll just activate the logic to use the replica as another shard, even though it will have all the same data as the main database.
A
So
if,
if
that's
all
we're
looking
for,
then
that
should
be,
I
think,
doable
for
mid-september.
I
I
would
like
to
know.
Maybe
we
need
to
talk
outside
of
this
meeting
on
exactly
what
the
expectations
are,
because
I
I
think,
that's
a
little
bit
unclear.
B
Okay, yeah, it sounds like this tool is separate from our needs, so we can maybe just invite you to the next sharding sync, which is on Monday, where we can talk about what our needs are and make sure that we have a path forward on that. So, okay, then you can disregard my next question.
A
So that's my goal: I'm shooting for mid-September. The monitoring may come after that if we fall behind. I think it really depends on how much of Ahmad and Alejandro's time we can have right now; the registry DB has been taking up some of our bandwidth, so I don't know, but that's the plan. The priority will probably just be to have a functioning Patroni cluster that's registered with Consul so that the sharding team can start using it.
C
[inaudible]

A
Right now, what we're doing is we're using Ansible and Terraform, and we're taking advantage of some of the Ansible and Terraform that GET has written, so we're basically plugging into GET where it makes sense. Does that answer your question? I mean, it's nothing beyond Ansible and Terraform, though.
C
We
have
a
patrony
and
like
if
you
need
to
do
a
changing,
poses,
we
need
to
change
patterning
and
who
will
apply.
The
changes
will
be
we'll
use
omnibus
for
some
reason,
with
chef
or
not.
Why
are
we
changing
directly
to
config
files.
A
So I don't have anything to demo today that goes beyond what we did yesterday or last week. If there's nothing else, I think we can end the demo meeting. Does anyone have anything else?