From YouTube: 2022-05-02 Database Scalability WG Weekly
A
All right, let's get started. Welcome to the Database Scalability Working Group, Monday, May 2nd. Significant updates since the last meeting: I have a few items here. The phase 4 rollout was completed at 100% last Thursday. There were some issues with database saturation and pool size configurations, but we ultimately collaborated with SREs across time zones on a roll-forward approach that helped us address any issues quickly. We haven't seen any saturation issues since the last config updates, last Thursday.
A
I just linked to the phase 4 retro issue. I tried to ping people who were involved, but yeah, anyone else who's not on there, please feel free to take a look and chime in. We're just looking to reflect back on the last couple of rollout phases and hopefully take any learnings from these last couple of phases into the final phase 7 rollout.

A
Next up, phases five and six are kind of happening at the same time: essentially developing and, I guess, aligning on the failover plan, doing test runs in staging, and making sure that we define any DR scenarios and rollback plans. These conversations have already been in progress, but I guess we can all expect to see more of a focus on making sure that these move forward, hopefully in time for our estimated rollout time frame of mid-June.
A
Here I linked to a note today in the issue about provisioning the database layer for the testing environment. It looks like we're pretty close, but the SRE who had been working on that with Jose is taking some time off. So there's a call for help in there to get someone else focused on it, and there was also a little bit of chat in Slack. It looks like the SRE managers are talking about it and should have an update for us by Wednesday on who can help out there. Camille?

A
Do you want to verbalize what you're typing in the item above there?
C
So we continue working, on the development side, on the migration stuff. The migration is done, except for finalizing some background migrations.
C
It's not yet tested, because we're actually going to be running that in production once we've run it on staging. But it's not really at risk of delaying phase six or seven in production.
C
We changed very little in the application itself. For the testing, we actually need the database structure as close as possible to production, with some form of the application running. I mean, it can be a single node that we can really touch, to see how the application behaves once we start executing disaster recovery scenarios, because the main reason for this testing exercise is exactly that: the disaster recovery scenarios, before we execute the happy path on staging.
C
So this is, I think, what we've learned we need from the testing as of now. And, of course, we probably want to run QA tests against this testing environment, so a single node is good enough, or some kind of performance testing, to see if everything is working properly as far as the application goes.
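For illustration only, the kind of single-node DR drill Camille describes could be smoke-checked with a small script like the sketch below. Nothing here is from the meeting: the DSN, host names, and health endpoint are hypothetical placeholders.

```python
# Minimal post-failover smoke check for a single-node DR drill: verify the
# promoted database accepts connections as a primary and the lone
# application node still answers. All hosts and DSNs are hypothetical.
import urllib.request

import psycopg2

TEST_DB_DSN = "host=db-test-node.internal dbname=gitlabhq_production user=gitlab"
APP_HEALTH_URL = "http://app-test-node.internal/-/readiness"  # hypothetical node

def assert_promoted() -> None:
    """Fail if the node is still a read-only replica after the drill."""
    with psycopg2.connect(TEST_DB_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery();")
            if cur.fetchone()[0]:
                raise RuntimeError("still in recovery: promotion did not complete")

def assert_app_healthy() -> None:
    """Fail if the single application node does not answer its health check."""
    with urllib.request.urlopen(APP_HEALTH_URL, timeout=10) as resp:
        if resp.status != 200:
            raise RuntimeError(f"health check returned HTTP {resp.status}")

if __name__ == "__main__":
    assert_promoted()
    assert_app_healthy()
    print("DR drill smoke check passed")
```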
A
If I'm understanding correctly, that should be the environment where we're trying to provision the database layer, the testing environment. Is that right? Is that the environment you're referring to?
C
Yes. We mostly need the database architecture, really, to test these fallback and failover scenarios with some sort of application running. But we don't need all the staging Kubernetes nodes configured.
C
We're probably just fine with a single node, because from the current rollout it seems that we're not going to be touching the application at all, except maybe stopping it, or maybe not even touching it at all, depending on the scenario. We're rather going to be changing the configuration of the database for CI. As for the rollout, it seems quite similar to a pg_upgrade.
C
The steps are just going to be significantly shorter and significantly simpler, because it's probably going to be, I don't know, a reload of the PgBouncer config after we write a new one, maybe execution of some SQL commands on the primary to disconnect cascade replication, or something of that sort. So they're not going to be as intrusive as pg_upgrade, and we really want to avoid changing too many things. So we're really thinking about the minimal set of steps, to not touch the application if it's not really needed.
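For illustration, a minimal sketch of the two cutover steps just described, reloading PgBouncer after writing a new config and detaching the cascade replica on the primary, might look roughly like this. The file path, DSNs, and application_name are invented for the sketch, not taken from the team's actual runbook.

```python
# Rough sketch of the cutover steps described above. Every path, DSN, and
# name here is a hypothetical placeholder.
import shutil

import psycopg2

PGBOUNCER_ADMIN_DSN = "host=pgbouncer.internal port=6432 dbname=pgbouncer user=pgbouncer"
PRIMARY_DSN = "host=db-primary.internal dbname=postgres user=postgres"
CASCADE_REPLICA_NAME = "ci-cascade-replica"  # hypothetical application_name

def reload_pgbouncer(new_config: str) -> None:
    """Install the newly written config, then tell PgBouncer to re-read it."""
    shutil.copy(new_config, "/etc/pgbouncer/pgbouncer.ini")
    conn = psycopg2.connect(PGBOUNCER_ADMIN_DSN)
    conn.autocommit = True  # the admin console does not support transactions
    with conn.cursor() as cur:
        cur.execute("RELOAD;")
    conn.close()

def disconnect_cascade_replication() -> None:
    """Terminate the walsender feeding the cascade replica so it detaches."""
    conn = psycopg2.connect(PRIMARY_DSN)
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute(
            "SELECT pg_terminate_backend(pid) FROM pg_stat_replication"
            " WHERE application_name = %s;",
            (CASCADE_REPLICA_NAME,),
        )
    conn.close()
```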
A
Yeah, my understanding of how we were going to approach that was: once that database layer was configured, we would essentially try to spin up something with GET for the application and then just point that to what we had set up for the database. Is that right?
D
Sorry, Camille, jumping in: what we're doing is the same way we do the benchmarking. We spin these things up from clones from production, and then, as far as I understood, we were going to take one of the reference architectures and point it to that database. So it should be fairly straightforward.
B
We already have a 10k environment built for this purpose. We might be able to repurpose that one and point it at a new cluster if we need to do so.
D
The problem is, because we're using production data, we have very specific requirements for the environments. There's much higher access control, so for the database benchmarking environment, there's only access for very few people, and we're limited by that. So I know we're standing up another environment to test this, which is getting a feed from production, and then we'll instantiate an instance and point it to that database.
D
Yeah, the procedure is, we do this all the time in database benchmarking, but obviously we don't have all the monitoring there, because mostly we're testing database-specific things. So, yes, they're standing up the monitoring environment in this separate environment to be able to gather whatever data we need.
A
All right. And then, just the update from Ops: I'm actually going to remove this section now, pretty much everything there is done. We were asking for some clarification on whether this larger redesign of how CI-owned runners work might be a blocker for decomposition, but it is not. And yeah, it looks like, Camille, you've confirmed that as well.
A
Thank you. So, going on to reviewing rollout progress: no confidence level or ETA changes for this week. We're still at fifty percent confidence for mid-June.
A
And the next item is pretty much a repeat of what I was talking about. This is the comment from Gregor about why we don't need the larger redesign for CI-owned runners right now.
A
All right, Camille, do you want to verbalize your pointer?
C
Just as you noted, I think the Runner team is probably best suited to that, because they're looking into the bigger problem of runner management. But this feature of CI-owned runners: it's not just the performance; it also has usability problems in its current form.
C
It's
pretty
messy,
like
from
the
perspective
of
the
user,
to
use
that
and
just
as
showing,
like
maybe
hundreds
of
runners,
just
just
doesn't
scale
well,
so
it
shows
like
the
like
the
the
things
that
we
did
like
we
fixed
that
to
make
it
somehow
work,
but
there
is
a
bigger
deficiency
than
that.
Only
so
very
likely.
This
feature
should
be
like.
We
thought
what
we
want,
how
how
we
want
people
to
use
that
effectively,
because
once
we
do
that,
the
performance
deficiency
will
also
become.
A
Thank you. And yeah, just looking at the progress charts, we closed out a couple more issues. A lot of the remaining issues are related to these next couple of phases, testing the rollout under various scenarios, and then some follow-up work that's in progress, like the feature flag rollout for the CI namespace mirrors work and some things like that. So I think, even though it's kind of getting to that plateau, we're still on track.
A
Most of these remaining issues are in progress or on track to be.
C
I learned about this pg_upgrade today. On one hand, it's great, because there's a lot of overlap with the downtime approach, if you would want to execute that, and a lot of scripting around it. But there's also, I guess, the risk of maintenance window conflicts and, to some extent, the same people helping us to execute this rollout. And this, I think, relates to staging and also production.
C
So I guess this is maybe, right now, the biggest of the factors for the confidence in the time when we're going to deploy that, because we need enough attention for that rollout. We are pretty confident in the solution, so we don't anticipate anything going wrong, but we also need to properly test it ahead of time. So we're, to some extent, fighting for the same resources with the pg_upgrade.
C
But on the other hand, we're probably going to be using a lot of the pg_upgrade knowledge for the process execution, because to a major extent it's going to be pretty similar. So that's my comment, Gary.
D
Yeah. Because these things involve downtime, we have commitments to customers and users to pre-announce this with enough notice, so we should be able to make sure that these things are nowhere near each other. And just in case anybody ever says "oh, we might be able to do these two things at the same time", the answer is no.
D
These are very serious; this is very serious, so we'll keep them separate. But we have to plan way in advance, so we'll make sure they do not overlap in any way. And yes, we should be able to reuse or leverage some of the tooling that's going into the pg_upgrade to help out with this migration.
A
Yeah, thanks for that. That did come up. We have a weekly decomposition stakeholders meeting on Wednesdays, and that did come up there, I think starting a week or two ago. So it's something we have as a regular update item there as well, just to kind of check in on the timing of both of those, and we can make sure that those updates are also brought up here.
D
Yeah, actually, since that's already being discussed, we should probably predetermine which one takes precedence, right? If we need to make a decision about what goes first, then let's make it; we can already make that choice. I think functional decomposition is more important, because the upgrade is something we're going to redo over and over again, once a year; we're very committed to that. So I think it's probably easier to move the upgrade up and down than to tinker with functional decomposition.
D
Now, because it's the same process against both databases, and because we have developed the tooling to be able to do this regularly: we're committed to Product that we will do a major Postgres upgrade once a year, so the focus for this year's upgrade has been on making it reusable, unlike the last two big upgrades that we've done, where we just worked on that one thing, because we hadn't done this before. So I don't think it matters.
D
I don't know, though, because I haven't been working on that; Jose or Marcel would know. If we do it before, then both clusters get done essentially one right after the other.
D
If we do it after, then we could run different versions for a while, but operationally, we would then have to get another maintenance window for the second cluster. So the way the upgrade is being prepared is that both clusters get done serially, but in the same window.
D
Before functional decomposition is done, those clusters are joined, so we will have to do both of them at the same time no matter what. After functional decomposition is done, we have some flexibility, but it's not a best practice for us to be running clusters that are tightly coupled to an application on different versions of Postgres, so we would want to avoid that. At the end of the day, again, I think functional decomposition takes higher preference than the upgrade.
C
It makes sense to me, I think, from the performance point of view, given the headroom gained, and also the probably slightly easier initial steps compared to pg_upgrade.
D
Headroom-wise, Postgres 13 is going to give us a little bit better performance, maybe five to ten percent. So it's not going to be like when we did 12, where it was like 20 percent and we really wanted 12. And functional decomposition itself has just given us very significant headroom.
C
Yeah. From the things that we've observed after phase 4, it seems that we're going to remove about 30 percent, or maybe 40, something between 30 and 40 percent, from the main PG host. So we're going to have significantly more CPU headroom, and significantly more headroom for new connections if we want to add more parallel hosts.
C
So this is what we're seeing right now after phase four, and this is excluding all the storage and vacuuming changes that will simply come later. This is just looking at the pure, I guess, connection usage that's going to be redirected to CI.
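One simple way to put a number on that connection headroom, sketched below with made-up connection details, is to count backends per database in pg_stat_activity on the main host and compare snapshots before and after a phase:

```python
# Count connected backends per database on the main PG host; comparing
# snapshots before and after a phase shows how much connection load has
# been redirected to the CI host. The DSN is a hypothetical placeholder.
import psycopg2

with psycopg2.connect("host=db-main.internal dbname=postgres user=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT datname, count(*) AS backends"
            " FROM pg_stat_activity"
            " WHERE datname IS NOT NULL"
            " GROUP BY datname ORDER BY backends DESC;"
        )
        for datname, backends in cur.fetchall():
            print(f"{datname}: {backends} backends")
```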
A
All right, thanks for the discussion. And yeah, any other questions or comments on that point?
A
Cool, the last item here. Yeah, I talked about this a little bit last week, about just wanting to clarify some of the exit criteria as we get towards the end of decomposition here, and trying to figure out what else there might be to do in the working group. It's also just for me, as I've taken over as facilitator, to understand what's been done and what exactly we're looking for in the exit criteria for the working group. I think we had some pretty high-level goals, but I also wanted to make sure that we were very clearly defining that exit criteria and figuring out: are those high-level goals things that we actually want to complete before we exit the working group, or are we just trying to make sure that we're on good footing to move forward? And I think Fabian also suggested that we keep the working group going until we finish decomposition, but then consider after that, maybe, whether we want to roll this into the discussion about Pods or whatever those next steps might be.

A
So I've started to draft an MR to try to clarify these. I didn't get as much of a chance to work on it as I wanted, so my plan is to go back and take a look at some of the work that's already been done on blueprints and ideas, and then make sure that we're also reflecting the progress that has been made, as well as what's remaining. So yeah, any feedback on that would be great, and I'll ping folks and remind everyone about it next week.
A
All right, that brings us to the end of the agenda. Anything else?