From YouTube: 2021 07 28 EMEA Sharding Group Sync
B
As part of that we had a method that would retrieve configuration settings for the database, and then we had a separate method that would disable prepared statements. That method that returns configuration settings was changed so that after the first call it would just reuse the result from previous calls, because the method itself basically did the same thing every time. The bug was that when we wanted to disable prepared statements, we would use this method and end up with an older and smaller database pool size, because the result was now cached.
B
At least that's what I understand. And I mean, it's a really silly thing: it's a method you look at and go, oh yeah, we can just reuse this, there's no reason it should fetch all this stuff every time. And apparently another method didn't expect that, basically. But it wasn't something our tests were catching, or something that's even obvious just by looking at the code, I think.
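A minimal sketch of the failure mode described above, in Python with hypothetical names (the actual code lives in GitLab's Rails database layer and is not shown here): a memoized configuration loader keeps handing back its first result, so the path that disables prepared statements silently inherits a stale, smaller pool size.

    # Hypothetical illustration of the caching bug; all names are made up.
    class DatabaseConfig:
        def __init__(self, loader):
            self._loader = loader      # e.g. reads database.yml / environment
            self._cached = None        # memoized result of load_settings()

        def load_settings(self):
            # After the first call, every caller gets the same cached dict.
            if self._cached is None:
                self._cached = self._loader()
            return self._cached

        def disable_prepared_statements(self):
            # Bug: reuses whatever was cached earlier, which may carry an
            # older (smaller) pool size rather than the current one.
            settings = dict(self.load_settings())
            settings["prepared_statements"] = False
            return settings

    # The pool size changes after the first (cached) load, but the
    # prepared-statement path still hands back the stale value.
    current = {"pool": 10, "prepared_statements": True}
    config = DatabaseConfig(lambda: dict(current))
    config.load_settings()             # caches {"pool": 10, ...}
    current["pool"] = 20               # configuration changes later
    print(config.disable_prepared_statements()["pool"])   # prints 10, not 20

In this sketch, a fix would be to either drop the memoization or have the prepared-statement path reload fresh settings.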
A
How's the benchmarking environment going?
C
Good. This week we are finishing out the PgBouncer level, so that's okay. I need to work on two more components: the JMeter instances, setting all of that up, and the Netdata that will be our monitoring to measure the tests. Then the environment is ready. Afterwards I need to create, together with the infra team, the playbook that will execute the failover itself and switch the traffic, like what we are doing.
A
We didn't have an agenda; I was just asking a couple of questions. I saw Camille running through the topics there. Anybody else have anything they want to cover today?
D
The incident, was that caused by our MR? Yeah, okay, that's about the PK migration work, this sharding work? Sorry Mark, okay.
A
No, the first PK migration went through last night without a problem.
D
In the future, my recommendation, or my thinking, is that maybe when we make changes to production, because we are in soft PCL, we probably want to babysit them together with the SREs for a period of time.
B
No, I think the challenge there is that it's not really predictable when something will be deployed. It actually surprised us that this already got deployed yesterday, because I think it only got merged two days ago and due to...
B
Right, yeah. You know, I am replying to some of the issues with some details there, but the challenge there is basically that once something is merged, it can be deployed, let's say, five minutes later or 12 hours later, right? So it's... right, this is...
B
No, but basically my point there is more that, whether we have, say, a PCL or not, at least in our current state we don't really have a way of saying: oh, we expect this to be deployed then, so please have these people be around.
E
This is just a short-term thing, given the PCL and the instability and all that. So, like I said, I was talking to Amy this morning, and that's what she said: she was gonna gather some data and then follow up to see what options we had to make this more predictable, and make sure that, you know, we're not having people waiting for 12 hours for this thing to go out, nor having another incident with no one available.
D
Cool, yeah, okay. That's something maybe we want to bring up at the stand-up today: that if it can be more predictable when this thing is going to be deployed, we can help to babysit the deployment, right? Just during this soft PCL, I mean; you know, once we lift the PCL, then it's probably hard to keep up the practice of babysitting, but we'll try our best.
F
So the question is that we're gonna be making changes that potentially impact the fundamental behavior of the current application, and there is always risk associated with things going sideways. So can we maybe ask Delivery to look at, I don't know, the group sharding label or whatever, and proactively ask us when they will be deploying that? Because I think they already look at the changes being deployed, and they already have a list of the issues and merge requests going into the next auto-deploy.
F
We'll be shipping changes in different periods; it's not gonna be like we merge merge requests every day, so sometimes it's gonna be more, sometimes less. So maybe if they could keep a close eye on this label and say, hey, we're gonna be deploying, can you babysit with us during this period of the deployment? Would it make sense to make this request to Delivery?
F
Because I think, from our perspective, it's pretty hard to track exactly when the deployment happens, even with all the PCLs that are happening. It's pretty well known to them when they're gonna be deploying, so maybe it's also pretty easy for them to figure out that the sharding work is of higher risk than the others. So maybe they can proactively ask us when they see the work coming in.
D
Yeah, let me give some background. I talked to the product managers from the ops side, trying to prioritize the discovered issues there. There are 18 of them, so quite a few, but they declined. And then they proposed an alternative: to have, you know, a headcount reset, to be a self-sufficient team.
D
I have my opinion, but I just want to make sure this team is aware, and I wanted to hear your opinions so that this can be discussed in your stand-up today. So I just want to collect some thoughts here. What do you think about this? I know my answer, but I want to hear your opinions.
F
I think the headcount reset, unless we steal four people from Verify, will not be effective. So if it means we steal four people, that will have the same impact on the Verify work as asking them to do it, but it also increases, I guess, the management overhead on our side.
F
It will not be super effective in this case, and the same probably applies for all the other work that we are finding. Do we steal two Secure engineers, or do we steal one engineer from Runner? So we kind of end up robbing, or crippling, the composition of the feature teams.
A
I agree with you, Camille, for what it's worth. I think a headcount reset is overly complicated, will take a long time, and will delay the onboarding of developers actually using, or getting used to, this fundamental change that we're making to GitLab. So I think it should be embedded within the teams as they are.
F
I'm kind of truly thinking that the work we have there should probably occupy them for, I don't know, two milestones for the majority of that stuff, so it's a fairly short roadblock, and a headcount reset has much bigger repercussions overall for the longevity. So I think the problem is the quantity, because there are a lot of small things to do, but if you look at these issues and join them conceptually together, there are probably three different patterns to solve out of that.
F
We're gonna have more of them over time, but that's probably my sentiment about these issues today. I'm kind of thinking that Secure actually has a much harder problem to solve than Verify. I mean, the problem for Verify to solve is queuing, which is a big problem, but all the other items are actually pretty small ones where you apply the same pattern to each of them. It's just that there are many of these small things to fix.
D
Excited about the meeting today.