From YouTube: 2021 06 23 APAC Sharding Group Sync
A
Okay, so I've been asked to convey this, so I will vocalize the stuff from Craig, as is my understanding. I don't know who has asked us to develop a plan, but I think the current ask is to deliver the decomposed CI implementation into production by the end of October, to realize headroom improvements.
A
And, I think that's kind of it, actually. I think all of this kind of makes sense. I think the most important thing for us as a group to discuss at this point is what your perception is of that specific ask, and that's something I would like to capture, because otherwise we may get ourselves into a not very great situation.
B
My perception is that there is limited parallelism we can apply by increasing the number of team members, and we need to work through some of these challenges first; we are still very early in the phase of this development and have not actually finished developing the changes. From my involvement with production deployments, this is usually where you spend most of the time, because of the many granular steps that you need to make on the running system.
B
And since we are not yet truly ready, or rather only ready in a minority of the cases, to approach production, it's also really hard to expect that production will be able to deploy these changes on GitLab.com in, say, one and a half months or a similar amount of time, because we may only have reached staging by then. So my concern with just promising these improvements is:
B
We only control half of the story, which is development; the other half, which is production and rollout, is something that we are not yet ready to commit to, to a large extent.
C
I think we can engage them today to actually deploy all the production database infrastructure, which is a second Patroni cluster. Getting that up and running, streaming from the other one to keep in sync: I think we could have people from SRE executing on that today, and it will take a couple of them at least.
A
Yeah, I think what Camille said kind of holds, right? We cannot do this work ourselves; it needs to come out of infrastructure, because they have the access. So essentially that would be something where we can say: well, this is out of our control. We quite likely need a team of SREs to actually accelerate that, and otherwise I think this becomes even less feasible. Is that correct?
B
Yes, that's correct. But also, rollout here is not something you can just declare complete, as in: you switch over, and it only takes a few steps to switch over. We're going to have to fix a lot of aspects in the application as we discover more of them, so I'm anticipating a really heavy percentage rollout, or whatever else would allow us to validate aspects of the application.
B
Looking at past experience with percentage rollouts in production at a smaller scale, it just takes time, and there's a limited amount of the work that you can parallelize. We can of course start doing things earlier, but it's still going to take time to roll out each individual tranche, because it's going to require a few people involved, making a call, then testing, rinse and repeat, a number of times. And I'm usually a very optimistic person.
B
I mean, looking at, let's say, the Pages migration: on paper it couldn't have been easier to execute, and it took fifty percent to twice as long as we anticipated, and it was a very small portion of the system. So I don't expect that this is going to be different, I think.
B
I don't know, that's my perception about it, more or less.
C
Yeah, I think there are some things we can be doing in parallel that we know we've got to do. And, I don't know, maybe it feels from your perspective, Camille, like you're struggling to keep up with all the things that are going on, but I think we are at the point where we can possibly distribute a few things that you've already set the path on, and people can just look at, say, the merge request that you created and start removing joins.
C
We know what that looks like; we know we have to do it to get this job done, and we could have a couple more developers working on that, but we de-scoped it. And then I also think we can be engaging database reliability engineers and SREs today, if this is a company-wide initiative, because those are the people that can ultimately get this work done, and until they're involved, we're always at least a few months away from getting anything done.
B
I mentioned percentage rollout because, in my head, I'm starting to see a pattern: we replicate all data, we start treating the new database only as a replica, and this is how we test whether everything, the connectivity, the failovers, functions properly before we actually do the switchover. Because when we do the switchover...
B
...we need to know if we can handle that many connections, if we have all the monitoring in place, if these databases can actually handle that. And we need to use the new cluster initially in a way that is reversible, and the only way we can do that reversibly, because Postgres doesn't have a primary-primary way of operating and we have a single active primary, is by simply trying to leverage it as a replica.
B
But this is really only the first step, because as soon as we switch over, we've only got about half of our gains, because we still have to drop all the unneeded data. And now the question is why dropping all that data is something additionally risky that we need to test and validate: because you have two aspects to solve, one is the writes and the second one is the data store.
B
Probably we're going to be fine with just solving the first one initially and delaying the second one, the data store, because the data store is not really that big of a concern. But then it's also, I think, about setting correct expectations.
B
But that's the tricky part that we need to somehow... or maybe not really. I was thinking about the foreign keys and dropping them: we cannot really drop all the data, and, sorry, we can drop the foreign keys earlier, but we cannot drop the associated data before we actually do the switchover.
B
The thing is, we need some automated way to discover that we have relations that don't have foreign keys and then need to be manually deleted in Ruby, because there is another tricky part: in some cases, even though we are adding a project_id column, which is backfilled, we are not adding the associated has_many, so it may go unnoticed that we are missing the dependent: :destroy.
B
So we need to somehow script that, to figure out: hey, this is today a foreign key that doesn't have a corresponding has_many relation with an associated dependent option. And there's another aspect: I know about a piece of the security scans, one table that needs to be denormalized to include project_id.
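The check described above, finding tables that reference projects through a foreign key but are not covered by a `has_many` association with a `dependent` cleanup option, could be sketched roughly like this. All model and table names here are made up; a real version would use ActiveRecord reflection (`Model.reflect_on_all_associations`) against the schema, whereas plain-Ruby metadata stands in here so the idea runs anywhere:

```ruby
# Hypothetical sketch: detect foreign-key tables with no has_many
# cleanup declared. ASSOCIATIONS mimics what ActiveRecord reflection
# would return; the entries are invented for illustration.

ASSOCIATIONS = {
  "Project" => [
    { name: :ci_pipelines,   dependent: :destroy },
    { name: :security_scans, dependent: nil }, # missing cleanup!
  ],
}

# Tables that carry a project_id foreign key (also invented).
TABLES_WITH_PROJECT_ID = %w[ci_pipelines security_scans orphan_records]

# Returns the foreign-key tables with no `dependent` option covering them.
def missing_dependent_cleanup(associations, fk_tables)
  covered = associations.values.flatten
                        .select { |assoc| assoc[:dependent] }
                        .map    { |assoc| assoc[:name].to_s }
  fk_tables.reject { |table| covered.include?(table) }
end

p missing_dependent_cleanup(ASSOCIATIONS, TABLES_WITH_PROJECT_ID)
# => ["security_scans", "orphan_records"]
```

A report like this could run in CI so a newly added `project_id` column without a matching association is flagged instead of going unnoticed.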
C
So I think I gather, from all the things we just talked about, that we could have another, I don't know, five engineers working on this now, but that is dependent on us scoping these things in, because we know we need them. We de-scoped them in the meeting we had last week. If we scope all these things back in, then we do have parallel items to work on.
C
And I feel like we have a lot of senior members on the team that could probably help as coordination points for, yeah, several more engineers. That's my opinion on that, and I think we're aligned on what those tasks look like; they're pretty much the things that Camille just talked about. Some of it is stuff he's already put into a merge request that just needs to be broken out, put into smaller merge requests, tested really carefully, and merged.
B
What we can do is split into two tracks: one related to the infra and migration, which is exactly what you have been doing, where you work closely with the infra team on figuring out all of those aspects; and we simply split our focus, instead of maintaining a common shared focus across all of these initiatives, with the rest of us focusing on the other aspect, which is the application side. I think right now, according to your plan, the streams are too big.
A
Well, just to say it so that I understand this correctly: for this part, for the people here, we will try to parallelize as much of the work as possible on the application side. That's understood; that's something we can do. But then there's this parallel track of infrastructure...
A
...that needs to happen, where we don't have the permissions to do it ourselves, and we do need infrastructure to essentially help with that. And given the size and complexity of all of this, it is a large endeavor, so one person is not going to pull it off, I imagine. Even with three or five SREs I don't think October becomes much more realistic, but without infrastructure involvement now, essentially providing these resources, it becomes completely unrealistic.
C
I do want to also bring up that there is a bunch of application changes we just talked about that would need to be scoped in; that's kind of the point I was trying to make about parallelism. We de-scoped them: we have focused on application changes related to one table right now, but that means a developer on our team isn't encouraged to go and remove all the foreign keys today, even though they could. That's one thing.
B
I'm kind of thinking that doing this one table, looking at that merge request, is actually the right first step, because it uncovers a lot of the things we need to do in Postgres before we start working on bigger stuff. So right now my perception is: let's get this single CI table done as quickly as possible, in whatever form, and let's rework it afterwards, because without that I think we cannot really tackle many tables either.
B
From what I saw, I think we have a hefty amount of problems to solve, and then we can even split our focus on the application side: one part of the team only moving the schema changes and breaking the foreign keys, and another part handling all the load-balancing stuff, for example. And then we kind of join and merge at some point, when we start fixing the CI application side of things where the features will be broken.
B
But I think this work pointed at the single table is the right first step, and at least in my head I would not try to change that yet. I think we need to finish that first.
E
I also think, sorry, you go... okay, yeah. I also think it's too early to parallelize; we'll probably get confused, because we just don't have the setup yet. But that's not necessarily the biggest chunk. The next biggest chunk I'm thinking of is the migration tooling, right?
B
If you would, let's say, designate someone from our side to work with a group of SREs to provision and do the physical replication, what you're describing is more like an infrastructure procedure rather than tooling that we need to build. There is this concern about the switchover, which, Adam...
B
...you worked on something exactly like this, to kind of do this switchover dynamically, so this is something we're going to be looking at. But I think in the current schema it's more dependent on the infrastructure rather than on actually writing tooling; there is less coding involved in the current plan.
C
Yeah, as of yesterday we changed all the wording in the epics to not refer to migration tooling, because it isn't really a migration: it is replicating data and changing DNS records. I think Camille coined the term "reverse replication" or something; that's how we decided to describe it. You just copy everything, then change the connection and delete the things you don't need anymore, and it all happens outside of the application, via the DNS records. So the application doesn't even see anything change underneath; the config doesn't change in the application.
C
It's a single DNS CNAME record that gets pointed and switched. The application, well, the application goes through PgBouncer; PgBouncer detects the CNAME change and resets all connections, and the application sees a couple of seconds of errors. Then what Adam was looking at was retrying errors for a few seconds, to be as graceful as possible. In fact, I'm not even sure how much we need that, because we already have failovers happening in production, and they already lead to the same style of downtime for users.
C
Whenever a failover happens, there are always a few seconds of errors that the client, the application, needs to support in some way. What that looks like sometimes is: Sidekiq jobs fail and retry; sometimes a user gets a 500 and refreshes the page. But yeah, those are my thoughts on that. So there are very few application changes associated with all this stuff.
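The "retry errors for a few seconds" behavior Adam was exploring might look roughly like the toy sketch below. This is not GitLab's actual code; the error class, method name, and timings are all invented for illustration:

```ruby
# Toy sketch: wrap a database call so that brief connection resets
# during a switchover or failover are retried for a short window
# instead of surfacing to the user as 500s.

class ConnectionDropped < StandardError; end # stand-in error class

# Retry the block until `window` seconds elapse, sleeping `delay`
# between attempts; re-raise the error once the window closes.
def with_switchover_grace(window: 5.0, delay: 0.1)
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + window
  begin
    yield
  rescue ConnectionDropped => e
    raise e if Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline
    sleep delay
    retry
  end
end

# Usage: a call that fails twice (connections resetting), then succeeds.
attempts = 0
result = with_switchover_grace(window: 2.0, delay: 0.01) do
  attempts += 1
  raise ConnectionDropped, "server closed the connection" if attempts < 3
  :ok
end
puts "#{result} after #{attempts} attempts"
# prints "ok after 3 attempts"
```

As noted in the discussion, the same grace window would also smooth over ordinary failovers, since those produce the same few seconds of dropped connections.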
B
So it's even better: basically you are dropping the existing connections from PgBouncer, and the application reconnects.
D
You need to somehow get into maintenance mode, so whether you're going to lock a few tables or stop all traffic or whatever, this is something to be decided. But the easiest way, the least impactful way, is to just lock the tables that you're moving.
C
Yeah, we discussed that today. I was on a call with Jose just before this one, and we've got a few good plans. One is the locking mechanism, as Adam suggested. Another one is the fact that PgBouncer could just be blocked from Postgres for a period of time, because we have a dedicated PgBouncer for writes that is only doing writes, so it could be blocked for a few seconds. And, as you said before, Camille, it's even better: the application doesn't even see dropped connections; it actually...
E
One moment: my perception is that we don't know our confidence level. That's my perception, because...
A
Okay, so essentially from the application side, all of the development still needs to happen, so we have some level of uncertainty there, but let's say it's safe to assume we work as fast as we can. And then from the SRE side, we know that we have this other fifty percent that we don't control without additional support from that team.
A
And can this be parallelized? Like, if we had an SRE team of, I don't know, four people tomorrow, would we be in a position where we are ready in, say, three months and then we just need to flip the switch? I imagine that's not how this is going to go, right? You do all the changes, and then you need to gain all of the confidence in actually... yeah.
B
I mean, maybe not two or three times longer, but it also takes some amount of time. So can we actually have a grounded understanding that we'll make it by October? I don't think so; maybe some bits. Maybe we start testing the proof of concept, because we have most of the work done around the migration, the application can be split apart, and we've disconnected most of the joins; but this will probably uncover the places that still need to be fixed.
B
So I think my confidence level is like 30 or 40 percent that we can run all of this using replicas by October.
A
This is exactly where I wanted to go next, because this October date, I don't know where it's coming from. I think you said it may come from database scalability; I've heard it as well, right. But my gut feeling right now, having heard what I have heard, is that it's not a "sure, that sounds reasonable, let's shave off 50 percent of the project timeline"; the confidence level is very low.
C
Yes, the specific October thing came up in the scalability working group meeting on Monday. Eric asked why our timeline doesn't show that we're delivering by October, and Craig said: because we think it's going to take longer than that. Then Eric said: but we already have another document on the GitLab.com stand-up which says that by October we have to have major improvements, otherwise we've got a problem. And then Craig said: I think that date needs to be updated to reflect what we know now, plus some other efforts that are ongoing.
C
So actually it's possible that people just need to get on the same page about October not being the end date anymore.
D
I think that we should ask... so, Jose periodically creates projections about our database health. Maybe we should ask if we can have a new projection, a prediction about how our database health is, because we have run a few initiatives during the past months and we are still running a few, like the redesign of the CI tables and adding the ci_pending_builds and ci_running_builds tables, which are hopefully going to help a lot.
D
So maybe we should have a new projection so that we have a more realistic deadline, because it's...
D
Yeah, but we don't have a problem with capacity; we had a CPU problem. Our problem was CPU, not capacity on anything else: we have no problem with disk, or with writes, or with anything else. Our problem was CPU spikes due to very expensive queries that were related to size, so a lot of effort was driven towards that.
D
We
start
if
we
fix
a
lot
of
the
some
of
the
ca
problems.
If
we
fix
the
recursive
city
queries
and
those
things
we,
hopefully
we
have
less
cpu
spikes,
we
and
we
have
more
headroom
so
because
our
problem,
we
were
not
capped
on
capacity
yet
on
a
resource
problem
was
cpu,
not
a
memory
or
disk.
As
far
as
I
know,.
B
Yes, but pending jobs, even today, don't use the primary; they are executed on a replica. So if you are looking at the CPU usage from the primary's perspective, it will not make a difference, because it's always using the replica. There was a bug that we think was incorrectly using the primary in some cases, and fixing it actually provided a lot of headroom. But I just want to correct the perception of what pending builds offers.
B
Yes, from what I checked recently, around the middle of last week, the load balancing work moved around 10 percent of the queries off the primary, and those efforts are not finished; we continue adding more workers to leverage it. So I think it's going to have a pretty noticeable benefit.
A
Okay, but to me, for example, that sounds like a very worthy thing to do independent of our own timelines. I would very much like to have a "yes, in October we're going to run out of headroom, so we need to do many things", or "actually, that's not true", because I think that's data that we depend on. So we can accelerate that. Okay, but I don't want to turn this into another one-hour-long meeting.
A
I
think
I
think
overall,
I'm
hearing,
there's,
maybe
some
some
things
we
could.
We
could
accelerate
by
getting
like
additional
infrastructure
folks,
but
overall,
the
like
confidence
in
just
sort
of
magically
delivering
this
into
production
in
three
months
is
pretty
low,
and
it
depends
on
many
other,
like
folks
and
infrastructure
capacity
and
and
whatnot,
and
that.
E
Yeah, I mostly agree that on the migration we are heavily dependent on the infrastructure part, and yeah, it's a very difficult migration.
E
I looked at the MR that disables joins and tries to decouple the code from the main database, and that looks very promising to me. There is one concern, though: it's working well in most of the test cases, but how will it perform with the current production database?
C
Yeah, on the timeline: I don't think it's really that interesting for me to say when I think all of this will be done. But what I don't want is this: if we don't think we'll be done with all of this by October, I don't want that to be translated into "we don't need help from the infrastructure team to get this to production until October", because we need help today. We have things for them to do today.
D
Yeah, we depend on the infrastructure part a lot, and we don't know.
A
Okay, thank you, thanks for clarifying that. I think that's important to point out. But I think it's also important that, you know, having enough SRE support is not actually going to mean that this is going to deploy in October; it just means that we move forward at pace and we're not blocked in some areas.
E
No matter what, we may be able to fit everything into October, like there is some confidence, but I question why we even need to do this. We can parallelize things, we can drop things, right, but again it comes back to why. So yeah, I listed some rough estimates, but...
A
Yeah, so I think we need to maybe also be clear that this is very, very unlikely to be delivered, and ask why we are faced with this, and also: what does the data say? Plus: are there other things that can happen in parallel?
A
If
there's
because
there's
a
very
complex
thing
to
do,
but
maybe
there
are
simpler
things
that
we
can,
that
we
can
do
that,
help
us
bridge
the
gap
right
and
if
that's
a
that's
an
option
right
then
we
should
definitely
do
this
right,
but
if
he
is
yeah
risky
to
rush
the
most
complex,
yes,.
A
Well, I think, again, I have as much context here as you. I think the ask is: develop a plan to deliver this by the end of October, and I think that's the time constraint. And I think we have a plan for how to do it; we just don't have a plan for how to deliver it in October. And I think that's actually what I've heard now.
A
It
feels
to
me
as
if
you
know
the
simple,
like
the
things
that
you
can
do
to
make
a
plan
like
this
work
is
add
more
people
right,
that's
not
necessarily
going
to
make
it
faster.
It's
a
little
bit
like
mystical
man
month,
you
know,
like
add
more
saes
is
a
requirement
to
even
move
forward,
but
even
then
you
know
you're
not
going
to
like.
Actually
just
do
this
in
by
october.
The
complexity
of
that
project
sort
of
leads
to
a
fact
where,
like
that
seems
well,
I
guess
you
can.
D
One last thing regarding the goal of October: Camille has been building an amazing blueprint about the database, and I really love the picture and the approach there. And we now have the same approach in the database group with another blueprint, where there is this discussion: if we want to move the wall away, there are a lot of other things that we are doing, and can do, up to the point where we are going to break, without sharding, without decomposition.
D
So there is what Camille is discussing about de-bloating or partitioning, and in the database group we now have this target to keep all tables below 100 gigabytes. All those initiatives together, I think, will help a lot with moving the wall away.
B
Yeah, I would really argue that, yes, we should probably define our timeline and what things we perceive as feasible, and, if the concern is capacity, provide the additional things that we perceive from our perspective as worthwhile to do right now; that's going to be how we move this October date.
B
Like
we
know
of
the
sound
stuff
that
we
can
do
today,
that
are
of
the
moderate
difficulty
and
that
we
can
do
and
we're
not
really
because,
like
I'm,
actually
having
this
challenge
with
the
like
the
efforts
that
are
happening
in
the
parallel
on
the
stuff
that
we
are
working
right
like
this
is
why
I
started
writing
this
blueprint
about,
like
I'm,
having
a
challenge
that
like
if
we
don't
upload
stuff
first,
it
will
not
uncover
some
of
the
like
misguided,
like
structures
in
the
in
the
tables
that
we
use,
and
it's
really
hard
like
to
make
them
the
right
partitioning
yesterday.
B
But
I'm
also
like
having
a
challenge
like
how
many
things
we
can
do.
Concurrently
with
the
like
the
bloating
efforts,
partitioning
efforts
and
starting
efforts
like
there
is
like
like.
We
need
to
coordinate
that.
So
we
need
to
pick
the
right
ones
that
we
know
that
we're
not
really
impart
our
starting
timeline.
B
So we need to pick the things that are easy and fast to execute, that can finish before we are actually going to be impacted. So what I'm kind of thinking is that we can propose alternative things that can happen concurrently to our work.
A
Yeah, so I think this is going to be quite important. So can somebody volunteer to create a list of parallel efforts, or, well, concurrent is probably the better word, that help with headroom?
C
The database team is focusing on that already, yeah; they would have a list of prioritized things. I think Camille's point might be: re-prioritize their list if any of the top items are going to slow down our timeline. If they are doing work that conflicts with ours and will make it harder for us, then we need to get them to stop working on that. But I think we're supposed to be under the assumption that the other database team is currently working on buying headroom. Yeah, okay.
A
All right, I honestly think there's not that much more to say here. I lack the correct English expression for describing this state, but I think we're pretty aligned.
B
Since we talked about, like, very hard timelines: I'm taking one week off, in about two weeks from now.
A
I'm also going to be on vacation for two weeks in July-August, and I'm going to be in the States for five weeks from next Wednesday, so you will enjoy your European meetings without me.