From YouTube: 2021 02 02 Database Team Meeting
A
All right then, we'll jump to the next one. This one was brought up during the infradev meeting: the CPU utilization peak times on the projects table. I was wondering what the next steps were. I thought it was odd that it was brought up in that meeting. Jose, do you know why it was brought up there? Is this really, as Andreas calls out below, a top-five performance problem?
B
I can link any of these, but yeah, from my side: one thing we have now started to work on as well, to share with everyone, is an analysis looking for the requests that are being executed by Sidekiq on the primary, to analyze their volume and load, and to think about whether we can find a way to move them to the secondaries. But that's just the latest news.
A
An issue came across this morning (I don't have a link to it here), and there was a note about this project one: has it moved, or should we move it to Continuous Integration, since Gregor is working on it? Giannis?
D
I'm not sure how much time he has, so I have some thoughts there. We have four issues on queries that are "select star from an entity by id", so they are lookups by id, and Jose has done a very thorough analysis for all of them, using sampling of queries, so we know that those queries originate from many places.
D
We are not sure yet from how many places, because this sampling analysis is not exhaustive, and maybe if we continue with a longer-term analysis we can find more offenders. But at the moment we don't have a clear winner, and I'm worried, for example for the projects lookups, the ones that were brought up in the infradev meeting, whether we will have a clear winner there, or whether we will spend time checking all those places that are equal in the number of queries they generate.
D
So this is a question for everyone, and I was thinking, as a proposal: maybe we could wrap those three up, if everyone agrees. We know that those are queries that are executed very often.
D
The analysis by Jose has shown that we have something between 500 and 1000 executions per second for most of those, so we know that they may be problematic, but we also know that they are very quick, some-milliseconds queries, and unfortunately we don't have a clear winner after this analysis. This is not the case for taggings: for taggings we can see from Jose's analysis that, for example, the build queue worker creates a lot of queries.
D
So my opinion is that we should continue checking this, and maybe we can find something there. But my proposal, if everyone agrees (I don't know, Jose, what do you think about that?), is that for the projects, namespaces and users ones, maybe we can wrap those up and revisit them when we have more statistics, or if we continue seeing them as a problem, like for example what Andreas discussed.
B
Okay, I understand; I think it is a good approach. My point is that we will get more statistics in, I think, two or three weeks, because I'm trying to export more metrics from what we did here. The information that you have in the issues is some sampling that Nikolai and I collected and analyzed. I'm proposing to now collect constantly from activity, more samples and more context, and then we will have a larger base for analysis.
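A minimal sketch of the kind of constant sampling aggregation being described; the function names and the `get_active_queries` callable are hypothetical stand-ins, since in practice this data would come from something like Postgres's `pg_stat_activity`:

```python
import re
from collections import Counter

def normalize(query: str) -> str:
    """Collapse literals so samples of the same query shape group together."""
    query = re.sub(r"\b\d+\b", "?", query)   # numeric literals -> ?
    query = re.sub(r"'[^']*'", "?", query)   # string literals -> ?
    return re.sub(r"\s+", " ", query).strip().lower()

def sample_activity(get_active_queries, rounds: int) -> Counter:
    """Poll the active-query source `rounds` times and tally fingerprints.

    `get_active_queries` is a stand-in for reading pg_stat_activity; each
    call returns the list of queries running at that instant.
    """
    tally = Counter()
    for _ in range(rounds):
        for query in get_active_queries():
            tally[normalize(query)] += 1
    return tally
```

With enough rounds, the most frequent fingerprints in the tally approximate the heaviest query shapes, which is the "larger base for analysis" idea.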
D
We have a lot of offenders, you know, that are equal in size. So we have, you know, six, six, six, five, five; it's not like a 35 and then three, two, one. That's why I'm discussing those three. We could leave them open for three weeks, if everyone agrees, and then we can revisit them, and if the new analysis shows the same, maybe we can think about it then, and we can follow up with the taggings.
B
Thank you. I would like to ask one more thing here in the meeting: was any one of you aware of the change on the weekend? I'm trying to be really blameless here, but just to share awareness: we had some changes, some cron job, and if you look, the database was simply burning the whole weekend. We had spikes of 90 to 100 percent of CPU utilization. No, it was not nice. I can share the issue with everyone; sorry, it's 36464 in production.
B
The funniest thing, or the sad thing, is, if you're checking my comment, the next one, where I commented: I did some research and we had a spike, but it was one spike at 3am for a few minutes. Okay, and unfortunately, someone tried to optimize it. And go down, please; scroll all the way down, please.
B
If you go a bit up, you will see how it was during the last weekend. Here you see, it was much more massive, the spikes constantly. They tried to spread the job over 24 hours, and the result was a massive impact on the database. For the application it seems to be not so negative, because they will consider this a severity three, but I raised this because from the database perspective it was pretty bad.
C
So I haven't seen this before, but do we know which change was causing this? Is there a single change that caused all of this?

B
Yes, I will give you the link.
D
I remember the discussion; they were discussing this job creating problems. But yeah, I don't remember the details.
C
Just from the labels, it doesn't look like it went through a database review, so basically it could be the case that this is just an application behavior change that, in the end, has this effect on the database.
E
They may just not know that they can ask, right? And that actually helps in the end. So there may need to be a little bit of banging the drum so that people are aware. I think there are some examples where we can say: look, this is something that we can help with, and that would be a good thing. But from a scalability perspective, I think the database label is used like this: if an issue has it, it will eventually be triaged by this group. But if people just make changes, especially on the application layer, it's probably impossible to catch all of those things. So it's like: how will you know?
B
Yeah, because from what I understood there (I don't know that change in detail), it seems there was a massive update, and we got some problems with the statistics and we needed to run some vacuums there manually to fix the situation. Just sharing with everyone. Thank you.
D
Over 24 hours. So it's not like they changed anything in the batches or anything else; they just spread the jobs over 24 hours, instead of the eight hours that they had. So this seems like a nice idea: instead of hammering the database with jobs for eight hours, they spread them.
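As a back-of-the-envelope check of why this can backfire: spreading the same number of jobs over a longer window lowers the hourly rate, but it also removes any quiet period for the database. The figures below are invented purely for illustration:

```python
def jobs_per_hour(total_jobs: int, window_hours: int) -> float:
    """Average job rate when total_jobs are spread evenly over window_hours."""
    return total_jobs / window_hours

# Hypothetical daily batch: compressed into 8 hours vs. spread over 24.
total = 24_000
compressed = jobs_per_hour(total, 8)   # 3000 jobs/h for 8 hours, then quiet
spread = jobs_per_hour(total, 24)      # 1000 jobs/h, with no quiet period at all
```

The per-hour load drops to a third, but vacuum, backups, and any weekend off-peak window now compete with the job the entire day.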
C
I don't know whether we should have been able to catch this in code review, you know, just looking at the changes; it's like a two-line change.
B
Sorry for the interruption, but the vacuum was managing to collect the statistics better and clean up the table. But my point here is, and I'm asking you because you, the team here, have more experience than I do: if you had executed this in Database Lab, would we perhaps have seen some strange behavior, or do you think it wouldn't happen?
B
What happened last month is: we had an alert on that, and then it ran out of action. We are happy, but we had just one spike at 2-3 a.m. on the weekend, which was, yeah, thankfully on the first of the month. Because my main concern here is: thankfully this ran during the off-peak time, on the weekend, because if we had executed this during a Tuesday, I think the impact would be higher.
E
And just for my understanding, sorry, Fabian: did somebody actually have to get up at the weekend at an odd time and look at this? Is this like an alert where somebody was paged and then had to look at it? Is that what we're talking about here?
E
This is the impact on the database, but there's also a human impact, as in somebody will get paged if it's on the weekend, right? We should avoid that, because that is, I think, kind of what we want to mitigate as well.
B
I will ask the manager on call that was there; we'll talk with Brent tomorrow, and I will follow up on this, because I am afraid that, since it was received as a severity 3, perhaps it can be left without attention, and next week or next month we have an incident again.
E
Yeah, I think what I would also offer, as in what I can do, or Janis and I can do: something that happens, or may happen here, is that people from the product side look at this and say "oh, we don't really care", because they have something else to do, and then it doesn't actually get the priority that it deserves.
E
Yeah, okay, then Giannis and I can own the follow-up on the product side.
D
Yeah, for the taggings one we can contact the team responsible for taggings, or someone. For everything else, my proposal is: let's de-prioritize them, and revisit if we find something interesting in the new sampling analysis.
F
Yeah, the taggings one, I think I do agree we should prioritize it. I think it might even be a bigger issue than we know, because when I was looking at the one that went into triage recently, when we had the migration that failed on the namespaces table because we weren't able to get the lock due to a long-running transaction, we found that statements were timing out; or rather, the requests were timing out.
F
Not the statements timing out, the requests. And I think there's an issue there with tags as well, something that was discussed in one of the issues: it's very inefficient the way it does it. If you add, say, 10 taggings to something like a runner, it runs 10 individual queries, 10 individual inserts, and I think in that case particularly it's hitting some pathological case or something where it's basically stuck in a loop, trying to do tens of thousands of queries, and then the request times out. So I think the taggings one is worth looking into.
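The row-at-a-time insert pattern described here can be sketched outside of the actual application code. This uses SQLite and an invented `taggings` schema purely to contrast one INSERT per tagging with a single multi-row INSERT; it is not the real implementation:

```python
import sqlite3

def add_taggings_one_by_one(conn, runner_id, tag_ids):
    # The inefficient shape: one INSERT statement per tagging.
    for tag_id in tag_ids:
        conn.execute(
            "INSERT INTO taggings (runner_id, tag_id) VALUES (?, ?)",
            (runner_id, tag_id),
        )

def add_taggings_bulk(conn, runner_id, tag_ids):
    # One statement: a single multi-row INSERT covering all taggings.
    placeholders = ", ".join(["(?, ?)"] * len(tag_ids))
    params = [v for tag_id in tag_ids for v in (runner_id, tag_id)]
    conn.execute(
        f"INSERT INTO taggings (runner_id, tag_id) VALUES {placeholders}",
        params,
    )
```

With 10 taggings, the first version is 10 round trips to the database; the second is one, which is the difference that matters when a loop ends up issuing tens of thousands of these.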
A
Okay, so for these issues that we're calling out, we do need to update them with what we're doing: that we're going to pause for three weeks until the analysis comes out. You're going to update them? Okay.
A
All right. One other thing I thought we could get through really quickly, so we can just knock this one out: we talked about creating an issue template for query performance investigations.
A
My question out there was: do we still need this, or is Jose providing us enough information with the most recent issues? Giannis, you had some feedback there: that we need both.
D
That was my point; how I see things is this. The statistics on how often queries run, what the CPU cost is, what the load on the database was: those are there to explain why we need to investigate and to justify that this is a problem. And then what Jose was adding afterwards, the sampling analysis or whatever other analysis we do, is there to debug and to figure out the root cause. So I think that we need both. I don't know if everyone agrees.
C
Yeah, and just having that issue template is good, I think. You know, we just document what we kind of want to have in those issues, and we can change that going forward. Okay.
A
Is there anything else we should add to this template now, so we can start using it going forward? As Andreas said, if there's anything missing, we can iterate on it, but I want to get a starting point so we can close this out, say we're all in agreement, and then, like I said, iterate. I don't think this needs to stay open, or that we need to spend a ton of time on it. I think we have a pretty good understanding of what we want now and how we want to proceed going forward.
C
I think it should be somewhat relative, right? The number of calls for the query compared to the total number of calls.
C
So, basically, a way of saying that this takes two percent of our traffic on the database in terms of number of queries, or something like that. Yeah.
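Expressing a query's weight relative to total traffic, as suggested here, is simple arithmetic. The counts in this sketch are invented; in practice they might come from something like `pg_stat_statements` call counts:

```python
def traffic_share(calls_for_query: int, total_calls: int) -> float:
    """Percentage of total database calls attributable to one query shape."""
    return 100.0 * calls_for_query / total_calls

# Hypothetical counts: one lookup-by-id shape vs. all calls in the window.
share = traffic_share(calls_for_query=20_000, total_calls=1_000_000)  # 2.0
```

Reporting the share rather than the raw count keeps the template meaningful as overall traffic grows.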
C
Say that again: this will go as an issue template to the GitLab tracker itself, not the database team tracker?

A
Yes. Let's do that. Does somebody else want to run through the board? That's the next thing to do.
D
Okay, so we have closed three issues this week. So, there's the one on the manual CI jobs.
C
Thanks, yeah. So this one, it's triggering the testing pipeline automatically. This has already happened, so I'm getting a lot of pipelines being triggered on that instance, but they stop early, because we don't allow everybody to execute them. Basically, yeah, that's the state of that.
C
This is there, and then the next step is to hopefully enable the feature by next week, and then we'll see those annotations. Okay, great.
F
Yeah, the first one really, I think, can be closed now; it's been more than 24 hours. I just wanted to make sure nothing was breaking, but I haven't heard or seen any reports of anything. The second one: the first MR that's using it, the one this was blocking, is going to be merged very soon. So I think I'll just make sure that that MR, or rather the new migration helper, works properly, and that should be it.
C
Yes, that's the status of that, yeah.
A
So I don't know how long we want to wait for Jerry on that one. To be honest, I'd say let's give him the week, maybe ping him again, and if he doesn't weigh in, then we can just publish it and iterate on it from there. He's got a lot on his plate; so that's why.
F
So, we sort of had a discussion about this a little last week, Giannis and Andreas and I, and I think we maybe need to have a follow-up from that conversation. Basically, we're talking about: you know, it's very hard to manage this migration because we don't really know production timings. We were talking about initially doing a sampling approach, so that we would change the schema.
F
So I think we want to have a conversation around that a little bit more, instead of just putting together some ideas. But also, obviously, we have a sort of timeline by which we need to get these migrations out, so I think we need to figure out if there's something that we can work on. It's not going to be...
D
But what we saw was like two to three percent per month; we have to start extrapolating a little bit better.
E
But I do, I do think, and correct me if I'm wrong here, that there is a hard failure date on those things; at some point things are going to just crash and burn, right? So in my mind this is the thing that we really need to get a good handle on, and I think, as Andreas said, this is likely going to happen in other areas as well.
E
So, you know, if we can't, then we still need to do it manually, but it's going to be very painful and error-prone and all of those things. So if we can find a pragmatic solution to make this easier, I think that's good. But my worry is obviously: if we spend some months finding that nice solution and then it doesn't quite materialize, you're going to do it manually anyway, and then it's going to be really rushed, right?
D
That table, even if it is the largest one, at least does not have the effort that ci_builds, for example, has, which also has 12 foreign keys, which means that when we run the migration for ci_builds we will have to update 13 tables. So, yeah.
C
And we talked about this today: we shouldn't bet on the refactorings that we are discussing, right? We might have a big refactoring coming up for ci_builds that also incorporates this, so we wouldn't have to do it; in that case we're fortunate. But we shouldn't bet on that, and we need a good solution anyway, because we have these problems in other places as well.
C
So perhaps this is a bit of a parallel effort, but it's still worth it in any case, and I think the closer we get to those deadlines, the more we also want to talk about an emergency kind of procedure, how to escape from that if we need to.
E
Is this, maybe, and I don't know if you do this, but is this maybe a large enough sort of project to take an afternoon as a group, with some coffee, and map out those plans and talk about it sort of synchronously, and just say: okay, this is how it looks, right? And have that discussion, so that we do this proactively, rather than when things are starting to get a little bit out of hand, maybe.
A
Well, yeah; we talked about having focus days, and we talked about them being Wednesday, and we talked about having that discussion today. So maybe that's a good thing for us to focus on tomorrow.
A
Everybody, like Pat and Andre, that is, if you're all prepared for it; just throwing it out there, but I think it's a good idea.
D
Yeah, I agree, and whatever help Pat needs there: I think this is our top priority, because we have a hard deadline. So whatever Pat needs, we should provide him with as much time as he needs, and we should have as many synchronous calls as required.
D
Rename indexes: we have... this is done. Oh, no.
C
Otherwise I would be working on the instrumentation. This is the next one, on the next tab, which enables us to actually use this testing pipeline, which also helps the other MRs, because these are just migrations that we want to run on the database, and I would like to see those MRs run on the testing pipeline.
D
So, security review: this is an important one. We are waiting, but they have prioritized it; this is prioritized for this milestone for the security team. Am I right, Andreas?
D
Should we keep it, to track that this is done on our end, or...?
D
Yeah, so let's keep it for our own tracking purposes. And this one is mine; it's because we have some issues with some customers that go back and forth: they update and then downgrade versions.
D
So the solution that we are going to propose: we are going to add a troubleshooting section in the documentation for Omnibus, because actually solving this problem would require backporting 10 versions. So we decided to just add the section in the documentation for this one, and there is a similar one that we are investigating how to also address.
F
There's the second part, or the related one to that, which is the third one: adding the migration helpers. That's something that we can potentially start working on once we figure out what we're doing. But for converting the events id, we haven't even done the first part yet, so I'd say we're not going to tackle that for some weeks or months.
F
Yeah, really, the actual dropping of the table and stuff should be fast. It's really just that the post-analysis hasn't been done yet, which I would like to do so we can clean all that up and be done with it, finally. I just haven't really got around to it. So I guess, if you feel that you have any time for that (you probably don't), you're welcome to look at it, or if anyone wants to contribute.
E
Super. And is the current stuff that is remaining in "ready for development" consistent with our overall priorities? Are those the most important things that we want to do? Yes? Right.
E
That's always the question I'm sort of asking at the end: whether these are, and I kind of know that they are, but that's always what I'm interested in. And given how much work is in flight and how many items are in here, I think there's nothing we need to schedule for the next week.
E
I
think
and
again
I
think
what
I'm
most
interested
in
personally
is
sort
of
the
cycle,
time
right
so
having
issues
that
are
small
enough
and
really
find
enough,
so
that
they
can
go
from
ready
for
development
into
you
know
being
closed
in
a
relatively
short
amount
of
time,
and
I
think
that
is
actually
quite
nice.
So
maybe
that's
something
we
can
also
look
at
at
some
point.
It's
like
if
some
of
these
issues
are
large
right
and
if
we
can
make
them
smaller,
but
that's
for
the
future.
C
Very quickly: I'll see you guys later, yeah.
D
So, do you need anything there? Do you want me to jump in there?
F
Yeah, I went through yesterday and really checked on all of them. I think, more or less, I know the status of them. It's just a question, maybe for a few of them, of where we want to put them; I'm not sure that I know, or maybe there's a little bit of a discussion there.
D
Yeah, and the idea is that we have the scheduling list also prioritized, so that it's easy for us to move things from there back to the active board, if we're in agreement.