From YouTube: 2020 06 29 Memory Team Weekly
A: Alright, happy Monday, everybody, let's jump right into it. A few things to get back to on the memory team's lessons learned and knowledge-sharing performance improvement: the goal there is to create some issues for the upcoming milestone, so we still have time on that one. But take a look when you have a moment. I know some folks have provided feedback already, thank you, but as this milestone continues, keep throwing in ideas, and if we need a synchronous meeting to discuss them, just let me know and I'll set something up.

A memory team retro was created over the weekend. Last time we got our retro in the day before the company-wide engineering retro, and that puts a lot of pressure on the facilitator to make sure everything's organized, so let's get our feedback in earlier. The next retro is on the seventh, so again we have about a week to get that in; early feedback is appreciated.

Some feedback, as I'm going through 360 reviews with everybody, is that it would be a good time for us to set up another iteration retro. Considering all the other stuff we have going on, I wonder what the best timing would be for everybody. I was thinking maybe the end of July, after this milestone gets done and after we get through the previous two items, so scheduling it about a month out. If you have any feedback on the best timing for that, just add it to the issue itself.

And then, on memory team office hours and scheduling: I added a page this morning answering some questions Matthias posted, so any questions or comments on the first section there? Oh, cool, all right. Let's jump over to progress updates. So, telemetry: I saw you furiously typing away this morning, Matthias, take it from there.
B: Yeah, I need to start with my own stuff, because I haven't caught up yet, actually. So that would be the fourth item, migrating to recording rules. There was a server-side MR for Omnibus that got merged, but there was some back and forth, so I had to go back and change the client side; that's in review now. I need reviews from a bunch of different people: telemetry reviewed it, I still need a backend review, and, I think, JR.
B: Then the memory performance indicator, the global one: I'm not working on that right now, because it's kind of done, but I can't put it into review until the other stuff gets merged, because it's a branch of a branch. And then the third one, which I was working on today, is failure tracking. One problem is that currently, if anything fails, and there are other things that can fail, like we can't find Prometheus, the query returned no results, some exception was thrown, anything like that:
B: We take a very defensive approach, which we had done elsewhere in usage ping as well, which is to fall back to empty elements, basically, just so that we don't fail anything else. It's just that then we are totally blind as to what didn't work. So I'm working on having a small data structure that we return as part of the topology ping, and it will include some kind of ID.
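The defensive pattern described here, falling back to an empty value while still recording what failed, could be sketched roughly as follows. This is an illustrative sketch only: the class, method, and metric names are invented, not the actual GitLab implementation.

```ruby
# Each metric is computed inside a rescue block; on failure we fall back to
# an empty value but record a failure entry, so the payload stays usable
# while the failures remain visible in the ping itself.
class TopologyCollector
  FALLBACK = {}.freeze

  def initialize
    @failures = []
  end

  # Runs a metric block; returns its result, or the fallback plus a
  # recorded failure entry when anything goes wrong.
  def with_fallback(metric_name)
    yield
  rescue StandardError => e
    @failures << { metric: metric_name, error: e.class.name }
    FALLBACK
  end

  def collect
    {
      node_memory: with_fallback(:node_memory) { query_prometheus('node_memory') },
      node_cpus:   with_fallback(:node_cpus)   { raise 'Prometheus unreachable' },
      failures:    @failures
    }
  end

  private

  def query_prometheus(_metric)
    { value: 42 } # stand-in for a real Prometheus query
  end
end

result = TopologyCollector.new.collect
# node_memory succeeds; node_cpus falls back to {}, and the failure is recorded
```

The key design point is that the failure list travels with the payload, so the reporting side is no longer blind to which metrics were missing.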
B: I know that JR was working on collecting that uname info stuff; it suffered from the same back and forth due to the dependency on recording rules. We ended up deciding, I brought that up, that we're not going to wait for this and will just do an ad-hoc query, which kind of makes sense for this stuff anyway. And I don't actually know where she is with the Prometheus stuff, so I can't speak for her.
B: I mean, we might start focusing on multi-node, but probably won't, because I talked to Josh and we decided against it. My impression had initially been that this is pretty high priority, but apparently it's not, because I had overestimated the number of multi-node installations. I don't have a number for you, but several people, Ben and Joshua, told me that we have a ton of single-node customers.
B: So we will continue to focus on single-node for now. In that area we're making good progress: adding more data, the stuff that was missing, and making it a bit more resilient. The resilience is important, right? The recording rules stuff and the failure tracking are all about making sure we don't fail, but if we do, we know what's going wrong. And I think that will also put us in a better position to tackle multi-node, probably, maybe.
A: All right. On my side, status is the same as last week: the blog post, that's on me at the moment. Actually, I submitted it to marketing and asked them if there's anything else; I do think it's in good shape. It's scheduled to go out on the seventh. So, Nikola, over to you.
E: Regarding the metrics transaction refactoring, I just rebased on master to include the latest changes and fixed some specs, so it's ready for review. We actually waited to merge this on another related task, the fix for DB calls in the logs. Now that this is fixed, it's really just waiting to be reviewed by a maintainer. Right now Kamil is assigned to this issue, so I hope that we could look at it next.
E: Thank you. Regarding the blob controller caching, there was some back and forth with this. We first took one approach to cache something, but it turned out that it would not work, so I created another MR that seems to be working, and I also measured it, and the numbers are promising.
E: We actually figured out the fix for counting the number of database calls and cache calls, because some numbers were strange. So I think that when I finish this blob controller work, I can start analyzing the logs, and with the right parameters it will be much easier to find N+1 queries; I will probably create separate issues for those. So I plan to take that on next, if there is no other work that should take priority, so yeah.
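Once the DB and cache call counts in the logs are trustworthy, spotting N+1 queries from them can be reduced to counting repeated query shapes per request. A minimal sketch of that idea, with an invented log format and threshold:

```ruby
# Normalize each SQL statement by replacing literal values with placeholders,
# count occurrences per request, and flag any query shape that repeats often:
# a classic N+1 signature is the same per-id lookup appearing once per record.

def normalize(sql)
  sql.gsub(/\b\d+\b/, '?').gsub(/'[^']*'/, '?')
end

def n_plus_one_candidates(log_lines, threshold: 5)
  counts = Hash.new(0)
  log_lines.each { |sql| counts[normalize(sql)] += 1 }
  counts.select { |_sql, n| n >= threshold }
end

# Simulated log of one request: six per-id lookups plus one aggregate query.
logs = (1..6).map { |id| "SELECT * FROM users WHERE id = #{id}" } +
       ["SELECT COUNT(*) FROM projects"]

candidates = n_plus_one_candidates(logs)
# the repeated per-id lookup collapses to one shape, seen 6 times
```

In practice the grouping key would also include the request's correlation ID, so repeats are only counted within a single request.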
F: I'm kind of thinking, because this issue says that they can't reproduce it, maybe we should split this issue into the detection part and close it, and have a separate issue with an investigation to maybe point out a number of the endpoints that we may be interested in. Does that make sense? I think the major piece, detecting it, we've got covered.
F: So I'm actually updating the MR, and I want it pushed to maintainer review. I planned to do that on Friday, but it didn't get solved, so I'm just hoping to get this merged. As for the feature flags, we also have the feature flags office hours this week, on Thursday at 3 p.m. my time. If you would be interested in joining the feature flag work, feel free to join me. We're going to announce it again to make people aware of it happening.
G: I started to look into it, but to be honest, there were great questions from Kamil, and I don't have any strong opinions so far. I think my priority would be to answer those questions by providing pros and cons for the possible decisions. To be honest, I haven't worked with dynamic resizing so far, but I already see that we have some alternatives, like the ones suggested on the issue. Yeah.
B: Kamil, you suggested looking into CloudFlare, or one of these edge network providers, since they basically have this feature built in. But then, of course, if the user sends an update to the avatar and some Active Record entity is invalidated, you need to make sure you set the right headers to flush it from the edge cache as well. And yeah, that is probably okay.
F: The challenge is really that, if we don't have a cache and we get, say, 100 image resizes a second, that's pretty hefty. Doing an image resize is not easy; it's a pretty complex task, especially if you may have very big images.
F: So I don't know; I think we need some kind of cache for that, but with Cloud Native and so forth you cannot really store that on a disk, because you don't have the disk space that you could use for it, and then you really have the problem of removing data from the cache.
F: I mean, it can go into the object storage bucket. The question is really whether that is the easiest thing to do, because currently there is a single issue assigned to this story, but a single milestone really seems too little to me if we just integrate an image processor with dynamic resizing.
A: Don't worry about the milestone; there's a lot of research that needs to be done. That's a single issue for a huge amount of work, and the comment I put in there, and the feedback I've given to Tim, is: if we get started on it this quarter, it's unlikely that we'll get finished this quarter, because it's an OKR for someone somewhere; I think it's even tied to one of our OKRs. So there's a lot going on in that issue, a lot being requested, and there are a lot of alternatives.
A
Obviously,
since
I
mean
we
just
started
talking
about
it,
we
have
a
lot
of
different
ideas,
so
it
is
not
expected
to
be
delivered
in
this
milestone.
That
is
two
weeks
away
from
completing.
So
what's
expected
is
the
research
and
maybe
an
approach
or
a
good
idea
on
a
couple
of
approaches.
There
are
cost
calculations
that
need
to
be
considered
right,
like
storing
all
the
images
versus
computing
versus
using
CDN.
What's
gonna,
be
our
best
approach
going
forward
so
for
now
it's
research
issues
that
will
spawn
a
lot
of
other
issues.
I
think.
G: But I just wondered: do we have any .com priority over on-premise? Should this issue be, let's say, neutral to the installation type we're optimizing for, or do we have any priorities right now such that .com is more important? For example, maybe we have some pressure from clients and that is more important.
A: This one, if I'm remembering the origins of the story right, is to cut down on overall costs. I forget the details, but we shouldn't fork code between .com and on-prem, so we need to find a solution that in theory would work for both. We don't want to hard-code for .com, but this was generated from operational costs on .com, to try and cut them down. Does that make sense?
A: OK, yeah. And Tim Zallmann is going to have a ton of context here, so if you have questions about the approach or anything else, just add them to the issue; he's been responding pretty quickly. He's happy that we're able to look at it faster than his teams will be able to, so it's a priority. Thanks for picking that up and asking questions. All right, Grant.
H: Yes, just a quick update; I told Lexie and a few others already, but on file blame: we're working on the safety valve now, and while working on that she found that the file we're testing with, the JSON file we use with the blob tests, is just too big. It's too much for blame to handle, and the browser actually crashes when looking at it. No, that's a separate issue: there are issues already open for that area, but those are just about the server-side response times; this is the browser itself.
H: It just can't render the page with a file that large. So at that point we kind of changed the approach a little bit, in that we rephrased the new issue to say this area needs some changes, because when the file is too big, like the Linux MAINTAINERS file, or the JSON file that we parse emojis from in GitLab, trying to blame it just crashes the browser, and that's probably never going to change, because no competitor can do this either.
H: A change of approach needs to happen here, and so the reasonable suggestion is that we need to put a blame limit on files: if a file has just got too much storied history, it would chew on the page. Exactly what that hard limit should be is something I'm trying to figure out, but there needs to be some kind of limit in the product itself, where we say: no,
H: this file is too big, we're not going to show you; you need to run something locally, or you need to do something different. Obviously that has multiple benefits: it doesn't crash the user's browser if the file is too big, and it won't add a heavy call on our servers either. And with that kind of change,
H: we're going to change the file that we use in our testing as well, and update the various issues to show the new numbers for the new file. The file will be smaller; we still want it to be a heavy blame file, but a file that actually does render in the browser without crashing it, one that would likely be within this new limit. So we're still exploring that at the moment.
H: We're kind of looking, maybe, at some Ruby files in our specs folder. Some of them are not as big as the file we are currently testing with, which is about 150 kilobytes with quite a bit of storied history; they're still quite large for blame, I guess, but they do render without crashing anything, so that probably is within the guidelines. We still want blame to be improved, and blame is still a useful feature, but for very large files with crazy history, I think it's reasonable for us now to look at limiting that for the user.
H: Certainly, and I've not been prescriptive about how the limit should work. I have to cite the fact that we do have a 1 megabyte limit for blob files as a strict limit: if it's above a megabyte, we don't do it. This is a bit more of a complicated area, but we probably should aim to have a hard limit, whatever that limit ends up being; I think generally it's a mix of size and history.
H: Ultimately, what we'd like to see is a hard limit such that we don't actually let it hit the server much at all, because obviously, if you do, then even if it only takes five seconds, it still hits the server; it's still processing, using up time and resources to do something that is impossible. But by all means, I'm open to suggestions, because I appreciate this is a complicated area.
G: So once we're on the back end and we understand the blame count, we could pretty much make the decision whether we are going to render it or not, because every blame group is more or less the same size of HTML; it's generally mostly the same HTML boilerplate, and I believe that the boilerplate is what kills the browser here, mostly, not the content itself. Yeah.
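The limit being discussed could look roughly like this on the backend: a cheap pre-check on blob size and, when known, blame group count, before any rendering happens. Apart from the existing 1 MB blob limit mentioned earlier, the numbers here are placeholders, not agreed values:

```ruby
# Refuse to render blame early instead of letting the server (or the user's
# browser) chew on a page that will never finish. MAX_BLAME_BYTES mirrors
# the existing 1 MB blob limit; MAX_BLAME_GROUPS is a hypothetical cap on
# the number of blame groups, since each group adds roughly the same
# amount of HTML boilerplate.
MAX_BLAME_BYTES  = 1 * 1024 * 1024
MAX_BLAME_GROUPS = 5_000

def renderable_blame?(blob_size:, blame_groups: nil)
  return false if blob_size > MAX_BLAME_BYTES
  return false if blame_groups && blame_groups > MAX_BLAME_GROUPS

  true
end

renderable_blame?(blob_size: 150 * 1024)                      # a typical spec file
renderable_blame?(blob_size: 3 * 1024 * 1024)                 # rejected on size
renderable_blame?(blob_size: 200 * 1024, blame_groups: 9_000) # rejected on history
```

The point of checking size first is that it is known before any Git work is done, while the group count requires at least a partial blame computation.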
H: Good; we need to get somewhere where we can actually detect it. As I said, it's a complicated area, so I won't be prescriptive at all about whatever we need to do to implement it, as long as, ultimately, the server is hit as little as possible and the user's experience is a very quick response, even if that response is: sorry, no, we can't show you this page, and you need to check blame for this file some other way.
H: So that's just an update on that. The update is that there are two issues already opened, which I know some of the team already worked on, for the API and the web controller for blame. We're going to change the test file, because the file which we were using in testing is, I think, not realistic anymore, in that it would eventually need to be blocked.
H: It broke against it; obviously it shouldn't have been working on the old Postgres anyway, I guess, I always thought it should, but it broke recently, and someone got stung by that. So I think, in retrospect, maybe the GDK, or the GCK I guess, should have had a notice saying: hey, update your local Postgres. So yeah.
A: By the blank stares into the camera, most people didn't know about the PG 11 requirement, so thanks for the feedback; I will look to get better at this. My concern is that it was posted in the weekend review quite a bit, and it was posted, I think, as early as the 12.8 release notes, that this would be a requirement. So, trying to get better at multimodal communication, I think a couple of ideas are to post frequently and often in the development channel, and I think there's probably some better coordination
A
We
could
have
done
with
product
management
to
make
sure
that
it
touches
all
these
other
products,
like
kisi
ke
TK,
make
sure
that
they
have
a
plan
as
well.
We
do
have
a
Postgres
12
timeline,
so
we're
gonna
try
and
stay
on
the
yearly
cadence
with
Postgres
so
about
this
time
next
year,
we'll
be
upgrading
the
PG
12.
So
we
need
to
make
sure
we
get
all
the
affected
products
and
make
sure
that
we
don't
go
through
this
again.
F: Yeah, I think I only have a single suggestion, the one that speaks best to people: fail, but allow the failure to be ignored. Then people would be aware that they should be upgrading to PG 11, or whatever else the associated software is. Today I know it's PG 11, but it could also be Redis or whatever other dependency that ends up mismatched. It's also connected with on-prem installs; some of these on-prem setups run an older version of GitLab. Sorry, take that back:
F: they run a new version of GitLab, but the PG database they run on their own is not up to date. So if we don't somehow validate it, we're going to see misbehavior, because among the features that I know we started using from PG 11 are the cheap column defaults, adding columns with defaults without table rewrites, and recently we also started doing database partitioning migrations. So yeah, I
F: think my suggestion is: if we know that there is a requirement of PG 11, just fail the application, but, I don't know, introduce some kind of environment variable to allow this to be temporarily disabled. Other than that, this raises the awareness pretty violently, but it makes people really be prepared for that change, while still letting them skip it for the time being.
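The suggestion above, fail hard on an old Postgres but allow a temporary escape hatch, could be sketched like this. The environment variable name and the way the version is obtained are invented for illustration, not actual GitLab settings:

```ruby
# Fail at startup when the connected Postgres major version is older than
# required, unless an explicit override variable is set. The loud error
# message raises awareness; the override keeps people unblocked temporarily.
REQUIRED_PG_MAJOR = 11

def check_postgres!(current_major, env = ENV)
  return if current_major >= REQUIRED_PG_MAJOR
  return if env['SKIP_POSTGRES_VERSION_CHECK'] == '1'

  raise "PostgreSQL #{REQUIRED_PG_MAJOR}+ is required, found #{current_major}. " \
        'Set SKIP_POSTGRES_VERSION_CHECK=1 to bypass temporarily.'
end

check_postgres!(12)                                           # new enough, passes
check_postgres!(10, { 'SKIP_POSTGRES_VERSION_CHECK' => '1' }) # old, but bypassed
# check_postgres!(10) would raise with the upgrade instructions
```

In a real deployment the current major version would come from `SELECT version()` or `SHOW server_version_num` on the active connection.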
H: Well, in theory, it's a fair call, yeah. I've not heard of any self-managed customers being stung by this; we can check with support to see if anyone actually has a case like: well, my third-party Postgres is still ten and I was able to upgrade GitLab without any warnings, or, you know, without even hard failures saying: no, you need to upgrade your database to this version. Certainly, I think that's what should happen internally with the tooling as well: if you try to upgrade GDK or another one, it should say: hey, your Postgres is incorrect.
H: I'd like to hope the same for customers, but yeah, I didn't consider the idea of someone using a third-party Postgres. Anyone on Omnibus who was updating at the same time would have had Postgres updated for them automatically, so I guess most users should be fine, but the point is there will be some users out there who missed the release notes, or missed the places where we called out that this is a requirement, so we're checking for that kind of thing as well.
B: To be fair, I think it was fairly well communicated. I knew these upgrades were happening, and I had looked at this table on the docs before. I think the one thing that wasn't totally clear to me, though I might just not have read it carefully enough, was the cutoff point: when exactly, and for which versions, this is specifically not supported anymore. But nothing beyond that; that is what's happening, sure.
A: I mean, again, I think I've heard it said somewhere: if you say it five times, you cover 80% of the audience, right? So it's probably good. I think the thing that was missed here was just incorporating it into GDK and GCK, and making sure that there were deprecation warnings early on. GDK didn't upgrade it, so a lot of people were stuck, to the point where they had to drop the entire database and rebuild from scratch. It just wasn't a good experience for those folks.
A: So we can get better next time, and this is going to be an annual occurrence of us upgrading Postgres, so we need to make sure that we consider all these things every year and get better with customer feedback. That's all; actually, I'll just create an issue for better communication on Postgres upgrades, so that we capture all of this now and don't forget it nine months from now.
A: So thanks for that. And then there was one new item that was added, I think either today or Friday, about allowing admins to configure a timeout for Puma, and Kamil said it was low-hanging fruit. I just want to make sure people know about it, since we only have a little over two weeks left in the milestone, so that it does not get forgotten.