From YouTube: 2022-11-21 Application Performance Weekly
A
Okay, hello everyone. This is November 21st, and this is the application performance team meeting, and we're going straight to the billboard. We'll start with closed issues, and the first one is mine. This was refactoring how we name our diagnostic reports; it made the naming a bit cleaner and easier to parse. This is done and currently active on production, so I closed it.
A
One bad thing happened: we just have a pretty huge amount of reports being uploaded to Google Cloud. It has a one-day retention period currently on production, but that could be changed in the UX in one click, so no big deal, no immediate danger. We're just not running it in production to save some resources.
A
Unfortunately, no, we could not do it right now, because we would need either node-aware feature flags, which is not possible, or node-aware environment variables, which we are also not able to do. So currently we can only disable individual report types, for example generic reports: you can disable one individually without disabling the whole reporting framework, which is what I did.
B
Yeah, so Nikola and I were working on streamlining all of these different memory killers that we were running. It got really complicated with all of these different configurations, but I guess the bottom line here is that we want to make for a seamless transition for self-managed from the Puma Worker Killer into the memory Watchdog, which now has an RSS monitor.
B
It will never behave exactly like the Puma Worker Killer, but I don't think that needs to be anyone's concern, except for people working on GitLab. So basically, what that change was doing was to make sure that if it's not configured specifically to be disabled, meaning it would fall back to the Puma Worker Killer, we enable it by default.
B
The idea is that we want to move to the memory Watchdog so we can actually get rid of the worker killer, and if we were to leave it up to every single person running GitLab out there, this would just never happen, because they would have to go in and change these environment variables. So we take a slightly bolder approach here, which is to assume that the Watchdog will get the job done.
B
Well enough, I don't know, maybe even better; we'll see. So we enable it by default, and this then means the Puma Worker Killer will be disabled. It's a mutually exclusive switch, but it will read the same configuration, which is the per-worker max memory.
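The switch described above might be sketched roughly like this; the variable names (`GITLAB_MEMORY_WATCHDOG_ENABLED`, `PUMA_WORKER_MAX_MEMORY`) are illustrative assumptions, not the actual configuration keys:

```ruby
# Hypothetical sketch of the mutually exclusive switch: enabled by default,
# only an explicit opt-out falls back to the Puma Worker Killer.
def watchdog_enabled?(env)
  !%w[false 0].include?(env.fetch('GITLAB_MEMORY_WATCHDOG_ENABLED', 'true').to_s.downcase)
end

def select_memory_supervisor(env)
  # Mutually exclusive: enabling the Watchdog disables the Puma Worker
  # Killer, but both read the same per-worker memory limit.
  {
    strategy: watchdog_enabled?(env) ? :memory_watchdog : :puma_worker_killer,
    limit_mb: Integer(env.fetch('PUMA_WORKER_MAX_MEMORY', '1024'))
  }
end
```

The point of reading the same setting is that existing self-managed configurations keep working without anyone editing environment variables.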
A
Yeah, that's cool. The next one is for me. This is a little one; it's about our metrics exporter failing specs on macOS. It was fixed relatively easily.
C
Yeah, this was for the Ruby 3 audit; I think last week it was still in review. It's been updated to 1.0.1 from 0.9-something, a major change: just some Ruby 3 support, some improvements with performance as well. Nothing very interesting that we can actually use.
C
This was a bug in the deployments API. It was kind of related to the previous one, where I wanted to test if it worked with the specs, and then I found this slow spec. It turned out there was just a small bug, a one-line change, where we tried to call a variable that didn't exist, which caused a timeout after five minutes, basically.
A
Really cool, thanks for fixing it. Okay, that's everything in closed. Let's move to verification: this is from Nikola, and I will keep it as is; it's another issue related to this effort to streamline the Watchdog and Puma memory killer. And let's go to blocked; this one is also from Nikola.
B
Still blocked. I can talk to this because we were both working on it. It is still blocked because I found another problem, with memory use in the GitLab metrics exporter (GME), which looks like it is caused by a spike in sample volume emitted from the Rails app, which we didn't have... well.
B
We have metrics about metrics, meta-metrics basically, but I don't think we have any alerts defined on those, so it went unnoticed, and I only found it because it resulted in a knock-on effect in GME, which was very high memory use.
B
What it looks like is that every so often, not all the time, Rails will generate an enormous spike in the amount of samples it emits, for some time, not for very long, I think it's just minutes, and then it drops back down. These samples are written to disk, and the exporter doesn't look at file size, right?
B
It just streams these things from disk, processes them in memory, and then renders them back out. So this can cause very high memory use in GME, and this is how we noticed the problem. It's not currently understood what the root cause is here, so I opened two follow-up issues, one to investigate why this is even happening, because it doesn't sound right that the Rails app has these spikes in the samples it produces; that doesn't sound healthy.
B
So that's the root cause we need to get to the bottom of and fix, but another related issue is that we should make GME a bit more conservative with regards to what it actually ingests. I don't know yet how we would best go about this, but I'm open to ideas; I guess there are many ways to go about it. We could look at making it more constant in memory use.
B
We'd hold no more than a buffer of, you know, X megabytes in memory, and anything else would just have to wait. That would most certainly increase latency, but that's maybe the right trade-off.
B
Or we could even do something else; I haven't really thought it through, but we probably need to do something about it, because it will take memory away from the cgroup, which is shared with the Puma and Sidekiq processes.
B
That can result in the memory killer kicking in, like the Linux OOM killer kicking in, and processes being reaped that shouldn't be reaped, and all of these things.
B
I still consider this blocked because it is running on staging, and we have seen these problems on staging. It hasn't caused any major issues right now, so I haven't rolled it back, but I also don't want to proceed with putting it into production with these kinds of problems still going on.
A
I agree. I've already started looking into GME since Friday, when we discussed it, so I will prepare something to try. The next one is also yours, Matthias, and I suspect we are blocked by the same thing.
B
Yes, this was another interesting one. We added Google Cloud profiling support for GME, and it worked, but we then found out, via a very funny bug, that we're actually not able to enable this in production if there is another Go service running on the same pod, not even in the same container, that also uses Cloud profiling.
B
It's called continuous profiling in LabKit. The reason is that this is configured with an environment variable, and the way our charts work is that they apply these at the pod level. So you set it at the chart level, and then the way these charts currently operate, it will be set in all containers configured for this deployment.
B
So that then meant that GME was overriding Cloud profiling for Workhorse, and this caused a really funny bug where they were both sending stack frames into Stackdriver on Google. The profiler was showing a mixture of GME and Workhorse stack frames, which was very, very strange. It's very messy, and I don't know how to fix it. I created an issue in the distribution backlog, because I think this might be a breaking change in the chart.
B
I don't really know how to fix this either. You'd need to patch the templates it uses to deal with this. I think Chase, I want to say, mentioned he would have a look, but he hasn't responded yet.
B
We don't want to disable it for Workhorse, that's for sure, so I had to roll it back in GME. We can't continue with this until that's fixed at the chart level, unfortunately.
A
Okay, thanks for working on this. Moving to in-depth, this one is also from you, Matthias.
B
This builds on top of our diagnostic reports stuff and the memory Watchdog, actually. The idea is that we'll add a second report, which is an ObjectSpace dump, for diagnosing issues with high memory use and memory leaks. I have a POC for this ready; it's just waiting on some other refactors we need to get done. The idea here is that we hook into the worker-stop event, which is triggered by the memory Watchdog whenever a worker's memory use gets out of hand. The stop event will then in turn trigger a heap dump, which will be written to disk, picked up by the uploader, and put into GCS, where we can then download it. This one is a little trickier than the existing report that we have, because the files can be so large.
B
I have a super rough POC sitting on my laptop, not in any MR yet, that works: we just take an ObjectSpace dump and then gzip it. I just need to go back to a little refactor we were working on, to extract some code that we will want to share between these reports, because otherwise there will be a lot of duplicated logic around logging reports, putting metrics into Prometheus, and how files are named and correlated with the UUIDs, which we don't want to duplicate. So I kind of consider this refactor part of this overall issue, and that's what I'm working on right now.
A
Yeah, amazing. This one is from me; I actually haven't started working on it. It's just a very small issue; I will pick it up when I have an hour or so of free time.
A
The next one is a little bit more interesting: that is what Matthias mentioned about reducing memory usage of GME. On Friday I paired with Matthias and we went over how our GME actually works. As Matthias mentioned, we have a couple of ideas for how we could immediately reduce the memory usage by not reading the whole file into memory. This is just an initial step, and we could go even further and not pass huge amounts of data between functions.
B
Yes. And then the question is, you know, what is a reasonable cap. Just from my experience, having worked on this and having seen what it typically uses for a typical production... sorry, I'm mixing up too many different issues... for a production sample set from Puma, I saw 80 megabytes is totally reasonable, so I think that's a good aim. You know, plus or minus 10 or 20 megabytes, that's fine.
B
I don't think it needs to be more than that many megabytes. I think the issue title's a bit misleading, because of course we need to buffer them in memory; what I meant to say was that we should not allow this to grow in some kind of unbounded fashion. And it currently does, because we do not pay attention to file size or the total volume we process, for reasons of latency improvement, and because it worked fine with the sample data sets we had, even those from production. It created buffers that were very large in memory, basically the size of a single process's sample file, which in this case was in the region of 40 to 50 megabytes. But what we were seeing in production the other day was hundreds and hundreds of megabytes of samples in a single file, for a single process. So apparently those can get much larger.
B
We need to cap this somehow, like the content length, and then make sure that we copy it chunk by chunk. But then there's also what you pointed out last week: if we then still hold on to this data in memory until everything is parsed, we still have all the data in memory. So we need to see if we also might have to change this even further, to emit results incrementally.
B
Yeah, I agree, I think that's a good approach; let's do this. The great thing is that we have really good test coverage for this system, I'm actually happy to say. I mean, you found a weird inconsistency last week that we weren't quite able to explain, but it didn't actually cause problems, right?
B
We started super test-driven, right, with the comparison tests: from day one we were running acceptance tests against the Ruby exporter, so that we would always compare whether they produce the same JSON, sorry, the same textual output, for the same sample sets. This kind of stuff also helped us iterate on performance, to make sure we don't regress on behavior, and we have the benchmarks for performance tracking. What I actually wanted to say is that it's okay if we make it a little slower; I think that's probably fine. So let's move in a direction where we trade off latency for lower memory use. That's a good start.
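The comparison-style acceptance test described above can be sketched like this; both renderers below are placeholder stand-ins (for the Ruby exporter and its replacement), and the text format is a simplified Prometheus-like one, not the real output:

```ruby
# Placeholder "reference" renderer standing in for the Ruby exporter.
def reference_render(samples)
  samples.sort_by { |s| s[:name] }
         .map { |s| "#{s[:name]} #{s[:value]}" }
         .join("\n")
end

# Placeholder for the implementation under test; the acceptance test
# asserts byte-identical textual output for the same sample set.
def candidate_render(samples)
  samples.sort_by { |s| s[:name] }
         .map { |s| "#{s[:name]} #{s[:value]}" }
         .join("\n")
end

def outputs_match?(samples)
  reference_render(samples) == candidate_render(samples)
end
```

Comparing whole rendered outputs like this is what lets the internals change freely (for example, to reduce memory use) while pinning the observable behavior.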
B
We added the RSS memory limit Watchdog as a replacement for the Puma Worker Killer, and it was kind of killing two birds with one stone, because we can also use it for Sidekiq. For Sidekiq we can currently only use the Sidekiq memory killer daemon, which is the other killer we had in place originally for Sidekiq, but we're also looking to replace that one with the Watchdog.
B
We had to extend it a little bit to understand the same settings that the Sidekiq memory killer daemon was using, so this required another change. Basically, after this is done, we'll be in a position where the Watchdog can actually take on these roles for all of these different environments, which is pretty cool.
A
Yeah, okay, cool, thanks Nikola for working on this, if you're watching. And that's everything from in-depth, so let's move to our custom notes. When I was doing planning, I noticed that we have a couple of issues planned for this milestone which are not in our typical discussion loop, so maybe we could touch on them and be aware of them. This one is actually assigned to our team, as far as I know; let me double check.
B
Yeah, it's assigned to us, right.
A
Yeah, of course. So this spec is currently quarantined, but we've been asked to fix it, and currently it's P2/S2. I'm not sure that's exactly the correct priority, but I guess it's worth checking. So if some of us have some time, we could pick this issue up; maybe it will be relatively simple, who knows. Sean also started some investigation, so it may be worth checking what he found.
A
That was the first one. The second one is also interesting; I believe Nikola knows what to do with it, because I think somebody asked Nikola about this one.
A
Let me check... yeah, this is about our Sidekiq killing strategy: we have a certain condition which is not exactly handled. This is something maybe Nikola could say more about at the next meeting, but it is also P2/S2 in our team, so something we should handle in this milestone. Feel free to pick it up, or contact Nikola and maybe discuss it with him.
A
The next one is from CZ; he's not attending, so I'll read it for him. He asked if we are able to move our team meeting two hours later. I think me and Roy are fine; what about you, Matthias? Is it too late for you?
B
It's pretty late for me, but I can do it. Yeah, I can shift my hours a little bit on Monday.
C
A bit of context: this is a feature flag from the custom SLIs for global search. It's been removed from the code for quite a while now; it's just that the feature flag still exists, so I should just run the command in the Slack channel to remove it. I'll do that later this week.
B
There are a lot of P2s piling up, and there was another issue that we were asked to work on. I forget which one it was; I think it was also something Redis-related.
B
So should we maybe talk about who will work on what? Because otherwise I'm not sure these things will get done. It's too bad Nikola isn't here, but I think it's usually better to assign people to these things, because otherwise the can just gets kicked down the road.
A
Sounds great, thank you.
B
I mean, I can also help with something. I was actually thinking about the Sidekiq thing; I remember this being quite complicated: all of these different delays that we inject, jobs being retried, all these retry strategies and stuff. It was super complicated, because it changes the database handling strategy.
B
Basically, based on what happened before, should it then go with reads and writes to the primary, or is it still okay to go to secondaries, and stuff like that. I guess this is one of those areas where I'm just a bit afraid there's a lot of tribal knowledge stuck in Nikola's head, so I think we would probably need Nikola to drive this, but it would probably be good if we pair on it. We can suggest that when he's back.
B
You know, one of us can pair with him on it.
B
On Ruby 3, there aren't any big updates, though we should say we made really great progress on the audit. This last push that Tong and CZ facilitated, which I'm grateful for, was very effective. And thank you as well, of course, for going through the sheet and picking up anything that wasn't assigned, and all of these things. So we're at a 97 percent review-done rate, which is great. Review-done means it got eyeballs, basically, and a decision was made on whether the dependency we use is Ruby 3 compliant or not. It doesn't mean our dependency footprint is 97 percent Ruby 3 compliant, unfortunately.
B
Sometimes we find there is a problem, and then we break out a follow-up issue which gets addressed later. So those that are currently flagged as still needing to be addressed are about 20 percent of the whole set of dependencies.
B
So there's still some work to do, but it's really good progress: just a few weeks ago I think we were at like 70 percent, so there was a pretty big jump. Other than that, I'm currently looking to form kind of a tighter task force.
B
The reason being that we still have problems in some areas, like CI/CD and also the GDK, which kind of stops us from making further progress, because we need to resolve that first in terms of manually testing against a Ruby 3 based environment. We've accomplished a lot asynchronously so far, but I feel like we're at a point now where it would be useful to have regular check-ins, and to have DRIs on all of these different teams that we know we can reach out to directly, to improve the efficiency of the communication a little bit as well.
B
One example, for instance, was the whole gitaly-ruby thing, which has been going on for probably nine months or so, where it's still not clear to me: is it already Ruby 3 compliant, or do we need to make changes? I know gitaly-ruby will be around for a while, but that might not be a problem if the things that remain are Ruby 3 compatible. So sometimes there are these kinds of nuances, I don't know why.
B
The checklist as well was a very small attempt.
B
It's debatable, of course, but I think these are really the critical things: they need to happen before we can even think about the migration, and unfortunately I don't think any of them are fully done. So I feel like there's a bunch of areas where we're at, like, 98 percent, but we just have to get it done fully so that we can move on.