From YouTube: Say/Do Ratio and Milestone Transition
A
All right, so I just want to set this up to help me understand how we can adjust milestone cleanup so that there is not as much impact to the say/do ratio. I tried my best to pull together some examples.
A
I was still a little confused on specific examples where there are issues that are rescheduled but are not included in the say/do ratio. And my sincere apologies if I'm overlooking things; there's just been a lot going on in life, and I'm just trying to make sure I get this right. John, thanks for the reply earlier, I see that.
A
You say it's possible that the changes improve the accuracy. Daniel, do you have any examples of an issue that went through the reschedule and cleanup but didn't show up in the say/do ratio?
B
We did have one instance from the release management team where there was just an error in it somewhere that Lily was able to clean up. Thank you, Lily. I think Lily probably has a better explanation of what actually happened there. It's not really my core concern, and I'm happy to talk about that more, but yeah, it really is a good example, I think.
C
Let me just clarify a little bit about when this issue came up. What was the problem at that time? The bot starts to migrate the unfinished issues on the 19th.
C
The 19th of each month. But our query actually closes on the 17th, so some unfinished issues are not counted correctly, because there is an 18th in the middle. The issues are still tagged with the previous release, so they are not done.
C
Yeah, so that's the issue: they're not done on the 18th, but they're not counted towards the next release either, because the query actually starts from the 18th while the issue is still tagged with the previous release.
B
Yeah, and overall, what we're doing right now with the milestone bot cleanup makes sense to me, but when we start layering say/do on it, it doesn't. I think say/do is great at being able to call out when we have things that are going to make it into some type of release. I actually had a full response drafted before yours that I had not pressed send on.
B
Because I tried to write all this down and I got distracted; I apologize. I agree with what John's saying, 100 percent, and nothing I say is meant to correct or adjust that. But in general we have a monthly cadence for releasing. That monthly cadence is appropriate for self-managed customers, and that's where a lot of our profit comes from. So it certainly matters, and we certainly need to be able to communicate it.
B
So that's a good thing that we have. But our release cadence for .com is daily; it's effectively once a day these days, so our cadence for releases there is every day. What I find interesting about the current setup is that the missed-deliverable bot, the cleanup bot for the end of a milestone, totally makes sense on behalf of the self-managed customers and .com.
B
But the say/do doesn't make sense if we're considering anything for .com after the 18th that's actually deployed and released, because it really should count up to the 21st. I suppose something could be in and completed, and I think the bigger issue here is that all of these dates, cutoff ranges, and so on are just a non-deterministic way to figure out what we put in a release and deliver to the .com or self-managed releases, and we're trying to make them reflect what's actually in there.
B
But the reality is we don't really know, because we don't know whether the build didn't work and they reverted back to the previous one; this happens. I was actually after a deterministic method of identifying what's actually in a release. My sense was that if we aligned the bot with it, taking this say/do ratio calculation and using the missed-deliverable labels.
B
That means we only have to solve this problem in one place, but it does mean we still have the discrepancy between when we need something in a release for it to be delivered to self-managed customers and when we would actually need something in a release for it to go out on .com.
B
And so that's where that big weird gap is for me, and so I was kind of trying to suggest ways we might try to solve that problem. And I think, as our say/do moves towards becoming a KPI, which hopefully it will, because I think it's an awesome metric and I love that we did it (thank you, Lily, for all your hard work on that), this will become more of a problem, because people will say, "Well, my say/do says this, but I actually released these things."
B
Two days later it went to .com, and we weren't really even targeting self-managed customers for that. So that's kind of my summary of the case and how we might solve it.
A
Okay, so kind of coming back to that comment: suppose the milestone reschedule / milestone cleanup automation is aligned to, I guess, where the MR is. Like, is this MR in the release environment, and is this MR in the gprd environment?
A
That would tell us that the MRs associated to an issue are in a release or deployed to gitlab.com. But what's hard when we reschedule issues is what assumption we make about the MRs, because it's a one-to-many relationship in theory, and it could just be a mention. Someone could just say, "Oh, this is related to this issue," and there's not a clear way to pull that apart.
C
I just want to highlight, Kyle: it's not MRs. The say/do ratio is against issues.
B
Right, yeah, so when an issue gets a label. And I think we were actually saying similar things from different directions there, John, so thank you for that. But it's because we have an issue with potentially several MRs associated with it, and we want to figure out which release they were in and where that release was actually released. What ends up happening, for a release post, which is an associated process.
B
It's a documented step to go through each MR with the current milestone to determine whether that MR was actually in production or in the verification stage. I haven't quite solidified that in my head, because I was just reading it today, but you figure out whether all of them, on prior milestones and the current milestone, are actually in production; if so, that issue is considered released and not a missed deliverable from a release post perspective. And then there's how that aligns with what I think you were saying, Chun.
B
There is a bit of a mismatch here with how we're considering the releases, because you could have 50 MRs and there could be one missing, and it still doesn't end up in production. Whether it's considered released depends on the product manager. Sometimes the product manager will say, "Oh okay, yeah, we could just push that off to next time. Let's hand it off. That's okay."
A
Yeah, so I think we need to look into that gap. Lily, I'm typing up comments on the issue as we're going along; I think that's a gap we need to look into, because it was unclear to me. I wasn't necessarily looking for issues that were open on the 18th and what milestone they're in. I was really looking at issues that went through the transition, to see whether they are included in the say/do ratio. So my apologies for the misunderstanding there.
D
So, Chun, if you could just provide a little bit more clarification; I think I'm a little bit confused. On the 18th, if it was still in that previous milestone, it would still count as "say" for the previous milestone. But if it's open on the 18th, it wouldn't count towards "do" for that previous milestone. And if, on the 18th, it hasn't been rescheduled yet.
D
Until it's rescheduled, it wouldn't count towards the next milestone, but once the bot reschedules it to the next milestone, it should count as "say" there, and then whether or not that issue closes during that iteration period, it would either count as "do" or not. But it should be counted towards the previous milestone if it was still there, even if it's open on the 18th.
C
Let me start out: I think that was not quite right at that time. Because right now, and you did some iterations there already, we are using the workflow labels: workflow::verification and workflow::production. Those labels will help, but that requires extra steps, so that's the operational concern. I think at that time the issue was still tagged for the previous release.
C
Oh, I know what happened. No, the issue was still tagged with the previous release; it still stayed open on the 18th. I checked my comment there. In the example, I think there were cases where it is not counted as "say."
D
So, in the Slack thread, when I was doing some investigation, there were three of them that I looked at. One of them was actually counted as "say," and then the other two actually weren't counted, which is what Kyle was finding too: they weren't counted because they weren't a part of Product.
D
So those issues weren't counted because they lived in a project outside of the Product group list, and that's why they weren't counted for "say." It didn't have anything to do with the dates particularly; it's more that they just didn't live under there as part of Product. That's why they weren't getting counted.
A
Worth looking into, because I let this sit for too long. This is something you identified that we never came back to and said, "Here's an example that fits this criteria; this is what happened from a say/do ratio perspective; this is what happened from a milestone cleanup perspective." I can take that on. Again, this fell off my radar, and my apologies for that.
D
From a query perspective, just generally speaking about how I've built the logic: basically, if you were looking on the 18th of the month and something wasn't counted, the reason it wouldn't be counted would be either because it wasn't part of Product or because the bot had yet to reschedule it and it still remained in the previous milestone.
B
Say I look at my team's performance, any one of my teams, on the 17th (the 18th is not important for the point I'm making). If I look at it on the 17th, it says 60 or whatever, and then I go look on the 20th.
D
The "say" metric, the number they're listed at, should still be the same. But for "do" as well: if they've closed anything from the 17th, which is the cutoff, to the 20th, we wouldn't count that as "do" unless it had the workflow::verification or workflow::production labels applied before the 17th and they then closed it between the 17th and the 20th. So the say/do ratio, I would say, would be improved for that previous milestone you were reviewing.
B
Right, so in that case. And John, maybe this is wrong, but this is kind of how I interpreted your comments, although I don't think you gave a more specific example: what happens at that point is true only for self-managed customers, not for anyone on .com. And, as I'm sure I don't need to tell everyone, we have a bunch of really important customers on .com, and it's super not true for them.
B
And it's out, and so... go ahead, John.
C
Sorry, you're correct, you're correct. My concern is primarily about the self-managed releases, because the say/do ratio is tied to the release cycle.
C
My concern is about self-managed, and I do think Lily did more iterations there, and using those two workflow labels solved the problem I had, largely. It's just that the team members have to remember to apply those two labels. That's an execution issue, but I do think those two labels will solve my initial problem.
B
Yeah, and I think my concerns were, well, they're related but slightly different, in the sense that I tried to make this point, maybe poorly, and no one quite understood what I was getting at.
B
But the point I was trying to get at there was: if we only care about self-managed releases, and again, I know a heap of our profit/income comes from that, so we should super care about them.
B
Then the say/do as it is right now is perfect; no problem. But as soon as we start considering .com: the missed-deliverable labels get run on the 22nd, or, I forget, Kyle, it depends on a few different things, but 22nd best case scenario. They'll get a missed-deliverable label at that point if they're not closed.
B
But in fact, from the say/do ratio's perspective, they were missed deliverables on the 18th. So there is a gap there in how we're communicating and reporting. And just to reiterate a little, my main argument here is: if we're going to use this for a KPI, that gap is problematic.
B
A completely different approach, just to maybe think about, is we could have separate labels for missed self-managed deliverable and missed .com deliverable, and that would completely identify where the discrepancy is that I'm trying to call out in this conversation. On the 18th an issue would instantly get the missed self-managed deliverable from another bot run, and then anything that didn't get a production or verification label by the 20th or 22nd, or whatever it is, would get a missed .com deliverable, or whatever we're calling it. Does that sound good?
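The two-label idea could be sketched as a single hypothetical bot pass; the label names and day thresholds here are placeholders invented for illustration, not existing labels:

```python
from datetime import date

SELF_MANAGED_CUTOFF_DAY = 18  # self-managed release cut discussed above
DOTCOM_CUTOFF_DAY = 22        # best-case close of the .com deploy window

def labels_to_add(today: date, is_open: bool, deployed_to_production: bool) -> list[str]:
    """Hypothetical bot pass that applies two separate missed-deliverable
    labels, one per release cadence."""
    labels = []
    # Self-managed cadence: still open at the self-managed cut means missed.
    if is_open and today.day >= SELF_MANAGED_CUTOFF_DAY:
        labels.append("missed:self-managed-deliverable")
    # .com cadence: no production/verification evidence by the later cut.
    if not deployed_to_production and today.day >= DOTCOM_CUTOFF_DAY:
        labels.append("missed:dotcom-deliverable")
    return labels

print(labels_to_add(date(2024, 6, 18), True, False))  # ['missed:self-managed-deliverable']
```

Running one pass per cadence is what lets the say/do ratio and the release post read the same labels while still distinguishing the two dates.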
D
Then just to clarify: I think what you're questioning is more the numerator, the "do" portion of it. Right now, like you mentioned, it works as: if it's closed or has either of those two labels, we count it as "do" or not. But you're suggesting we can't do that because of the self-managed releases, and so in that case you would recommend, instead of using those labels and some sort of date, just using the missed-deliverable label?
B
Yeah, that's one approach I think we can take for sure, and I'm not saying we do this; I'm really just trying to explain the situation. I'm just looking at the time, but the other thing this brings up for me is that a lot of teams are trying to have more iterative processes that aren't purely focused on a monthly cadence. That's actually been encouraged at some level, and those people are adding the deliverable label mid-milestone.
B
You know, two weeks into something, so it wouldn't have been in the "say." So for those teams, if it's calculated just at the start of the milestone, then they're kind of taking a ding for not having put the deliverable label on right up front, and so there's a couple of ways... sorry, go ahead.
D
I'm going to just add there that if they added a deliverable label, like two weeks into the iteration, it still counts as "say." It counts if it's added at any point during the milestone, the 18th of the month through the 17th of the next month. At first we did it where it was the beginning of the month; then we realized, oh no, a lot of teams were kanban style, so we changed it.
B
Okay, that's my misunderstanding, thank you. So yeah, the few options I proposed were: you could have separate labeling for self-managed and .com. I'm not saying we want that; we already have a million labels, so that's not ideal from that perspective. The other alternative would be, as you just mentioned, to have the say/do ratio based purely off the labeling.
B
If the say/do was based purely off the labeling, it would be consistent with the issue, with what was happening on that issue, right? When we looked at the issue, that would be our single source of truth, and if it had a missed-deliverable label, that would be reflected in the say/do ratio.
B
The issue that brings up is that the bot runs on the 21st or 22nd or whatever, based on whatever cadence it's running on, and the say/do is not going to see any of that stuff until after the 22nd, because the bot wouldn't have run by then. So I think another alternative we could consider is actually running that bot on the 17th or 18th as well, so it would be updated at that point.
B
We could make some assumptions about it; maybe it runs twice, on the 17th and 21st, or the 18th and 22nd. At that point we could run the say/do and it would appear to be working as it is now: it would reflect whether something was missed from a self-managed perspective, and it would also then be updated at the end of the month as well, or it could be updated on an ongoing basis.
D
I think if that were the case, and there weren't any other implications, that would be best from a query perspective. And just like you mentioned, there would be a gap in reporting, because we would now have to be explaining self-managed release status versus, like, release management say/dos, which is a bit complicated from a query perspective. But I will say, if there are implications, we definitely could break it out to see both.
A
Yeah, so I definitely don't have all the answers. The earlier iterations of having this run earlier than it does now created a lot of confusion on the product side, and that happened right as I joined. So, to be completely honest, I am not fully versed in the impacts; I can look into it a little bit closer.
A
I still like the idea, I guess, of using missed-deliverable: running it on the 22nd, using missed-deliverable as the single source of truth for the "do" as well, and then adjusting the logic from what it currently is. I don't know what that looks like, but just making it a little bit more accurate than "it's an open issue," you know, right?
B
Yeah, and I think the confusion would be really plain to people who are like, "Well, the milestone ends on this date; why would I be getting this missed-deliverable now?" That's why I was saying that the separate labeling for self-managed and .com would be a way to do that, but I think it's a pretty awful way to do it; I'm hoping there's a better way. But again, it's really just calling out that we have two dates.
B
We care about two dates right now, and the reason is that we actually have two separate release cadences for our product, depending on whether it's self-managed or .com. The way we square those could be just a note at the top of the say/do dashboard: "Hey, anything merged or put into production after this date won't be reflected here, but the labeling will be accurate by the end of the milestone." We can say that, too.
B
That might just be the simplest thing to call out here, rather than running off and changing things around. But part of the reason Kyle is suggesting running that bot twice would be so that those things are updated at two separate times that reflect our current release cadence.
B
Best-case release cadence, right. And I don't know if that's possible or bad or whatever, but that would be a way to say, "Hey, this is a shorthand: we're going to run it on the 18th as well as the 22nd." Then on the 18th at least it represents what's in the say/do and what's true for self-managed customers, which I think is important.
A
Yeah, so, mindful of time: I think what would help me is looking at that process you talked about, the release post process, which describes what exactly engineering managers do. I'm not super familiar with that, so if you could point me to it, I'll do some research to look into this. It seems like the data exceptions are likely in a better state than they were before.
A
Am I right in all of that, or am I way off on something? So I will research that process. Dan, if you can send that over, I'll research it and see how we can tweak the rescheduling policy and come up with a better proposal by next week. I'll touch base in the issue async, mention this whole group, and if we need to get back together, we can. How does that sound?
C
And just to reconfirm: the latest say/do ratio report, by looking at the workflow::verification and workflow::production labels, solves at least 95 percent of the issues I have seen for my teams. So yeah, that should be a good solution for us.
A
Okay. I've been typing notes in the issue as we went along; I'm going to edit those. I recorded this, so I can upload it to Unfiltered and post it on the issue as well. But the main next step is on me, and I again just want to reiterate my apologies for the slipping. We'll get something figured out. Let us know.