From YouTube: Plan | Weekly Team Meeting 2020-03-18
A
So I think we all submitted our kickoff videos yesterday. I know for the Project Management group (solution validation, basically design) we need to improve the math of the view and tracking scope change there. That's a recurring theme that is blocking people from using GitLab. And then, after that, we'd kind of move over to sprints, and then replicating some of that stuff into the sprint view as well.
A
The three features: shipping the burn-up chart, to start with on milestones. I think the underlying data issue has been merged and is about to go through verification, and then pushing more on the real-time updates of assignees in the sidebar. John Hope did spin up a working group for this, so I think there's an MR that's open. If anybody is interested in joining that: there's been some interest from other stages as well that are interested in doing things there.
A
So that's pretty cool, and that should hopefully help Henryk out a little, because it's a big ask for one person to solve all the problems that need to be solved with that. It's pretty hairy, so I'm glad we're able to get him support. And then shipping the JIRA issues importer MVC. There's some other confidential refactoring and tech debt, like some of the filters being slow, and I didn't link those here because they're confidential spam-finding issues.
B
Just on the real-time thing: the real thing that's holding us back is things that depend on other teams. So, you know, we're gonna have a lot of open questions about how we deploy this and how we monitor it, rather than about the feature itself. I think Heinrich's made a lot of progress with the feature.
A
Cool, yeah, he has, and that's why I think a working group is the right move here. Just because the async nature that we have is great sometimes, and other times it makes things really, really slow. So hopefully this will help out with that, I think. I don't know. Keen, are you here? Yeah? Sorry.
A
Want to go through your stuff? Go quick. The goal of doing this is just making sure that we didn't put anything in here that is wildly inaccurate, not achievable, slash likely to draw concerns from folks on the teams that we have committed it to. So, yeah, Keen, if you wanna run through your stuff or not, that'd be awesome if you did.
C
Nothing too crazy or surprising, and yeah, I think Donna... I think we looped in on those, so I think we're good. Cool.
D
Good, keep going. Solution validation: we're still working on the quality management MVC. That's been ongoing now for about half a release; probably another release's worth of solution validation to do on that one, and we have requirements management to keep us busy in the meantime. So nothing should be holding anything up, but we want to get it done. From a design perspective, there are a couple of design things out there that Nick's working hard on in the issue.
D
Participants' email addresses he's working through, and then, once we get requirements done, we wanted to put the count next to them. That was something that got pushed out of the MVC because it just wasn't that critical, but it's a small one. Hopefully that is something we can just sort of throw in the hopper as, you know, more work to be done toward requirements, to sort of improve the user experience or quality of life there. I don't think there are features or anything surprising.
D
The MVC for requirements did not make 12.9, so we're into 12.10. I think everything I've heard thus far is that it's not going to take all of 12.10; it should be done relatively quickly. So that's good news. This is in there; it's not a high priority, but I wanted to make sure. There's other work there, and we were gonna circle back and figure out what else we can do toward requirements to make the MVC even better, if possible, or, if it's not possible in this release, then we're gonna sort of schedule it moving forward.
A
One other random piece of feedback that I got from a customer this week about blocking issues: they said that, on the issue board, they want to click on the icon to more or less filter the issue board by all the things that are blocking it, or the inverse, see, like, the issues that are blocking something else. We can talk more about that offline. That was the feedback, yeah.
D
I mean, I think that's getting a lot of adoption. The other thing I wanted to talk to the engineering managers about a little bit, or just engineers in general, is: how do we add in metrics to collect data on this stuff? Because we don't know right now who's using blocking issues, and it's very difficult to try to find that data currently with the measurements
D
We have right now. I'd like to kind of make a push, probably in 13.0, or maybe, I don't know exactly when, but to make sure we're adding in the hooks to collect data on, you know, requirements management usage and blocking issues usage, just because I think these features are being utilized. I've heard a lot of great feedback on them, but I'd like to start getting numbers and sort of track
D
Adoption of these things, just sort of in the back hopper. So I'm gonna create an issue for that, and we can kind of discuss in general, for Certify, what we need to do; then we can promote it to an epic and create some sub-issues and start kind of getting that work done. If these are small changes, this could also help us fill out
D
There's an Excel sheet; there's actually an issue out there right now that was started by, I think, the Telemetry team, where we are gonna start, you know, sort of figuring out what we need for telemetry. But that doesn't really cover the implementation of it; that's sort of, once the data's in, how they're going to mangle the data so we can easily use it. We need to get the hooks in there, I think, to actually provide the data at some point as well. So, well, there's...
A
Two problems there. I think we've been advised, or it's been encouraged, not to use the word telemetry anymore, because that's too big. You know, there's a merge request for that, and it's in the handbook now. Because self-managed will need to use the usage ping, and, like, the hooks or whatever, it all just derives from database queries, basically. It doesn't use anything from end-user behavior, like how we collect stuff on .com with Snowplow. So you can look at it.
A
So that's the lowest common denominator you can start with, and then, if you want, like, a subset example with some more granularity about behaviors, then you can look at some of the data from Snowplow and kind of understand the user behavior in more detail either way, and treat that as, like, a sample size, basically. It's just a little context for when you get to it, yeah.
D
And the more I look through that: I mean, just as a top-level, sort of 10,000-foot view of things, the data team (or Telemetry team, or whatever they're gonna rename themselves to) is doing an excellent job of trying to provide these metrics, and they've got the SMAU charts out there. The caveat is they're only doing it for specific pieces of data in Plan, and none of the Service
D
Desk things are in there currently. And from what I can tell, they have all this raw data in a database, and then they're mangling it with queries to bring it into a database where they can display it. Our data is not being brought into that database; the Certify data is not being brought into that database yet, because it's low priority.
D
So we don't have Service Desk information right now. Not that it's necessarily the right metrics, but we don't have anything. So, unless I go and look at raw page views, which is hard for Service Desk, because they're just issues, and the raw page views don't take into account the emails that come in, it's very hard for me to sort of get any read on Service Desk, which is the one, you know, feature that's been launched. And there's nothing that I can tell right now for blocking issues that we can look at, even with page views.
A
I think you just need to update the usage ping, because it's stored in the database. If it's stored in the database, then it can be retrieved, and that's really just updating the query and the thing. So I know Service Desk issues are pulled out of the usage ping right now; it's not just page views. It's how many issues were Service Desk issues, which is sort of how many uses, not how many people emailed in.
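To make the point above concrete (that adding a usage-ping-style counter is mostly just writing another query over data that is already in the database), here is a minimal sketch. The schema, column names, and counter names are made up for illustration; they are not GitLab's actual tables or usage ping keys.

```python
# Illustrative sketch: a usage-ping-style counter is a database query
# over rows that already exist, not an event-tracking instrument.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE issues (
        id INTEGER PRIMARY KEY,
        service_desk INTEGER NOT NULL DEFAULT 0,  -- 1 if created via Service Desk email
        blocked_by_id INTEGER                     -- set when another issue blocks this one
    )
""")
conn.executemany(
    "INSERT INTO issues (service_desk, blocked_by_id) VALUES (?, ?)",
    [(1, None), (1, 2), (0, 1), (0, None)],
)

def usage_ping_counters(db):
    """Build the counters payload by counting rows, not user events."""
    (service_desk,) = db.execute(
        "SELECT COUNT(*) FROM issues WHERE service_desk = 1"
    ).fetchone()
    (blocked,) = db.execute(
        "SELECT COUNT(*) FROM issues WHERE blocked_by_id IS NOT NULL"
    ).fetchone()
    return {"service_desk_issues": service_desk, "blocked_issues": blocked}

print(usage_ping_counters(conn))  # {'service_desk_issues': 2, 'blocked_issues': 2}
```

The key property is the one described in the conversation: the counters come from rows already stored, which is why "updating the query" is usually all that is needed, unlike behavioral event collection with something like Snowplow.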
A
That is, until we do the user ID stuff, which, I think, in order to do that (I was talking to that team), they have to more or less anonymize everything, then tokenize the anonymized data, and store those tokens with the customer, so that we don't ever get it, basically. But anyway, it's complicated; they're doing a good job.
E
Real quick, though, before you move on to the next one, because I think we do have some action items there. It's twofold: we need to figure out what we want to put in the usage ping, because we can get, like, the amount... even if it's not user-specific, at that level of granularity, we can get how many issues have been marked as "is blocked" or "is blocked by". We just need to add that data to the usage ping.
E
If we want to get that for self-managed instances, which we haven't done, I think we need to go through Keen's spreadsheet and just document what we want to track there. And then we can go, on the engineering side, and add that to the usage ping. In the future, when we're creating, when we're breaking down these issues, that type of request should really be part of the acceptance criteria: for every feature that we build, we on engineering should either add something to Snowplow...
A
Okay, cool. Next: I totally missed the Jira technical discussion this morning, because I wasn't on the invite and I had my other calendar turned off, the team calendar turned off. Sorry about that. Real quick, not to duplicate the conversation, but I just want to make sure that we're not blocked on anything, or whether there's anything that I need to take care of immediately to help move people forward. Thanks, John, for adding me to the calendar. But, Abed?
F
I think we will need to address tracking the external issue identifier in the MVC, so that we don't go back to that after we release the next iteration. And then the other thing is that we're currently sharing a data structure with the other importers, but the other importers don't have this pattern of running them over and over again. So that's okay for those importers, but it doesn't work as well with JIRA, as it can conflict with another feature called mirroring. Yeah, we can sort of trick it and hack it for the MVC, but...
F
It would be good, if we get time, to get in there and see what it would take to have a separate data structure for the actual JIRA import, considering it can be run multiple times on the same project. So those are kind of the two newer things that popped up last week, yeah.
A
I spent some time responding to that issue you raised about storing the project, or the issue key, from JIRA. I basically said it's fine if we want to store it for this iteration, so we don't have to go back and re-import to get that key in the right place later, but we shouldn't do anything with it beyond that.
A
Basically, it's gonna have the same characteristics as anything imported from JIRA. I would say do the minimum amount we need to do so that we don't accrue a ton of debt, but also don't over-engineer it until we get to another importer, from another service or something, where we actually need to invest in it and make it even better, if that makes sense. So take the time to do it right, but I wouldn't over-engineer it yet.
A
I don't like hacks in production software, so I would say: let's talk as a team and figure out what we need to do to make sure it's not that. Just because that's where somebody could come along, maybe not you, a year from now, needing to work on it, and not know exactly how it's supposed to work. Anyway, thank you for bringing it up.
F
Yeah. So, on the progress of the issue: I think we're yet to start on the UI stuff, the flow itself, the import pages where we show progress, and working on how many imports are running, and so on and so forth. So that's yet to be done. What was being worked on for the last week was the Sidekiq importers, and fetching issues and, like, running them and saving them in the database.
E
Yes. On the front-end side, Colin is gonna be working pretty heavily on that; his number one priority is the importer stuff, so he'll be taking the lead on that, and we'll determine if we need to bring in anyone else to work on it. But one thing we did want to talk about was: I don't think we have... we haven't built out the APIs to start an import, have we? It's all done through our private API, using Haml, using, like, a traditional POST form. Yes?
E
We just need to know if we're going to get those APIs done, or try to get those done, in 12.10, and whether we should be building it, ideally, as a Vue app using those APIs, or if we should keep it in Haml like the POC is. So my question for you is: how large of an effort is it to expose those APIs?
G
Just a quick update on that: we spoke a little bit about showing success and failure numbers for imports, and it sounds like that's possible, but we might need to clarify what those terms mean in this context. So you and I can talk about how that manifests in the UX and the UI, but my thought was just that we would help the user to understand what we mean when we say success or fail, in case there's any gray area there. But it sounds like it should be doable to show a number to them.
A
If we can get that granular. But the more context we can add to the reporting, the better it would be, because then we can at least start logging those failures with whatever we are using for logging right now, and that way we should be able to start to see what the common problems are at scale. And then we can target fixing the specific things as they happen.
A
So I'd almost like us to make sure that we're logging that too, with whatever we're using; if we're using GraphQL or something else, that'll be a little bit different. I'll leave that up to the engineers to figure out how to do, but it would be cool to be able to look through the logs and understand the specific reasons why it's failing, so we can prioritize them.
F
We do have the logging, but, I mean, that would be slightly different from reporting it back to the user, like they raised. So, what happens if an issue is... well, it's probably not tricky to see, because right now, if we are not counting comments or anything related to the issue, the issue is just the description and summary and whatever; so, if it is not saved, then obviously it's failed.
F
But then the question is: if the issue is imported, but some comments were not imported, or one of the attachments was not imported because it was not retrieved from JIRA, different situations like that, is that issue considered successful, or probably not, and to what degree, and so on? So there is some granularity there, but probably not very... Cool.
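One way to frame the granularity question raised here is to classify each imported issue by whether the issue record itself saved, and whether all of its sub-parts (comments, attachments) came over. This is only an illustrative sketch, not the actual importer code; the class and field names are invented.

```python
# Hypothetical classification of a single issue's import result,
# distinguishing outright failure from partial success.
from dataclasses import dataclass

@dataclass
class IssueImportResult:
    issue_saved: bool
    failed_comments: int = 0
    failed_attachments: int = 0

    @property
    def status(self) -> str:
        if not self.issue_saved:
            return "failed"   # the issue record itself was not saved
        if self.failed_comments or self.failed_attachments:
            return "partial"  # issue saved, but some sub-parts missing
        return "success"

print(IssueImportResult(issue_saved=False).status)                    # failed
print(IssueImportResult(issue_saved=True, failed_comments=2).status)  # partial
print(IssueImportResult(issue_saved=True).status)                     # success
```

A three-way status like this would let the UI show honest counts while the logs keep the per-part detail the engineers want for prioritizing fixes.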
A
Yeah, let me see; we can iterate on that as we go. All right, I think the next one: Donald, I think you originally opened up the merge request to put the cycle time goal in the process improvements we're working towards. How should we start to measure that, I guess, is the question. Because I think, if we are taking the Kanban-first approach, using lead time as a measure of, like, setting expectations about when we'll be done with things is great, but we need to be able to see it somewhere. So I was curious.
E
Yeah, we should definitely... we should start tracking it. I'm interested to see, or to hear (and it would be helpful to actually see data to support it), but, even just not using data: has anyone noticed that issues have been in smaller, week-long chunks yet, or has anything changed since we started to chunk issues into smaller pieces?
A
Yeah, I'll add the documentation for that. Basic queueing theory is that any process that's operating at a hundred percent is inefficient. So, like, if we have, let's say, ten people on the team and we have a work-in-progress limit of ten, that would basically be putting the system at a hundred percent of its capacity, and you generally want to reduce that a little bit, so there's slack in the system to handle, like, other parts of the processes in different places.
A
So it's also just a starting point, and I think that's where, based on, like, the goal, I think, is measuring improvement: what does that do to cycle time? Because the working theory is that the less work you have in progress, the faster each item will, like, go through a given process, and we need to look and see how that impacts overall lead time and cycle time. Then, based on those numbers, you figure out which part of your process is the slowest one.
A
So we actually have to measure time in process for each list, and then we can focus on fixing one process. So, if it turns out review is, like, where the time is spent (it has a high cycle time within that one stage), then we focus as a team on how we can improve or reduce our time spent in review, which, a lot of the time, is probably spent waiting, which is, like, the sign of waste.
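The queueing argument above can be sketched with Little's Law, which says that for a stable process, average work in progress equals throughput times average cycle time. The numbers below are made up; the point is only that, at a fixed throughput, lowering WIP lowers average cycle time.

```python
# Little's Law: avg_wip = throughput * avg_cycle_time, so
# avg_cycle_time = avg_wip / throughput. Numbers are illustrative.

def avg_cycle_time_days(avg_wip: float, throughput_per_day: float) -> float:
    """Average time an item spends in the process, per Little's Law."""
    return avg_wip / throughput_per_day

# A team finishing 2 issues per day on average:
print(avg_cycle_time_days(10, 2))  # 5.0 days with 10 items in progress
print(avg_cycle_time_days(5, 2))   # 2.5 days with a WIP limit of 5
```

The formula only gives the average across the whole process, which is why the speaker's next point matters: you still have to measure time in process per list to find which stage is the bottleneck.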
H
Cool, thanks. Yeah, I understand this analysis, or better understanding how long some stage takes, how much time it takes. I was more curious about the specific limits we set for each of the stages, because, at this point, I'm not a big fan of these limits, to be honest, and I would like to better understand it first, before I state some negative opinion about this. I think there are some reasonable situations when it makes sense to switch context.
H
For example, there are situations when I might wait for feedback to proceed with something, and it's not very valuable to switch it from "in development" to another stage and then move it back when I get the feedback and can continue on this stuff. Also, it's handy to have some long-term or other issue I can rest on when I'm working on some high-priority stuff. So I found it quite usable to have a couple of things in progress, but I understand that switching context comes with some...
A
It's not perfect by any means, but I think the underlying goal is that, instead of picking up a new issue, how can you contribute to an existing issue, to help move it forward, that might not be yours? You know, so, like, taking the approach of: let's say there were only five work-in-progress items for 10 people; ideally, those 10 people work collectively to move those five items forward and pull something through.
A
I think that's where, right now, we've kind of had the mentality where everybody gets assigned one issue and works on that one issue, and then, when they're done with that one issue, they pull in a new issue. But I'm wondering (it's also an experiment) whether people working collectively together on issues, to move them through, will help them go faster than just dividing everything up, and then people not really collaborating that much until they're in review. That's...
H
See, it's interesting. On the other side, I wonder if it wouldn't mean that we should improve breaking down the issue in that case, because I would expect that allocating more people to an issue which is in progress requires additional time to get them into the context of the problem, and, if the issue can be divided between multiple people, I would expect that, in many cases, it also means that the issue can be broken down into small pieces. So we should typically do this.
A
Absolutely, yeah, absolutely. I think that's the outcome that we would want, and I think that's a great one. I also think, too, like, if you have your column limits in dev and in review, you should be able to have one issue you're working on in dev and then one in review, and so the idea is that something shouldn't move from "in dev" to "in review" until there's a space in review. It's like a pull system, and that way you can work more diligently with whoever's reviewing your MR to push that forward.
A
But I agree. I'm gonna record a video and I'm gonna write up some documentation kind of explaining the experiment, and I would like some help in getting some of the metrics in place to measure it. But if we find it doesn't actually improve anything, then I don't want to change processes for the sake of processes; we can stop doing it. Does that make sense? Cool.