From YouTube: 2020-09-15 Plan Dogfood Working Group
B
Yeah, much better. I was kind of sick last week, but got better over the weekend, so it was good. It's good.
B
The smoke from the west coast of the US has reached the east coast of the US, so our sun here is really weird looking, which is just really upsetting.
B
Hopefully they get some rain soon to take care of things.
B
Not out here, I mean; it's just kind of a weird hue. But a friend of mine who lives out in Oregon said it's like hell on earth out there. So I don't know, Mike, if you're dealing with that now.
C
Oh yeah, and I believe Eric probably is too. You're in Pleasanton or Walnut Creek, right?
D
Danville, but yeah: east bay, Tri-Valley area.
C
Oh, be ready: we have two air purifiers, one working and one on the way.
G
They're a lot smaller. We had that crazy snowstorm, you know, that 50-degree temperature swing; that really helped contain them. So they're still there, but they're controlled at least, or not that bad.
A
That's also a way to clear out the fire, with a snowstorm. Cool. I don't see Justin on the call yet. Keenan?
C
Did you want... I think Gabe asked me to do it. Tim, UX is somewhere between product and something else, so it'll do. So yeah, what Gabe wants to talk about is that we dogfood iterations. From a process standpoint, an iteration is a one-week time box to which we assign issues.
C
Oh yeah, it just started. Okay, and yeah: an iteration is a one-week time box. We assign issues that we want to complete from a done-done perspective, which means ready for development at the start of the iteration and closed by the end of the iteration. So the question is: how do we measure lead and cycle time for issues, so we get feedback about whether we're focusing on smaller batch sizes and improving our lead and cycle times?
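As a rough sketch of the measurement being asked about here (the timestamps and their sources are hypothetical; "work started" might come from a label event or the iteration start, which the group hasn't settled yet):

```python
from datetime import datetime

def lead_and_cycle_time(created_at, started_at, closed_at):
    """Lead time: issue creation to close.
    Cycle time: start of work (e.g. moved to In Dev) to close.
    Arguments are ISO-8601 timestamp strings."""
    created = datetime.fromisoformat(created_at)
    started = datetime.fromisoformat(started_at)
    closed = datetime.fromisoformat(closed_at)
    return closed - created, closed - started

# An issue created Sep 1, picked up Sep 9, closed Sep 15:
lead, cycle = lead_and_cycle_time(
    "2020-09-01T09:00:00", "2020-09-09T09:00:00", "2020-09-15T09:00:00")
# lead.days == 14, cycle.days == 6
```

Smaller batch sizes would show up as both numbers trending down across iterations.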
C
Yeah, I think what becomes smaller would be the batch size, and I believe that means just the size of the issues is smaller, because we're planning in one-week increments.
D
So I would expect, then, that you'd be able to do incrementally more issues if they're smaller. Or what you could do is look at the changed lines in the diffs of the MRs that are linked to the issue, if that's a tight coupling, to actually see the size of the work shrink; thinking, you know, if there's less scope, there's less code that has to be changed. But you probably need to get to statistical significance to actually see that, because that can be very up and down.
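The diff-size idea can be sketched like this (a minimal illustration; the diffs would come from the changes of the MRs linked to an issue, and the fetching step is not shown):

```python
def diff_churn(diffs):
    """Count lines added and removed across a list of unified diffs,
    as a crude proxy for the size of the work behind an issue."""
    added = removed = 0
    for diff in diffs:
        for line in diff.splitlines():
            # Skip the "+++"/"---" file headers, count real +/- lines.
            if line.startswith("+") and not line.startswith("+++"):
                added += 1
            elif line.startswith("-") and not line.startswith("---"):
                removed += 1
    return added, removed

sample = ("--- a/app.rb\n+++ b/app.rb\n@@ -1,2 +1,2 @@\n"
          "-old line\n+new line\n+another line\n")
added, removed = diff_churn([sample])
# added == 2, removed == 1
```

As noted above, a single issue's churn is noisy; the signal only appears once enough issues are aggregated per iteration.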
A
Donald, John and Jake: it could be correlated to lines of code and changes. How is it looking in plain reality?
H
I was actually just looking for this MR, which has come up this week: maintainers have asked that we include more context in MRs. In other words, that we don't ship things that are a database change, followed by a model change, followed by a controller change, and so on; that we don't horizontally slice issues, but instead deliver vertical feature slices.
H
That would naturally encourage growth in MR size if we go ahead with that, in my opinion. It also increases the feedback cycle between the time you get a review and the time you start writing code, which is generally, quote unquote, a bad thing.
H
In terms of measuring cycle time, this comment: just for the first instance, I know Donald's been doing this manually, and I don't mind doing it manually the first few times, because I think we should feel that pain before we resolve to build something.
H
So we build the right thing, and kind of identify which things are causing us the most pain before we build them, if that makes sense.
E
It was mainly about the different stages in the product development flow, and how long we spend in each of the stages. From what it looked like, if you really want to measure that, then it seemed like we'll have to check the label events; basically it's all about measuring how long an issue had a particular label, and stuff like that, right? So yeah, that's the gist of the discussion that we had, and I've also added the link here, just in case.
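The label-event measurement could look something like this (a simplified sketch; real resource label events carry more fields than the (timestamp, action, label) triples assumed here):

```python
from datetime import datetime, timedelta

def time_in_label(events, label):
    """Total time an issue carried `label`, given chronological
    (iso_timestamp, action, label) events, action being 'add' or
    'remove'."""
    total = timedelta(0)
    added_at = None
    for ts, action, name in events:
        if name != label:
            continue
        t = datetime.fromisoformat(ts)
        if action == "add":
            added_at = t
        elif action == "remove" and added_at is not None:
            total += t - added_at  # close the open interval
            added_at = None
    return total

events = [
    ("2020-09-09T00:00:00", "add", "workflow::in dev"),
    ("2020-09-11T00:00:00", "remove", "workflow::in dev"),
]
# time_in_label(events, "workflow::in dev").days == 2
```

Summing the intervals per workflow label gives the time spent in each stage of the product development flow.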
H
Yeah, I know Keenan has an issue at the minute to look into using system notes for one of the North Star metrics. I know there's some feedback that the system notes table is enormous, and any join that's done on that table will basically time out before we can get usable data from it.
H
But I've actually used system notes in the past, just to quantify when labels were added and removed from an issue. Actually, if you scope down, you can do it through the API, but you have to apply a lot of filters first; in other words, you know in advance which issues you want to track, and then you can grab the system notes quite easily. I mean, it's a bit of work to actually build those scripts, though.
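A sketch of that filtering: pulling label changes out of system notes once they've been fetched for a known issue. The note-body format assumed here ('added ~bug label') is an assumption, and real bodies can mention several labels in one note, so treat this as illustrative:

```python
import re

LABEL_NOTE = re.compile(r'^(added|removed) ~"?([^"~]+?)"? label')

def label_changes(notes):
    """Extract (created_at, action, label) from system notes whose
    body records a single label being added or removed."""
    changes = []
    for note in notes:
        if not note.get("system"):
            continue  # skip ordinary user comments
        match = LABEL_NOTE.match(note["body"])
        if match:
            changes.append((note["created_at"],
                            match.group(1), match.group(2)))
    return changes

notes = [
    {"system": True, "body": "added ~bug label",
     "created_at": "2020-09-09T10:00:00Z"},
    {"system": False, "body": "looks good to me",
     "created_at": "2020-09-10T10:00:00Z"},
]
# label_changes(notes) == [("2020-09-09T10:00:00Z", "added", "bug")]
```

Filtering to a known issue list before fetching is what keeps this cheap, which matches the point about scoping down first.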
G
So we have it on .com for what I'm trying to solve for, but the problem is we can't unify it across self-managed. So we're trying to get a gmail representation for portfolio management that's actually showing interactions with the epic artifact, and not just the action of creating it, and that's why we need that additional layer of...
B
I don't want to put Donald on the spot, but I would ask: Donald had talked to me a bit about how he does the iteration planning, and that might be helpful. Donald, when you're working with your team, how do you kind of determine what goes into the iteration? Does that come out of your one-on-ones, and then you pull it all together?
F
I do it somewhat more asynchronously than that: typically, at the beginning of a week, before the start of the iteration week (which is a Wednesday), so on Tuesday, the frontend engineers on the team will take a look at what we have in Ready for Dev and determine what they're going to take on in the next iteration.
B
Wednesday, yeah. I guess I can't speak for Gabe from the product side. I know part of the pain that Gabe's representing with that question is: we have one week, and if it takes, you know, a 24-hour time frame, with everybody across various time zones, to kind of get the iteration plan put together, then that's, you know, 17% or whatever of the iteration itself, just getting it planned.
F
Yeah, and I think he's also referring to the end of cycle time and lead time, in that the traditional definition of those is the end being when you deliver it to a customer. So that means it's been through reviews and it's out on production; and figuring out a way to get all of that done, regardless of how small the iteration is, with our asynchronous nature, is a bit difficult, because at minimum, for most things, you have to account for like two days just for the review process to go through.
I
Well, I know the other thing he's mentioned to me in the planning process as well was that, because it took that 24-hour lead to do the iteration planning, you weren't really sure what was going to complete in the last iteration. So you didn't know if there was going to be stuff pulled forward into the future iteration. Normally, in a synchronous setup, you'd all sit down one afternoon and say: okay, we know exactly where we are now, here's the next iteration. Being that we're doing this over a 24-to-36-hour time window...
I
It's
we
think
this
is
where
we're
going
to
be
in
36
hours
from
now
and
we're
going
to
plan
as
if
that's
right,
but
if
any
hiccups
come
up
or
anything
gets
accelerated,
we
will
have
made
the
wrong
assumptions
going
in
so
we're
starting
off
not
at
ground
zero.
When
we
actually
start
that,
and
that
makes
it
a
little
bit
more
challenging.
I
think
that's
something
we
as
a
company
need
to
overcome
with
asynchronous
work
and
it's
something
that
we
should
then
represent
to
our
customers
as
well
as
a
way
of
doing
any
services.
I
So if we are looking at this from a cycle time situation: one of the things iterations are very good at is exposing areas in your process that perhaps are too lengthy or could be improved. And I think what we're seeing now is, to Donald's point, if there's a potential two-day review process involved with an iteration, what else takes up time? Then we can prioritize, you know, understanding the cycle better, and I think at that point we may say, for asynchronous work...
I
One of the conclusions might be that we need a two-week iteration instead of a one-week iteration, just because it doesn't really coalesce in one week for us, with the nature of our business. I don't know; I think we just need to sort of understand that as part of the output of this.
C
Could we consider a one-week iteration a stretch goal? And have there been any concerns that have arisen so far about a one-week time box?
I
However, in an asynchronous nature, that may be as rapid as we can go. So I think that's one of the things we just sort of need to keep an eye on: synchronously, one week is the standard, and there are reasons for that; asynchronously, we may need to look at alternatives, and I think that's perfectly healthy. It's just...
A
It's planned, and 48 hours later you're already in the next thinking phase. On the other hand, if we just have one week, we can adapt and change our process faster and learn: not just on the actual work, but we can also iterate better on the process. I'm not too sure if it's better to start with two weeks. Like we did with releases and deployments: we went from one month to two weeks, to one week, to one day. So I will totally leave this up to you; whatever sounds the easiest and most flexible.
G
Yeah, the main reason two weeks sounds good to me is the point I think Jake made earlier, and Donald too: simply because getting issues turned around through maintainer review, and review in general, in such a short amount of time essentially guarantees we're going to have a lot of things slip iteration to iteration, which might not be what we're trying to accomplish. And pair that with the, you know, 24-to-48-hour time cycle...
G
It
takes
to
answer
questions
in
a
synchronous
environment
because
the
team
is
so
distributed
that
two
weeks
might
it
reduces
the
amount
of
planning
cycles
we
need
for
in
it
like
iterations,
within
a
single
milestone
and
also
gives
us
that
window
to
be
able
to
get
things
from
in
dev
to
verified,
maybe
a
little
in
in
a
less
stringent
box.
But
that's
just
my
thought,
my
opinion,
I'm
I'm
supportive
of
whatever
we
think
is
going
to
work
for
the
team.
I
Yeah, I think, to echo Keenan, that's a very good point. I think the idea, then, is: when you have a 24-to-36-hour planning window, your margin of error for what gets carried through matters less, because you have a longer time frame to make up for that potential error at the beginning.
I
Given the realities of where we're at with maintainer review, I think we just need to accept that that's where we're at now. Not that we can't look to improve that in the future; but if that's where we're at now, we need to plan around the knowns, and one of the knowns is that review in under 24 to 48 hours is just not achievable currently. If that's something that we want to look into as an output of this, that would be something we would look to improve, to reduce our cycle time.
F
Yeah, so what we've done on the Plan frontend team, instead of changing the iteration time, is change the definition of cycle time to mean getting something into review, as opposed to getting something merged into production.
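That redefinition amounts to choosing where the clock stops (a hypothetical helper, not an actual GitLab metric):

```python
from datetime import datetime

def cycle_time(started_at, in_review_at, merged_at, end="merged"):
    """Time from start of development to either entering review or
    being merged, depending on which definition the team adopts.
    Timestamps are ISO-8601 strings."""
    start = datetime.fromisoformat(started_at)
    stop_at = in_review_at if end == "review" else merged_at
    return datetime.fromisoformat(stop_at) - start

# Dev starts Monday, review starts Thursday, merge lands next Monday:
review_ct = cycle_time("2020-09-07T09:00:00", "2020-09-10T09:00:00",
                       "2020-09-14T09:00:00", end="review")
merged_ct = cycle_time("2020-09-07T09:00:00", "2020-09-10T09:00:00",
                       "2020-09-14T09:00:00", end="merged")
# review_ct.days == 3, merged_ct.days == 7
```

The "review" endpoint measures only the part the developer controls, which is the trade-off debated below.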
F
So that's another approach we could take, just because we have a lot less control over the time it takes once it gets into review. Somewhat; we do have some control over that, but less control than we do over the time it actually is in development.
F
So that's another possibility, if we want to think about that. The only thing I'm hesitant about with two weeks is, what's the theory where, you know, you fill up the amount of time that you have, regardless of the iteration? Yeah, there you go: Parkinson's law. I just don't want us to be in a similar situation as we're in with the milestones, which it probably won't be, because it's half the time, but it may.
I
I'm
hesitant
to
de-scope
the
definition
of
iteration.
I
understand
why
we're
doing
it.
I
think
it's
a
very
practical
approach,
but
I'm
hesitant
to
de-scope
that,
because
the
idea
of
an
iteration
is
a
value,
add
to
a
customer,
and
I
think
when
we
start
de-scoping
and
saying
we're
going
to
get
it
to
review
what
that
means,
then
is
we're
not
actually
tracking
the
most
important
part
which
is
actually
delivering
that
value.
B
Thinking of a world in which we reshape what defines what goes into an iteration by things that will ship rather than things that are being started; I guess, things that are finishing rather than things that are starting. Because, realistically, if a feature slice is, you know, of significant value to the customer, it probably takes longer than one week to go from the very beginning to being on production. Even given, let's say, a very quick one-day turnaround on review for the reviewer, and then the maintainer, and then the deployment, it waits a day before it goes out to canary and goes to prod.
B
That's three days out of your week. I think I agree with what you're saying, Mark, but I've also gotten the feedback from my team that trying to start something and kind of finish it, with finish meaning deployed to production, in a week is pretty tricky. And I get that a big goal of this dogfooding is to reduce that and make that feel easier, but yeah.
B
Those two things are kind of at an impasse: having a week with three to four days of the finishing leaves only, you know, a minority of the time to actually do the work.
I
The iteration length is right where it needs to be. When I've done this in the past, one of the things that usually came out of it was that we found bottlenecks in the process, and that's the joy of the iteration: it becomes very apparent what those bottlenecks are. Then we could decide, from a business case: does it make sense to tackle those bottlenecks, or does the cost of tackling them not outweigh the benefit provided by doing so? And do we want to then say: okay, are there other areas we could optimize?
I
You do come to a point where you believe your system is as optimized as possible, and you'll revisit every few months to determine if you still believe that. But there are situations where it's not worth either the financial outlay, or the engineering work, or the effort to minimize things further; it just isn't practical. And being asynchronous in nature, I think we have a built-in lag that we don't necessarily want to overcome. That's one of the benefits of GitLab.
A
Cool, then we have two weeks, and we have some OKR definitions from Donald, too.
F
Sure, yes. I added some dev KRs onto this earlier, the first one being: 75% of our feature issues that are moved to In Dev should have an iteration assigned. The hope there is that we encourage usage of iterations among the engineers, which is why I had it as they are moved to In Dev, as opposed to at plan breakdown or in the milestone.
F
I had a little bit of a conversation with John around the 75 percent number. I put 75 knowing that we're getting a late start, and, like John said, that's probably still not possible, because what do we have, fifty percent of the quarter? Yeah.
F
Yeah
yeah,
so
I
was
thinking
of
getting
the
metric
first
and
then
adjusting
that
number,
depending
on
on
what
we're
at,
but
the
other
option
that
john
pointed
out
was
just
using
the
final
iteration
of
the
quarter,
so
I'm
not
sure
which
one
we
want
to
go
to.
I
think
either
work,
but
let's
keep
it
as
it
is
for
now.
Let
me
see
what
the
number
is
at
currently
and
then
we
can
adjust
it
by
next
week.
F
To get the iteration, to get the metric.
F
Yeah, I don't know if we can pull that into Sisense already, if it just came for free when we added iterations; let me look into that. If not, we may need to model it, or make sure that it's coming through into Sisense, which isn't too much effort to do. But this should be something that we build into the platform, so this could be one of the outputs of this working group.
F
All
right,
second
one
I
have
there
is
what
we
have.
There
is
fifty
percent
of
the
plan.
Issues
in
a
iteration
is
burned
down,
just
to
encourage
use
of
of
those
metrics.
F
We
have
them
for
milestones.
I
don't
know
from
the
engineering
side
if
we
really
use
them
all
that
often
so
I
think
just
having
this
will
encourage
the
usage
of
that
both
on
the
engineering
side
and
on
the
planning
side,
and
then
it
also
will
encourage
breaking
down
of
issues
into
a
small
iteration
size,
either
weekly
or
bi-weekly
work.
B
Sure, yeah, I was saying these... I think these are great, so thank you. I think they'll be good for kind of forcing us to use iterations. I was asking: if the goal of dogfooding is that we find and then resolve pain with using these, do we want to add an OKR that's around actually fixing stuff that we find? Or maybe that's next quarter's OKRs, once we've dogfooded.
B
I
I
would
be
more
likely
to
state
the
output
of
this
working
session
is
to
find
potential
future
improvements,
rather
than
focus
on
fixing
those
potential
future
improvements,
because
I
would
rather
focus
on
utilizing
it
and
coming
up
with
a
plan
then
trying
to
plan
and
execute.
While
in
the
midst
of
utilizing,
that's,
I
think,
given
our
time
frame,
that
that's
probably
better
for
me,
but
again,
I'm
very
open
to
the
discussion.
A
I think that can totally also depend simply on the change. I've seen this so many times when we did dogfooding in the past: sometimes just getting rid of three clicks that you need to go through, by replacing them with one click, is a huge dogfooding win that makes everyone's life so much better, and it's an easy and quick win. But it can also totally be: okay, we need a complete new dashboard with new data sources.
A
Okay, let's park this somewhere. But there might also be some really nice quick wins hidden in there, so that when we feel the pain, we just go ahead and get moving on those. I think we simply take a look and be pragmatic, but I totally agree.
A
We shouldn't be pushing big features now; the time is too short to get the process changes in at the same time. But let's keep our eyes open, and if you feel annoyed, then let's get it fixed. So, cool: then, looking at the time, my main suggestion would be this.
A
Let's get this into issues. I will also comment in the Slack channel, so that everyone is updated there, and also in the major OKRs, so that we have the OKRs from each area and can basically link them. Sounds good.