From YouTube: Post Deployment Monitoring Think Big
Description
High-level discussion of what we have planned and what we learned about post-deployment monitoring
A: Okay, so today we wanted to discuss a little bit about post-deployment monitoring. The purpose is to give a high-level idea of what we want to do and what we want to accomplish, but also to hear from you about additional thoughts, concerns, or opportunities that I may not have noticed. Then we'll definitely hear from Dmitry, who is going to run a kind of whiteboard exercise.
A: I will also share my screen so that you guys (sorry, you all) can see what we're talking about. So, post-deployment monitoring. The basic idea is that once you have automatic CI/CD and everything is deploying properly, you also have a post-deployment step where you want to make sure that everything is still working properly, and if it's not, you want to do something about the fact that it's not working properly. So, for the MVC:
A: What we wanted to do here was use what we already have built into GitLab. This was a collaboration with the Monitor team, actually two groups in the Monitor team: one group was Incident Response and the other one is the APM team. The Monitor team has an incident response feature, which means that if a threshold is exceeded, it automatically opens an issue for you, so someone knows that they need to deal with it.
A: The first thing that we did was research this mechanism, and we figured out that we could leverage the API behind it in order to trigger something in the pipeline. So if a threshold is crossed, we will eventually either stop the rollout, if we're talking about incremental rollouts, or even roll back, whether in incremental rollouts or in a regular deployment. The idea is that there's some really big problem, and the system will automatically identify it, roll it back, and take care of it for you.
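(A rough illustration of the kind of integration described above: a pipeline job that checks a Prometheus threshold and, if it is crossed, stops its own pipeline through GitLab's existing pipelines API. The PromQL query, the threshold, and the PROMETHEUS_URL and API_TOKEN variables are assumptions for illustration only, not the actual implementation.)

```yaml
# Illustrative sketch only: a post-deployment job that stops the pipeline when
# a Prometheus threshold is crossed. The query, variables, and threshold are
# assumptions; the Prometheus query API and the GitLab pipeline cancel API
# used below do exist. Assumes a `verify` stage after the deploy stage.
verify deployment:
  stage: verify
  image: alpine:3.18
  before_script:
    - apk add --no-cache curl
  script:
    # Ask Prometheus for the current 5xx error ratio (illustrative query).
    - |
      RATIO=$(curl -s "${PROMETHEUS_URL}/api/v1/query" \
        --data-urlencode 'query=sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))' \
        | sed 's/.*"value":\[[^,]*,"\([^"]*\)".*/\1/')
    # If the threshold is exceeded, cancel the rest of this pipeline
    # (stop the rollout) via the existing pipelines API, then fail the job.
    - |
      if awk "BEGIN {exit !($RATIO > 0.001)}"; then
        curl -s --request POST \
          --header "PRIVATE-TOKEN: ${API_TOKEN}" \
          "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines/${CI_PIPELINE_ID}/cancel"
        exit 1
      fi
```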
A: That's the idea of post-deployment. With the APM team, we decided to start with what we already had out of the box in GitLab today, and what we have is Prometheus alerts: notifications that are triggered on one of these three metrics, throughput, latency, or HTTP error rate. Those are the ones that we started with. If one of these thresholds is exceeded, then we will automatically stop or roll back the deployment. That's the big idea.
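(For concreteness, a Prometheus alerting rule of the kind described, such as an HTTP error-rate threshold, might look roughly like the sketch below; the metric names, labels, and threshold are illustrative assumptions rather than the exact rules shipped with GitLab.)

```yaml
# Illustrative only: a Prometheus alerting rule of the kind described above.
# Metric names, labels, and the threshold are assumptions, not shipped defaults.
groups:
  - name: post-deployment-monitoring
    rules:
      - alert: HighHTTPErrorRate
        # ratio of 5xx responses to all responses over the last 5 minutes
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            /
          sum(rate(http_requests_total[5m])) > 0.001
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "HTTP error rate exceeded 0.1% after deployment"
```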
A: The nice thing about metrics in GitLab is that, basically, you can support any metric without any additional work, because we're leveraging the already existing mechanisms. So even if a user is creating a custom metric, as long as it works with the Monitor team's tooling, we should automatically be able to support it, because we're all using APIs that already exist in the system.
A: Okay, so, as I mentioned, we're using the existing Prometheus API, and if thresholds are exceeded, we will stop the deployment or roll back. It depends on what the user defines, because not every user wants an automatic rollback; some just want to stop and still want to roll back manually. We'll go into that in a minute when I go through some user insights; we've done user research around this area. So, in general, this is the high level.
A: The UX doesn't need to actually look like this at the end of the day, but the general idea is: this is our environments page, as you can see. This is a deploy board, by the way, and this is not only supported for Kubernetes; it is supported for any deployment, which is very important to note. So here you see the environment page, and you can see we have the rollout deploy board. This already exists today.
A: I think you can see here there's an indication that the error rate exceeded 0.1; that's going to show up in the environment page, and then, based on this data, the user can decide whether they want to abort or roll back, or this can be done automatically, depending on what they chose. And it says "rolling back to ... the previous deployment," right? We'll go into that in detail in a minute.
A: Something really interesting (I may be jumping ahead, but something really interesting again with Monitor): today we have metrics that you can see in any deployment. I don't have an active project to show this, but basically just imagine you see a graph over time, and you can see, I don't know, CPU, for example. What the monitoring team is doing now is adding annotations inside that graph, so you can tell that something happened in that specific time interval.
A: Another idea that we had at the beginning of this epic was also to add this data to the merge request. For the MVC we decided to skip this, but just so you get the high-level picture: in the merge request itself you have the data of the deployment, so this is exactly the MR that was deployed, and then you can get the notifications there as well.
A: So hopefully I linked them all to the right place, and basically you can see the different epics and issues that are related here, that are part of this epic. Let's go from the bottom to the top. The first thing we did was to research whether we could actually, you know, leverage the incident response mechanism, and we concluded that it was successful, so we were able to do that. That was good news, and that's when we started to think, okay, we're going to go for this issue.
A: Okay, so let's start with these two: we had "show alerts on the environment index page," and I will show you that there's already a mock-up for that as well. The idea is this: this is the way alerts look today in incident response. You can see this already exists; it's under the Alerts tab, and this is what it looks like. What we wanted to do was take the environment page and propagate the most critical alert that currently exists in the system and show it to you in the environment page.
A: There are two places, three places actually, where we wanted to kind of connect everything together. The first is the environment page, because it shows you exactly what's going on with your environment, and if you're looking at a production environment you're most likely also looking at, you know, this page, especially if you're talking about the deploy boards; that's usually where you want to check your health and see that everything is fine and green and great. Actually, four places, now that I think of it. So that's the first place.
A
The
second
place,
which
is
still
related
to
environments,
is
something
that
I
talked
to
Jackie
about
is
adding
this
to
the
there's,
an
environments
dashboard
so
also
having
this
indicator:
color
the
environment
in
the
environment
dashboard,
so
that
you
know
there's
a
problem.
So
that's
the
second
place.
The
third
place
is
in
the
pipeline
itself.
A
Okay
and
and
I'll
show
that
in
another
issue,
and
the
idea
is
when
someone
is
monitoring
the
deployment
they're
either
looking
the
environment,
space
or
they're
looking
at
the
pipeline
page
and
throughout
the
user
interviews,
both
of
them
came
up
so
I
think
I
think
we
both
we
need
to
support
both
then
the
fourth
place
is
the
alerts.
So
the
alerts
page
here.
A: So this was the first issue. The second one, which is the most important issue of the entire post-deployment epic, is actually doing the cancel, right? So we want to add a cancel button to the same environment page. Excuse my ugly mock-up, this is something that I did and I'm sure Dmitry will fix it, but the idea is, since we already see the alert here, we can do the stop here. And again, we need to support two different mechanisms: one is a manual stop and one is an automatic stop.
A: Great question. Stop is stop and rollback is rollback; they're not the same. Stop, in the case of an incremental rollout, will stop rolling out the pods. So it'll, you know, leave whatever amount of pods you decided to roll out with in the faulty state until you roll them back, and the rest of the pods will just remain on the old version.
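(To make the stop-versus-rollback distinction concrete, here is a minimal canary-style sketch in .gitlab-ci.yml with manual gates and a manual rollback job; the deployment names, image path, and percentages are assumptions, not the pipeline actually being discussed.)

```yaml
# Illustrative sketch only: an incremental (canary-style) rollout with manual
# gates. Deployment names, registry path, and percentages are assumptions.
# "Stop" means simply not running the next manual job, which leaves the canary
# pods on the new (possibly faulty) version; "rollback" reverts them.
stages: [deploy]

.kube:
  stage: deploy
  image: bitnami/kubectl:latest
  environment:
    name: production

rollout 10%:
  extends: .kube
  script:
    # update only the small canary deployment with the new image
    - kubectl set image deployment/web-canary web=registry.example.com/web:$CI_COMMIT_SHA

rollout 100%:
  extends: .kube
  when: manual          # not pressing this button is the "stop"
  script:
    - kubectl set image deployment/web web=registry.example.com/web:$CI_COMMIT_SHA

rollback:
  extends: .kube
  when: manual          # explicit rollback to the previous version
  script:
    - kubectl rollout undo deployment/web-canary
    - kubectl rollout undo deployment/web || true
```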
A: That makes sense. So what this is doing, again, is adding the cancel button. It will auto-cancel deployment jobs when Alertmanager detects a critical alert on the environment. This is in case the user selects this; we're not going to do it in case the user doesn't select it. And it's going to work similarly to "skip outdated deployment jobs." Taking you back: "skip outdated deployment jobs," which was formerly known as forward deployment ("allow only forward deployments"). What it does is...
E: What I think would make sense, a lot like with the icon there and also the way you explained it, is to call it just a stopped deployment, because "cancel" makes it feel like you're rolling back to the beginning, like you're canceling everything. But if you just say "stop," or even "pause," I think it might be a little bit clearer.
A: This just makes the logic easier. Since this is a manual selection, I think a user should be able to decide whether they want to (well, I hope I'm saying this correctly, so correct me) stop the deployment at any time, even if there's no alert; but definitely if there is an alert, they have an incentive to do that. So it should be visible at any time, but only users that are allowed to deploy to the environment can press the button. So, up to you, Dmitry. I wrote this.
A: So there is a follow-up issue to this (this is like the rabbit hole of GitLab), which says: let's state the canceled-pipeline reason on the job page in case the rollout was stopped. So what happens now is we're going to see this "canceled" icon, which exists today, but we don't know why. A user that goes into this pipeline will see that it's canceled, but we don't know why it was canceled.
D: Just, you know, a really bad suggestion, but hey, let me say it: it would be nice to ask for a comment. It's stupid, of course, and I don't propose this as a real solution, but we could ask for input if the user is just canceling. Of course you don't want to have a pop-up or something in your face, but again, we're brainstorming; I'm just pointing that out.
A: I'm going to skip that, because I don't think it's important to understand Spinnaker at the moment. So the first thing we need to do is add a setting for users to configure whether they want to support auto rollback, yes or no. This is really important because, again, this is a harmful event, and the people that I was talking to didn't necessarily trust the system enough to say that they want auto rollback, or they support auto rollback only for non-production environments.
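(A rough sketch of how such an opt-in might look if it were expressed per environment in .gitlab-ci.yml. The auto_rollback keyword and its fields do not exist in GitLab; they are purely hypothetical and only illustrate the user-facing choice being discussed.)

```yaml
# Purely hypothetical: a per-environment opt-in for automatic rollback.
# The `auto_rollback` keyword and its fields do not exist; this only
# illustrates the setting described above.
deploy production:
  stage: deploy
  script:
    - ./deploy.sh production
  environment:
    name: production
    auto_rollback:
      enabled: true                         # user explicitly opts in
      on_alert_severity: critical
      target: latest_successful_deployment

deploy staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    auto_rollback:
      enabled: false                        # notify only, no automatic rollback
```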
A: Here we're not going to support that, at least not in terms of the MVC; we're just going to take the latest successful one. For 99% of the people that I interviewed this was the case: they just take the latest deployment. So I'm not even sure that down the line we'll do that; we'll see if there's, you know, demand for it, and then we can add it.
A
So
I
think
that's
a
separate
issues
of
this
one
in
terms
of
iteration.
But
yes,
log
is
super
important
and
then
for
MVC
metrics
will
be
defined
on
the
dedicated
demo
file.
So
they
did
here
forget
what
I
wrote
here,
but
I
just
want
to
say
like
what
the
problem
is.
This
is
a
possible
solution,
not
necessarily
the
one
that
we
will
do,
but
the
idea
is
that
we
have
a
lot
of
alerts
coming
into
from
the
system.
So
if
you
remember,
when
we
talked
about
the
alerts
page.
A: Yeah, there's a bunch of alerts, and not every time there's an alert do you necessarily want to do a rollback, right? There are very specific alerts that you want to roll back on, and the rest of them you want to know about, but you don't necessarily want an automatic rollback. So it's important that the user defines which alerts they want to automatically roll back on if those are triggered. In any case, that's the problem. Okay, so the problem is: how do you define the alerts that you want to roll back on?
A: Anyone else might have this problem too. Okay, so metrics will be defined in a dedicated YAML file. The way that metrics work today in GitLab, in terms of the dashboards that you see, is that you actually define the dashboard that you want to see in a YAML file; GitLab reads those YAML files and that's what it displays. So I thought that would be a nice way to overcome this: the user defines in a YAML file, say, "error rate exceeded 90 percent," and in case there's a match, then there's an automatic rollback.
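(Modeled loosely on the dashboard YAML mentioned above, a hypothetical "rollback policy" file might look like this; none of these keys exist today, they only illustrate declaring which alerts should trigger an automatic rollback and which should only notify.)

```yaml
# Hypothetical .gitlab/rollback-policy.yml; none of these keys exist today.
# It only illustrates declaring, per environment, which alert conditions
# trigger an automatic rollback versus a notification only.
environments:
  production:
    rollback_on:
      - metric: http_error_rate
        threshold: "> 0.9"          # "error rate exceeded 90 percent"
        action: auto_rollback
      - metric: container_cpu_usage
        threshold: "> 0.9"
        action: auto_rollback
    notify_only:
      - metric: latency_p95_seconds
        threshold: "> 2"
```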
A: Yes, so let's take it question by question. First of all, the people who are setting these rollbacks are the people who have the rights to deploy to that environment. The environments that concern us the most are the protected environments, and if someone is a maintainer in that environment and they can write the YAML, then I think we're okay.
A
Another
interesting
idea,
which
leverages
another
stage,
is
using
parent-child
pipeline.
So
maybe
we
could
do
like
some
different
permission
for
have
that
as
a
child
pipeline,
that's
called
the
mo
file
which
may
you
know,
have
even
less
audience
than
the
regular
mo
file.
I,
don't
know
those
are
some
ideas
and
in
terms
of
custom
metrics,
so.
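(The parent-child idea could be sketched roughly as follows; the file path and job name are assumptions, while the trigger, include, and strategy keywords are existing GitLab CI syntax.)

```yaml
# Sketch of the parent-child idea: the post-deployment/rollback logic lives in
# its own file, which could be maintained under stricter review rules, and
# runs as a child pipeline. File path and job name are assumptions.
post-deployment checks:
  trigger:
    include: .gitlab/post-deployment-pipeline.yml   # child pipeline definition
    strategy: depend                                 # parent waits for the child's result
```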
A: Okay, so we have a bunch of metrics that are supported and can be found in the documentation. Basically, we support anything that goes through Prometheus. We have things that we built out of the box, but there are so many more alerts that users can define, and anything that's supported by Prometheus is supported by GitLab; it just doesn't come out of the box for free like these do. So a user would basically need to create the syntax for something else that they were measuring.
A: Too many tabs already. Okay, so what I wrote here: this is similar to the common metrics YAML file that can be found in the GitLab dashboards. I don't have a project that does this, but if you do, you can look into it; it's basically built in a similar way, which is the way that the dashboards are shown. I might have a project that does, but I think that's going into the details a bit too much, and we can do that offline too.
A: It's just a thought; I'm not sure this is the approach we're going to have at the end of the day, but the reason I like this is because it follows other processes that we have, I think, at GitLab, and this post-deployment monitoring is very closely tied to Monitor itself. So, theoretically, it's the same persona.
A: So this is like the title of the chart that you would see. I don't know if we have any dashboards here; yeah, we don't have anything enabled, but in order to enable it, you basically have a YAML file that this is calling, and that's what's displayed in the dashboard. So in this scenario you would see an anomaly chart, and it's the container CPU usage per environment. Those would be the two things that you would see in the dashboard.
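(As a sketch, a custom dashboard definition along the lines described, an anomaly chart of container CPU usage per environment, might look roughly like this; the file path, panel type, and PromQL query are assumptions rather than a copy of GitLab's shipped definitions.)

```yaml
# Sketch of a custom dashboard file, e.g. .gitlab/dashboards/post-deployment.yml.
# The panel type, metric id, and query below are illustrative assumptions.
dashboard: Post-deployment monitoring
panel_groups:
  - group: Environment health
    panels:
      - title: Anomaly chart
        type: anomaly-chart
        y_label: CPU usage
        metrics:
          - id: container_cpu_usage_per_environment
            label: Container CPU usage per environment
            unit: cores
            query_range: >
              sum(rate(container_cpu_usage_seconds_total[5m])) by (environment)
```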
A: So those are the high-level concepts of what we want to accomplish. In terms of user interviews, you're more than welcome to look into these interviews, but I'm going to keep it really, really short in terms of what we learned from there. So: most users want to roll back to the latest stable version; we talked about that. This one was really interesting: users prefer defining metrics through the UI. This was across the board; everyone just wanted to select what metrics they want to see.
A
Health
check
is
very
important
for
kubernetes
metrics.
This
is
something
that
wasn't
in
the
out
of
the
box,
metrics
that
we
provided
and
it's
something
that
I'm
working
with
the
monitor
team
to
expand
on
users,
prefer
water
roll
back
to
manual
for
specific
thresholds.
So
that's
what
we
were
talking
about
in
this
specific
issue,
so
the
Robeck
is
very
specific.
So,
for
example,
something
that
everyone
agreed
upon
was
that
if
the
CPU
usage
was
over
90%,
they
know
there's
a
big
problem
and
they
wanted
to
roll
back.
A
A: Users require both auto and manual rollback methods. This one was very simple: some people just don't trust auto rollback, and some don't even have an automatic CI/CD process yet, so there's nothing to automate. Users want to get notifications throughout the whole process; they want to know it started, they want to know what state it is in at every given moment, they want to know that it's successful, they want to get notifications all the time about what's happening. Notification and rollback preferences differ for different environments.
A: So one of the main problems that happen when you deploy to AWS is that you tend to spend a lot of budget that you didn't mean to. So if we could connect post-deployment with overspending on cost in AWS, that would be a wow moment for GitLab. I'm working with the Monitor team now to support AWS cost metrics that we could leverage in terms of alerts that we can connect here, so that we could stop or roll back the environments in case the budget was crossed.
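(If AWS billing data were exported into Prometheus, for example via a CloudWatch exporter, the budget alert being described could be sketched roughly as below; the metric name and the threshold are assumptions about the exporter's output, not a finished integration.)

```yaml
# Sketch only: an alert on AWS estimated charges, assuming a CloudWatch
# exporter publishes a metric like the one below into Prometheus.
# The metric name and the 10000 USD threshold are illustrative assumptions.
groups:
  - name: aws-cost-guardrails
    rules:
      - alert: DeploymentBudgetExceeded
        expr: aws_billing_estimated_charges_average{currency="USD"} > 10000
        for: 15m
        labels:
          severity: critical
        annotations:
          summary: "Estimated AWS charges crossed the budget after a deployment"
```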
A: So that's also something that came up here: someone said that they went 220 thousand dollars over budget just from one deployment, because they didn't notice they were doing all these things, because not all developers know what they're doing in terms of AWS. Anyway, it's also crossing into another high-level topic, which is the AWS point, but the connection together tells a really nice story and definitely shows value to our customers. You can't get more value than saving money, right? So, very excited about that.
A: I just talked to them about this an hour ago. They haven't really started on it, but we opened up an issue. I sent them an open-source tool that someone had already built that connects Prometheus to the AWS billing API, the API call for the budget, so it looks pretty straightforward, you know, in terms of connecting it to GitLab, which is really, really cool. The downside is that every API call that you make for AWS cost data costs you 0.1 dollars.
A
So
I
can
I
found
a
really
really
nice
tool.
That's
ready
like
it's
on
github
and
I,
think
we
can
pull
it
in
and
use
it.
So
I
mean
that's
not
what
school
for
progressive
delivery,
but
it's
something
that
we
can
connect
so
nicely
together
until
a
really
nice
story
to
our
customer
is
that
that
goes
across
the
stages.
C: I'd want to spend less time on explaining what we're going to do, because we've only got a little bit of time left, and more on, like, extracting the ideas that are perhaps potentially circling around in your heads now. Then I, or Orit and Nadia, will potentially document them in the Mural board, and any further steps can then be done in a separate session, because there's not enough time for that. So I think the most important point, over to you, Orit.
C: I think that the current ideas out there are well done, indeed. How I do see them, though, is that they're more lined up as, like, "hey, if we do this, then we can do this next, and this next, and this next." What this is about is considering alternatives: just consider where we are missing something, what the current gaps are.
C
We
have
alternative
ideas
that
just
or
it
didn't
think
of,
potentially
because
we
are
here
to
help
her
think
of
those
ideas
right
not
to
like
not
to
discount
any
idea
she
has
had
because
I
think
their
value
very
valuable,
just
to
add
to
them
and
then
be
able
to
say,
hey
might
make
sense
to
to
combine
these
ideas.
It
might
make
sense
to
see
if
these
ideas
are
worthwhile
to
big
car.
Her
ties
first,
the
others
or
vice
versa.
B: I looked at that a while back, and I know that there is an entire framework about communicating with authorities in general, and so: how do we build up based on all this, like the API, all this framework that we have, right? I think we are going to make many more API calls, new types of API calls, and so we need to see, based on the framework we have, how we can logically build on that framework, right?
B: We need either someone who is much more knowledgeable than I am, or we need someone to do more research to ramp up on this existing framework. Once we have this person ramped up, you know, correctly, then we'll have a better idea of what's technically feasible, or how we can build on top of that framework. Yeah, that's basically it.
E: A question, just to try to kind of spawn the idea engine: I was wondering whether there is any idea as to what would be the MVC that would satisfy the largest number of users, based on those... sorry, I don't know if this goes into the ideas section, but it's more about what...
C: All right, all right, I'll give a very short overview of the whiteboard, because you're looking from a bird's-eye perspective while I'm trying to focus this down to the idea generation. I'll quickly share what has been going on here. So this is the board, and it has three layers.
C
Actually
it
has
four,
but
let's
focus
on
three:
it
has
the
overall
intended
desired
outcome,
which
is
give
voice
post
deployment
monitoring
to
our
users,
make
that
available
right
and
then
we
have
our
research
insights
and
then
we
have
our
ideas
and
our
ideas,
section
is
the
one
that
is
the
one
where
we,
where
we
need
additional
people
to
come
in.
Give
us
ideas
give
us.
C
You
know
there
a
little
bit
of
insight
onto
you,
know
what
is
possible
in
their
minds
and
then
jot
that
down,
and
then
you
mentioned,
how
can
we
like
fix
the
desired
outcome,
the
best
for
the
most
amount
of
people?
The
idea
is
that
each
of
these
ideas
are
going
to
be
linked
towards
these
opportunities,
which
are
researched
and
the
more
insights
or
opportunities
that
those
ideas
link
towards
the
more
people
will
be.
C
You
know
made
happy
because
of
that.
Basically,
so
there
is
a
inherent
structure
to
doing
this,
which
kind
of
rolls
into
the
more
of
a
process
idea.
You
mentioned
the
it
is
bit
difficult
to
do
these
white
boarding
exercises
correct
I
would
have
loved
for
more
people
to
be
inside
of
this
meeting.
A
lot
of
more
engineers
would
have
been
welcome
to
you
know
to
to
feel
this.
C
This
idea
engine
s
you
so
rightfully
called
it
and
I
think
for
now,
we
are
out
of
time,
so
just
might
be
a
valuable
opportunity
for
next
time
for
improvement,
but
yeah.
If
it
is
about
hey,
what
can
we
do
else
for
post-deployment
monitoring
and
how
to
feel
that
and
guide
that
into
the
right
direction
is
a
very
hard
thing
to
do.
The
point
is
ideas
being
generated.
They
are
only
worthwhile
if
we
can
link
them
to
existing
opportunities
that
have
been
researched.
C: That is an unknown, which we need to do a little experiment on, in other words, do research, and based on that it can become an opportunity, because at some point we will have an answer from that research or that experiment, which kind of defines it as an opportunity: "hey, we can give automatic rollback on these and these thresholds, because we know that users want that."
D: Maybe having more people, explaining the board again, and maybe guiding it: we could be taking one insight, one opportunity, one by one, and brainstorming all together. This could also be done asynchronously, but I'm not sure how many people would participate in that. But we could be working through this together, one opportunity after another, and discussing what the technical possibilities are, what other ideas we could add on top, and what Orit proposed.
D: So I think today was a great beginning for opening up that story, and we can continue ideating in another session, maybe involving more people. I think we're still doing great, but we should have started with the explanation of the board, and maybe sharing the screen so it's visible in front of everybody's face, so we are all in the same moment. Yeah.
C: Agreed. I think it would have been helpful if we had started from the board and then picked a little bit of the issues: "hey, we're now going to discuss this idea, this idea, this opportunity, this opportunity," and kind of created that mental model inside of our minds, so that, "hey, I have this idea... oh wait a second, that links to this opportunity, super great, let's link those together," and bam, bam, we create those direct connections.