Description

A
So what I wanted to do was get together and start to detail out our release stage direction, and put together some UX mocks that could maybe help paint the picture for both our users and our sales team of where we're hoping the user experience will go across the release stage. It would also help us put down some details and some thoughts, as a brainstorm between all of us and the release groups, so that we can surface things that maybe I'm not thinking about on the release management side, or things that you haven't started on.
A
But I want to leave this meeting with issues created that Dimitri and Hayana can then collaborate on together, creating some designs. And then, Ori, you and I can work together on breaking down those pieces from a product standpoint, associating what we currently have in our backlog to feed into these experiences, and potentially creating new issues for our backlog.
B
The other really nice thing about having this stuff ready on the direction pages is that once or twice a year we usually do a presentation to the board on, you know, what we are doing, and it really helps to have visualizations of that stuff. The sales team can also potentially use these directly, although we should make sure that we have some kind of disclaimer there that says these are prospective images, and you shouldn't print these out and say, you know...
C
Like we do on the direction page already, but in particular for...
A
That's really a great point, Jason! Thank you. Okay, so Ori, is there anything else that you think you'd want to accomplish out of this call, or while we have each other?

D
I think we have a good place to start. Let's just go ahead.
A
Yeah, sounds good. So I added a section at the bottom of the page called "flow", and what I wanted to sort of contextualize here is that there's an experience we can create for a user that involves GitLab embedding into their day-to-day life and their personal workflow. So, for example, I receive a notification from GitLab in my email about a deployment that I need to approve, and then I click that from my email.
A
It brings me into GitLab and navigates me to some sort of dashboard, which could be our environments dashboard redesign, and it's a pending approval for a release tag. But maybe it actually has information about the feature flags that are related to that release tag, or perhaps there's other information around alerts from a post-deployment monitoring issue, or maybe there are violations from release evidence. There are a lot of directions we can go, but the point being that we take GitLab and we create an experience...

A
...that's inside that user's workflow, and I think we have a lot of things in the ops section and the direction page, from a release standpoint, that we could do this with. For example, Ori, you have on the first item feature flags with review apps. I think there's an opportunity to put more experience around that, and I would love to hear your thoughts on your flow.
D
Yeah, so I did add something specifically for this one flow. I don't know if I totally understood what you were trying to accomplish, but what I wanted to say is that something we can really leverage in GitLab is review apps with feature flags. That means that, as a developer, I'm developing something and I want to know what it looks like for all my users, so being able to spin up a review app, choose the user persona that I am, and see...

D
...all the different variants that I can have is really, really powerful, especially if you connect that also to visual reviews and additional features along the way. I think this could be one of the better features that we can really compete with, that others don't have.
B
What we could also think about, maybe, is that three years is quite a long time, and you can really paint the picture of the future of release management, rather than "here's the next step, and here are some mocks of what the next step is." We're not gonna do it for a while, but you know, this is what the next step is. But...
B
Create a crazy idea, and make a screenshot of that crazy idea, and then maybe that's a cooler way to do it, one that's more evocative. And then you can work towards it, you know, iterate towards that crazy idea.
A
Like
here's,
what
I
think
I
agree
with
you
jason,
I
was
considering
like
the
review.
App
actually
is,
you
know
we're
getting
a
lot
of
requests
from
a
deployment
approval
perspective
and
if
we
could
use
the
review
app
as
a
published
state
of
of
what
can
be
approved
either
you
know
at
the
director
level
or
beyond.
Maybe
there's
like
a
way
to
aggregate
review
apps
across
multiple
projects.
B
That's a cool idea. Another one that comes to mind is release management of progressive delivery. So, you know, progressive delivery (Ori is going to build a bunch of features there) is about how you control the rollout of a feature. So what does release management look like for that? You could make some kind of mock of a dashboard that shows, you know, the incremental rollout state of all the features in this release, or something like that.
D
I'll put something below. So if we're talking specifically about feature flags: today, feature flags are on a project level, and sometimes you want to know what's going on with your environment, which is like a group of projects, right? I don't think it's visible now, but if I were Marin, I would want to know everything that's on in production at the moment and just figure that out, and right now you need to kind of pick and choose from different projects.
A
We're kind of chipping away at that too, which is really exciting. It's high on our list in our redesign of the environments dashboard: being able to visualize environments at scale, because today it's impossible to do that. So I think there are ways to iteratively deliver that promise of release management across progressive delivery. I think that's exciting. Okay.
D
Well, performance is one metric, but think about it: it's so much easier to talk about revenue. Okay, so if you have a website, that's your application, and your idea is: how do I get more people to add stuff to their carts, or make a sale? At the end of the day, you can actually convert the decision into money, and you know, "when I make more money, that's the variant that I'm going to choose and roll out for everyone", right? So having that visual is awesome.
C
...settings. So, for example, say that there's a performance indicator; then they will have some metrics to connect that to SMAU or ACV, and that is the eventual metric that they actually want to monitor. Ideally, that conversion should be automatic, so there's less bias involved, or less manual error involved. So one key metric: that's the basic TL;DR. Yeah, dollar signs.
B
...is difficult, because you have to pull a lot of financial data into the system, and that's maybe difficult to keep. But there are alternatives, you know, if you measure conversion, for example, instead: a conversion on a page to whatever the activity is that they want to happen on the page.
C
Yeah, I mean, for that conversion you will need an additional variable, and that is indeed derived from external data. But you can narrow that down and make it, you know, a static variable to work with. So say, for example, so much conversion, so many views of this page; we've calculated before that that converts to X final dollars. Just saying something, putting something out there, and then with that you can work and get to an actual result, even though it's an...
A
...assumed one. Not sure if it chopped out at the end for me; I missed about the last bit.
C
Sorry, yeah. What I said is that, you know, with the data in this table, together with the product managers, they will find out that a certain amount of clicks, for example, relates to a certain amount of dollars, and that variable will then be used, or can then be used.
A
So in my previous role I did social media ROI, and one of the challenges that we had was these things called custom calculations. Every organization has a way that they want to calculate the return on investment from a conversion metric. So this is really getting into the nitty-gritty details, but instead of providing a canned report out of GitLab for conversion, it may be better to just provide custom calculations.
A
I'm not sure if this is something that the analytics team is doing, but it could be a way to set a target data set and pull that into a formula: a formula builder inside of GitLab for metrics. That would save some time, but it would also be more flexible than what could end up being really, really challenging for us: supporting a bunch of canned metrics.
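The "formula builder" idea above can be sketched in a few lines. This is a hypothetical illustration, not an existing GitLab or analytics-team feature: a user-defined custom calculation is parsed as plain arithmetic and evaluated against a target data set of metric values, the way a product manager might map clicks to dollars.

```python
# Minimal sketch of a "formula builder" for custom calculations.
# The metric names and formula syntax here are hypothetical, not GitLab APIs.
import ast
import operator

# Only plain arithmetic is allowed, so user formulas can't run arbitrary code.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def evaluate_formula(formula: str, metrics: dict) -> float:
    """Evaluate a custom calculation like 'clicks * dollars_per_click'."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.Name):
            return metrics[node.id]  # look up a metric value by name
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression in formula")
    return _eval(ast.parse(formula, mode="eval"))

# Example: 1000 clicks, each historically worth $0.25, minus a $50 campaign cost.
roi = evaluate_formula(
    "clicks * dollars_per_click - campaign_cost",
    {"clicks": 1000, "dollars_per_click": 0.25, "campaign_cost": 50},
)
print(roi)  # 200.0
```

Restricting the formula to an `ast` walk over arithmetic nodes is the point of the sketch: each organization plugs in its own conversion rate as just another named metric, instead of GitLab shipping one canned ROI report per customer.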
A
Okay, so A/B testing seems like an interesting perspective, connecting back to DORA 4 and greater reporting, in the context of how we measure release inside of GitLab. Regarding how we intend to manage the impact of the early stage inside of GitLab: in our release direction we have this "enable common deployments" target. Jason provided a rewrite that I think is pretty compelling, and I'll have to reconcile that with our current page, because we did make updates.
A
I snapshotted this a couple of months ago, and then we made updates to the early stage, so we may need to bring that back in. But what I wanted to talk about here was how to connect runbooks, so release runbooks, back to progressive delivery. It sort of ties back into what Jason was saying before about release management across incremental rollouts. Are there steps that we would want to support, or a certain use case or workflow? How should we expect that experience to play with runbooks for users?
D
It also defines who is and who's not allowed to deploy something or run it. So it's really important for enterprises.
A
I agree. I think what I wanted to drill into was the flow: how we would expect a user to take advantage of common deployment targets, and how we would want them to use runbooks in that context. Does that make sense? Like, maybe they use your Amazon ECS template, and in combination with that they can use a supported runbook template inside of GitLab, so they have...
B
Another thing you can maybe show here is a runbook that's associated with an automated release that has things that are just, you know, impossible to automate. So just show kind of a runbook pop-up or something like that, something that's part of an automated release but says something like "coordinate with partner on press release." Oh yeah.
D
And you can even add that to the audit log and see that, you know, whoever needed to do that manual step did it properly.
A
I love this experience. I feel like this could be super powerful, because we could then do a bunch of different linking into other stages as well. Like, a step inside of the runbook could be to confirm that there are no vulnerabilities or, you know, any SAST issues, linking back to the immense investment that we're going to expect out of 2020.
A
What would that mean for the flow? So here, a user is using your Amazon ECS template. They would have an automated runbook for the release of that AWS template, with external steps, and it pops up the runbook step to go confirm that this stuff has been done. Yeah.
D
It could be that, but it doesn't have to be. Like, don't think only about the features that we have now, which are those containers or templates. We also plan on having this template creator for cloud, so you tell it "here's the instance size that I want, here's the database type I want, here's the name", and it just does everything for you. So if you can connect that into the runbook...
A
So even better: the first step, then, is that I create my own template, and then I leverage an out-of-the-box automated release for a series of steps that GitLab provides for you. Okay, well, we'll want to unpack this one if this will be the flow that we want to put on our three-year page, right? That one might need a little bit more detail to make it connected.
B
For secrets, we shouldn't put anything that we're just about to release, since you'll then just kind of get stuck updating it right away. So it would have to be something that we can only do after a couple more years of development, rather than a feature that's just come out or will just come out.
B
Avoid putting the next thing you're going to release in any of this, because you'll have to keep updating it if you do. And if you don't update it, it will look like you're not going to deliver the thing you already delivered for another two or three years.
A
Yeah,
that's
kind
of
why
I
paused
a
little
bit
on
secrets,
because
I
think
that
this
year
we're
going
to
get
it
to
a
viable
place
where
people
can
integrate
with
hashicorp
and
then
beyond
that,
like
you
can
use
everything
we're
building
for
any
other
target
key
store.
So
it's
pretty
expensive,
like
the
increment
of
work
that
we'll
be
shipping,
will
support
a
bunch
of
use
cases.
So
I
wonder
if
this
is.
A
Yeah, you're absolutely right. I would say that our secrets management vision is coupled with a whole other provider's product too. So that makes it, I wouldn't say limited, but its scope is definitely a little bit more contained than, I think, release orchestration or progressive delivery on the whole.

A
It could be that we completely separate out from that and decide to create a different experience, but that's definitely a good callout, Jason.
C
By the way, a quick question regarding this point we're discussing now. I know we still have some points to discuss, but I think it would be helpful, at least for me and Ayanna, to get a quick summary of the flow we have in mind. Like, how does the user walk through these things, instead of just "oh, we have this feature, and we got this new thing"? How does the user go from one to the other, and what does that experience look like?
A
Yeah,
we
don't
know
for
this
particular
topic.
The
enable
common
deployments
like
what
I'm
doing
right
now
is
a
process
of
elimination,
we're
going
to
go
through
all
these
line
items
and
then
pick
the
one
that
has
the
most
meat
that
naturally
connects
progressive
delivery
and
release
and
then
decide
what
user
experience
makes
sense.
We're
kind
of
just
like
throwing
spaghetti
at
the
wall
to
see
what
sticks
right
now
and
then
we'll
look
at
all
the
spaghetti.
That's
stuck
and
work
through.
A
So we don't really know yet, but I think by the end of this meeting we should hopefully have an idea of which flow we want to detail. Then it's probably going to be us working asynchronously to hammer out the actual details, and we'll need to storyboard it, because this will be...
A
So I think we talked about common deployment targets. Dory, do you want to go over the bullet points that you had below the cloud deployment templates, and support for main cloud providers, and that stuff?
D
So "cloud deployment templates" is what I mentioned before: you can, you know, basically enter very, very minimal information, like the size and stuff like that, and we'll just do it for you.
D
And then, you know, "support for all main providers" is also mentioned above, so it's just a repetition. But of course, runbooks again tie in really nicely, because the steps are pretty much the same regardless of where you're deploying to. In fact, lots of companies are moving to multi-cloud, so it also makes sense to have one procedure for deployment regardless of the target itself. And then: insights into cloud provider billing, and the billing effects of code changes and deployments.
D
This one is super interesting, and it ties back to that dashboard that we were discussing before, the director dashboard or similar. It's something that I actually started working on with Dov, and one of the things that I learned from the post-deployment interviews was that sometimes you want to roll back your deployment not because something wasn't working properly, or because there are errors, but because it made a spike in your cloud provider charges. So having this insight, you know, per release, per deployment, per a lot of things...

D
...gives you a lot of power to make decisions, and money is always interesting in that sense.
D
It's not very complicated to do this integration, because I already found a GitHub project that has implemented it with Prometheus, which we already support. So there's an API in AWS (I started with AWS; I haven't checked the others, but I assume it's the same) that you can ping to see what your billing status is, and if you do that a few times, you can get more data and see what effects things have.
D
Our first step is to document how to do that without actually connecting it to anything. But the downside is that you get charged 0.1 cents every time you poll this API. So it's something that we need to think about really clearly, so that we don't spend more money on billing because of the monitoring than the actual cost of, like, a human.
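The polling-cost tradeoff above can be checked with quick arithmetic. This is a back-of-the-envelope sketch that assumes the 0.1-cent-per-call figure quoted in the conversation; the function names are illustrative only, not part of any AWS or GitLab API.

```python
# Back-of-the-envelope estimate of what polling a cloud billing API costs,
# assuming (as quoted in the conversation) 0.1 cents, i.e. $0.001, per call.
# These helpers are illustrative; they are not AWS or GitLab APIs.

COST_PER_CALL_USD = 0.001  # 0.1 cents per poll

def monthly_polling_cost(poll_interval_minutes: float,
                         cost_per_call_usd: float = COST_PER_CALL_USD,
                         days: int = 30) -> float:
    """Total monthly charge for polling the billing API at a fixed interval."""
    calls = days * 24 * 60 / poll_interval_minutes
    return round(calls * cost_per_call_usd, 2)

def monitoring_is_worth_it(poll_interval_minutes: float,
                           monthly_cloud_bill_usd: float,
                           max_fraction: float = 0.01) -> bool:
    """True if monitoring stays under a chosen fraction of the bill it watches."""
    return monthly_polling_cost(poll_interval_minutes) <= max_fraction * monthly_cloud_bill_usd

# Polling every 5 minutes is 30 * 24 * 12 = 8640 calls, i.e. $8.64 per month.
print(monthly_polling_cost(5))            # 8.64
# Tolerable against a $5,000 bill, but not against a $100 one.
print(monitoring_is_worth_it(5, 5000.0))  # True
print(monitoring_is_worth_it(5, 100.0))   # False
```

The takeaway matches the concern in the conversation: the poll interval, not the per-call price, dominates the cost, so a sensible default interval (or on-demand polling around deployments only) keeps the monitoring cheaper than what it monitors.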
A
So, what could be cool about this experience, from a flow perspective: I could see a Slack message getting sent to a group channel saying that you're reaching your cloud provider billing threshold. You click the link from Slack, and it brings you into the CI/CD director dashboard, where you can then look at the commit that is in a review app, which lets you see what's changing your performance, and it also allows you to link into Prometheus for monitoring. So it could be...
A
You
know
like
that,
would
be
a
flow
that
I
could
see
in
the
end
state.
Is
you
roll
back
from
that
cicd
director
dashboard.
D
Yeah, and from there you can do it for incident response, which also ties into Monitor and back to Plan, because it'll open an issue for you. But what's really, really powerful here is that if we have this data for enough releases, you can really, you know, see the graph over time of what's going on and how everything affects everything.
A
Okay, I like this one too; this one's a good one for us to consider as a flow. So for right now we have two candidates for our flow for us to spend more time on, and we have about 50 minutes left in scheduled time. Okay, let's see: "support mature organizations, provide a choice for navigating cross-repository, multi-cloud, complex releases." In this case, "mature organizations" was a nice word for legacy deployment workflows: people who are not deploying to the cloud continuously, or have a different...
A
You
know
waterfall
method
for
approving
deployments
and
trying
to
bring
them
into
the
modern
way
of
releasing
code.
To
answer
your
question:
jason
we're
not
questions.
A
B
Have
a
mix
in
their
portfolio,
so
you
could
say
something
like
sport,
mature
organizations
so
and.
A
Really
this
is
it's
like
a.
It
was
a
response
to
both
gartner
and
forrester.
Before
we
were
ranked
a
strong
performer.
But
again
both
of
those
reports
indicated
that
we
are
ultra
we're
over
indexed
on
cloud
native
and
we
don't
enable
legacy
organizations.
A
So, for example, when customers are looking for the impact of metrics on their deployment cycles, like DORA 4, it would be a helpful thing for them to get visibility into how their current model is going, and the benefits they could get by deploying into the cloud. Less lift and shift, less moving things: just more supporting what they're doing today and showing the benefits of using cloud.
A
But from a release perspective, I would just say: give them visibility on the dashboards into all of these global, at-scale things they may be leveraging. So, for example, in the CI/CD dashboard, allowing them to select pipeline minutes, related releases, feature flags, active runbooks, shared environments. This would help them kind of see across...
D
What
you
just
said,
like
kind
of
sparked
something
for
me,
we
have
all
this
data
for
a
bunch
of
different
organizations.
Maybe
it
would
be
interesting
to
show
on
a
dashboard
like
here's,
the
benchmark
of
organizations
similar
to
you
that
can
I
don't
know
what
the
criteria
could
be
k
lock
could
be.
You
know
the
number
of
projects
number
of
users
or
whatever
that
we
haven't
get
that
and
then
say
here
is
their
statistics,
and
here
is
how
you're
performing.
B
I think one thing that I'm thinking about when we talk about this section is that it's kind of the runbooks again. Everything we talked about in runbooks is relevant here, plus some kind of dashboard for what you are releasing across all of your instance, or across all of your groups, which...

B
...with those things, they can then have a mix of things that are automated and things that are manual, and they can release them with confidence.
A
Okay, I think I'm going to be tweaking this next one, which is about the audit and compliance narrative. I need to spend more time going upstream into the compliance team's work, because they have a bunch of stuff with policy that we could use. I've created a couple of issues around release evidence, for example. Or, going back to some of your pre-release ideas...
A
How
do
we
allow
people
to
model
impacts?
We
could
tie
that
same
kind
of
perspective
to
compliance
and
audit
like
what
are
the
likelihood
of
this
release
violating
policies
you
have
set
in
gitlab
and
showing
that
connection
like.
If
you
were
to
turn
this
feature
flag
on.
This
could
provide
a
violation
of
a
policy
you
have
set.
B
This
would
be
a
fun
one
to
make
the
mock-ups
for
because
I
think
you
can
get
really
creative
and
just
kind
of
like
show.
You
know
the
whole
world
all
of
the
interesting
things
that
we
can
automatically
collect.
Just
show
it
in
some
fake
dashboard
that,
as
if
it
all
already
exists-
and
it
would
be
really
compelling-
I
think.
B
Every company that's trying to do CD is annoyed by their audit team, and so if you give them a tool that just says, "hey, don't worry about it, we'll automatically collect everything, just wire it up the way you normally would, and then we'll have the report ready"... that's fun.
A
Yeah,
so
I
could
see
in
this
case
not
only
like
the
proactive
avoidance
of
audit
problems
and
failures,
but
being
able
to
triage
those
failures,
even
in
production
right
like
so
like
you
identify
a
production
failure
and
just
having
having
that
dashboard
and
visibility
for
actions.
I
think
jason
is
what.
B
...you were saying, yeah. Yeah, exactly. And maybe even warnings as you're headed towards the release, like "this issue seems out of compliance in some way and you haven't done the release yet." So it's not an after-the-fact thing; it could just be a warning on the release that something seems out of compliance.
A
Hyanna,
this
is
kind
of
like
the
issues
I
created
for
release,
evidence
that
allows
people
like
it
adds
a
comment
to
the
release.
Evidence
saying
that
this
violates
your
two
approval
rule.
We
could
kind
of
create
mocks
around
that
visual
too.
Here
that
would
fit
this
one.
D
Yeah,
we're
actually
exploring
something
that
kind
of
along
that
same
line
that
I
discussed
with
marin
today,
which
is
like
disabling
future
flags
modifying
future
flags.
In
case
there's,
like
some
kind
of
incidence
incident
in
production
at
the.
D
A
A
So
it
does
sound
like
two
of
these
have
a
very
heavy
cicv
dashboard
future.
A
Our production version of the release stage sounds better than what's written today, but the things that I've added under here are the ones that I'm currently working on with the team, and it was just more context for where this could go. Like, we're trying to add a heat map of environments, so that people can see active environments and where they've been throughout the day: how often are they failing, or active and available?
A
How
many
are
they?
How
time
are
they?
Are
they
idle
for
very
long?
I
could
see
that
in
the
future
there
needs
to
be
more
support
or
configuration
around
workflows,
so
customers
are
actually
are
asking
for
ways
to
do.
Non-Pipeline
based
workflows
for
releases,
and
my
expectation
will
be
to
build
most
of
that
into
the
dashboard
and
allow
them
to
action
things
at
scale
in
the
dashboard
without
building
in
like
a
an
actual
workflow
inside
of
gitlab.
D
So we already talked about the fact that we're kind of connecting everything together. We're adding to the environments view the alerts that you can see, and then you can take action on them. And we're talking about also adding actions to the alerts page, because right now it's just a long list of alerts, but you can't do anything about them.
D
So
we're
talking
about
connecting
that,
and
even
today
we
talked
about
adding
that
also
to
the
future
flags
view,
because
of
the
the
fact
that
we
want
to
disable
modification.
D
So
there's
a
lot
going
around
about
how
to
connect
all
these
things
together,
and
we
also
talked
the
two
of
us
about
having
the
environment
in
the
environment.
Dashboard
be
colored
same
color
as
the
alert,
so
you
have
visibility
that
something's
going.
D
Of course, rollbacks and even stopping the deployment: these are all things that also tie into auditing, because we want to know what's going on with our deployments. It also ties into who is allowed to press those buttons, in terms of, you know, your policy and whatnot.
D
The really big idea is that you trust your system to do everything automatically for you. So if you're really nervous that you're gonna break something in the deployment, and now you're going to sleep and you don't want to be woken up at four o'clock in the morning because someone's screaming at you that production is down: the system is going to take care of it for you. That's the big idea, that GitLab does everything for you, and you can have a quiet state of mind.
A
Let's say that we want to talk about security vulnerabilities, code quality results, or performance, and then, tying back to what you were mentioning before about revenue or cost modeling, it just sends out an email to a team, based on that threshold being met, saying that it automatically rolled back to the last successful deployment, or however we want it.
D
Something that's really interesting again, outside of the release stage, is that the Monitor team has a bunch of dashboards, and they're planning to add annotations to the dashboards. So imagine you just did a deployment and something went wrong, and now you see some kind of spike on your graph, and you can really, like, zoom in and see: oh, there was a deployment exactly at this moment. So you get a lot of information that you can...

D
...you can try it and see what happens and what you can learn from it. So it's pretty...
A
I think for the next one, advanced deployments and deployment audience environments, from a three-year vision perspective...

D
...which is a good three-year goal, because at the moment we don't really support anything on Kubernetes and we have partial support for AWS. So there's a lot of work to be done.
D
Sorry, in terms of the environment view, this is really critical, because every time you choose a subset of the audience that's going to get your deployment, it's really important to visualize it, to understand what is being deployed, who is getting it, in what percentage, and how they're performing, so that you know if you need to roll back.
A
Okay, so for the next 30 minutes, what I would want us to discuss is either the three purple bullet points that we have, and kind of detail out the flow for the designers and discuss what that would look like from a user experience perspective, or we could just pick one that we'll want to go pretty deep on, that we think would be the most comprehensive.
D
Okay. I also think it probably needs the least amount of mock-ups.
A
There's
a
lot
of
interesting
things
from
the
compliance
policy
side
that
we
could
co-create
with
the
compliance
team.
That
would
be
really
compelling.
But
if
it's
not
super
exciting
for
you,
then
I
wouldn't
want
us
to
spend
a
lot
of
time
talking
about
it
in
this
meeting,
it's
something
we
can
take
offline,
async.
A
Ways to deploy, or deployment targets, tie back to the runbooks with non-automated and automated steps; that will be easy to mock up, I think. How else would we expect a user to experience this multi-cloud, multiple-deployment-target perspective?
D
Something
that
we
didn't
mention
before,
but
is
also
really
important,
is
my
goal
is
to
leave
them
inside
of
the
gitlab
interface
as
much
as
possible
and
without
going
out
to
the
different.
You
know,
cpu,
consoles
and
stuff,
like
that,
so
the
more
data
that
we
can
bring
in
and
give
them
you
know
visibility
to
what's
going
on
is
better
for
us.
D
Yeah. Also, something that came out of the AWS user research that I did was that people don't understand what's going on. Even if they configure everything properly, and AWS says they're deploying and they get a 200 OK and everything looks fine, they don't trust it, because it doesn't really mean that everything is okay. They go and do manual testing, like pressing the links and making sure that nothing's broken, even though everything is up and running. So any visual that we can show to help them understand...
D
Jackie, I think this can also be a step in the runbook: how to validate that everything's fine.
A
I
was
just
thinking
that
too,
like
there's.
A
couple
of
you
know
ways
that
we
can
approach
that,
whether
they're,
like
manual
checks
or
providing
ways
to
configure
automatic
checks,
like
you
know,
sort
of
like
a
syntax,
validator.
A
Okay,
so
from
the
action
items
I
have
on
on
this
particular
bullet
point,
I
think
I'll
spend
some
more
time
pondering
the
three-year
vision
for
secrets
management
and
what
we
expect
to
do
is
do
there
all
my
validation
for
secrets.
Management
has
been
around
the
current
experience
variables
and
where
we
would
want
to
take
variables
and
separating
out
secrets,
as
well
as
the
hashicorp
and
multi-key
store
support,
so
the
ultimate
like
get
lab
ui
experience,
I'm
wanting
to
make.
A
I
wanted
to
reduce
the
barrier
of
entry
into
configurating,
your
key
store,
configurating
configuring,
your
target
key
store
because,
like
vault,
is
super
hard
to
and
challenging
to
configure,
you
have
to
be
an
expert
and
like
what
a
bound
claim
is
and
and
set
all
of
those
particular
items
in
order
for
it
to
work
appropriately.
A
So,
of
course,
like
a
big
vision
for
me
is
to
reduce
that
and
make
it
way
easy.
There's
also,
this
idea
of
coupling
hashicorp
cloud
products
with
the
gitlab.com
offering
that
one's
like
super
moon
shoddy,
because
we
have
no
idea
what
the
hashicorp
vault
cloud
product
would
be,
and
gitlab.com
performance
and
cost
could
be
a
factor
into
that.
So
I'll
spend
more
time
in
investing
in
what
that
vision
should
be,
and
then
we
can
tie
it
back
to
enabling
common
deployment
targets.
A
I also think I can see three issues that we'll want to create to talk about a flow around a common deployment target: associating your template machine to a runbook, and then the creation or expansion of a runbook template for global deployment targets. I think that those could be some experiences that we could mock out here.
D: Some other thoughts. I don't know, maybe I'm just an idiot for saying this, but hey, are we converting runbooks into code? Like, you just write something simple and we translate it into YAML?
A: So the things I need to investigate on that are: if we support runbook storage, does that increase storage costs for our customers, and would that be an adoption hindrance for them? Or is this really just another YAML file that lives inside their repository? And these files, like the examples I've received from customers of how their runbooks look today, can be really expansive; they can have hundreds and hundreds of runbooks, contingencies, and different deployment plans.
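[To make the "just another YAML file" idea concrete, here is a minimal sketch of what a runbook stored in the repository could look like. The path and schema are hypothetical, not an existing GitLab feature.]

```yaml
# .gitlab/runbooks/production-deploy.yml (hypothetical path and schema)
name: Production deployment runbook
steps:
  - script: ./scripts/prepare-environment.sh   # automated preparation step
  - manual: Notify marketing to send the pre-deployment email
  - pipeline: run-tests                        # trigger the test pipeline
  - manual: Review test results and approve the deployment
  - handoff: ops-team                          # hand off to another team
```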
D: Okay, and the second thing that I wanted to ask is: where we have some kind of runbook, do we have webhooks that trigger the pipeline? Which is a nice way to connect it as well.
A: Yes, yeah. This whole idea of executing scripts from markdown would be so that you can trigger pipelines and trigger potential deployments, because I imagine that if we start coupling more deployment approvals as part of a runbook step, they may trigger a pipeline up through a certain point that then ends in a manual job to approve a pipeline deployment.
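[The "pipeline that pauses for approval" pattern described here already exists in GitLab CI as a manual job. A minimal sketch, with placeholder script names:]

```yaml
# .gitlab-ci.yml — the pipeline runs tests automatically,
# then waits for a person to approve the production deploy.
stages:
  - test
  - deploy

run-tests:
  stage: test
  script: ./scripts/run-tests.sh    # placeholder test script

deploy-production:
  stage: deploy
  script: ./scripts/deploy.sh       # placeholder deploy script
  when: manual                      # a human approves by triggering the job
  environment:
    name: production
```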
A: They're super varied; there are a lot of different tools that people are leveraging for runbooks. "Runbook" is a generic term for a collection of documents, so you can have a series of scripts. Jason probably actually knows more about this, but as far as the ones that I've seen with customers, they'll have, like, a block of code that they'll run as a preparation.
A: For example: I'm going to prepare my environment for a deployment; then I have a step that says I need to notify marketing to send this email pre-deployment; then I'm going to run another script and then run some tests; and then, once I receive a passing result, I will perform some more manual actions, and then it gets handed off to another team. They can be hundreds or thousands of steps long in some cases.
A: I know that didn't really directly answer your question, but I've seen it being a mixture of SQL, plain text, YAML, Ruby, Java, because sometimes they kick off application code; you know, there are scripts for that too.
A: Typically they're running those scripts themselves, or they have a machine with an IAM permission, like a Rundeck server, that goes in and runs those scripts on behalf of a person.
B: Yeah, I mean, a good example, I think, is our new-hire checklist. That's a runbook, essentially. And then you can make them fancier by, like, imagine adding a button to it that, if you clicked on it, automatically ran some script. But at its core, it's just a long list of bullet items with checkboxes.
A: Okay, so we can all start creating some issues for this one and working asynchronously on that. Is that okay? All right. Okay, any questions? I'm gonna give Hayana and Dimitri some time to ask questions. I think that this will probably be the one that we'll decompose and work on for cross-stage mocks, or cross-group mocks, and then hopefully we'll go from there.
C: Yeah, yeah, I agree. I would love, like, a Figma document where we kind of pull in visuals of existing UIs and write down our ideas on little notes, or however we want, and create something coherent where we can look at it from a bird's-eye perspective. Right now it's a lot of text, a lot of abstractness.
E: Here, I went out to the document as well. I created a Figma file, and yeah, I'm not sure if everyone has access to it, if product managers have a Figma account, but I think it would be fun and useful if everyone could, yeah, in the future contribute to the same file, because then you can just make the changes.
A: Okay, so I'm off for the rest of the day, but I will create these issues first thing tomorrow and ping you all on them. We'll have an epic called "Release stage mocks" and then a sub-epic for this particular story, and then I think we can work asynchronously, maybe get some prototypes together, and then we can review those prototypes as a team; that will be our next sync meeting, to be efficient with our time. Sound good? Yep. What's up, everybody?