From YouTube: 2020-08-13 Ops Cross Stage ThinkBIG!
A
Hi everyone, welcome to the Ops Cross Stage ThinkBIG session. This is session number five or six, I think. I'll probably skip the part about explaining the format, because I think everyone who is on the participant list was with us before, so everyone is familiar. For today we have a little bit of a shorter session, I think, because we have just one topic, James. We usually spend about 25 minutes, but I guess if we feel like taking a bit longer, we could.
A
Yay, we have an exciting topic today. So, James, I'll just hand it over to you, so you can start the introduction.
B
Sure. So we've been talking to a lot of customers, and the topic has been around dashboards of sorts, but I don't want to focus on the feature. I want to focus more on the problem that they're having, and it's that the signals out of their CI pipelines aren't there for them to measure, to look at, and to understand the performance of the pipeline and where there might be bottlenecks in it. Testing is just one portion of that, and so we're focused on understanding where slow tests, or slow test jobs, are.
B
What jobs are the bottlenecks? I think we're starting to do a good job with the DAG and some of the other things that expose some of those, but from a pipelines-over-time perspective — not just a single pipeline, looking at the run — is there data that we can start to expose to help solve those problems? And so that's kind of the problem.
B
That's the problem space we've been operating in. The people we've been talking to are mostly team leads who are responsible for crafting their own pipelines, or directors who are in charge of overall health and — sometimes more importantly — the budget of their CI/CD pipelines: understanding, hey, how much can this cost? Especially if they're hosting their own runners in a cloud environment: how much are these runners costing them, spinning things up in, you know, AWS, GCP, whatever cloud provider you might pick?
C
I think we have to — this is a really humongous problem space, from my point of view. Can I share something really fast? Let me add some context. How do I share... there you go, sharing. So, James made some really interesting points. Pipeline performance is kind of one big bucket of signals, and then he started drilling into jobs and starts.
C
I think the first thing is really understanding what pipeline performance means, and what I have on my screen is just one example of that question of pipeline performance, and of how people understand it — because you're doing all the great stuff with that and so on.
C
What you see on the screen right now is some analysis that we did yesterday on behalf of a prospective customer of GitLab. That customer said: hey, I'm looking to move to GitLab, but I'm comparing GitLab to CircleCI, to Bitbucket, and a few other vendors in the space — in their specific use case — and here's where we get to the performance question.
C
They were looking at the performance of a pipeline to build an Android app on those three different platforms. An Android app — you use Node.js and some stuff. So, say, my use case is: we build Android apps, and we want to see how fast the pipeline for building our standard Android application, whatever it is, takes to run on CircleCI, GitLab, and Bitbucket. Initially, what they found was that on CircleCI, the total run time for a very basic pipeline that they would use took approximately 250 seconds.
C
You can do the math on gitlab.com — and this particular customer is interested in going to gitlab.com fully, not self-managed, which is another sliver of this performance equation, as James is pointing out. They noticed that the performance for that same pipeline, same exact parameters, just running on gitlab.com, took 538 seconds. So, a really significant amount of time.
C
So if you look at this graph right here, everything in red is the different jobs within that pipeline and how long each took on gitlab.com. We spent a lot of time over the past two weeks, ahead of the call with the customer yesterday, analyzing pipeline performance for that specific use case, and then we came back with some recommendations. In this particular case we'd say: hey, you know, we probably have to offer...
C
...you know, optimized CPUs on gitlab.com. So anyway, with that said, I think the question of performance is super interesting but super hard, and I'll stop sharing my screen. Because what is the goal, really, when we say: hey, we want to surface the performance metrics for a pipeline?
C
Are we saying: hey, you're running a Node.js app, it should take X; or you're running a microservices pipeline, it should take Y? It really gets into an interesting problem space. Maybe the first iteration is just saying: hey, here's what things have been, here's the current baseline performance you see, and then we can go from there. So that's how I can kick that off, with a little bit of the conversation.
B
Yeah, I think there's two different problems there — or potentially two different problem spaces. One is: if you don't change compute, how can you optimize your existing jobs and pipelines? The other is: once you've done that, is there an opportunity for you to change the runner that you're on, to run faster? Because at the end of the day, so many of these services charge by how long you've run, whether it's on .com with runner minutes, or if you're going to AWS.
B
Can you buy, you know, the 4x compute, but finish so much faster that the price difference doesn't matter? Or do those things not scale together? Maybe. So, maybe down the road, as we're offering multiple different runner types on .com, you could check a box and say: hey, find the best runner for me for each pipeline — and we could duplicate a pipeline, every 100 pipelines, on a different compute type and say: hey, turns out if you change to this one, it'll run this much faster, and here would be your projected cost, or how many runner minutes you would consume when you did it like that. That's probably a couple of iterations down the road, but, you know, thinking big: instead of just saying, hey, here's the data, if we take that a little bit further we can say: here is how you can improve, this is the action that you should take. James, did we have like a Vulcan mind meld or something like that?
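The "4x compute vs. price difference" trade-off can be sketched as a quick break-even calculation. All the per-minute prices and runtimes below are made-up placeholder numbers, not real runner or AWS pricing:

```python
# Cost of a CI job = per-minute price of the runner * wall clock minutes it runs.
# A "4x" machine costs 4x as much per minute, but the job rarely speeds up by a
# full 4x, so the totals have to be compared directly.

def job_cost(price_per_minute: float, runtime_minutes: float) -> float:
    """Total cost of one job run on a given runner (hypothetical pricing)."""
    return price_per_minute * runtime_minutes

# Baseline runner: $0.05/min, job takes 20 minutes -> $1.00 total.
baseline = job_cost(0.05, 20.0)

# "4x" runner at $0.20/min. If the job only drops to 8 minutes,
# it finishes faster but costs more: $1.60 total.
bigger = job_cost(0.20, 8.0)

# The two scale together only if the speedup matches the price ratio; here
# the team pays extra money to save 12 minutes of wall clock time.
extra_cost = bigger - baseline
minutes_saved = 20.0 - 8.0
print(f"extra cost: ${extra_cost:.2f} for {minutes_saved:.0f} minutes saved")
```

Whether that trade is worth it is exactly the "burn a pile of money to get it back in five minutes" dial discussed later in the call.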
A
Super sorry — go ahead, go ahead. Oh, I was going to say: that's kind of the crazy vision we were throwing around when I was chatting with Adrian and Juan this week. So yeah, if we could — and you're right, that is the absolute nirvana. It's like: hey, you know, my baseline job takes X and it's on .com — let's just keep it simple for a second, and then you can talk about self-managed, because that would bring up a whole different ball of wax. But let's...
E
No worries. From my perspective, if we're talking about ideas of how to solve for that, and just talking concretely about the example that Darren gave: from a CI perspective, there are also ways that you can optimize a pipeline. If we're talking about comparing pipeline run times, the way you author the CI configuration can make the pipeline run quicker.
E
Okay: of these three jobs — job A, B, C — B runs the longest, and if you can author it in a way where there's no dependency requiring sequential running, where you can author it to run concurrently, that would shorten the amount of time the pipeline takes to run in its totality. And so that comes down to the keywords you use: instead of jobs running sequentially, stage after stage, you can do it through the `needs` keyword, for example, to let them run concurrently. Again, if one of the goals — in addition to optimizing compute and the best configurations for the runner — is that there are also best configurations for the pipeline. The example you had, Darren, I'm — I don't...
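A minimal sketch of what that looks like in a `.gitlab-ci.yml` — the job names and scripts here are invented for illustration. Without `needs`, `deploy_docs` would wait for the whole `test` stage; with `needs`, it starts as soon as `build` finishes:

```yaml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script: ./build.sh

unit_tests:          # the long-running job in the test stage
  stage: test
  script: ./run_unit_tests.sh

lint:
  stage: test
  script: ./run_lint.sh

deploy_docs:
  stage: deploy
  # Without `needs`, this job waits for *all* test-stage jobs to finish.
  # With `needs`, it runs as soon as `build` completes, out of stage order,
  # shortening the pipeline's total wall clock time.
  needs: ["build"]
  script: ./deploy_docs.sh
```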
C
...it takes forever, because we have all this caching stuff that has to happen — gotcha. But yeah, it was a very basic pipeline. But you know what's interesting, Kyle: if you take what you just said, plus what James said, and create a vision, I think it's a super cool vision. I don't want to presuppose a solution, but your first iteration of the dashboard could just be: hey...
B
So I think there's two data points there that are really anecdotal today, or hidden in the UI. One of them is wall clock time: how long does it take from when a pipeline starts to when I get a result? It doesn't matter what it is. The other is runner minutes: how many runner minutes is each of my pipelines consuming?
B
Here's the data: for each of your pipelines, how long it took and how many minutes it consumed. That might be a good start — to see if people start to grab that and build their own charts out of it, or, if we expose it as a Prometheus metric or something, they can grab it and put it into Grafana. There's a couple of different options there for building their own dashboards.
B
Yeah, I think hiding it in the pipeline view is not — that's not the right persona that we would target. It's interesting for a developer, I think, but then they just end up with anecdotal data, and maybe they can point at a single pipeline. There's a ton of work then to dig that out and look at trends over time, and unless you're seeing a trend over time, it's not actionable for you.
B
So if you're a team lead, and it feels like everybody's playing a lot more ping pong this week, and when I ask them about it, it's always build times — I wonder if that's actually true — you can look at that trend over time and be like: oh well, what happened? Did we introduce a new job? Is it, you know, that crummy code quality job that's slowing us down, whatever it might be?
E
So are we saying, in the spirit of iteration, start with the very smallest thing, which is maybe two data points showing how long it takes jobs to run?
E
And I don't understand the part where, in the agenda, James, you said: how long does it take for jobs to start?
B
So there's — I've experienced this, this is past experience — if you have a shared runner pool. So I'm...
B
...self-hosted, and you have a lot of different teams: teams want to know how long it takes for their pipelines to start. What that tells the team who's managing the runners is: hey, are we scaling up fast enough? Is our resource pool of compute big enough so that we can meet the needs of our development teams?
B
What we found when I was in a PaaS role is that we built auto-scaling groups, and we basically had them run on the clock, because we were very West Coast and Mountain Time based. At 8 a.m. we would start to schedule scale-up groups, so we would always have a little bit of headroom ahead of the development teams as they were working, and then around 5 p.m. Pacific we would start to tear them back down. That meant we could use the scheduled and the spot...
B
...instances on AWS to make that compute cheaper. That was our ultimate goal: making it efficient while not having them wait a long time for those runners — or build nodes, or whatever they were — to spin up. So that's maybe a different persona: it's your DevOps manager, or whoever's in charge of the runners and the budget for the runners.
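For GitLab Runner's docker+machine autoscaling, that clock-based scale-up/scale-down can be expressed with autoscaling periods in the runner's `config.toml`. A rough sketch — the counts, times, and timezone are placeholder values, assuming the docker+machine executor:

```toml
[[runners]]
  executor = "docker+machine"
  [runners.machine]
    IdleCount = 2          # machines kept warm outside working hours
    IdleTime  = 1800
    # Keep extra headroom during West Coast working hours (8am-5pm weekdays),
    # then tear back down in the evening.
    [[runners.machine.autoscaling]]
      Periods   = ["* * 8-17 * * mon-fri *"]
      IdleCount = 10
      IdleTime  = 3600
      Timezone  = "America/Los_Angeles"
```

Combined with spot instances for the machine options, this approximates the scheduled scale-up/tear-down pattern described above.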
C
Okay, gotcha — interesting. So that description of those use cases, James, sounds like it fits most clearly into the other concept that we went into in the comments on the agenda — which is potentially my dreams all over the place — which was that concept of, not Jackie's director-level CI/CD dashboard, but the runner enterprise management...
C
I hate the term dashboard, again — but anyway, for the runner enterprise management view, what we were thinking about at a high level is that today, in the GitLab UI, you've got different views depending on which level you're at for looking at the runners. I know we're kind of getting off topic, but then we're thinking: look...
C
We need to figure out a way to add more value to whatever that view is that you have access to, so that, as a DevOps persona, you can do more things. We just had a happy discussion about that, and so when you raised this cross-stage topic, we were thinking: okay, how would some of the metrics stuff that you're talking about potentially fit into that?
B
Yeah, you can take that a step further: if we're talking about different types of runners, you could even start to profile within a pipeline. Does my unit testing job need a different type of runner? Or am I training a model, and that needs a different type of runner than the rest of the jobs — so that you're only using the really expensive ones smartly and in the right places? Think about taking that another step, from what kind of runner do you need for your pipeline...
B
...to what kind of runner do you need for every job? You can start to do some loose matching of what kind of job this is, and maybe we even start to look at the gitlab-ci.yml, see some of the things that you're doing in it, and suggest a runner that is efficient for you. And you could give them a scale of: well, I want to run really cheaply and I don't care how long it takes; or, I will burn a pile of money if I can get this back in five minutes — and give them some options in between.
C
I guess we're kind of assuming, then, that there is one segment of the population — one market segment of organizations — that will (a) choose to self-manage. Maybe it always gets kind of squirrelly doing this; let's just keep it simple: choose to self-manage runners for the CI/CD stuff. Within that universe that chooses to self-manage runners, you have roughly two categories. Category A is: hey, I'm going to run these runners on public cloud — AWS, Azure, Google, DigitalOcean, you name it. The other segment might just be like: no way...
C
...this is running on-prem: IBM mainframe, Red Hat Enterprise. So I guess the question for us, as we think big about this, is: which segment? Because it might be easier for us to solve some of these cost management things for the public cloud user segment versus, you know — and we need to think about that.
B
I'm going to say that that's your expertise — what that market and the segmentation look like, Darren. I'm only speaking from personal experience of problems that I've encountered, and we were running build nodes in public cloud.
C
At least with the public cloud, you're like: hey, on Google, it's costing you two dollars an hour for this type of VM — you can create all your cool dashboards. But if someone is like: it doesn't make sense for me, because, Darren, I have Red Hat Enterprise on-prem OpenShift — you've got me. I know what my costs are. Is it relevant for you to have this kind of feature, you know?
B
Yeah, I think the problem space is different for those folks who are self-hosted and kind of locked into their compute type. For them, we want to talk about how you make your gitlab-ci.yml more efficient or more effective. And then there's interesting things around: this job just always passes, or if this job fails, this job's going to fail too — and I think Tao and the CI group are already starting to do some great work there, where you talk about how you just reduce the...
F
...how am I saving time or money with this immense amount of change? So being able to afford, like: hey, here's ways that GitLab can help reduce your waste and cruft, even if you're not using us end-to-end for your deployment cycle. That's one learning that I also had. I do need to jump off in like five minutes, so I just want to say one other thing, more tactically.
F
I would love to help break down the current CI/CD dashboard issue into segments for us to deliver across our groups. I think the problem we're trying to solve with the CI/CD dashboard is exactly what we're talking about here: it's administration of the return on investment that GitLab offers you. I think that runners are a big cost driver there; I think that CI pipelines are a big cost driver there. Continuous deployment doesn't have a cost driver of its own, because it's relying on a pipeline.
F
The only thing there is your compute and the resources that you have inside of your cloud providers, and finding ways to ingest something like a CloudHealth metric, or displaying connected insights from Prometheus. So, thinking bigger: how do we look at post-deployment monitoring in the context of Kubernetes deployments, as well as on-prem or private cloud deployments?
F
So, as far as some next steps that I will take out of this: I'll probably ping James, you, and Darren, so we can tether together these different dashboarding experiences — whether that's, Darren, we cross-link your runner management board from the director dashboard, whatever we decide to create there. I just want us to be thinking more about the entry point for customers to get this ROI management experience in CI/CD.
F
I feel like we're missing that entry point today, because we could nest this at the project level or the group level and have it in CI/CD insights, but that might not be discoverable for the director who's managing an instance-level cost center.
C
Okay, yeah, that makes sense. What is the term you used — insights management? I think that's completely appropriate. If you have an insights management view, whatever it is, you can have that and still have, you know, two dashboards: like, you could have runner enterprise management for admins and DevOps.
C
It just links, because what we're going to do with the management of the runner at scale is things like turning stuff on and turning stuff off — it's more of that turning-the-dial kind of thing. And then, to your point, you want to uplift: hey, what's really happening? Then you can make sure it links nicely to whatever the analytics-and-meaningful-insights dashboard thing is that you're coming forward with. That's the economics, yeah. This is super cool.
F
But that's kind of my gut reaction from talking to a bunch of customers with hybrid deployments that are like: my head's on fire, because I'm on AWS, GCP, Azure, and I can't manage my cost because I can't compare them effectively. That's the problem that I'm hearing time and time again.
F
...which just shows an aggregation of releases information and pipeline information as an MVC at the group level. Customers are like: I can kind of do this today when I extract information with the API. What I want to see is analytics, and actions that GitLab is telling me to take.
G
Yeah, they always want predictive dashboards, they want instructive dashboards. They don't want it just to sit there and report, because they want the mental workload taken off of them. Please just tell me where I need to do what I need to do; make it even easier — have it be a button that I can just click, and then GitLab will know, and it will go modify the thing that needs to change, because I have 15,000 other things I have to do. Please just make my job easier.
G
The other thing that I've — I don't know if we suffer from this, but I've heard it in the past at other companies — is: why do I now have 10 dashboards to go look at? That's not actually helping me at all. And don't put all 10 on one page either; that doesn't help me at all, especially if some of the information is dependent on the other information. They need the dashboards to be much more intelligent, to talk to each other, and to level it up in a way that helps them do the critical thinking.
F
Totally, and that's exactly what we're hearing from users when we put the current mocks in front of them for the CI/CD dashboard. They're a little rudimentary, in that it was a super-MVC approach where we're just saying: hey, let's take the information that currently exists in GitLab and bubble it up at the group level — and people were not receptive to that. It could have been the directors I was talking to; I talked to three different directors.
F
We have a conversation with another customer that Hyon and I do today, to dive into some of their more nuanced needs around administering at scale as well — and they're actually purely cloud-native, so that'll be a really cool take from them. But the general theme is: I want nuggets of insights for me to then take forward, and I want to know how that's doing over time and whether I should change my behavior.
C
Yeah, Jackie — and then you have to jump off. I just linked it in the chat: if you scroll down on their site, they have, in their marketing, "decisions, not dashboards."
C
I'm looking forward to having new drivers and supporting you, but if we can think differently about this and say we have a GitLab point of view — where we're not just aggregating a bunch of metrics and becoming a data analytics or BI shop — it's like: no, we have a point of view of producing something that allows you to make decisions, in GitLab. That would be super cool, I think, yeah.
F
But it's all about getting people to be faster, leaner with GitLab. And we can create two versions of each of those — for cloud-native, and then for hybrid — so that people can leverage the Prometheus metrics that already exist, or plug and play with things like CloudHealth, or expose an API endpoint for AWS management engines, so they can just pull that data in from AWS and display it in GitLab.
F
Because that's what they're wanting to do: they want to aggregate it themselves, or extract it themselves. But we also have to think about how to solve this hybrid deployment model, because today it's impossible to do. I want to drive this, I want to help, I want to parse this work out, so we can meaningfully solve these problems — and I know that our customers today are not happy with what Hyon and I have built. So we need to...
F
We need to act fast and actually put something out there that people want to use. I'm sorry, I have to leave — I feel awful, but thank you for this conversation. I'll be pinging you all on issues too.
B
Well, I think we have a lot of really think-big ideas, but maybe we can take — you know, not the whole 20 minutes, but a few minutes — and talk about what would be the smallest iteration that would give someone not just data, but potentially a view of: something has changed in a direction I don't want, and here is a thing I can go do about it. Even if that thing is: hey, your wall clock time increased by 25% between this pipeline and this pipeline — go look at that one.
G
I'm not sure, because what I feel like they're asking for is something that's not that simple to give them. We tried simple, and they're like: yeah, that's nice — I want more. I want something more actionable and more intelligent than what you can give me. I don't know if we can point — if we can compare pipelines from the last run to this run and say: over here, in this general area of the pipeline, it looks like that took longer — instead of just the whole entire clock time.
C
So that's a question — let's see if we can answer that question. Let's just say today Darren wants to go and say: I have a project in GitLab, let's just call it Darren's project — a simple one, Darren's project. And let's just say I don't change the pipeline file — I mean, this is a really bad example, but anyway — let's just say I don't change the pipeline file, and I run it every morning...
C
...at eight o'clock. If I want to get, today, a baseline view of the performance for the past week, how would I do that in GitLab for each of those runs — my Monday morning, my Tuesday? Do I have to go into each pipeline view and say: Monday it took two minutes, Tuesday it took a minute and a half, Wednesday it took three minutes? How do I get that baseline view of what it's been doing? Does anybody know?
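Today, this kind of weekly baseline mostly has to be scripted against the pipelines API. A rough sketch of the summarizing half — the project ID, token, and endpoint usage in the comments are placeholders based on the standard GitLab pipelines API, where each pipeline record carries a `duration` in seconds:

```python
import statistics

def baseline(durations_seconds):
    """Summarize a week's pipeline durations into a simple baseline."""
    return {
        "runs": len(durations_seconds),
        "mean_s": statistics.mean(durations_seconds),
        "max_s": max(durations_seconds),
    }

# Fetching the raw data is the tedious part today: list the project's
# pipelines, then GET each one for its `duration` (IDs/token are placeholders):
#
#   GET /api/v4/projects/<id>/pipelines?updated_after=<one week ago>
#   GET /api/v4/projects/<id>/pipelines/<pipeline_id>   -> "duration" field
#
# e.g. Monday 120s, Tuesday 90s, Wednesday 180s:
print(baseline([120, 90, 180]))
```

This is exactly the manual click-through-each-pipeline work the question describes; the point of the proposed iteration is that GitLab would surface this summary itself.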
C
Is it helpful to have a baseline? Because, to James's point — when I hear James say "I need to make a decision," I always go back to my data warehousing days, and I'm thinking: what's my baseline, you know what I mean, and what am I expecting to be the threshold of reasonable performance? Something that gives me a look: hey, something has changed in a good or bad way — let me go figure out why.
B
It looks like in the pipeline list, for pipelines that are finished, you get the total run time for them. So that's a data point that we have — but comparing those, because there's not really a filter... oh, I guess there's a status filter, all right. So you could filter by just the passed pipelines, and then you can start to see — because they sort, I think, by pipeline ID, which is by most recent run. So yeah, you can start to compare.
C
So let me ask you: if we were thinking about trying to come up with a baseline, would we want to do it at the project level? Say: all of the pipelines in this Android project for the past week took this much time to run, in terms of wall clock time — as, like, your very first iteration. Do people think that's helpful? I'm just totally brainstorming the smallest iteration of something that might...
H
I think one thing — it's probably not related to a baseline number, but more like a baseline configuration — is this idea I have: I think many of the issues that people report when a pipeline is slow have a lot to do with how they have their gitlab-ci.yml file configured.
H
So, for example, one thing that I see a lot is that people don't cache their dependencies, and that, of course, increases the amount of time a job takes to run, because it needs to go and download a bunch of things that it really shouldn't. So maybe the first step — which is something that I always think — is we first try to teach people how to write good gitlab-ci.yml files. I honestly think that our experience for writing gitlab-ci.yml files is so bad, right?
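The dependency-caching point is the classic example. A minimal sketch for a Node.js project — the image, paths, and script here are illustrative, not a prescribed setup:

```yaml
# Cache downloaded dependencies between pipeline runs, so each job
# doesn't re-download everything from the registry every time.
test:
  image: node:14
  cache:
    key:
      files:
        - package-lock.json   # reuse the cache until the lockfile changes
    paths:
      - node_modules/
  script:
    - npm ci --cache .npm --prefer-offline
    - npm test
```

Without the `cache:` section, every run of this job pays the full dependency download again, which is exactly the avoidable slowness being described.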
H
That's my honest opinion. Like, you open the gitlab-ci.yml file and the linter is on a different page; the feedback that you get from the linter is very loose, very vague. There's so little guidance, you know, on how to efficiently write a CI configuration. So I'm just going one step back, because I feel that before we even start setting thresholds and baselines — like, hey, this pipeline needs to take five or ten minutes —
H
— let's first teach people how to write good GitLab CI configurations, and then we can create those baselines. Because right now, I think if we create a baseline for anything, it's going to be based on poor practices. If we can take it from that perspective...
E
And so we're introducing that slowly: in addition to giving that warning to the user, we're linking to the documentation to educate them on how to write their CI configuration better, to optimize their pipeline. And I just saw Veethika put in the Zoom chat that she's working with Jason — Jason's focusing on CI authoring topics under the CI group. So we totally agree with what you're saying, Juan: the first step is to make sure the customer has written their pipeline in a way that makes it run optimally.
E
Or, if we're talking about a small first iteration, it could be that we surface the data on which jobs — or pipeline, but maybe just starting with jobs — are taking the longest to run, and give the user the option: do you want to run this through the linter tool, just to get the warnings and suggestions on how to better author your YAML file for that job?
E
That might be the smallest iteration, because I know that the CI team already has issues in the next and upcoming milestones to improve the feedback that the linter gives. So maybe we tie into that and say: for jobs that are running slow, we just give them the option — do you want to run this through the linter to get suggestions? And those suggestions not only call out the ways that they're making mistakes in using the syntax; they also link them to documentation, to self-educate.
E
Yeah, maybe that one is the very smallest: here's your slowest job. And then a later iteration is: do you want to see how you can optimize this job, in the way it's authored?
C
Yeah, I think exposing the "here is your slowest job" kind of makes sense. I'm just a little bit worried about the next step in that, because the optimization piece could become a really deep rabbit hole. Linting is one basic thing, but, you know, these jobs are complex, depending on what they're doing. Maybe we don't have to solve that right now, but that's just my only concern: they're like, okay, so now you're telling me the job is bad — so, okay, how do I fix it?
B
Yeah, I think we were talking about one massive dashboard, or a couple of different dashboards — data presentation views, let's call them. That data is interesting to different personas, and so maybe the very first issue here is just to identify which data maps to which persona, and then we end up with a few different MVCs, for each of those personas and problem spaces, of: here is the first bit of data, and an action you take after you identify it's trending in the way you don't want.
B
Then you have your team lead, who cares about wall clock time for their team's pipelines or their project's pipelines; within that, they might start to care about, well, which jobs are the slowest that contribute to that slow wall clock time. So if we break it down into those three — and there might be more; those are just the three that I remember, and I'm not taking notes, thanks to everybody who is — if we make three issues there, we can start to at least explore: what else is in the problem space here?
H
I feel good about that. I think that's a good next step, for sure.
A
I like that you're turning to user validation as well, because, yeah, it's nice to have an idea, but then it's nice to see if that direction is correct — or, like, how people would consume the data. So, sounds good, yeah.
B
Yeah, towards that end: let's make these problem validation workflow issues, just to validate that we're in the right problem space. And then, I guess, first steps would be to work with our designers and our research folks on interviews, or a survey, or however we want to take that research on.
C
Yeah, that makes sense. I guess the question I have with that is: do we need to make sure we coordinate that research effort with whatever research effort Jackie already has going on at the director level?
B
Yeah, I think we take that CI/CD dashboard and we split it into two, to be honest. There's maybe the same persona who cares about the CI part of it and the CD part of it, but I think we really should bifurcate those issues. That's a lot of data about things that are happening before deployment and things that are happening after deployment, in one space, that I'm not convinced the same people want to look at.
E
Plus — yeah, plus the data in that issue about pipelines: she said that in solution validation they found that users did not find that data helpful. So that would need to be reworked anyway, and it's another reason to separate it from the metrics on that dashboard around releases and deployments.
D
I think it makes sense to split, but there's also — I think we should focus on the possibility of adding entry points in the other sections of the product, so that we make sure the personas that are working there — for example, the release manager — can also have access to, I don't know, specific parts of the pipeline. Same thing with security, etc.
D
So if we can remember that, I think it makes sense to split, but I would love to also hear Jackie's thoughts on that. But I think we can catch up — we'll see. Okay.
A
There are tons of notes — I did my best, and I highlighted the action items in bold, so yeah, I think we have enough for documenting it properly. Okay. And we also have the recording, of course, yeah.
B
I'll take on starting to create those issues. But, Darren, I'll tag you in those as well, to help get them into the right groups, or to the right DRI.
A
Awesome. Well, what an active and interesting discussion — thanks a lot for contributing and participating. That was awesome. We almost used the whole time, so it sounds like the topic was really important. Yeah, James, if you will be creating issues, adding a label would be very helpful for tracking the helpfulness of these sessions. And I just added some links here: we are actually thinking of changing this format a little bit, to make it more async, or more interactive, or maybe more visual.
A
I don't know — there are some ideas outlined, and if anybody is willing to help us, go leave your thoughts there. And, of course, as always, if there are some retrospective items — things that you want to add to the issue — the links are at the bottom of the notes for this session.
A
So, thanks a lot, yeah. Let me know if I can help with following this up in any way, but this was a great discussion. Thanks a lot, thanks everybody, thanks.