From YouTube: Improve Pipeline Schedules
A
All right, Lee. Well, thank you so much for reaching out to me and for having some really great ideas about how to improve our pipelines experience in general. I do have a notes doc that I added to your invite, and I'll drop it in the agenda here, just so that I can start recording some of your thoughts and we can share them with the product manager over our Pipeline Execution group. He's on PTO this week, so I wanted to make sure that I connected with you.
B
It sounds like you're super organized. I didn't catch the docs link in the... oh, did you say you chatted it?

A
I just chatted it, sorry. I thought it was in the invite.

B
Okay, no worries. I'm glad, because I was about to say that I'm super disorganized and was just going to touch on random bits and pieces. But it's awesome that you've got a bit of structure.
A
Yeah. I think the first place I'd like to start is the issue that you submitted, where you talked a little bit about using scheduled pipelines for something that I actually haven't seen before: your company is going through a couple of different transitions, and it looks like you're replacing certain tools with GitLab. So I would love to learn a little bit more about those. Maybe a specific example of a legacy system that you're replacing, and why pipeline schedules would be a natural fit for that.
B
Cool, cool. So let me go off on just a tiny, tiny little tangent, but one of the things that we're trying to replace is our support and help desk type system. At the moment we only use our... oh god, my brain. We only use GitLab for our merge requests.
B
We use the package repository and loads of other bits and pieces, but we don't use issues. We only use merge requests, so we've got a separate help desk system, a bit like you guys have got Zendesk or Salesforce or whatever it is. We manage all of our issues or tickets in another system.
B
So one of the really big projects I've been focused on is trying to get our help desk migrated over to GitLab for issues, and that's all the CRM work that you may or may not know I've been working on. That's a slightly different note, but it encompasses trying to transition systems and move across. So, on a slightly unrelated but fairly similar note, we are also in the process of upgrading other systems; our clients are moving to the cloud.
B
We've got various different bits and pieces. We provide all sorts of technical solutions: system implementations, consultancy, system and process reviews, you name it, we do it. And one of the things that we do quite a lot of is what I would call integrations or interfaces.
B
It's where you've maybe got two different systems, or maybe even just one system, and you want to grab some data from the system, maybe massage it, manipulate it a little bit, and then maybe push it into a different system, or push it back in, or generate a report, or whatever it might be. It might be simple; I think we had a conversation where I said it might be a small CLI-type script, but more often than not we're working with .NET.
B
So we're writing little console apps that contain the code that will connect to these systems using APIs or however, grab some data, do some work with that data, and then push it somewhere else or do something with it. So, yeah, in the past we would always have servers, generally on our client premises, so we would build these apps, these console apps, these interfaces or integrations, and then, when they were ready, we would compile them and deploy them manually.
B
For the moment we haven't really got to the CD bit of CI/CD; we're slowly getting there. We've put automated testing and linting and that kind of stuff in, but we're not really there with CD, so we're deploying manually. And this brainwave, which probably should have come to me a very long time ago, was: well, why do we need a server with these things deployed? Can we not just run them in a pipeline?
B
Can we not just spin up a Docker image, compile and execute the app, the interface, the integration, and you're done? We no longer need to maintain and manage these servers: cost savings. We don't need to worry about deploying the latest versions, or making sure that we only deploy once we've merged to master; all of our git flow processing, it just takes a whole load of the headaches out.
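As a rough illustration of that idea, a scheduled-only job in `.gitlab-ci.yml` might look like the following sketch (the image, project path and command are hypothetical, not from the call):

```yaml
# Hypothetical sketch: run an integration from a scheduled pipeline
# instead of a long-lived server. Image and paths are made up.
run-integration:
  image: mcr.microsoft.com/dotnet/sdk:8.0
  rules:
    # Only run when triggered by a pipeline schedule, not on every push.
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    # Compile and execute the console app; the container is thrown away afterwards.
    - dotnet run --project src/MyIntegration
```

The `$CI_PIPELINE_SOURCE == "schedule"` rule is what keeps the job out of ordinary branch pipelines.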
B
Something else that spurred me on to this was that, with the move of the help desk, we want to start using gitlab-triage, and it was figuring out that gitlab-triage runs as a scheduled pipeline that got me thinking about all of this stuff and how we can use it. So forgive the very long-winded response, but yeah: scheduled pipelines. I'm starting to test them, and it looks brilliant, it does exactly what we want, but I'm starting to think about...
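For context, gitlab-triage is driven by a policies file that the scheduled pipeline then executes; a minimal, hypothetical sketch (the rule name, interval and label are invented):

```yaml
# .triage-policies.yml: a made-up example policy for gitlab-triage.
resource_rules:
  issues:
    rules:
      - name: Flag stale issues
        conditions:
          date:
            attribute: updated_at
            condition: older_than
            interval_type: months
            interval: 6
        actions:
          labels:
            - stale
```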
B
Well, how do we then have visibility if one of these integrations falls over, maybe permanently, maybe just temporarily? Maybe it's replicating data from one system to another and there's a field that's too long; in the past we've never had that problem, and now it's violating some kind of constraint, or maybe there are too many records, or whatever it might be. So I'm starting to think about how we can better monitor and have visibility of that kind of thing.
B
I get notifications for just about everything at the moment, and I really need to reduce those and change my settings in GitLab. When I did some research, it looked like scheduled pipelines notify the owner by default, and I presume that's only on a change: if a previously successful pipeline is now failing, or a previously failing pipeline is now successful.
B
So one of the things I think I said to you, or suggested on the issue, was that we could create a sort of bot, a fake user, and create the scheduled pipelines as that user with a special email address, so that that's where the notifications go. But my immediate thought was: surely this is configurable somewhere? When we set up the scheduled pipeline, we should be able to say, notify this email address, or these users, or this group, or something along those lines.
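That bot-user workaround can be scripted against GitLab's pipeline schedules API, so the schedules are created as, and therefore owned and notified by, the bot. A minimal sketch; the host, project path and schedule details are placeholders:

```python
# Sketch: build the API call for creating a pipeline schedule as a bot user.
# POSTing this with the bot's PRIVATE-TOKEN header makes the bot the schedule
# owner, so the default owner notifications go to the bot's email address.
from urllib.parse import quote, urlencode

def build_schedule_request(host, project_path, description, ref, cron):
    """Return the URL and form body for POST /projects/:id/pipeline_schedules."""
    project_id = quote(project_path, safe="")  # URL-encode "group/project"
    url = f"https://{host}/api/v4/projects/{project_id}/pipeline_schedules"
    body = urlencode({"description": description, "ref": ref, "cron": cron})
    return url, body

url, body = build_schedule_request(
    "gitlab.example.com", "group/integrations",
    "Nightly ConnectWise sync", "master", "0 2 * * *",
)
```

Sending that request as the bot account (with curl or any HTTP client) is what routes the notifications.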
A
No, you're good. One note on that: I think that you are able to specifically subscribe where you're not the owner of a pipeline. You can subscribe, for that project, to notifications on failed pipelines, fixed pipelines and, I think, successful pipelines. So anybody can subscribe to that for that repository.
B
Wicked. I'll have a look at that after the call, just to see how you actually go about doing that.
A
Beyond that question, I did want to clarify: these integrations that you're building sound a little bit like just an app. They are currently running on a server, which could be a Kubernetes cluster instead, right, or a container of some sort?
A
When you say you want to monitor the integration, could it be the same thing as monitoring the environment, to see whenever a job has been deployed to that environment, to make sure that you are seeing the same sort of operations? Like: is it available?
B
In this instance, first and foremost, it's success and failure. For most of these we're not as advanced as Kubernetes at the moment, and the idea of this is that it's a throwaway environment, i.e. the pipeline runs, it spins up a Docker image, it does what it needs to do, and then it destroys itself.
B
I want to see: oh, it's currently failing, or, in the last seven days it's failed, say, 10% of the time. It's starting to get those sorts of metrics, as well as the follow-up where I added a bit more complexity and said it'd also be nice to be able to get a little bit of output out of the job, the pipeline, as well: just "I processed this number of records", or something along those lines.
A
Yeah. For this conversation we'll split them, because with pipeline schedules and the notifications for pipeline schedules, there might be certain notifications that aren't in our current notification structure. So, for example, when a scheduled pipeline runs, there's no notification on that. If you have a scheduled pipeline and it starts to run, you're not going to get notified; it'll just run, and then you'll get notified of success or failure, right?
A
What has been produced out of that job, what's the performance of that, and seeing the latest artifact that's been successful, the latest artifact that's failed, so that you can see at a glance the history of these deployments of the integrations from an artifacts point of view, rather than from the pipeline point of view.
B
Yeah, yeah, exactly. And I've lost my train of thought for a little bit there, so... yeah. I guess it's because we're really small, so it's about seeing where we can take things that maybe we're over-engineering at the moment and find quick, easy, simple solutions. So, you know, the real go-to-town-on-it solution...
B
...I guess, is using Kibana and, I don't know, Elastic or whatever the tools are: something where we're sending logs to a log server and, you know, going crazy. But the most basic interface we want to build might be a bash script that uses curl or something like that to grab some data and then FTPs it: you know, a couple of lines of code. And it would be super awesome to be able to make that into something without having to write hundreds of lines of code.
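That "couple of lines" version could, in principle, live entirely in the job definition; a hypothetical sketch, where both URLs and the credential variables are placeholders:

```yaml
# Hypothetical sketch: the whole interface as one scheduled job.
fetch-and-forward:
  image: alpine:latest
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - apk add --no-cache curl
    # Grab some data from one system...
    - curl -s https://source.example.com/api/records -o records.json
    # ...and FTP it to another, using credentials from CI/CD variables.
    - curl -s -T records.json ftp://target.example.com/inbox/ --user "$FTP_USER:$FTP_PASS"
```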
B
I think that probably makes a bit of sense, and that's what these scheduled pipelines are starting to make me realize. We don't need to deploy to servers and set up scheduled tasks on a Windows server and scrape logs using a Cacti monitoring tool and all these crazy things when, actually, we can just run it straight from here.
B
Yeah, so there's a couple of things. One probably wouldn't sit on the pipelines page itself. So this is what I knocked up, the SQL that I shared on the issue; this is pulling from Postgres directly. I've set up one dummy, one real and one work-in-progress integration/interface scheduled pipeline in our GitLab instance, and you can see the success rate over the last seven days: how many of the jobs have succeeded.
B
How many have failed; and this metric is actually the number of failures in the last seven days, or something along those lines. You can see I've got some other information available here, though obviously I messed that up a little bit. I can see what the most recent status was, when the next run is, when the most recent run was, and there's a bunch of other bits and pieces that I can stick on there.
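The kind of query being described might look something like this sketch against the instance's Postgres database. The table and column names are assumptions, the schema varies by GitLab version, and querying it directly is not a supported interface:

```sql
-- Hypothetical sketch: per-schedule success/failure counts over seven days.
SELECT s.description,
       COUNT(*) FILTER (WHERE p.status = 'success') AS succeeded,
       COUNT(*) FILTER (WHERE p.status = 'failed')  AS failed,
       MAX(p.created_at)                            AS most_recent_run
FROM ci_pipeline_schedules s
JOIN ci_pipelines p ON p.pipeline_schedule_id = s.id
WHERE p.created_at > NOW() - INTERVAL '7 days'
GROUP BY s.description;
```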
B
I think the challenge I often find with GitLab is that you've got people that have one massive project and that's it; then you've got people that have got a group; and then you've got people who have got an entire instance.
B
So I think this is a little bit specific to our needs, but I'm sure there are similar people that have a lot of projects on GitLab and would like more of an overview, a window into all of those projects, so to speak. This is so that we don't have to go into each of these individual projects to see how it's doing. For the bit where I said about how many records it processed, I'd probably be happy to go in.
B
If I wanted to specifically see this ConnectWise interface, I'd go into that project; on the pipelines page I can see the last 10 runs, or I can page through and see the last 100 runs, and, you know, each time it ran and it processed 100 records, and so on and so forth. But in a summary view I don't think I need anywhere near that level of detail.
B
I just kind of want to see: here are all our integrations; this one's green because it's running at, you know, a 99% success rate; this one's amber because it's not quite so high; and this one's red because it's awful and it's currently failing. So again, forgive me, because I think we're covering another two points here: we're covering the additional info on the artifacts page, or whatever page it is, which is the lower level, but this is kind of a higher level.
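That traffic-light summary is easy to pin down as a rule; a small sketch, where the thresholds are arbitrary choices for illustration rather than anything GitLab defines:

```python
# Sketch: classify an integration's recent pipeline results as a
# traffic-light status. The thresholds are invented for illustration.
def traffic_light(statuses, amber_below=0.99, red_below=0.90):
    """statuses: recent results, oldest first, e.g. ["success", "failed"]."""
    if not statuses:
        return "grey"  # no runs yet
    rate = statuses.count("success") / len(statuses)
    if statuses[-1] == "failed" or rate < red_below:
        return "red"    # currently failing, or failing too often
    if rate < amber_below:
        return "amber"  # mostly fine, but not quite green
    return "green"
```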
B
I would like to see, somewhere in GitLab, analysis pages that have graphs and charts and...
A
We do have one view here. For example, under CI/CD analytics, let me share my screen real quick: there are overall statistics for a particular project, with total pipelines, successful pipelines and then failed pipelines, and you can filter all that by clicking into it, and then you get a success ratio.
B
Well, we've kind of come to the conclusion that we need to create a single group and put everything nested inside that single group, because of the way that GitLab is currently architected.
B
It sucks, and I realize that you're looking at kind of everything that's going to become a namespace or workspace or whatever it's called, and, you know, things will improve, but not soon enough, I don't think.
A
Got it. So I do think that that's at the project level, like those charts are, but there are definitely lots of things on the Pipeline Execution roadmap around pipeline reporting and flexibility, so that you have a single place that you can go. Think about how our operations dashboard currently lives at the top-level group: it's being able to have that for a pipeline view. So I think that's the intent there.
A
From a reporting perspective, I think that's great context. I'll send you the epic so you can follow along. Perfect.
B
Not necessarily, because I realize that, with priorities, there are so many things we want to do, and it's hard to make the cut, so to speak. And again, I think GitLab reporting has been one of the weakest areas.
B
I think that's more because I don't know what's there and I don't use what's there, as opposed to it maybe not being there in the first place. Or maybe it's the lack of configurability and customizability: you've got these out-of-the-box views, but if you want to tweak something, you often don't have that ability.
B
You know, again, a lot of the work that we do is building reports for our clients, so we're used to, I guess, building reports from scratch and having the full flexibility to do exactly what we want. Hence why, at the moment, we're going to have a whole suite of reports that query the database directly: we need to be able to do our timesheets and our billing for our clients through GitLab and all that kind of stuff.
B
So it's fine as a sort of interim measure, but what I'd love to do is share what we want, share what we may be able to achieve, and then see if there's any desire, if it overlaps with what other people are already doing, or if we can help to maybe push some of this stuff into the core product.
A
That makes sense. So I've linked the two primary epics under items two and three in our agenda. I think those two are going to give you better insight into what we're thinking about in Pipeline Execution, and then, once I share all this stuff with James, he'll be able to set up a follow-up call with you, and maybe his designer, Vitica, to walk through these mocks and really get some of your feedback into, well...
A
How can we get that flexibility at these different hierarchies in GitLab, and those kinds of nuances? I do think that there's a lot left to be desired in GitLab reporting, so hopefully we can start to improve this. Definitely. And then, when you first mentioned this observability of CI/CD pipelines, that's why I linked that really big epic: we do have a lot of priority around monitoring pipelines and monitoring environments.
A
So I think that you might get some of the things that you're looking for out of that workstream as well; it does seem like there might be a couple of areas where you can get the reporting needs met from that, for sure.
B
Yeah, and with 45,000 issues it's very hard to discover what's already out there and to search for the right terms. But that's why it's really, really helpful speaking to you and having you share those bits, because I can follow along on some of those issues and epics, and, you know, I've already cross-posted.
B
So thanks for the list. I've shared my use case, which, again, I realize is fairly specific, but I'm sure there will be a number of other people, and sometimes, if it's a case of "well, actually, we can cater for that, as well as what we're already catering for, by just pivoting and tweaking this a tiny bit", then it's worth doing.
A
Okay, got it. I'll post this recording in our issue from YouTube Unfiltered, and then, as you start to explore the notifications and scheduled pipelines, keep me posted and see if there are any quick ones that we should be able to explore adding to the notifications of pipelines.
C
Yeah, subscribe to the project's pipeline notifications. Okay, okay.
B
Yeah, that's right. I think that is what I had previously found: the pipeline emails integration. I said it wasn't specifically for scheduled pipelines, but that shouldn't matter a huge amount. To give you an example of why...