From YouTube: CDF - SIG Interoperability 2021-03-18
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
B: Yeah, I think Andrea was talking about not having meetings this period. Like next year, October, or was it November, when will it take place again? So we can perhaps skip that meeting if it falls in between, or we fix the world; I don't know which one is easier.
B: So let me share my screen. Thanks everyone for joining, and thank you, Dina, for joining.
B: That would be good. I hope you can see my screen, and we can slowly start with the agenda, followed by action item review and a few notes. Then, as we discussed during previous meetings and announced on the mailing lists, we have Dina with us today, giving us a short presentation on the Four Keys project, followed by an open discussion about how to collaborate, what kind of challenges they faced, and what kind of challenges we are working on addressing within the Continuous Delivery Foundation under different groups and projects. The last topic is a heads-up on policy-driven CI/CD, or rather a reminder, if time allows.
B: The first action item is starting to work on artifact metadata and bringing the example from Ortelius. Do we have Steve with us? Not yet. But I remember seeing some updates to the HackMD documents.
B: Okay, yeah, so Steve made the updates. I will keep the action item open and confirm with him on Slack to see if he wants to close it or not, and I'll do it accordingly. The next action item was on me: to send mail to the CDF TOC and end-user mailing lists about Dina's visit to this meeting. I did that; I sent mail to the sig-interoperability, TOC, and end-user mailing lists. I also put a note on the SIG Best Practices and SIG Events Slack channels, because those groups are using Slack heavily.
C: I have added to the HackMD doc for Jenkins open source. It's relatively light, but there is actually interest in supporting Open Policy Agent, and in fact that might be put in place shortly. As for the CloudBees plugin, there is this plugin and it is moving forward, but there is only one rule in place right now, to enforce timeouts. So it's more about putting in place a plugin that will be extended further with more rules in the future.
B: Thanks, Carla. I think we have it supported with Open Policy Agent in several places; Spinnaker supports Open Policy Agent as well. So Open Policy Agent seems to be the way the community is thinking of addressing policy. This is one of the things we attempt to do with this work: to identify the commonalities between different communities. So, closing the action item; thanks for adding this, Carla. Okay, so I reached out to Cameron about Armory's policy approach.
B: I also learned from Cameron that the Spinnaker community started talking about the policy topic as well. He told me that he will try to find out more about that, and we might see Open Policy Agent in the Spinnaker community as well. As someone noted on the document, the Armory flavor of Spinnaker supports Open Policy Agent, so the community might go in the same direction as well.
B: So, closing that one too. Good, I think we closed all action items, which is always great. The next topic is the daylight saving time change. Our meeting time is locked to 4 PM UTC, as you all know, and when I checked the meeting time for our next meeting and the following meetings until the next change, it will be 6 PM for Europe and 9 AM for Pacific, and similarly one hour later for other time zones in North America as well. So the question here is: should we switch back to 3 PM UTC?
B: And then an action on me and Tracy: can we update the calendar? Because I always get confused; I'm afraid of making mistakes when updating it, you know.
B: Okay, and reminders will be sent about this on the mailing lists as a result, so we keep up the good attendance at meetings. Good. The next topic is from Christie, and Christie won't be with us today, as she noted here. She opened a pull request in the glossary repository, if I'm not mistaken.
B: Yes, adding initial definitions for continuous delivery terms. This discussion, if you remember, started from the interoperability white paper, and then we decided we can't fix this in the interoperability white paper because it's out of its scope. It then moved to another Google doc, and then to a new repository in the CD Foundation organization on GitHub, named glossary, and Christie started adding definitions there. So if you could review the pull request and provide comments, that would be great. The other thing she made was the addition of the timeline, which is great.
B: I liked it a lot, because it gives you some historical context about continuous integration, continuous delivery, continuous deployment, and DevOps: when did these terms come out, who coined them, and so on. It was a good reference. So please look at the repository and the pull request and provide your comments; the link is in the document.
B: So, before I pass the floor and the screen to Dina, does anyone want to bring up anything? Because we may use the rest of the meeting for the discussion on Four Keys and our collaboration opportunities.
D: Okay, I just had one slightly related one; this is a follow-up from this discussion. We're also going to be having an end-user forum discussion around measuring DevOps and the DORA metrics. It will be Dina again, with a couple of folks from my end-user council, but this time taking the conversation to a very public chat and discussion. I'll drop that link in the chat, but yeah.
D: This is just part of our trying to do this topic to death from various perspectives, and trying to pull in as many people into the discussion as possible. And then we're hoping, off the back of that, with the end-user council, we can do a nice write-up of where these topics are in the industry and what we think the direction we should move forward on is, which in large part will be influenced by the discussion today as well. And I've had a few requests for the recording, so I'm just checking we're recording, which is great. Okay.
B: Okay, so then we are on to Dina. I can stop sharing, and then please take it over.
G: Awesome. Okay, I'm here to talk about the Four Keys project, and I'm going to give a very brief background on DORA, because I assume most of you are already familiar with it.
G: DORA is the DevOps Research and Assessment team inside Google, and through six years of research they identified four key metrics for measuring the performance of a software delivery team. They can roughly be split into two categories, speed and stability. On the speed side we have deployment frequency and lead time for change, and on the stability side we have change fail rate and time to restore service.
G: The research ran over six years, with lots of different companies and lots of different industries, and they used academic, peer-reviewed methods; it was really rigorous stuff. What we have now on the Google Cloud DevOps website is this quick check, where you can fill out a survey that essentially asks you the questions for establishing the four key metrics. It'll ask you how long it takes for a change to get into production, how often you have incidents, and things like that. It's really quick and easy to use, so if somebody doesn't have a lot of resources and just really wants a quick check on where they are and how they compare to their industry, it's a good place to start.
G: So if you were trying to measure your performance over time, because you are adopting a new change review system, or you're using some new CD tools and you think this is going to help with your velocity, then you would have to fill out the survey several times over time. And what we see is that people have survey fatigue, and it's also just really easy for people to ignore it. So that is why we built the Four Keys project.
G: First of all, it helps alleviate bias. When I'm talking about bias, I mean things like recency bias, or just generally the kinds of things that you are exposed to in your work role. Somebody who's managing servers is going to have a much different understanding of the health of the system than the person who's writing features and pressing submit, pressing merge.
G: And if there was an incident really recently, you might have a very outsized understanding of your change failure rate, just because of recency bias. So using systems data helps alleviate the bias. It also makes it easier to measure over time, so you can really see how the experiments that you're doing are actually affecting your metrics. And it exposes bottlenecks, which is another thing the survey can't really do. So if your change fail rate is a week...
G: No, sorry, if your lead time to change is a week, then the survey will just tell you: oh, you're a medium-high performer. What it doesn't tell you is how much of that time was spent before the code was peer reviewed, how much time it sat in the system waiting to be merged in, and then how long the actual rollout process was.
G: So sometimes we see things like: it's really, really fast to get integrated in, but then it just sits there for weeks before it gets pushed out, and the system data helps expose where in that process things are slowing down. And yeah, it helps quantify experimental success. On the cons side, it's a little more difficult to set up than just taking a survey, and there are storage and engineering costs.
G: So this is the project. It is on GitHub, and that is our little Bitly link, the /dora-four-keys one. And this is the design.
G: So the great thing is that we have all of the data that we need to calculate these metrics. The challenge is that it is in dozens of different systems that people use. Even on one team, you use one thing for your version control, you use one thing to manage your rollouts, you use one thing to manage your incidents; how do we take all of this data from all these different systems and join it together?
G: Also, when we're designing an open source project, how do we account for the unknown data sources? I'm familiar with a certain set of tools that people may use, but obviously there's more out there, and how do you build something that is flexible enough to be able to adapt to these sources that we don't know anything about? The solution that we came up with was to create a very generalized pipeline that takes in events via webhooks and ingests them into BigQuery.
G: Okay, so data mapping is an upfront cost; it is kind of the crux of the issue. What we do is we put everything into this raw events table, four_keys.events_raw, and think of it as a DevOps log. It should contain every event related to development, deployment, and incident management, so when in doubt, stream it into the table. And then, from there, there are scheduled downstream queries that create the dimensional tables.
G
You
want
to
be
able
to
compare
apples
to
apples
over
time.
If
you
decide
that
you
need
to
update
your
definition
of
a
successful
deployment,
maybe
before
you're,
looking
at
all
the
deployments
where
traffic
was
at
least
50
users,
and
now
you
wanna
drop
down
to
five
percent,
you
should
be
able
to
have
all
that
data
still
in
your
raw
events,
log,
so
that
you
can,
you
know,
just
add
a
few.
G
You
know
you
know
where
statements
in
the
sql
query
and
then
update
it
and
that
way
it'll
apply
your
new
definition
to
things
in
the
past
as
well.
So
then,
you
can
see
using
this
new
definition,
how
you
are
tracking
over
time,
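(A minimal sketch of the kind of downstream redefinition she describes, assuming the four_keys.events_raw table named in the talk; the source value and JSON field are illustrative, not the project's exact schema:)

```python
# Redefining "successful deployment" in a scheduled downstream query
# rather than at ingestion time. Because every raw event is kept,
# loosening the threshold from 50 to 5 percent retroactively
# reclassifies past deployments too. Field names are illustrative.
DEPLOYMENTS_QUERY = """
SELECT id AS deploy_id, time_created
FROM four_keys.events_raw
WHERE source = 'deploy_tool'
  AND CAST(JSON_EXTRACT_SCALAR(metadata, '$.traffic_percent') AS INT64) >= 5
"""

if __name__ == "__main__":
    from google.cloud import bigquery  # pip install google-cloud-bigquery
    for row in bigquery.Client().query(DEPLOYMENTS_QUERY).result():
        print(row.deploy_id, row.time_created)
```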
G: So we'll get into the metrics now. The first one is deployment frequency, which is how often an organization successfully releases into production.
G
This
is
the
easiest
one
to
collect
and
calculate
because
it
just
requires
one
table
and
actually
all
of
the
metrics
require
this
one
table
and
you're
joined
with
other
tables.
So
if
you're
using
you
know
any
kind
of
cicd
system,
you
have
all
the
data
you
stream.
This
information
into
this
table,
use
it
to
calculate
frequency
now.
One
thing
that
we
see
one
common
mistake
is
that
people
will
often
confuse
volume
with
frequency.
G: A very common thing is that somebody will look at their deployments over the past month and they'll say: oh, we had 30 deployments last month, so that comes out to an average of one a day, so we deploy daily. No, not exactly, because those 30 deployments maybe all happened on two distinct days in the month. Then you're not deploying daily, you're deploying monthly, and therein lies the key difference between looking at your deployment volume and your deployment frequency. When we look at frequency, we want to count not the number of deployments that we have, but the number of days that had deployments, okay.
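(A hedged sketch of that counting difference in BigQuery SQL, assuming an illustrative deployments table with one row per deployment:)

```python
# Thirty deployments on two distinct days is monthly cadence, not daily.
# Table and column names are assumptions for illustration only.
VOLUME_QUERY = """
SELECT COUNT(*) AS deployments   -- 30: volume, says nothing about cadence
FROM four_keys.deployments
WHERE time_created >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
"""

FREQUENCY_QUERY = """
SELECT COUNT(DISTINCT DATE(time_created)) AS days_with_deployments  -- 2
FROM four_keys.deployments
WHERE time_created >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
"""
```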
G: So next we have lead time to change, which is the amount of time it takes for a commit to get into production.
G: This means that for every deployment, we need to maintain a list of all the changes included in the deployment. This is really easily done, trivially done, if you're using CI/CD: if you're triggering your pushes to production from changes being merged into your repo, there's a SHA that maps back; you can unpack it, see all the distinct commits that were in it, pull out those timestamps, and then you know your lead time to change. It gets a little bit more tricky when people use other deployment patterns, in which case it's really up to the developer to somehow maintain that list. But if you want to know your lead time to change, you need to be able to join your change timestamps to your deployment timestamps.
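(A sketch of that join, assuming, per the description above, a deployments table with a repeated array of commit SHAs and a changes table keyed by SHA; names are illustrative:)

```python
# Lead time to change: join each deployment to the commits it shipped,
# then measure commit-to-production time.
LEAD_TIME_QUERY = """
SELECT
  d.deploy_id,
  TIMESTAMP_DIFF(d.time_created, c.time_created, MINUTE) AS lead_time_minutes
FROM four_keys.deployments AS d
CROSS JOIN UNNEST(d.changes) AS change_sha   -- every commit in the deploy
JOIN four_keys.changes AS c
  ON c.change_id = change_sha                -- the SHA maps back to the commit
"""
```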
G: Ideally, we should be able to join these two tables together on the deployment ID. In an ideal world, we would have a root cause for every incident: this incident came from this deployment. Then we can use that to calculate our change fail rate.
G: This isn't always possible, in which case you need to start being a little more loosey-goosey with the metric. Then you end up looking at it and saying: we had 30 deployments this month and we had one incident. Now, the problem there is that the incident possibly was caused by a deployment that happened last month, maybe before you put these new tests and these new systems in place, and so then it's not actually a reflection of your performance this month.
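(A sketch of the exact-join version she calls the ideal world; field names are illustrative assumptions:)

```python
# Change fail rate, "ideal world" version: each incident records the
# deployment that caused it, so the join is exact instead of a monthly count.
CHANGE_FAIL_RATE_QUERY = """
SELECT
  SAFE_DIVIDE(
    COUNT(DISTINCT i.root_cause_deploy_id),  -- deployments that caused incidents
    COUNT(DISTINCT d.deploy_id)              -- all deployments
  ) AS change_fail_rate
FROM four_keys.deployments AS d
LEFT JOIN four_keys.incidents AS i
  ON i.root_cause_deploy_id = d.deploy_id
"""
```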
G: So next we have time to restore, which is how long it takes an organization to recover from a failure in production. Again we have deployments and incidents, and ideally (I use this word a lot) you can join these together, so that you know when the incident was created and then you know how long it took to fix it.
G: So then what we do is we look at the time the incident was detected, the time that somebody added it to the incident management system. At that point it is actually more a measure of how quickly you were able to fix it once you identified it. The reason that isn't ideal, the reason that isn't what we're precisely going for, is that we want to consider how the user is experiencing this.
G
So
for
all
these
things,
you
know
like
we're
talking
about
our
deployment,
frequency
and
our
lead
time
to
change
it's
it's
all
dependent
on
on
things
going
into
production,
because
that
is
when
the
user
starts
to
experience
it.
So
if
there
was
a
an
incident
that
happened
with
a
deployment
that
was
started
last
week,
but
we
didn't
notice
it
until
this
week
and
then
once
we
noticed
it,
we
fixed
it
an
hour.
G: Well, the user maybe was experiencing that the entire week, and we should know that, because what that tells us is that maybe we need a better reporting system (why wasn't the user able to communicate that to us?), or we need a better health check system. We need something in there, and knowing the full span of that time tells us that.
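(A sketch contrasting the two windows, under the same illustrative schema as above:)

```python
# Time to restore: detection-to-fix is the easy measure, but joining the
# incident back to the deployment that caused it captures the full window
# the user was exposed, which is the point made above.
TIME_TO_RESTORE_QUERY = """
SELECT
  i.incident_id,
  TIMESTAMP_DIFF(i.time_resolved, i.time_created, HOUR) AS detect_to_fix_hours,
  TIMESTAMP_DIFF(i.time_resolved, d.time_created, HOUR) AS user_exposure_hours
FROM four_keys.incidents AS i
LEFT JOIN four_keys.deployments AS d
  ON d.deploy_id = i.root_cause_deploy_id
"""
```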
G: Okay, so this is the infrastructure. You have your events, and the event handler, which is your webhook, sends the messages from the events, the entire body, to a Pub/Sub topic. Each Pub/Sub topic is for just that source, so you would have a GitHub Pub/Sub topic, for instance, and then that goes to a very, very tiny little Cloud Run instance, which does very, very small ETL.
G: Basically, it'll look at that big JSON body and it will say: here's the timestamp, this is what the source was, etc. Then it puts that into the BigQuery raw table, which has a field that holds the entire body of the event, because we don't want to lose any data; we're going to need it, because things happen, and suddenly somebody asks a question or the definition changes very slightly, and we just need all the data. Then from there we have our downstream tables, and that feeds into the dashboard. I'm going to skip the demo video; you can watch it on YouTube.
G: It just shows how we run the setup script, how you can integrate it with some different version control systems and some different deployment systems, and then it shows the data going into BigQuery and it shows the dashboard.
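(A minimal sketch of the event-handler stage just described: a containerized Flask app that forwards the entire webhook body, uninterpreted, to a per-source Pub/Sub topic. The project ID, topic naming, and source detection here are illustrative assumptions, not the Four Keys project's exact code:)

```python
import json
from flask import Flask, request
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

app = Flask(__name__)
publisher = pubsub_v1.PublisherClient()

@app.route("/", methods=["POST"])
def handle_event():
    source = "github"  # e.g. inferred from a header such as X-GitHub-Event
    topic = publisher.topic_path("my-project", f"{source}-events")
    # Publish the raw body untouched; all parsing happens downstream in SQL.
    publisher.publish(topic, data=request.get_data(),
                      headers=json.dumps(dict(request.headers)))
    return ("", 204)
```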
G: So let's go to the dashboard. Okay: right here, down the middle, these are the four key metrics as defined by the DORA research, and it puts them into the buckets from the DORA research. This is the thing that is predictive of successful business outcomes, which is a stronger relationship than correlation, and those are things like profitability and also job satisfaction. So your developers will be happier if these badges are green.
G: Around the edges, what we've done is we've added some charts that show basically a daily running log of your performance, and the useful thing here is that it's kind of like an early warning system, or early feedback.
G: So if you have a new experiment, say you think this new CD tool you're using is going to really improve your deployment frequency, then you would be able to see that pretty early in the chart in the upper left-hand corner or in the upper right-hand corner, and that would allow you to get a feel for whether you're trending in the right direction.
G: The buckets down the center are a 90-day aggregate view, so it could take some time for those to flip over to the next level. Now, things we don't want people to do with the dashboard: we don't want people to sic teams on teams, because the products they're working on obviously have different technical debt. Really, the reason to use this is to establish a baseline for your performance now, so that you can improve on it.
G: It's not for me to compete with my friends working over there; it's for me to compete with me. That is why we want people to use it to establish a baseline and then run some kind of experiment: find the bottleneck, work on it, and then iterate, and keep coming back to the dashboard to see how these numbers are trending.
G: The next thing is that the metrics are a diagnostic tool; they are not the goal itself.
G: So I think that when people use the dashboard and set goals around it, the goals should really reflect the projects and the experiments that they're taking on to get closer to that actual end goal, and then you use the metrics to measure against, again, as a diagnostic tool.
G: Okay, so here is the Four Keys project. If you go into the repo, you would run the setup shell script to play with it. You can read more; we have a blog post up on the Google Cloud blog.
D: Thanks for that, Dina. I'll go first while everyone's warming up. Can you say a little about the integrations you've already done? I know about Tekton, but maybe that's worth repeating, and all the other projects where you've currently done integrations, basically.
G: And up on the repo we have a roadmap of all of the integrations that we want to do, and they are things like Jenkins, TeamCity, Spinnaker, Jira, ServiceNow, PagerDuty, Bitbucket, GitHub Enterprise, and Gitea (G-I-T-E-A, if you haven't heard of it). And there's one more that I'm going to mention,
G: since this ties in nicely to the question that Tracy put in the chat: I've also added Google Forms, because incidents are hard to identify and classify, and obviously some people have really good incident management systems that allow you to identify a root cause.
G
There
is
a
subset
of
incidents
that
are
not
as
hard
to
identify
because
you
can
like
in
things
like
stackdriver,
you
can,
you
know
automatically
detect
them,
and
then
you,
you
know
that
bubbles
up
into
your
data
and
then
that's
great,
but
not
all
incidents
will
cause
like
like
actual
errors.
Not
all
incidents
will
will
be
possible
to
detect
automatically,
and
so
that
is
why
I
I
tend
to
shy
away
from
doing
that
kind
of
approach.
G: Ultimately, the simplest thing that you could do is just have a table with the deployment ID, the incident, and a boolean, true or false: was this an incident? That is why I added Google Forms. For people who don't have a more elegant incident management system, that's kind of the bare bones: if you had a form where you answered just these few questions and put in the IDs, then you can do it.
G: Yeah, if you look closely at the SQL scripts currently, for instance, one of them is doing a regex, looking for some keywords to pull out the ID of the change that caused the incident. That was just to show: here's one way that you can do it, but there are better ways.
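(A sketch of that keyword-regex approach; the "root cause:" label and JSON path are illustrative stand-ins for whatever convention a team adopts:)

```python
# Scan incident text for a keyword convention to recover the ID of the
# change that caused the incident, as described above.
ROOT_CAUSE_QUERY = r"""
SELECT
  id AS incident_id,
  REGEXP_EXTRACT(
    JSON_EXTRACT_SCALAR(metadata, '$.issue.body'),
    r'root cause: ([^\s]+)'
  ) AS root_cause_change_id
FROM four_keys.events_raw
WHERE event_type = 'issue'
"""
```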
B: Do you reach out to those communities directly, or how do you get these things in front of them? Like: okay, we are trying to do this thing, Four Keys, trying to get the data out of tools like Jenkins or Spinnaker, or Git, for example, for software configuration, or Artifactory for the artifact repository. How do you, you know, find who to talk to, and how do you bring these things to their attention?
G: It's really interesting; we very much have done it in the other direction so far. It's been users who have filed issues on GitHub, basically saying, I'm trying to use Jenkins, or it's our customers, some of the big Google Cloud customers who happen to use this or that, and all the customer engineers who work with all the different companies have come back and let us know: these are the things they use.
B: Because we are talking about pretty similar things. That's why I mention this artifact metadata topic we were talking about during the action item review: we are trying to collect such metadata, like Git metadata or artifact metadata, to see how we can somehow, I don't want to say standardize, but somehow at least make it visible to start with, and then find some way to converge on some common way to use the metadata, get the metadata, or consume such metadata.
G: I mean, I think some kind of standardization would be great. My concern with the architecture of this was basically that, kind of like people have error budgets, I just assumed that the sources could, and probably would, change their data models at some point, and I tried to get the data into a persisted state as soon as possible, which is why there's not a lot of ETL that happens close to the event source. Once it gets into Pub/Sub, I have confidence that even if the data mapping is no longer correct, it'll still stay in Pub/Sub and keep retrying until the mapping is resolved. So then the very tiny little ETL worker, which is looking for a field called X, Y, or Z because it's trying to pull out the timestamp: if that breaks, then it won't return a successful message to Pub/Sub, so Pub/Sub will just keep retrying until that's fixed, and then that would resolve.
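(A sketch of that defensive retry behavior, assuming a Pub/Sub push subscription posting to a small Flask ETL worker; the field path is an illustrative example, not the project's exact mapping:)

```python
import base64
import json
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def etl():
    envelope = request.get_json()
    body = json.loads(base64.b64decode(envelope["message"]["data"]))
    # If the source changes its data model, the KeyError below causes an
    # HTTP 500, Pub/Sub gets no ack, and the message is redelivered until
    # the mapping is fixed, so no data is lost while the code is broken.
    time_created = body["head_commit"]["timestamp"]
    # ... insert (time_created, source, raw body) into the raw table ...
    return ("", 204)  # ack only once the event is safely persisted
```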
G: So yeah, we used more of a defensive approach, just assuming that things would change and things would break. But it would be great if we could get the community to more widely acknowledge that these are the pieces of information that are important, that we're looking for. If we could even have, say, four keys fields, that would be pretty cool.
B: I have a ton of questions, but I'll leave them to the Events SIG members for today. Here's one last question. Again, you generally touched on SCM, like commit stuff, and deployment frequency, and the thing I want to ask about is this deployment frequency, or lead time for changes.
B: They also depend on, for example, how much time it takes to run your tests in your typical continuous integration and deployment pipelines. Do you have anything like that? How much time it takes to run tests, or how many tests pass and fail, how regularly or how frequently they fail: that is not in those four keys, but it is hidden within those figures. You know, if you have slow-running tests, then your deployment frequency will suffer from that.
G: The problem is that these are not the metrics that are indicative of, or predictive of, the business outcomes and all the things that DORA has found, but they bubble up. That is, when we're talking about things like identifying your bottlenecks and working on just the one bottleneck: the research talks about the theory of constraints, which says that any work done anywhere that's not the bottleneck is, frankly, a waste of time, because your system isn't going to go any faster as long as the bottleneck is the bottleneck. These kinds of lower-level metrics that you're talking about are the things that help you drill down and understand where the bottleneck is. So yeah, if I were trying to improve my deployment frequency, well, first of all, if it was weeks or months between my deployments, I probably wouldn't be looking at the tests.
G: First I'd be looking at things like: why aren't we doing deployments on demand? But then, as those bottlenecks are removed, you'd get down to a point where, oh well, we can't go any faster because the tests take a whole day to run, and then yeah, you would start looking at that. And then the question there is: why do the tests take so long to run? It could be, say,
G: maybe it's the architecture you're working with: you're trying to test this really big thing, and so you have a lot of different tests about how these things fit together, whereas if you went to a more service-oriented or microservice architecture, maybe your tests could be very small and distinct, to the point where that would allow you to push those services independently without having to do a very large test run.
G: I'm not sure we would put those metrics into the dashboard, but I do want to put a little bit more work, or rather a lot more work, into teaching people how to use the data, because if you have all this stuff in your raw events table, then you can drill down; you can pull out these kinds of lower-level metrics that help you pinpoint your efforts and expose the bottlenecks.
B: The reason why I ask is that people always like adding new test cases, and that gives you a trend; it increases and increases, and people don't tend to remove test cases even if they don't add any value. But yeah, these high-level four keys show you where the bottleneck could be. If you see something is going in the wrong direction, you can drill down and say: okay, yeah, we really need to take care of our test frameworks and test cases, and add more equipment or whatever.
D: I'd love to hear some input from folks who've got internal platforms about whether this looks useful or relevant. I don't know, maybe Ramin or Oliver, any thoughts?
A: It definitely looks interesting, and from our end we're looking into the topic. What I understand here is that you need Google Cloud infrastructure in order to run it today?
G: Well, our setup script sets it all up on Google Cloud infrastructure, but if you look at the actual code (and we have another user who is actually starting to pursue this, trying to genericize the architecture even a little bit more), the event handler and the little ETL are all containers, just containerized Flask apps, so you could run those anywhere. They make calls out to the Google Pub/Sub API, but obviously you can use a different kind of pub/sub if you wanted to, so that would just be swapping out that line in the event handler. And I think the big thing would be BigQuery.
G: You could use a different one; I mean, it's all SQL-based. All of the data calculations, the metric manipulation, all the heavy lifting, the ETL kind of stuff, was done in SQL, to make it easy to dig into the data and change definitions. It's all in the table, and all you have to do is update a SQL script; you don't have to deploy a new thing. So you could use a different SQL provider. I'm a big fan of BigQuery, and for something where you're just doing insertions it's pretty good. And also,
G: there's a little bit of, I think, I'm not sure if all the SQL providers allow you to do little JavaScript functions in queries, but I think they all provide easy ways to parse JSON now. A lot of this depends on being able to go into that event body, the JSON, and pull out the fields that you're interested in. So yeah, all the code's there; if you want to strip out the Google stuff, you are very welcome to do so.
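(To illustrate the portability point: most of the pipeline's "heavy lifting" is pulling fields out of the stored JSON body. In BigQuery that is JSON_EXTRACT_SCALAR; PostgreSQL's ->> operator and SQLite's json_extract are rough equivalents. The paths below follow GitHub's deployment_status payload as an example, not the project's exact query:)

```python
EXTRACT_QUERY = """
SELECT
  JSON_EXTRACT_SCALAR(metadata, '$.deployment.sha')        AS main_commit,
  JSON_EXTRACT_SCALAR(metadata, '$.deployment.updated_at') AS time_created
FROM four_keys.events_raw
WHERE source = 'github' AND event_type = 'deployment_status'
"""
```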
A: Especially larger companies will likely have issues, right, with sending the data out.
G: Yeah, if you wanted to fork it and, you know, try to install it into something like a Kubernetes cluster, like you said, we'd be very open to seeing that fork and then also possibly merging some of those changes back in.
F: Yeah, thanks a lot for the presentation. Me and Emil, yes, we are part of the Events SIG; it's a newly formed interest group in the CDF, and I guess the work that we're trying to do is related to the Four Keys project, in a way, in the sense that we're trying to create a standard for events generated by CI/CD platforms.
F: One of the goals is to allow these platforms to talk to each other, and I guess the other goal, at least from my point of view, is to allow monitoring, tracing, and measuring of CI/CD pipelines across different systems: to have one single system, like the project that you worked on, being able to collect the same type of data from different platforms.
F: Code versioning systems, and testing, and deployment, and so forth. I guess one of the areas that we have not touched too much upon yet is incident management; we started considering a category about operations, but we have not discussed it very much. So I think it would be interesting for me to understand what kind of sources you have for incidents, and how you relate them to the deployments.
G: So yeah, some incidents can be automatically detected, but it will never be all incidents, and so there needs to be a way for a human to go in and mark that some deployment caused some incident.
G: It is a difficult problem, because it necessarily involves human input, which obviously is imperfect: people will forget, people will make typos.
G: Because, yeah, not all incidents are easily detectable. Sometimes you get, you know, 500 errors, and you can detect those pretty easily; other times what you get is just a degradation in service, like things are going X percent slower.
G: Sometimes things technically work, but they give unexpected results, and that is also an incident. As long as it is actively, negatively impacting the user and what the user is trying to do, that is an incident.
G: If you can't do that, then the next best thing is just to use the timestamp at which the incident was detected in whatever incident management system you're using. That'll be the timestamp you use to do a time-bound rate, and then you'll also use that detection timestamp, together with the closed timestamp in the incident management system, to say: oh, that was the time to restore for the problem.
G: Yeah, the thing I always like to say about these metrics and the dashboard is that these are diagnostic tools to help you pinpoint where the pain actually is. And as long as you are consistent with your definitions and your measurement, it's still useful. It's like a scale: my bathroom scale could be 10 pounds off, and it doesn't matter, as long as that is the scale that I keep using. Now, if I go to the gym and use a different scale, then suddenly I'm going to have information that conflicts really wildly, and then I get into trouble.
B: Hey, sorry to interrupt; thanks a lot for the questions, and thanks for providing the answers, Dina. Like what happened during the last meeting, we suddenly lost the Zoom meeting, so I want to thank everyone for joining the meeting today. We can stay online and see if someone ends our meeting, but otherwise we meet again in two weeks, on the first of April, 3 PM UTC. So yeah, if anyone wants to stay... I don't know, Dina, if you have time to stay on and continue the conversation; we can stay. If not, then again, thank you for joining the meeting and presenting Four Keys to us, and hopefully we find some opportunities to collaborate with you and your community, and hopefully you can join our future meetings as well. Yeah, okay, thank you. Thanks.
C: That's great. Thank you so much for your presentation. Can I ask you a question, do you have time? Yeah? Okay. I really noticed how you said these tools are, or should be, best used at this point for your own team: both the team being measured should stay consistent, and also the measurements you use. So, theoretically speaking, it would be very difficult at this stage to use these sorts of measurements to give a predictive recommendation to a team.
C: Like: oh, you're deploying, you know, not enough times; teams at your size or your workload are deploying more often and they have fewer incidents. It would be really, really unwise to think about doing those kinds of recommendations at this point.
G: Yeah, I wouldn't recommend that, because it's kind of missing the forest for the trees. At that point, you look at it and say: why are other teams deploying more frequently? Usually it has something to do with a different capability that they have built up, and so then, at that point, the goal is: well, let's work on this capability, and if we identified this correctly, then we should start to see deployment frequency go up.
G: But no, I've worked on teams as a data analyst, operational teams, and once you start using the metric as the goal, people will do all kinds of really strange things that you don't want them to do, just to try to optimize that number. So staying away from that and focusing on capabilities and experiments is a better way to drive that change.
G: Otherwise you'll have people just deploying the same image over and over again, or maybe they will change the definition of what a successful deployment is, so that going to staging is a successful deployment, which doesn't count. So yeah, focusing on the numbers does not usually end up well.
D: I just had a quick question about whether you've looked into anything on the Jenkins integration side yet. I know Jenkins is far from opinionated and has lots of different ways to do different things, but has there been any initial investigation into an approach there, or one part that would make sense?
G: It's on the roadmap, but I will say that Tekton is another thing that is, you know, not very prescriptive, and
G: you know, that's the beauty of just putting in the entire body, the entire JSON, into the table: the people who are running the workflow should know that they call their deployments this, or that this field means that. So with things like that, which are not prescriptive, there is an upfront cost, a data mapping cost, that we can't do for you.
B: Okay, so I look forward to your panel then; you put the date there, Tracy, I've seen it, the thirty-first of March. So that would be great. I think we'll have participants from the end-user council as well, so it will be a great discussion, I am sure.
B: Okay, then again, thank you, Dina, for joining, thanks a lot, and let's see how we can help each other and move this thing forward. Because, like many of the things you summarized during your presentation and during the Q&A, it's like: you never go and fix that one thing, because someone else comes up with something else, some other data from a distinct toolchain and so on. But yeah, you see, it's a difficult thing to deal with, but that's why we are here, I suppose, so we have to continue pushing. So thank you, and have a nice day, everyone, and talk to you in two weeks. Bye for now. Thank you. Yeah.