From YouTube: ADDO Track 5 Block 1
A: All right, good morning, all, to the CI/CD track. Please give us a couple more seconds while we fix our video issues here. But I think, Chairman, you're also online. Can you hear us?
A: And we're just waiting for Chevy to be able to unmute himself, and then we're good to go. I can start my own introduction first of all, while we save some time here. My name is Hubert Schwandt. I am a regional sales manager over at Sonatype. We focus on DevSecOps, and I'll be guiding you through today's CI/CD track until sort of 1pm UK time. I'll be trying to bridge the gap between the speakers by having a little conversation beforehand, to give the audience a bit of an introduction and an overview of what we can expect in these sessions. So as soon as Sharina can join, we are going to begin our first session of today.
A: Beautiful, that's a very nice place. What time is it over where you are? Perfect, so not as early in the morning as it is for some of us dialing in from London, or for those on the east coast, where I think it's 3am. So I think we have a great big audience today. Chevy, I already talked a little bit about what you're going to be presenting today, but I'll leave the rest to you. I know you've planned a demo at the end as well, so maybe just walk us through what you want to show today and what the learnings are.
B: Today I'll show you a bit about the basics of how you can manage your releases, a bit about releases versus deployments, how you deploy multiple versions on production without having downtime and things like that, some of the basic concepts. And then we'll do a very quick demo with Microsoft Azure and Azure DevOps. Those are the tools that I'm currently using to do these things, and I'll show how you can easily do that using deployment slots.
B: Okay, that's good. So I hope you can all see my screen. In this session we're gonna talk about managing delivery of your apps via Azure DevOps. You can follow me on my blog, or on LinkedIn, or on Twitter. I'm also a Microsoft MVP, and I'm calling from the beautiful island of Mauritius. Let's start with DevOps and its evolution.
B: Now, with containers, with automation, with all these improvements in DevOps and CI/CD, it's getting more and more, let's say, safer and easier to ship to production. I still remember back in the days when making a production release was kind of a big event. You had to call many people, get many approvers, do many steps; you needed 50 people all there to be able to release a feature in production. Those days are gone, hopefully; now things are better, and that's good. Amazon classifies DevOps as the combination of cultural philosophies, practices and tools that increases an organization's ability to deliver applications and services at high velocity.
B
So
this
is
where
all
organizations
is
going
and
as
we
are
going
into
this
direction,
it's
it's
all
very
good,
but
it
also
increases
some
of
the
risk,
and
today
we
look
at
the
tools
and
concepts
that
we
can
use
to
to
mitigate
these
issues
and
currently,
right
now,
most
of
us
already
have
kind
of
your
release.
Fire
plans
in
place
where
you
can,
where
you
are
kind
of
automatically
building
testing
releasing
and,
of
course,
monitoring
your
applications
in
production.
B: There are some strategies that you can use for how you release. Depending on what type of organization you are, or what your deployment strategy is, you might be looking at, or you could have been at some point in time, like the guy in the picture, right? You're releasing in production, you think everything is under control, but actually it's a big fire.
B: So today we'll discuss how not to end up there. Let's dive directly into the subject. The main takeaway, the main message that I would like to reiterate here, is that deployment and release should not be the same thing; you should look at them as two completely separate things. If your organization is already there with your deployments, you are on the right track; otherwise...
B: It might not be working. Most of the time it did not use to work the first time, and then we'd go and fix the stuff in production, usually late at night, trying to get it working, and then you'd make it available. That was kind of the workflow, but that was back then. Today, what we are trying to achieve is to be able to push to production more often, and depending on what type of organization you are, some people are pushing multiple times per day, some once per day, some weekly. Where I am currently, we are releasing at least once per sprint, and you're pushing features to production each sprint as they are being developed. But that's kind of pushing the feature, the code, to production.
B: That should actually be considered as the deployment: you're taking your code and you're putting it on the production environment. But this does not mean that it should be a release. The release is when your customers, all the customers of your application, are actually seeing that feature.
B: That should be something else. So you can put your code there, and you can, for example, make it available only to a certain group of people, or you can make it available only on a specific URL, or as a beta application. There are many strategies out there that you can use to be able to test your apps in production: make sure it's working, get feedback, iterate on it, make more deployments, fix stuff, improve it, before actually making the production release and making it available to everybody. That should be the main idea.
B: The benefit of thinking of it this way is that you could already push your code out there even if it's not completed, and have the feature on production, not available to everybody but available to, for example, a group of beta customers. These guys can test the beta application, and you might have built metrics, logging and analytics into your app to understand: how are these people interacting with your app? Are they getting crashes? Where are they getting crashes? And then you make another deployment to fix or to improve.
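The "deploy everywhere, release only to a beta group" idea the speaker describes is essentially a feature flag. A minimal sketch, assuming a hypothetical flag table and user group (none of these names come from the demo):

```python
# Sketch of gating a deployed-but-unreleased feature behind a flag.
# BETA_USERS and FEATURE_FLAGS are illustrative, not from the talk's demo.

BETA_USERS = {"alice", "bob"}            # a small group of beta customers
FEATURE_FLAGS = {"new_checkout": False}  # False = deployed but not yet released

def is_feature_visible(feature: str, user: str) -> bool:
    """The code is on production either way; visibility is the release."""
    if FEATURE_FLAGS.get(feature, False):
        return True                      # released to everybody
    return user in BETA_USERS            # otherwise only the beta group sees it

def render_checkout(user: str) -> str:
    """Return which variant of the page a given user would see."""
    return "v2 checkout" if is_feature_visible("new_checkout", user) else "v1 checkout"
```

Flipping the flag to `True` is then the release step, decoupled from the deployment that shipped the code.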
B: To get there, people have defined multiple kinds of release strategies. The first one we'll discuss is called reckless deployment. This is what we traditionally used to have before: you had your app on version one, you push something and you make it a release, and everything is now on version two; fine, that's it. Only by releasing it can you now see whether it's working or not. If it's not working, that is the first time you will be able to see it.
B: The second thing people started to do was: okay, let's not break everything, let's do something called a rolling upgrade. A rolling upgrade is introducing a new version of your code into the existing deployment, gradually ramping it up while decommissioning the old one. It's like: okay, my code might be running on three nodes; let's say it's running on Kubernetes, on three nodes, and you make the release node by node. The idea with this one is that you need to make sure it's backward compatible and there are no breaking changes, and, of course, you would want to put the relevant monitoring in place to be able to see how these nodes are progressing.
B
And,
let's
say
you
release
in
version
2
on
onenote,
and
you
can
check
how
it's
going.
If
it's
not
going
fine,
you
can
you
can
done
you,
can
then
roll
back
or
don't
proceed
with
that
with
the
upgrade.
By
doing
so,
you
make
sure
that
your
system
is
up
all
the
time
and
it
reduces
the
risk
compared
to
the
previous
one.
B: The next kind of deployment strategy is called a blue-green deployment. In this one you spin up a new, separate deployment for the new version without affecting the current one. You then test the new version, and once you've tested it and everything is fine, you move to the new one. For example, this could be your first state; then you deploy a new version of your services beside your current services. Let's say you are using Kubernetes: it's like a bunch of new pods.
B: Maybe your QA can connect to this version two and check that everything's fine. Once everything's fine, what happens is like exchanging the URL; it might be something on a load balancer or a DNS server. You tell your traffic: hey, when version 2 is good, stop going to version one and move to version two. That's blue-green deployment. To build on that, taking the concept one step further is called canary deployment, which is as seamless as blue-green.
B: It's just that in this case, you keep both versions active. You can keep both version one and version two active, but you can switch traffic between the two. You can say: okay, let 90 percent of the traffic go to version one and ten percent of the traffic go to version two. Then, if you built it right and you have your monitoring hooks in place, you can carefully analyze those logs and see how your customers are reacting to that version two.
B: Let's say it's an application that sells stuff: are people buying? Good. Or are people getting crashes, or are people less prone to taking an action on version two? As you get this feedback, you can upgrade version two, you can roll it back, or you can increase traffic to it. You can gradually increase traffic to version two until you make the switch, and the idea is that you do that without any downtime, right? So let's stop here and move quickly to the demo.
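The canary routing just described, two live versions plus a weighted traffic split that is gradually shifted, can be sketched in a few lines. The weights and version names are illustrative assumptions, not taken from the talk:

```python
import random

# Sketch of canary routing: keep v1 and v2 active and split traffic by weight.
WEIGHTS = {"v1": 90, "v2": 10}  # 90% of traffic to v1, 10% to the canary

def route_request(rng: random.Random) -> str:
    """Pick a backend version with probability proportional to its weight."""
    total = sum(WEIGHTS.values())
    point = rng.uniform(0, total)
    cumulative = 0
    for version, weight in WEIGHTS.items():
        cumulative += weight
        if point <= cumulative:
            return version
    return version  # numeric-edge fallback: last version

def shift_traffic(to_v2: int) -> None:
    """Gradually increase the canary share until the full switch (to_v2=100)."""
    WEIGHTS["v2"] = to_v2
    WEIGHTS["v1"] = 100 - to_v2
```

In practice this weighting lives in a load balancer, service mesh, or DNS layer rather than application code; the sketch only shows the mechanism.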
So what I'm gonna show you right now: I have an app running, and this is actually the Azure portal. What I have here is an Azure web app, which is running a very basic application; it's a Java application that just returns "hello v1", that's it. And inside this application there's a concept called deployment slots. With deployment slots you can have multiple slots beside the production one.
B: I have two slots: I have my production slot, where my app is, and I have my beta slot, and this is where I will push my app for the beta customers that I want to test with in production. The idea here: I will have the production app running, I will deploy to the beta slot instance, and then at some point I will want to make a DNS switch just to flip the production and the beta one, without having any downtime.
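The slot mechanism described above can be modeled as a tiny mapping from slot hostnames to builds, where the swap only exchanges what each hostname points at. This is an illustrative toy model, not the actual App Service implementation:

```python
# Toy model of Azure-style deployment slots: deployments always target a slot,
# and the swap atomically exchanges which build each slot hostname serves,
# so nothing is redeployed onto production during the flip.

slots = {"production": "app:v1", "beta": None}

def deploy(slot: str, build: str) -> None:
    """Deploying targets a named slot, never the production hostname directly."""
    slots[slot] = build

def swap(a: str = "production", b: str = "beta") -> None:
    """Atomic exchange of what the two slot hostnames point to."""
    slots[a], slots[b] = slots[b], slots[a]
```

After `deploy("beta", "app:v50")` and `swap()`, production serves v50 and the beta slot holds the old v1, ready for rollback by swapping again.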
B: Okay, let's continue. To power this full release we're gonna use Azure DevOps. On Azure DevOps I already have my repo, and this is a Git repo; it has all my code for this Java app. From the repo, I'm now going to build a pipeline. Okay, so I'm creating a new pipeline, and I'll tell it to take the code from Azure Repos.
B: Okay, it actually did something that I did not want: it used a pipeline which was already here but which I had deleted. So that's fine, let's just review this one.
B: So what is this YAML code doing? This YAML is actually the definition of the pipeline, and it tells what will happen. Let me walk you through it. This is saying: okay, this will trigger on the master branch. Why is that here?
B: Creating a pipeline, I want this pipeline to pull from Azure Repos, so I'm selecting this repo, okay, and it was already in my code base, so it's pulling it again; I did not expect that. In this pipeline I have multiple stages. In this first stage I'm telling it to build: it's running the Maven build and publishing the artifacts.
B: My next step is a deploy stage, and here in my settings I'm telling it: okay, for this deploy stage, what you're gonna do is pull my Azure subscription, and you will deploy a web app for Linux. Here I'm telling it what the web app name is, and, that's important, I'm telling it: don't deploy to production, but deploy to the slot. Here I have a slot called beta, which I just showed you earlier. So I'm telling it: okay, deploy to the beta slot.
B: In this step I'm telling it to release to the slot. Now, once we release to this slot, what we want to do is check it, then make the flip and release it to production. That is what the next task is doing here. This next task is telling it: okay, connect to this subscription, but this time make a swap; you need to swap the production slot with the slot which is currently beta.
B: Simple stuff, nothing much more complicated: it will run the post checks, and then it should tell me: hey, I want to release to the...
B: A new release, yeah, let's see. So it's already triggering a new release, here it is, and because it found a change to master, it's calling the build stage here and building the app again. This should take about 30 seconds, and just after, it should push to the deploy stage.
B: Okay, so all tests have passed, good. This finished; now it's asking me for permission: do you want to deploy to the beta stage? So I just click on view here.
B: Okay, so I just tell it to permit, and I grant permission. This is like approving this release. Once I approve this release, it should deploy to the staging environment. So it's saying: okay, there was one check pending, it had to get the approval, the approval was received, and now it's releasing to production, to the beta slot, sorry.
B: So now, if I refresh, this is my production one; production is still showing v1, and the beta one is showing v50. At this point in time your QA, your product managers, can go and check how it's working on the beta, and, if you're happy, you then go to your pipeline and tell it: okay, now what we want to do is swap production and the beta slot. I just clicked on approve, and that should just take 30 seconds or so.
A: A couple of minutes, obviously, to wrap up, because we did start a little later.
B: Yeah, fine for me, I'm just finishing the demo, then we are good. So if you check here on your production URL, everything will be up all the time, because it's not making any kind of release or deployment on production. It is just making a swap; at the DNS server level it's telling it: okay, stop pointing to the staging slot, switch the URLs so that what was production is now your staging, and what was the staging slot now points to production.
B: So this should help you make your release. Okay, so you already see, right: it was v1, now it's v50, and basically there was no downtime. That's just because we are using deployment slots and we have told it to make a swap between the production slot and the beta slot. So yeah, that was it for the quick demo.
A: Thank you very much. If anyone has any questions for Chevy after this brilliant presentation, please head over to the CI/CD channel on the ADDO Slack, and he'll be on there probably all day to answer any questions. Also feel free to connect with any of the speakers or us moderators on LinkedIn as well; we're always happy to welcome new guests. And speaking of new guests, we've got Rob Hook dialing in, and he's got a very exciting presentation for us today around value stream management across an IT organization.
A: Now, Rob, you've got quite a bit of experience in the DevOps world. Do you want to give us a quick overview of what you'll be talking about today?
D: Sure, thank you for the introduction, yeah. So my name is Robert Hook. I'm a DevOps architect, which basically means I help different organizations set up their end-to-end DevOps, let's say value streams and tool chain, from idea to production. So I'm basically involved in how you engineer the full automation from design, build, test, deploy and monitoring, but also security.
D: There's a lot of things involved, but then, looking at the bigger picture, everything should be connected, and that's also what this talk about value stream management is about: how do you basically integrate and automate end-to-end flows of work throughout your entire IT organization, and, from a DevOps perspective, what kind of value streams do you need in a digital operating model? So we'll go through those topics. What kind of different value streams do you have? It's not just about developing and releasing something new into production; there are also operational workflows we need to automate. So we'll go through those topics and look at how you can take that to the next level.
D: Thank you. So, let's have a look at the agenda for this presentation. Today I'm going to look at value stream management, as I mentioned, and how that positions within the overall digital operating model. Many organizations are transforming their operating model as we speak, to DevOps and continuous delivery, organizing around teams and value streams. So we're going to look at what value stream management is in this context; it's relatively new: value streams are not new, but value stream management is.
D: Now let's have a look at value stream management in the operating model. We see that many organizations have a hybrid mode of working, where we have traditional applications and clouds, but there's a lot changing, as you can see in this picture. We changed the way we collaborate: instead of a hierarchical organization, we work in a network, with different teams collaborating together to deliver value.
D: We have new architectures with microservices and platforms; there's of course DevOps, continuous delivery and containers; multi-vendor sourcing, where we have different external vendors involved in our value chain; self-service; security; and data-driven IT. One topic is value stream management, which we're going to talk about today. But you see there's a lot happening in the operating model, and it changes how we deliver and operate digital products, and a key area is value stream management.
D: Now the question, of course, if you think about value streams and you look at your own organization, is: how digital-ready are your end-to-end IT processes? Typically we're not talking about individual product teams; a lot of product teams need to work together and are still dependent on each other. So there are potentially handovers, and sometimes it's a handover to a team that actually deploys something in production, or a handover to other teams that are involved in the actual testing. In some cases we have team dependencies.
D: We have manual activities, external vendor involvement, and a lot of the time there's no clear visibility of these end-to-end flows in the IT organization. And it's not just about a demand getting into production. Maybe there is a security issue or vulnerability we need to fix, or we have customer and end-user requests.
D: They want to have access to my application, because as a DevOps team you're now accountable for the full product life cycle, not just delivering new features and new releases; you also need to be able to support them, proactively detect issues, resolve questions and issues and so on. At the same time, we need to look at the speed of delivery, security, risk, compliance, the cost of our services and products, the quality, and the customer and user experience. So there's a lot we need to look into. Now, what is a value stream?
D: And what is value stream management? I guess everybody knows what a value stream is, but briefly: a value stream basically represents the end-to-end flow of value-added activities, triggered typically by demand, like a new requirement or a new epic or feature, and the outcome is that it's actually delivered to the customer and stakeholder. There's a lot of activity and work in this value stream to create this value for the customer. And value stream management, what is that, then? Value stream management is basically asking: how can I optimize these value streams?
D: So it's really about designing and implementing the right set of activities and processes, the right set of tools and practices, like the testing tool, the deployment tool, the backlog tool, the monitoring tool, and how they should be integrated and exchange data, because eventually it's all about a flow of work through this end-to-end tool chain, supported by the right set of practices and, of course, the people, skills and competences. Value stream management is really about: how do I design these optimized value streams?
D: How do I monitor and continuously improve the value streams, identifying things to improve? Maybe we have manual activities we can automate, or we improve the release frequency or the change success rate, or we improve the mean time to repair if there are any issues. So value stream management is basically a discipline that looks at how you can continuously optimize these end-to-end value streams, and these end-to-end value streams are pretty complex. If you think about the DevOps tool chain, we have many different components as part of this end-to-end value chain.
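The improvement metrics just mentioned, release frequency, change success rate and mean time to repair, can be computed from plain deployment records. A minimal sketch, where the record format is a hypothetical assumption, not from the talk:

```python
from datetime import datetime, timedelta

# Illustrative deployment history: each record notes when a deployment ran,
# whether it succeeded, and how long any resulting incident took to repair.
deployments = [
    {"at": datetime(2021, 5, 3),  "ok": True,  "repair_time": None},
    {"at": datetime(2021, 5, 10), "ok": False, "repair_time": timedelta(hours=4)},
    {"at": datetime(2021, 5, 17), "ok": True,  "repair_time": None},
    {"at": datetime(2021, 5, 24), "ok": True,  "repair_time": None},
]

def change_success_rate(deps) -> float:
    """Fraction of deployments that succeeded."""
    return sum(d["ok"] for d in deps) / len(deps)

def mean_time_to_repair(deps) -> timedelta:
    """Average repair time over the failed deployments."""
    repairs = [d["repair_time"] for d in deps if d["repair_time"] is not None]
    return sum(repairs, timedelta()) / len(repairs)

def releases_per_week(deps) -> float:
    """Deployment frequency over the observed time span."""
    span = max(d["at"] for d in deps) - min(d["at"] for d in deps)
    return len(deps) / (span.days / 7)
```

In a real value stream these numbers would be pulled from the pipeline and incident-management tools rather than hand-written records; the point is only that the metrics are simple aggregations once the tool data is connected.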
D: We manage our product portfolio, of course, because we don't have a single product team; we have hundreds of different products and services. We have a portfolio backlog where we capture the epics from the business, the product team backlog, like in Jira or Azure DevOps, our source code, our build artifact repositories.
D: Now you can see the complexity, because typically an end-to-end DevOps tool chain consists of dozens of different tools with different data models and different practices, and value stream management is really about how you connect all these things together. It doesn't make sense to have a very good test management system and a very good CI/CD pipeline if they're not very well connected and we don't have optimized data flows. So it's really about making it flow.
D: Now, if you think about this end-to-end DevOps market, you see the complexity here; there are a lot of different tools out there in the market, and it's not about the tools. It's really about the end-to-end flow of work through your IT organization and your value streams. But typically different teams have different tools, or they use them differently, and it's very difficult to get the transparency and traceability we need, and continuously new tools are added: for deployment, provisioning, monitoring and so on, plus reporting, dashboards, security, and not to forget cost management, license management and so on. But there is a specific area in this complete tooling market that I would like to focus on: value stream management tools. There's a new category of tools appearing that focus on managing these end-to-end value streams.
D: How can I manage these end-to-end value streams? In a value stream you might have 10 different tools: your backlog tool, your source code, your build, your test, your monitoring, your CMDB, your incident management system. But how do you connect them all together and get this traceability? That's where value stream management tools fit in: they're trying to get this end-to-end visibility into what's happening. You might not need a value stream management tool; you might just collect the data in different ways, but it's a theme out there.
D: How do I create this end-to-end traceability across my DevOps tool chain? Now, before we zoom into what value streams you need in a DevOps model, let's look at what capabilities we're generally talking about. If you look at DevOps, we could say there are four key areas where we focus our work.
One is about strategy-to-portfolio, which is really about managing the full product portfolio, because, as I mentioned, DevOps is organized around product teams, but we don't have one product and one team. We have dozens of products, dozens of teams; larger organizations have maybe hundreds of applications or products, and maybe dozens of teams. So here it's about managing the full portfolio: new ideas from the business, mapping them to your service portfolio and your enterprise architecture, and managing your portfolio backlog. It's really about looking at the bigger picture of everything: all my applications, my roadmaps, my target architecture, my improvement opportunities, and collecting them in a portfolio backlog. And then there's the team level, the product level.
D: Of course we have our product backlog, we do the development and design, and we create our releases, which is about continuous integration and continuous delivery. But once something is made available for deployment and provisioning, that's not the end, because then we have something running in an operational state. Typically we have a catalog for end users to request access to our application or services, or other types of requests. So typically there's request management involved, and that needs to be automated as well. In a DevOps world, request-to-fulfill is often a bit neglected.
D: People don't think about it, because we always focus on designing, building and releasing new functionality, but the end user has a lot of types of requests. They want to have access to applications, they maybe want to import data, or there are other activities that they initiate against our product, and we need to fulfill those requests. So this is about cataloging and requesting those services, and the DevOps teams need to support those as well.
D: They need to maintain their own request catalog and provide self-service and self-help, so the end user can make requests, and that ties into identity and access management as well. And, of course, when we deploy something in production, hopefully we know what's running where, what resources are linked to what application, what dependencies I have, so I can start monitoring these configurations end to end and detect issues, hopefully remediating them automatically. If not, I have an incident management capability where maybe end users report incidents; hopefully we detect them proactively.
D: We try to resolve them; hopefully we can resolve issues by, say, doing automated provisioning again, changing something, and it's fixed. In some cases we need the feedback loop back to the product backlog so that we can fix them: putting a story in Jira, fixing the issue, deploying it and testing it. And, of course, there's a continuous feedback loop.
D: Now, what you see appearing here is that in DevOps there are a lot of capabilities we need, and that's not just one value stream. I will show you later which value streams you need to optimize, because it's very important for DevOps, and for DevOps at scale, that you identify the different value streams. It's not just about releasing new features into production, because that's what the DevOps community typically talks about.
D: That's the software engineering value stream. Now, what you also see is that, to improve these end-to-end flows, many organizations have dozens of initiatives to improve these value streams. As you can see here, there are a lot of cards displayed. Maybe we want to improve test automation, we want to implement a common CI/CD pipeline, we want to improve security monitoring.
D: We want to improve self-service and self-help, data analytics, service portfolio management and portfolio backlogs. So there are many initiatives, but they're typically very fragmented: done by different business units in different ways, done by different product teams in different ways. The idea now, if you think about value stream management, is: can we not start to optimize these end-to-end flows and invest in the right things at the right time to improve them, instead of having all these fragmented improvement initiatives? Basically, what we start with in value stream management is thinking about lean principles.
D: One lean principle is: what is the value that our customers get? It's not just using a product; it's also how an end user can request access, or how an end user can report incidents, or, if we have an issue, how we can fix it, or, if there's a technology upgrade we need to do, we need to do that. So there are different flows of value in a DevOps product life cycle, and you need to map those into value streams. What are the different value streams?
D: What are the components and capabilities I need? I will show you later what those are. So you establish the flow, and you create continuous improvement of that flow, because you're never finished: you always try to improve your end-to-end way of working, and the landscape is changing. So it's basically a continuous effort to improve these end-to-end workflows, to make it flow, and it needs to be very high on the priority list of your organization and your teams to continuously improve those end-to-end value streams.
D: Now, what are those value streams? I had a picture with all the capabilities, but actually there are eight core, let's say main, value streams or journeys that you need to optimize in your DevOps community. One is, of course, the continuous delivery flow of work: we have a product backlog, new features and stories come in, we design them, we build them, we deploy them, we test them, of course, and make them available in production.
D: This is typically the DevOps value stream, or software engineering value stream, that most people are talking about. This is about CI/CD and continuous delivery, and that's already a complex environment; I will show you later what's in there, of course. But that's just one value stream, from idea to getting something into production. There is more: a separate value stream is really about continuous operations, detect-to-correct. We have monitoring in place, we detect issues automatically, and hopefully we remediate issues if we have the right automation.
D: If not, we combine all the data, the logs, the metrics, identify events, maybe do root cause analysis, and, if we can, apply a fix; if not, we raise an incident in our incident management system and try to resolve it. So it's a whole flow of work, from detecting or proactively detecting issues through to the actual resolution, which is a value stream by itself: the continuous operations value stream. But there is more; as I mentioned, there's not just one product in our portfolio.
D: We have many different products, so there is one value stream that really looks at continuously evaluating my product portfolio. What are my products? How well do they operate? What do I need to improve? Do I still want to invest in this, or do I need to change the product vision? It's about capturing new demands from the business, and then you build your strategy, your portfolio backlog.
D: This is where you optimize the portfolio as a whole, which is a value stream by itself, and it's continuous: every quarter or every month we review our portfolios, identify where we want to invest, what we need to de-invest in, maybe portfolio rationalizations. And then, sometimes new ideas come up in the market, and if a new idea comes along, we might need to experiment first. That's idea-to-concept, and it's a value stream by itself; it's basically continuous exploration. It's about: there are new ideas.
D: We need to create a prototype and validate it, maybe temporarily set up a team to validate that concept, and if it is a viable concept, then we create a team around it, and they start to develop and continuously deliver it. But investigating new opportunities is by itself a value stream, basically idea-to-concept. And, as I mentioned, once something is operational...
D
we need to have a good service catalog. End users don't consume just your own product; they consume dozens of services and products. So we need a good catalog so that the end user can find the service: how do you get access, the self-service knowledge base, etc. Then they raise requests and get support, which is basically the request-to-fulfill value stream, and we hopefully automate all these types of requests. They can be application-related requests, but they could also be platform
D
requests, like: I want to order a new server or more capacity. It can be done through a pipeline or through a portal, but it's about automating the workflow of provisioning, deployment and then operating, and you see they're all tightly linked to each other. And the last value stream, number eight, is basically the continuous improvement value stream, where we evaluate how well we're doing, and that feeds back into the other value streams. So here you see the different flows and each of these flows.
D
So let's have a look at the requirements-to-deploy value stream, which is really about translating, basically starting with the product backlog: design, develop, test in a test environment, acceptance, production, an end-to-end flow. And you can already see that this value stream is complex, because you need many different capabilities, like your backlog management, your testing, your deployment and your change management capabilities. You see many different tools supporting that, and in this simple workflow you see you have Jira or Azure DevOps, you've got Jenkins, you've got your Artifactory repository.
D
You've got your code quality scanning, your deployment, your Terraform provisioning, but you also need credentials and access rights, so you've got your CyberArk or your Azure AD, for example. You've got your service management system to register the change so that people are informed about what has been done, so you have ServiceNow there. And you see that this is a complex tool chain that you need to optimize, and that's why value stream analysis is so important: starting to look at these end-to-end flows. And this is just requirements to deploy.
D
This is basically core DevOps: creating a new feature and making it available in production. But, as I mentioned, we also have the detect-to-correct value stream, which is really about monitoring your service, hopefully proactively detecting issues, remediating them automatically, and if not, maybe notifying people that there's something going on. We can collaborate with different teams to fix and resolve it, and then hopefully close the incident and maybe update some knowledge. As you can see, this flow is also complex.
D
We have many different monitoring systems, maybe an event management system, and of course we need a CMDB to understand, if there's an event happening on a server, how it is linked to which application and what the impact is for the business. To fix something we might need to redeploy or reconfigure, so we again have our deployment components, and we need our service management system.
D
We have our Slack and Microsoft Teams to communicate and collaborate, and you see there's a lot of data involved. You can see the complexity again, and that's why it's so important to visualize these end-to-end value streams: understanding the tools, understanding the data. These end-to-end value streams are complex, right? So one area people invest in is value stream management, starting with maybe collecting the data. That's basically maturity level one of value stream management: just collect data and get insight.
D
This is where organizations typically start. It's not sufficient, but it's a good start: collecting the data from Jira or Azure DevOps, your testing tools, your deployment tools, your service management system and your monitoring into a data lake, and that creates traceability. What's going on? What has been deployed? Did we have any issues after deployment? Is performance affected? So you see, this is an approach that many organizations are taking now: collect the data across these end-to-end value stream tools.
D
But the reality is that if you don't have a common information model, it's very difficult to collect all the data from all the different teams and get insight. So basically, what we need, if we want this value stream management to work, is to define a standard information model, and it starts with the product backbone.
D
As you see here: what are the products and services I deliver to the business? Having a good product portfolio, the teams associated with it and the owners of those products (the product owners), but also how they depend on each other. That, of course, links into risk assessments and cost, but eventually it's all about traceability. If somebody updates the code for my product, there's a story linked to it, maybe a feature, there is testing, and it's really about standardizing how we develop this kind of integration.
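The linked "information model" described here can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual schema: all class names, fields and IDs are invented, and a real model would cover far more record types (epics, test runs, problems, changes).

```python
# Hypothetical sketch of a standard information model: minimal records linking
# stories -> commits -> deployments -> incidents, so an incident in production
# can be traced back to the change that shipped it. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Deployment:
    version: str
    commit_ids: list   # commits included in the deployed build
    story_ids: list    # backlog items those commits reference
    tests_passed: bool

@dataclass
class Incident:
    incident_id: str
    version: str       # version that was running when the incident occurred

def trace_incident(incident, deployments):
    """Return the deployment record behind an incident, if any."""
    for d in deployments:
        if d.version == incident.version:
            return d
    return None

deployments = [
    Deployment("1.4.0", ["c101", "c102"], ["STORY-7"], tests_passed=True),
    Deployment("1.5.0", ["c103"], ["STORY-9"], tests_passed=True),
]
inc = Incident("INC-42", version="1.5.0")
cause = trace_incident(inc, deployments)
print(cause.story_ids)   # backlog items that shipped in the affected version
```

The point of the sketch is only that the links have to exist as data: once every deployment record carries its commits and stories, the "what's running where and why" questions in the talk become simple lookups.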
D
How can we trace back that this test has been done for this change? And if something is deployed in production, how do we know what's running where, what version has been deployed, and whether we have monitoring deployed with it? That feeds into incident, problem, change and so on. As you see here, this end-to-end data is very important if you want traceability and visibility, which is essential to get good value streams, automated and optimized.
D
You also need to look at this data, how it is captured in the different systems and how it can be integrated. So that's a very important start: look at the data that you have in your IT organization across all your different tools and how the pieces link to each other, because eventually we want to link them. So if an incident occurs in production, we can trace it back: what has been deployed, what's been part of the build, has it been tested, what stories were linked, etc.
D
So that's complex, but you see that most organizations, about 60 percent of them, are really now starting to implement value stream management as a way of looking at the end-to-end picture: looking at the data flows and understanding how all the different teams and products are operating. But it's still an early phase; it's a relatively new area that organizations are working on, and Gartner expects that by 2023 about 70 percent of organizations will have value stream management implemented across their DevOps tool chain. So, finalizing this discussion:
D
if we look at the current state of many organizations, we have many different teams with their own sets of tools and their own flows, with a lot of handovers, still a lot of manual activities, and dependencies on monitoring, for example, or service management and security: fragmented, difficult to trace and follow. Now we want to move into really digital workflows, end-to-end value streams, and optimize our ways of working, and that's basically what organizations are working on using value stream management.
D
So what is really important is to start with this integrated picture. Basically, what are the steps you need to go through if you want to implement value stream management? It starts with a, let's say, DevOps blueprint, and this is just a simple illustration of that: looking at the building blocks you need, your enterprise architecture, your portfolio management, your agile backlog, your testing, your source code, your monitoring, your event management, security, compliance, everything. How are things connected?
D
What data is managed there, like incidents, epics, features, stories, source code, test cases and code quality, and how is it linked together? Then you can plot your own tools and practices onto it to see how you can optimize things. And, as I mentioned, you also need to look at which value streams you need, because DevOps is not just one value stream, right? Your DevOps teams have multiple value streams. As I mentioned: from idea to production; or, if there's an issue, detect to correct; or request fulfillment.
D
But there are other value streams you need to focus on, for example resolving vulnerabilities, so basically vulnerability to remediation. It's very important that you start identifying all these types of flows of work together, and then start to standardize the building blocks and the integrations, to automate and get traceability across your end-to-end tooling landscape. And that's pretty complex, right? Because, as you can see here, typically we have many different tools out there and many different product teams, each working slightly differently. And the key question is one
D
you can ask yourself: do we want to standardize these value streams across the different teams, so we can leverage the integrations between them? For example, have a standard way of working around the agile backlog. Because if we all use Jira or Azure DevOps, but all in a slightly different way, with different ways of categorizing our stories, or even different Jira instances, then we also need multiple integrations with the source code, with the test system and with your service management system. So the complexity increases.
D
Look at the current value streams, look at the current tools and processes, do a gap analysis, create your target blueprint, create an MVP of this kind of end-to-end workflow, experiment with that, and then co-create and collaborate with all the different product teams to start improving it for the enterprise as a whole. Because in many organizations there are a few product teams that are very mature, while most product teams are less mature. So how can you mature all the product teams to that level?
D
That requires this integrated, value stream management approach. Now, if you want more information about this approach: within The Open Group, for example, there is the IT4IT Forum, which creates a blueprint for DevOps integrations and data flows that you can use as a sort of reference model for how your own organization is operating. So that's probably good reference material to look at. Then, to finalize: thank you for listening to this presentation. Hopefully you found it interesting and it got you thinking about your own organization.
D
Think about it this way: let's investigate what value streams I need and how those value streams are supported today, and then create the target state, roadmap and blueprint to move forward with. If there's any feedback or any questions, please don't hesitate to contact me through mail or LinkedIn, and I will also try to answer any questions in the Slack channel in the meantime. Thank you very much.
A
Hey Rob, we've got a couple more minutes, so maybe just a question from my side. I often face companies who are exactly at that stage one of the maturity cycle that you described earlier, when they just start capturing data and just start thinking about how to set up an IT architecture that works coherently. What advice would you give people who are just starting out, just finding their bearings in this world?
D
Yeah, it's a good question, and that's really challenging, because every organization is different. But I think what works very well is: let's say there are multiple teams working now in the DevOps mode, or maybe more. You could say: let's create a community where we all work together, and a community could be something like: let's create a visualization of the capabilities we need, the current tools, and how each team does it today. Almost creating a picture of the current state; that works very well.
D
So, for example, make a good inventory of all the tools you have out there today. It sounds really silly, but typically we don't have a clue, end to end, what people are doing. So it's probably good to start with a kind of reference picture of the different value streams and then assess with the different teams: how are you doing that today? Because even if they use the same tool,
D
they typically use it differently. And so a good start is to create a sort of community where we say: okay, how do you operate today with all your tools and practices? There are a lot of standard tools already, like maybe a shared Jira and a shared service management system, but often there are all these other things happening: different testing tools, different deployment tools. So probably a good start is to work on this joint picture of the current state and then gradually create a future picture.
D
How do we come to a common target state? Do we want to standardize some data? How do we do code quality? How do we manage stories? How do we manage code commits? How do we do deployments? And then start to integrate things, because then you can start to create visibility. That's probably a good start.
E
A
D
That's a good question. The problem is that nobody really owns this in many organizations. Of course, some of the products are owned, like the service management system and some monitoring and security, but there are many different owners, and the challenge is that a lot of these tools are not managed as a product. They're just managed as: here is the tool, play with it and do your own thing. So ownership is one of the challenges.
D
So one of the things that is good to do is to assign somebody who looks at how it all works together, maybe a DevOps architect. Some organizations have an IT4IT product owner, which means they look at the IT tooling that supports IT, and they create a sort of community there. So it's probably good to start with a DevOps architect to get the bigger picture.
A
Awesome, that sounds really good. We're going to have to switch over to the next speaker now, but thank you very much for your time, Rob. And again, you'll be available in the CI/CD Slack channel today if there are any more questions about the very exciting and very complex topic of value stream management; I'm sure we could talk about that for another few hours today.
A
In Austria, wonderful, awesome. I'm going to just give a quick introduction to yourself: you're an IT consultant. Do you want to give us a quick overview of what made you get into the continuous delivery world, how you first found your bearings in it, and what made you go into this topic?
F
Yeah, that's a great question. Continuous delivery, for me, is a philosophy that helps get software to production quicker, not necessarily faster but earlier, and with a higher degree of quality, by making small steps and, when steps fail, changing direction pretty quickly. So it tightly integrates with agile, for example. But to me it's the way we should all be doing software development.
A
F
Thank you. Yes, good morning, everyone. Today I'm going to be talking to you about a real-world case study. The talk is entitled "Real World Continuous Delivery: Learn, Adapt and Improve". If you want to reach out to me, my Twitter handle is on screen right now, and I'll also be available in Slack after the session to answer any questions that you have.
F
So let's get going with this story, which is about a project that I was a part of from late October 2017 to late 2018. It's a few years ago, but the lessons are still very valuable, and this all happened in a large semi-government organization in the Netherlands.
F
I cannot disclose the name, but for the sake of the story it's good to share that this organization is in a very competitive European, or even to some degree global, market, and that is going to shape a few of the ideas and directions that I will talk about in the next 25 minutes or so. Now, as I came into this organization, I came into a specific department which caters towards
F
customers: essentially a direct interface with the customers and the customer experience. This department was responsible for a number of applications, specifically one big monolithic application that renders HTML for a browser, and also an iOS and an Android application that connect to the monolithic application through a bunch of API calls. The monolithic application connects to a number of databases (MySQL, Elasticsearch and some other technologies), and there are a few supporting services.
F
Now, this application was developed and maintained by four teams of about two-pizza size, you know, a general size of eight or nine folks, which is sort of industry standard, I guess: a bunch of developers, a tester, a scrum master, and maybe a specialist here and there. That's roughly what the composition of each team was. So four teams of roughly equal size, roughly equal capability, and we called them topic teams. Now, why topic teams?
F
The domain that this application is in can be divided into roughly four big chunks. There was a chunk where customers could buy tickets for a certain service, one where they could find information about the service, and some other parts of the domain. We could divide it into four roughly equally sized chunks and distribute them over the teams, so each team was responsible end to end for a piece of the domain.
F
Now, as time progressed, each of those four teams took on a special role in turn, and we called that team the fire team. The responsibility would move from one team to the next team and then back to team one again, and, as the name suggests, the fire team is responsible for fighting fires, or at least for the first response to a fire.
F
So when a fire occurs, when a bug or an incident comes in, the fire team is the team that is supposed to handle it initially. Now, when bugs come in, not every bug is the same. Some bugs need to be handled quite quickly and other bugs can wait, and that all depends on the amount of customer impact that the bug has. So the fire team was responsible for triaging the bugs or the issues that came in and were discovered in the system.
F
Now, the second responsibility for the fire team, and the responsibility that is key for today's talk, is the release rollout. If there was going to be a release of the monolith with a new change, the fire team would do the rollout.
F
So what I want to do with you today is go through a typical release as it was handled by a typical fire team. Of course there would be differences between releases, and between fire teams in some cases, but this is what it typically looked like. Before I get into that, though: those teams were operating at a certain cadence, which was every two weeks. They would have a sprint which would start on a Monday and end on the Thursday in the subsequent week.
F
Now, if a release were to occur, a number of steps would have to be executed by the fire team, and those steps were written down in a release checklist which, if you printed it on regular A4 paper, would come down to six pages or so. That's nice and long, which means it contains lots of steps, steps which people tend to forget, or maybe even do in a different order.
F
One fire team would do it this way, another fire team would do it slightly differently, but it describes a number of manual steps that are required to get a release out, and this is for the regular release.
F
Now, if there was a serious issue, like a P1 bug, there would essentially be an in-between release, called a hotfix. And the hotfix, as the name implies, means we need to get this fix deployed quite quickly, and under that stress, under that pressure, we saw that even more items on the checklist would be forgotten or done out of order. Anyway, we did some calculations, and whenever a release went out, it was taking the fire team about two to three days of manual work for the entire team.
F
The release cadence, or the sprint cadence, would mean that in the most ideal case we would get a release every two weeks, but in reality it turned out to be more like four, six or eight weeks between releases, because we would not be able to finish the release within the window, or for some other reason.
F
But the manual work associated with the release, and the inflexibility, were key concerns for this organization, for this department. So a number of goals were set to improve the situation, and the first one was: we want to reduce the amount of toil associated with this system, with this application. Toil, if you look the definition up online, is defined as the kind of work that is devoid of enduring value, which is a nice way of saying
F
that this is essentially work that you don't want to do, because it is tied to a service, it is manual, it is repetitive, it is automatable, it is tactical, it is boring, quite honestly. And the problem with toil is that as your services grow, it scales linearly. So we wanted to get rid of toil. You won't ever be able to get rid of it completely, but we wanted to reduce it to more manageable levels.
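The "toil scales linearly" point can be made with back-of-the-envelope arithmetic. The two-to-three days per release comes from the talk; the release counts and the ten-service scenario below are invented assumptions purely to illustrate the scaling.

```python
# Back-of-the-envelope: if each release costs the fire team a few days of
# manual work, annual toil grows linearly with how many services release at
# that cadence. Only the "2-3 days per release" figure comes from the talk;
# everything else here is an assumption for illustration.
def annual_toil_days(services, releases_per_year, manual_days_per_release):
    return services * releases_per_year * manual_days_per_release

# One monolith, a release roughly every six weeks (call it 8 per year),
# 2.5 team-days of manual work each time:
print(annual_toil_days(1, 8, 2.5))    # 20.0 team-days per year
# Split it into 10 services without automating the release first:
print(annual_toil_days(10, 8, 2.5))   # 200.0 team-days per year
```

This is also why, as described later in the talk, splitting the monolith was deferred until after the release process was automated: more services multiply whatever manual work remains.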
F
We also wanted to have faster feedback. Fast feedback in this context means: if we make a change, we want to know very quickly whether the change is working, whether any tests are failing, whether the change is viable, whether the change can be deployed, whether the change can run, all those things. So essentially technical feedback on the change that we make.
F
Remember I was saying that this organization is in a very competitive market. We wanted to be able to see whether the changes we make make our product more interesting to the customer than the competitor's product, and to do that we wanted to run A/B tests and other experiments at a very fast rate, which we needed technology support for. Enter continuous delivery.
F
But why? Well, as I described in the question that Hubert asked me at the beginning, continuous delivery is essentially about making things small, and making things small is a key component to making things more predictable and reducing risk. Now, if you look at the picture on screen right now, this is something that I found on Twitter, which makes it true.
F
You see an infrequent release schedule on the left side of the picture. Time goes to the right, work goes up, and we see cost rising linearly, assuming that we have a fixed team of people.
F
What we are building is inventory, because we are not releasing what we are building, right? So we're building stuff, we're building code, we're building a change, we're building a product, and we're putting it on the shelf. And putting things on a shelf is risky, because when we release the thing it may turn out not to be useful to our customer, it may turn out not to be the answer to the questions that they have, but it also may turn out not to work functionally or technically.
F
E
F
Now, if you look at the industry, at Accelerate, the State of DevOps report, all these reports, you see that this is what differentiates high-performing teams from low-performing teams: a high-performing team deploys more frequently, has its changes go to production faster, is faster to recover when there's a failure, and its failures are less likely to occur. And this is what continuous delivery essentially brings you. Now, back to the story: how did we approach this?
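Two of the measures those reports use, deployment frequency and lead time for changes, are easy to compute once deployments are recorded. A hedged sketch with invented sample data (the timestamps below are not from the project):

```python
# Sketch of two of the measures the State of DevOps reports use:
# deployment frequency and lead time for changes. Sample data is invented.
from datetime import datetime, timedelta

deployments = [
    # (commit time, production deploy time)
    (datetime(2018, 3, 1, 9, 0),  datetime(2018, 3, 1, 10, 30)),
    (datetime(2018, 3, 2, 11, 0), datetime(2018, 3, 2, 11, 45)),
    (datetime(2018, 3, 5, 14, 0), datetime(2018, 3, 5, 16, 0)),
]

# Lead time: how long a change takes from commit to production.
lead_times = [deploy - commit for commit, deploy in deployments]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

# Frequency: deployments per day over the observed window.
span_days = (deployments[-1][1] - deployments[0][1]).days or 1
freq_per_day = len(deployments) / span_days

print(avg_lead)       # average commit-to-production time
print(freq_per_day)   # deployments per day in the sample window
```

Tracking these as trends over time, rather than as one-off numbers, matches how the speaker later describes using measurements: to see whether the collective is improving, not to grade individuals.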
F
First, we thought the monolith was relatively well tested, and second, one of our goals was reducing toil: manual and repetitive work. If we had split the monolith before doing more automation, we would have had a problem, because we would simply have increased the amount of toil rather than reduced it. So we opted to defer, to wait with splitting the monolith into separate pieces, into separate services.
F
How did we do it then? Well, the first phase of the project was increasing our release cadence, where we simply said (well, simply saying is easier than doing, of course): we want to go to a weekly release, no exceptions. Every week we're going to release whatever's ready, right? We're going to do a release every week and have that as a cadence.
F
Why? Because it will help bring the pain forward. There is pain associated with releasing. We know that; we've already identified part of the pain, we've already seen that the release process is costly, and we want to bring that pain forward in order to make it even more visible and expose what we need to work on first. And then we started to introduce a bunch of automation, getting rid of the manual steps that were part of the release checklist as much as possible.
F
We started building a pipeline, which conceptually looks like this: the pipeline takes a change, typically a code commit, builds a package, runs some tests, and when those tests are green it deploys the package to an acceptance environment and, ultimately, to a production environment. However, we were not ready yet for it to automatically, continuously deploy to production.
F
So what we did was put a button in between. The change, or the package, is deployed to acceptance, and then the deployment process would wait. It would wait for testers, POs, etc. to verify and validate that the change is good enough and can proceed to production, and then somebody presses the button and the process continues.
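The shape of that pipeline, automated stages up to acceptance and a manual gate before production, can be sketched in a few lines. This is a toy model of the concept, not the project's actual pipeline code; the stage names and the approval callback are invented.

```python
# Toy model of the pipeline described above: automated stages up to
# acceptance, then a manual gate ("the button") before production.
def run_pipeline(change, approve):
    log = []
    for stage in ("build package", "run tests", "deploy to acceptance"):
        log.append(stage)                  # fully automated stages
    if approve(change):                    # wait for a human to press the button
        log.append("deploy to production")
    else:
        log.append("stopped at acceptance")
    return log

# Testers and POs validate on acceptance, then somebody presses the button:
print(run_pipeline("feature-123", approve=lambda c: True))
# Or the change is rejected and never reaches production:
print(run_pipeline("feature-456", approve=lambda c: False))
```

In a real CI server the gate is usually a built-in manual-approval step rather than a callback, but the control flow is the same: everything before the gate runs unattended, and only an explicit decision lets the deploy continue.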
F
We implemented all of this in Jenkins. Now, Jenkins is neither here nor there; Jenkins is just one of the tools that can do this. You have CircleCI, you have GitLab, you have GitHub: they all roughly do the same thing, they all roughly incur the same cost, they all roughly incur the same nightmares. Jenkins is no different.
F
We chose Jenkins in this project because it satisfied our needs, and what you can see on the screen right now is basically the first part of the pipeline written in Jenkins. Again, the pipeline starts because of a code commit: it pulls the code in, builds a package, pulls in our open source dependencies, verifies those dependencies, runs the tests, and once that is all done it deploys the package to an acceptance environment. And then it waits, and the waiting is done in the next pipeline.
F
Essentially, when that waiting is completed, okay, the button has been pressed, we deploy to production. The full pipeline, assuming somebody immediately presses the button, takes around an hour and a half to finish. Next to the automation, we introduced a bunch of changes to our way of working, and the biggest one, I would say, is introducing the concept of pair programming, which to me is the superior way of software development. Pair programming not only helps with knowledge transfer between individuals,
F
it also helps with the quality of your system, with design discussions.
F
It is continuous and inline code review, rather than asynchronous code review such as through a pull request. And the nice thing about pair programming is that you can discuss and talk about the code that doesn't get written, which is impossible to see in a pull request. A pull request only shows you the outcome or the result of the work, whereas pair programming is actually sharing the work together and then producing a result.
F
Pair programming has been demonstrated to be significantly more effective than having two separate developers churning out stuff. However, it is tiring, it requires attention, it requires stimulation, but it is a very good way of developing software.
F
Another change that we made was instituting the boy scout principle, or the "don't leave broken windows" principle: fix things as they occur. If you see issues in your code, in your application, in your system that violate any of the agreements you have as a team, whether they be code style or design or naming or whatever, you fix them right away rather than leave them.
F
In this department we also added tons and tons of metrics, to help with the operability of our system and to monitor it from every point of view, in order to get more insight into how our system is doing, what its normal behavior is, and when things start to appear abnormal.
F
We also took a bunch of tips from Steve Smith and his book "Measuring Continuous Delivery", and we measured our journey on the road to continuous delivery. Not to use it as a review item for a team or for an individual, but rather to see how we were doing as a collective, as a group. Is there a trend, and is the trend positive or negative?
F
Do we need to pay attention to some items? And that helped with making things structured and more clear.
F
So, if you cannot trust a test, remove it or rewrite it into something that you can trust. And if the pipeline were to fail: like Toyota, pull the andon cord, punch the warning button and stop the line, because we're not doing manual deployments anymore. Our pipeline is the way we release changes to production.
F
So if the pipeline breaks, that's a priority-one thing, and the pair that was working on the change is going to work on fixing the pipeline. If they cannot do it themselves, they pull in the rest of the team, or more teams, and swarm on the problem. But the key thing is: fix the pipeline before you continue. And to know that the pipeline is broken, you need ways of getting feedback. Over the years I've experienced, or experimented with, a bunch of ways to do that.
F
Now, the completed pipeline looked like this, and it took about 25 to 30 minutes, depending on the size of the change, which was pretty rad. But we knew it could be faster.
F
So in phase three we did a bunch of improvements to the existing system and the testing, and one of those was pipeline speed. We wanted fast feedback, and we knew that we wanted the results of the tests available within minutes, and that it was possible. So what we did initially was buy a bigger box and buy more boxes, scaling vertically and horizontally, which works up to a certain point. But what we also did was parallelize some steps.
F
You know, we can do a bunch of deployment preparation while we are running the tests, and if one of the tests were to fail, okay, we have some stuff to clean up. But if the tests pass, we've already done all the preparation for the deployment and we can immediately move on to the next thing, and that makes the pipeline run a lot faster.
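The parallelization idea can be sketched with a thread pool: run the test suite and the deployment preparation at the same time, then either deploy the prepared artifacts or discard them depending on the test result. The step names and stand-in functions below are invented; a real pipeline would express this as parallel stages in the CI tool.

```python
# Sketch of overlapping test runs with deployment preparation: only proceed
# to deploy if the tests pass, otherwise clean up the prepared work.
from concurrent.futures import ThreadPoolExecutor

def run_tests():
    return True          # stand-in for the real test suite's pass/fail result

def prepare_deployment():
    return "artifacts"   # stand-in for packaging and configuration work

with ThreadPoolExecutor(max_workers=2) as pool:
    tests = pool.submit(run_tests)          # both futures run concurrently
    prep = pool.submit(prepare_deployment)
    if tests.result():
        outcome = "deploy " + prep.result()     # prep already done: no extra wait
    else:
        outcome = "clean up " + prep.result()   # tests failed: discard the prep

print(outcome)   # → deploy artifacts
```

The trade-off is exactly the one described in the talk: a failed run wastes the preparation work and needs cleanup, but a passing run (the common case) saves the whole preparation time from the critical path.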
F
We didn't have a lot of unit tests, but we had lots of manual testing at the top: manual testing, usually regression testing, which is very repetitive and does not scale. We endeavored to flip that around into the testing pyramid, where we have lots and lots of unit tests which cover most of our business logic, and what's not covered we cover with a small number of end-to-end and visual tests. And then the last thing that remains on the menu is the timing of releasing a feature. Some features need to be
F
F
F
We also found that testing the pipeline, which sounds meta but is very useful and very necessary, is a key thing, because if you make changes to the pipeline, you want to verify that they work before rolling them out.
F
We also found that the way of working and the mindset, such as pair programming and stopping the line, require constant attention to keep in place. And let's close with a few numbers that I think look very cool: over the course of a single deployment we would run on average about 4,000 tests (that's unit, integration and end-to-end), a pipeline run took an average of 10 minutes, and in six months we did more than 5,000 deployments, which is a pretty nice number.
F
So I hope this talk helps you to improvise, adapt and overcome, and get going on your continuous delivery journey. Thank you for listening. If there are any questions, hit me up in Slack in the CI/CD channel today, or reach out to my email or my Twitter handle, and I'll be happy to answer any questions that you have. Thank you.
A
Awesome, thank you very much. If you can stay for just a couple of minutes, I think our next speaker is running a little bit late. Sure, one question I've had: when you showed the graphs earlier between the large, spaced-out release cycles and having a large inventory, that increases risk for a company, doesn't it?
F
Yes.
Let me go back to the one you mean. All the way back, all the way; this one?
A
Exactly, exactly. So this inventory risk is obviously a monetary factor, and when a company says to me, okay, we're deploying once a month, what are the biggest objections that you face as a consultant when you want to push them to faster, more frequent releases? Because oftentimes the mentality is that this might be more costly or require more effort if you want to release more often. So how do you conquer that initial objection to faster and more frequent releases?
F
Yeah, that's a great question, because that's one that comes up constantly. The problem you're describing is the risk paradox: people associate risk with releasing more often. Because they've hurt themselves in the past, they've gone to this infrequent release schedule. We do it once per year; I've seen companies do it once per year, because we need all this process and all these tests and all this verification, because we've burnt our feet a couple of times.
F
We've fallen into a hole a couple of times. And so the key thing here is to convince people with data: smaller steps are less risky. So it's not about releasing more often; releasing more often is a byproduct of taking smaller steps.
F
The key thing is this: taking the smaller step. The smaller step is always less risky, and it's always easier to change direction when that small step fails or turns out not to be useful. And the key to convincing people that this is less risky is to describe any recent release.
F
Any big-bang release that you were a part of, and you will usually see that there is a myriad of issues. And so, okay, if we did this one thing, a small thing, and we saw it did not work, how much would it have cost compared to what changing this big-bang release cost? And so it's a difficult problem, because it goes against,
F
you know, usually some sort of organizational trauma, but it's based on data, and books like Accelerate help with that a lot as well. You know, just give people something to read, and it's not only you; it's more companies that are doing this and that have been part of the same problem. That helps convince people, in my experience.
A
So the focus really shouldn't be on how fast I can deploy, but rather on what is the smallest increment of change I can implement without breaking my builds or breaking my apps, having small risks, small steps that are easy to roll back, I guess.
F
Yeah, that's exactly it. And, you know, of course it's cool that we did 5,000 deployments in six months, but that's not a goal. That's a sexy number we can share, but the goal was to have very fast feedback on the functional and the technical level, and to do that, you need to take small steps.
A
F
A
F
No, not anymore. The feature toggles are owned by the team: the feature toggle is part of a topic, of a domain piece, and the team that's responsible for that piece of the domain is also responsible for that feature toggle. And typically, you will see that feature toggles can be removed quite quickly.
F
You know, the feature is viable and it goes live, and then two weeks later, three weeks later, you can remove the feature toggle. Or the feature turns out to be completely useless, unviable, and you remove the feature and the toggle as well.
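A minimal Python sketch of the feature-toggle pattern described above (the toggle names, owner labels, and checkout functions are invented for illustration): the toggle is data owned by the domain team, and removing it later means deleting one entry and one branch.

```python
# Toggles are data; each one is owned by the team owning that domain slice.
FEATURE_TOGGLES = {
    "new_checkout": {"enabled": False, "owner": "payments-team"},
}

def is_enabled(name, toggles=FEATURE_TOGGLES):
    # Unknown toggles default to off, so half-finished code stays dormant.
    return toggles.get(name, {}).get("enabled", False)

def legacy_checkout(cart_total):
    return f"legacy:{cart_total}"

def new_checkout(cart_total):
    # Still being built; it can ship continuously behind the toggle.
    return f"new:{cart_total}"

def checkout(cart_total):
    if is_enabled("new_checkout"):
        return new_checkout(cart_total)
    return legacy_checkout(cart_total)

print(checkout(42))  # "legacy:42" while the toggle is off
```

Flipping `enabled` to `True` routes traffic to the new path; once the feature is proven viable, the toggle entry and the `if` branch are deleted.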
A
F
Yeah, and also the ability to keep deploying stuff even though the feature is not done yet, right, so you don't have to halt the pipeline even though you're still working on the feature.
A
Sweet, very cool. Well, awesome, thank you very much for your time, Michelle. As we said earlier, you'll be available on the CI/CD Slack channel for the rest of the day. Again, thank you for the great presentation, thank you for the questions afterwards, and I'll see you soon. All right, thank you. Take care. Take care, bye.
A
Hey, no problem at all. You'll be talking to us today about Everything as Code, so I presume it's going to focus a lot on how I can deploy new parts of my development environment as quickly as possible, with as little manual work as required. Is that correct?
G
Right, right. I will give an overview and talk about the culture, the tooling and the processes of how to bring an efficient Everything as Code strategy into your company.
A
G
Well, hello everyone, and thank you again for attending this session, Everything as Code. My name is Abdul Antham. I'm a cloud solution architect and advisor and a Microsoft Certified Trainer, and I'm currently based in Montreal, Canada, so yeah, it's like 5 a.m. right now.
G
Okay, let's jump into the agenda. I will give a very quick overview of the Everything as Code strategy, then we will detail all the aspects of Everything as Code, talking about architecture as code, infrastructure as code, network as code, and policy and configuration as code. So basically we'll try to see how you could automate everything and bring this into your company.
G
Let me just do this. Okay, perfect. So: everything as code will be the new standard of modern infrastructure.
G
That's right. So don't be confused: whenever you see something "as code", it means it's some new automation, some process that is put in place in order to facilitate things for some team or some people, basically. So yeah, you can check on the internet today and you will see a lot of stuff "as code": even security as code, automation as code, automation as a service, and so on.
G
So for me, what is Everything as Code? Actually, sometimes you just need to check your efforts versus the results you are trying to reach, in order not to flop, to get things nicely and easily done, and also in order to be able to audit and monitor everything that you're trying to do, in a cycle.
G
So that means: I've done something, I'm able to redo it, I'm also able to check it and go back to the previous version of it, and also to configure it. So yeah, basically we're talking about architecture as code, infrastructure and network as code, and policies and configurations as code. Policies and configurations as code is what comes when you finish designing and deploying your architectures and infrastructure.
G
You will need to do some monitoring, some auditing and some remediation. Sorry. So yeah, let's start with architecture as code.
G
I've been working with some companies that were trying, you know, to reuse their architectures. They've got a lot of three-tier applications: a lot of front-end, API, back-end and database applications. They've also got a lot of microservices applications. So basically they have things that they have been doing for maybe four years, and still, every time, they add their process to deploy or to create a new application based on the same architecture.
G
Why? Well, they're doing it from the beginning, from scratch, which for me is very bad practice. Why not just use the things you've already done and try to customize them, especially if you have them as templates: something that is viewable in diagrams and that you could switch directly into code. And with this code you just change parameters, variables maybe, and then boom, you've got a new architecture with new standards, maybe with some new databases.
G
Instead of only one database, you know, you get higher availability or scalability of your database. That's architecture as code, and for me you don't really need a tool to do architecture as code; it's all about culture, you know, when you have this culture, this ability or capacity to reuse the things you've already done.
E
G
You know, if you're on Azure, just export the templates of your whole project, and then boom, you've got all the templates of everything as code. So yeah, just do this, then try to reuse the template, try to customize it, and put it somewhere safe, you know, some GitHub repository for example, and try to parameterize as much as possible all the variables and parameters that you will be using and changing.
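A minimal sketch of the reuse idea above, assuming a simplified template shape (this is not the exact ARM schema; the parameter names are invented for illustration): keep an exported template in a repository and customize only the parameters per deployment.

```python
import json

# A minimal exported template with customizable parameters (illustrative
# shape only, not the real ARM template schema).
template = json.loads("""
{
  "parameters": {"dbCount": 1, "region": "canadacentral"},
  "resources": [{"type": "database", "count": "[parameters.dbCount]"}]
}
""")

def customize(tpl, **overrides):
    # Deep copy via a JSON round-trip so the stored template stays reusable.
    out = json.loads(json.dumps(tpl))
    out["parameters"].update(overrides)
    return out

# Same architecture, new standards: only the parameters change.
black_friday = customize(template, dbCount=10)
print(black_friday["parameters"]["dbCount"])  # 10
```

In practice the real template would come from an `az` export or a Terraform module, and the overrides from a parameters file per environment.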
G
This is an example of architecture as code. You have, for example, a three-tier application: you have a web app, an Azure App Service, you have an Azure SQL database, and you have a storage blob, you know, maybe for a static web page, and then you have Azure Active Directory for single sign-on, etc.
G
That is allowing your application to be connected to the internet, and you have your IP or DNS as well. So this is an example of a very standard type of architecture that has been used at least a thousand times, or maybe a million times, in today's websites or apps on the internet. So why not just reuse it, be efficient, and whenever you have to deploy it, just change it.
G
For example, you need three databases, or ten databases instead of five, because you have some really huge Black Friday coming and you need your database to be very efficient and very elastic.
G
This is a second example of an architecture that you may face in your day-to-day life: maybe, you know, just a virtual machine placed behind some load balancer, and you have a public IP that is allowing this virtual machine to be reachable from the internet, from users outside your organization or even inside it, using its public IP. Maybe there is some website or something installed on it.
G
So yeah, just take this one, put it as a template or as JSON code somewhere, and just reuse it.
G
I will be showing some tools that could help you guys do this. I know it's not always easy to do this just by exporting templates from Azure, for example, or from AWS, but there are some third-party tools that will allow you, you know, to design diagrams and output templates, either Terraform templates or JSON templates.
G
So basically, what to keep in mind? For me, you have three principles. You don't do IT: get out of the undifferentiated IT integration business, you know.
G
Be value-driven, not IT-driven. If you're not an IT company, then why keep yourself in some very bad practices? Just reuse things, and do the thing you do best. Also, in IT today we're talking a lot about automation, so yeah, you need to minimize your efforts and really be consistent.
G
So you need to eliminate the fire drills, especially between your teams. For example, you have a new project that you're implementing: you'll need, you know, to be very efficient between your teams in terms of collaboration and communication, and just try to automate as much as possible. And the third thing to keep in mind, for me, is the DRY principle, Don't Repeat Yourself.
G
Don't Repeat Yourself means: if you do something today which is very good, then tomorrow try to do it with a script, for example, or with a template, not in the same manual way you have done it yesterday.
G
Yeah, so I've been talking about some tooling. It's cool to use tools, you know; it's easy, sometimes.
E
G
You pay for these tools, but they will facilitate your life. For example, I've been using Brainboard for the last year, and to be honest, the guys behind this platform are really awesome. They've got a lot of very nice integrations with many cloud providers and many third-party tools. They've got a nice interface, so it's visual.
G
You just need to drag and drop things, and it's going to output a very clean Terraform script, which is very interesting. Yeah, you can even put IP ranges into your subnets in the diagram and it will create the subnets for you, with the IP ranges that you've put. There's also another similar tool.
G
I've never used it, to be honest, but I know some friends who've used it and told me it's very good. But for me, I will tell you this again: you don't really need a tool, unless you really want to make it fancy or beautiful. For me, you have GitHub, you have maybe Azure DevOps.
G
You do things: try to keep them in your repositories, your architectures as code, your infrastructure as code, you know, everything, security, network configurations, policies, the whole thing, and put it somewhere very easy to reach, very easy to reuse and customize. Sorry. Okay, second thing: infrastructure as code. As you all know, these days everyone is talking about infrastructure as code, and it's a really huge thing, a huge topic.
G
You know, it's making development, and it's making production, the SDLC of an application, very easy. You can deliver fast with less manual work, you plan your deployments, you bring value to the run teams by using the scalability features and the multi-cloud, multi-platform support. If you need to deploy three VMs:
H
G
into VMware, and one onto your on-premises servers, you can do it with just one push, and it's very efficient. And of course you have cost optimization: you put everything in code, and then you know what you will need before even deploying it.
G
And network as code: for me, this is the more interesting topic today. I see few people talking about network as code, which I think is even more important. The network configurations are always, you know, a pain point, especially when you have some network virtual appliance, some rules to define and some policies to put in place.
G
So whenever, you know, some DevOps guy from your team deletes some route or some policy by accident, it will be very difficult for you to bring it back, especially if you don't even know where it was deleted. You know, maybe the DevOps guy doesn't even know where he made this modification.
G
So for the network as code part, I think source control management is something very big to add to the network industry. You've also got what we call a single source of truth, meaning you have just one repository where you have all your network configurations. Of course you have branching; you should put in place some branching policies where you could specify: this is the master branch where I have the very accurate, latest version of my network configurations.
G
So whenever, for example, the network team needs to do some updates to the UDR routes, you just, you know, ask them, maybe through this channel, to put them in place using JSON or using Terraform, or let them do their work and afterwards apply your template, and then you'll see the changes they have been making.
G
So if the changes are okay, you could just keep your changes in your single-source-of-truth repository and merge them into the master branch, for example. And yeah, programmatic APIs: whenever you need to deploy a new network service, you don't really need, you know, to be on site or in the data center or whatever; you can just push it through programmatic APIs, and boom, it's done. Now for the tooling part.
G
You know, for me I try to separate between Azure, AWS and the rest, the multi-cloud. For Azure, as many of you already know, we are talking about ARM templates, Azure Resource Manager. This is a very interesting tool to automate your infrastructure and network as code. We also have Azure Bicep.
G
This is a very interesting tool, a very ambitious project from Microsoft, and very well suited to developers who want to do infrastructure as code. So it's very, very interesting; I've used it, I've loved it, yeah.
G
I really recommend it to every one of you. And we've got Farmer, which is based on the F# language. Farmer is basically an open source project, and yeah, it's similar to Bicep, I would say, but less advanced than Bicep. For AWS, we're talking about CloudFormation, and for the multi-cloud industry we have Terraform, or simply Crossplane. You know, we've got a hell of a lot of tools available to do your infrastructure.
G
Good, yeah. I just put this slide here to show enterprise spend today on cloud and data centers versus enterprise spend on data centers alone. You can see that the revenue from cloud activities is very, very high.
G
And last but not least, we have policy and configurations as code. I love this part; I find it very interesting for many of you. So basically, when you have done your infrastructure, when you've deployed everything, you will need to make sure that you've got some rules that are configured. You know, I'll give you an example.
G
You deployed a website, and you just need to make sure that your code, before being deployed to this web server, passes through some tests, you know, basic tests, in order to fit into your browser. So basically you could do this using some CI/CD pipeline and put in place some Selenium or some
G
third-party tool that does test plans for you. Or you could do it using some configurations in your template, the one you've already taken from the architecture part and customized with infrastructure and network as code; you could still add some scripts.
G
One of the best tools, or best open source projects, that everyone should be using whenever it comes to policies and configurations as code is the Open Policy Agent, OPA. So basically, OPA policies separate the policy enforcement from the policy definition itself.
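OPA's real policies are written in its Rego language; the hedged Python sketch below only mirrors the separation the speaker describes: the policy definition lives in one place (version-controlled), while the service is only an enforcement point that asks for a decision and applies it. All names here are illustrative.

```python
# Policy definition: pure logic/data, kept in source control, owned separately
# from the services that enforce it (in real OPA this would be a Rego module).
def policy_allow(request):
    return request["user"]["role"] == "admin" or request["action"] == "read"

# Policy enforcement point: the service never embeds the rules; it only asks
# for a decision and applies the result.
def handle(request):
    if not policy_allow(request):
        return 403  # denied by policy
    return 200      # allowed

print(handle({"user": {"role": "dev"}, "action": "read"}))    # 200
print(handle({"user": {"role": "dev"}, "action": "delete"}))  # 403
```

With real OPA, `policy_allow` would be replaced by a query to the OPA engine, so the rules can change without redeploying the service.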
G
So basically, when you have some policy decision to make, you have the OPA that makes this decision for you, and you know, it's very easily integrated with many services: Docker, Kubernetes, the Elastic platform, yeah. I definitely recommend it for you guys. For the tooling, for the Azure part, with the combination of Azure Policy and Azure DevOps you could do some very good CI/CD pipelines, and for the multi-cloud part we've got OPA, among others.
G
Well, I think I'm on time. Thank you, everyone, for your time. I hope it was quick and interesting for you, yeah. Please leave me what you think in the comments, and if you have any questions, please feel free. Thank you so much, and again, thank you for watching All Day DevOps. Awesome.
A
I
C
A
Well, we'll wait for Pavel to come back and join us here, but I'm just going to start off by giving a quick overview of what you guys want to do. It really is about multi-cloud serverless deployments today. It's another very interesting topic; I think it's quite a good follow-up from the topic of, you know, Terraform and deployment as code.
I
Today we would like to present how to automatically deploy an application by using Melodic, and we would like to present Melodic itself to you. At first Pavel will describe the theory of it, and after that I will present a demo of our platform.
J
Okay, thank you. Thank you. I had an issue with Zoom, but now I hope it's okay. Thank you for the introduction, and thank you for the ability to be here. Today, together with Alicia, we want to tell you about the Melodic platform, and even more: we will make a live demo, which will be the more important part of our presentation.
J
I will start with a few slides on what Melodic is and how it works, and then the more interesting part will be the live demo by Alicia. So, Melodic is an open source project. It is a single universal platform which is able to do multi-cloud deployment, but not only deployment: also optimization of the cloud resources. Melodic selects the most efficient cloud provider for the given application and the most efficient resources, and automatically deploys the application and infrastructure to that cloud provider. Melodic is fully open source and supports cross-cloud deployment.
J
It is also a unified way to deploy virtual machines, containers, serverless components and other applications to different cloud providers. The deployment is fully automatic, and, last but not least, the resources are optimized using a very unique and innovative approach based on a utility function, together with a set of advanced solvers to solve the optimization problem. The first step is to model the application. We are using CAMEL, the Cloud Application Modeling and Execution Language. It is a meta language similar to TOSCA, which is somewhat higher level than typical YAML languages like CloudFormation.
J
It allows for modeling components, connections and security, and also allows modeling the requirements for the infrastructure. Thanks to that, it is possible to do the optimization, so CAMEL can be considered a unified way to optimize the application in the cloud. A very innovative element of Melodic is the ability to determine what the best deployment is. It is done by the utility function, which can be described and prepared for each application; it focuses on the business value and allows optimizing the trade-off between cost, performance, availability and other elements.
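Melodic's real utility functions are defined per application in CAMEL and solved with dedicated solvers; the Python sketch below is only a simplified illustration of the trade-off idea, with invented option names, weights and normalized scores.

```python
def utility(option, w_cost=0.5, w_perf=0.5):
    # Higher is better: reward performance, penalize cost.
    # Both values are assumed normalized to the 0..1 range.
    return w_perf * option["performance"] - w_cost * option["cost"]

# Candidate deployments (illustrative numbers, not real provider offers).
options = [
    {"name": "small-vm", "cost": 0.2, "performance": 0.3},
    {"name": "large-vm", "cost": 0.6, "performance": 0.9},
]

# The "solver" here is just argmax over the utility values.
best = max(options, key=utility)
print(best["name"])  # large-vm: its performance gain outweighs its extra cost
```

Changing the weights shifts the trade-off: a cost-dominated weighting (`w_cost=0.9`) would make the small VM win instead.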
J
So you can consider Melodic as your smart, autonomic DevOps. And how does it work? The first step is to model the application and set the initial parameters, and then everything is done automatically. Let me show the parts: the initial deployment plan is calculated, then the application is automatically deployed to the selected cloud providers.
J
Then the metric collection is started, and when Melodic decides that the deployment needs to be reconfigured, a new solution, a new deployment plan, is found and the application is automatically reconfigured: some resources are added or removed and application components are reprovisioned. But that's not all. In the follow-up project, Morphemic, we are extending the Melodic platform with two new concepts.
J
One is the polymorphic architecture: Melodic will be able not only to select the resources, but also to change the technical form of the application, for example to decide that the application should use accelerated resources like FPGAs and GPUs, for a given purpose in a given time frame, of course for applications which support that, and so on. The second concept, which will be introduced in Morphemic and is already implemented in Melodic, is proactive adaptation.
J
I will tell you very briefly about one use case, one company which is using Melodic for the optimization of resources; Alicia will show another example of an application live, so you can see how it works in reality. I will tell you about AI Investments. It is a company which developed a system for investment portfolio optimization using advanced time series forecasting methods and optimization algorithms, and a typical business goal for AI Investments is to train, let's say, 50 forecasting models in one hour, using as minimal a number of resources as possible.
J
So AI Investments starts to train the models using on-premises resources, because they are already bought and available. But usually that is not enough, so Melodic calculates how many cloud resources are needed and adds the necessary resources. Then the models are trained and stored in cloud storage, and after the training, Melodic automatically removes the resources.
J
You can see these trained models, and this hybrid, dynamically managed approach is the most efficient in terms of cost; the savings are really significant, yeah. So that's all from my part. I tried to be brief to let us show the system live, because that will be the most interesting part, I suppose. You can download Melodic here, as you can see, and also follow us on social media: the Melodic web page, Twitter, LinkedIn and Facebook.
I
Okay, so now I would like to show you how to use Melodic and how to automatically deploy an application with the Melodic platform. I will perform a deployment of a Spark-based application; we will monitor application metrics and observe the reconfiguration process, which is done by Melodic for optimization reasons. The Melodic platform is installed on a virtual machine, and it is up and running.
E
I
And today I would like to deploy the Genome application, which will be described in a minute. Before deployment, we need to model our application with its requirements in a CAMEL model, in its human-understandable and editable form; after that, such a model is transformed by a dedicated tool to an XML format understandable by Melodic.
I
In the meantime, I would like to briefly describe the application which is being deployed by Melodic. Genome is a big data application which performs some calculations and saves the results in an AWS S3 bucket, so we need to provide credentials to AWS, which was done by me during the upload of the XML file with the model of the application.
I
Melodic creates the proper number of Spark workers as virtual machines, considering our requirements from the CAMEL model. Thanks to measurements of application metrics, Melodic makes decisions about creating additional instances with workers, or about deleting unnecessary ones. Spark divides all calculations, named tasks, between the available workers in order to optimize application performance and cost. And now I would like to briefly describe the configuration of the Melodic platform, which was done by me just before this demo to optimize the time of our presentation.
I
After choosing the cloud offers option from the menu, we are directed to a view of all currently available offers. There are the clouds, with my cloud from AWS with my credentials, and a list of available hardware with information about the number of cores, RAM, etc.
I
Also, we can see a detailed view of the constraint problem here: a list of variables, with additional information about the component and variable type, the domain and the type of this domain, and the utility formula.
I
Here we can see, for example, the minimum and maximum cardinality of a worker component, and the same type of restriction for the cores of a Spark worker. So we can see that in this deployment we prefer to have from one to a maximum of ten workers. And last, the most important element here: a list of metrics with their types and values; they describe the current performance of this application.
I
And we can see that when the constraint problem is generated, it is time for reasoning: Melodic finds here the most profitable solution for the problem defined by us. When the reasoning is completed, we can observe information about the calculated solution: the utility value and the values for each variable, in this case one as the worker cardinality, the worker cores, and the provider for the Spark worker, index zero, which means AWS in this case.
I
Also, if you would like to have a more detailed view of this deployment, it is possible to see it by using the commander.
I
Also, we have information here about IP addresses and the possibility to create a web SSH connection, which is really useful in the testing process.
I
The Genome application also has a Grafana dashboard, and we can see it here. And now we can observe the initial state of our deployment process, so we can see only the first measurements of some parameters, but we need to see the complete view.
I
On the left, on the first chart, we can monitor the number of instances, and now we have one worker, so one node, one virtual instance. On the bottom we can see the number of remaining simulations; of course, this value is decreasing as Spark performs the next tasks. On the right, on the chart named "number of cores", we can see the value of the minimum cores we need to finish the calculations on time, the current number of cores, and the total cores value.
I
So far we have one core, and the calculation claims that we need to have four cores. And now we can see that the estimated time is higher than the time left, because the time left is equal to 60 minutes.
I
For this reason, the first step is reasoning. As a result, we can see a new calculated solution, which is being deployed for us now, and now Melodic will create a new additional virtual machine for us, because the worker cardinality is now equal to two.
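Melodic's actual reconfiguration uses its constraint solvers over the CAMEL model; the hedged sketch below only illustrates the underlying sizing idea visible in the demo, that the worker count is driven by whether the estimated finish time fits the deadline. All numbers and the linear-scaling assumption are illustrative.

```python
import math

def workers_needed(remaining_tasks, seconds_per_task, time_left_s,
                   maximum=10):
    """Smallest worker count whose estimated finish time fits the deadline.

    Assumes (for illustration) that tasks parallelize linearly across workers
    and that each worker processes one task at a time.
    """
    needed = math.ceil(remaining_tasks * seconds_per_task / time_left_s)
    # Respect the cardinality bounds from the model: at least 1, at most 10.
    return min(max(needed, 1), maximum)

# 120 remaining tasks at 60 s each must finish within 3600 s -> 2 workers,
# matching the kind of step-up from one to two workers seen in the demo.
print(workers_needed(120, 60, 3600))  # 2
```

If the remaining work cannot fit even with the maximum cardinality, the result is clamped to the model's upper bound, mirroring the one-to-ten worker constraint shown earlier.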
I
Also, we can observe the results of this deployment in our AWS console, and here, after refreshing, we can see that we have, of course, the Melodic machine and two Spark workers: the first one, which was created at the beginning, and the new one that Melodic is now creating for us. It will also be visible soon in Grafana.
I
Even now, we can see that we have one additional virtual machine, but of course we need to wait some time for the full configuration of this virtual machine to finish; after that it will be available for us, and it is also possible to see
C
I
C
I
Okay, so yeah, our time is coming to an end, but in a minute we should see the new additional instances available for our deployment, and thanks to them
I
our estimated time needed to finish all calculations should get lower and lower. Okay, so you need to believe me that it will appear for us, and yeah, now we can see that our reconfiguration process is successfully finished. So thank you very much for your attention; this is the end of the demonstration of Spark application deployment on Melodic. Thank you very much, and please feel free to ask any questions on the dedicated Slack channel.
A
Alicia, Pavel, thank you very much, and we will see you in the CI/CD section, where you'll be available for the rest of the day. If you want to share your slides on there, that'd also be much appreciated. We've got our next speaker lined up already: Michelle is in the chat already. Welcome, you're joining us from GitLab; I presume you're based in Germany?
H
A
Yeah,
brilliant,
I
mean
yeah,
welcome,
we're
also
very
happy
to
have
you
here
gitlab,
you
guys
do
everything
I
heard
so
your
topic
today
is
shift
left.
I
presume
what
do
you
want
to
focus
on
today?
What
are
you
going
to
share
with
us.
H
I
want
to
dive
into
a
little
bit
of
ci
cd,
but
also
talking
about
monitoring
and
observability
and
how
to
like
shift
it
left
and
have
have
a
better
experience
both
as
developer
ops,
sec,
devops
devsecops
everything
around
that
it
will
be
a
speedrun.
I
have
a
lot
of
stories
to
tell
so
I
really
wanna
kick
us
off
and
yeah
awesome.
H
A
H
Let's
go
yeah,
so
I
will
be
sharing
the
slides
directly
in
the
slack
channel,
so
you
can
follow
along
with
everything
I'm
now
talking
about,
and
my
my
thought
or
my
storyline
goes
from
monitoring
to
observability
left
shift
oslos,
and
I
want
to
start
out
with
a
devops
tale
or
something
from
the
past
decade
I
experienced
myself
we
kind
of
assume
that
security
has
shifted
left.
So
it's
not
optional
anymore.
It's
fully
integrated
into
our
development
workflows
into
the
the
reviews,
the
deployments,
much
requests,
pull
requests
and
so
on.
H
So
this
is
like
a
given
fact
right
now,
when
we
turn
back
time
a
little
bit
and
say
well
how
about
monitoring?
And
how
does
this
change?
How
maybe
we
can
shift
left
monitoring
as
well?
How
does
this
work
and
we
kind
of
had
like
the
the
black
box
monitoring
in
the
past?
We
were
learning
about
metrics
and
not
just
state-based
monitoring.
H
We
had
sort
of
sla
reporting,
service
level
availability
and
basically
it
worked
somehow,
but
it
wasn't
really
there
yet
and
we
we
have
been
moving
on
in
the
past,
defining
service
level,
agreement,
service
level,
objective
and
several
service
level
indicators.
Like
saying
hey,
I
have
potentially
errors.
I
have
latencies,
I'm
defining
the
error
budgets
and,
while
my
agreement
with
the
customer
is
potentially
99.5
availability,
our
objective
should
be
99.9
at
nearly
a
hundred
percent.
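The gap between an agreement of 99.5% and an objective of 99.9% becomes concrete once you compute the downtime each level permits. A minimal sketch (plain Python; the function name and 30-day window are my own illustration, not from the talk):

```python
def downtime_budget_minutes(availability_pct: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime (the error budget) for a given
    availability target over a rolling window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (100.0 - availability_pct) / 100.0

# An SLA of 99.5% leaves a far bigger error budget than an SLO of 99.9%:
sla_budget = downtime_budget_minutes(99.5)  # 216.0 minutes per 30 days
slo_budget = downtime_budget_minutes(99.9)  # ~43.2 minutes per 30 days
```

Aiming the internal objective tighter than the customer-facing agreement is exactly what leaves room to burn some budget before the contract is at risk.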
H
H
And
there
are
certain
things
we
can
adopt
from
the
sre
book
from
google,
for
example,
with
using
the
golden
signals
and
saying
okay,
we
want
to
focus
on
monitoring
and
measuring
latency
traffic
error
situation.
On
the
other
side,
this
oftentimes
needs
code
instrumentation.
H
So
as
I
need
to
take
action
as
a
developer
or
as
an
sre
to
modify
my
code
to
actually
see
something
to
get
the
results
and
after
all,
it's
like
it's
a
common
effort,
and
I
want
to
turn
back
time
into
like
my
history,
of
where
it
would
have
been
amazing,
to
have
monitoring
embedded
into
my
development
workflows,
to
have
slos
defined
and
to
not
fall
into
regressions,
which
then
burn
me
out
after
a
while.
H
So a while ago we had this idea: we were creating a C++ daemon, actually a monitoring daemon, and it had a REST API and a JSON-RPC API, and it was getting slower. We were spawning lots of threads, the CPU was being locked, and at some point we came to the conclusion:
H
Maybe let's use something from Golang, like goroutines, which are available as coroutine libraries in C++. They were kind of stackless, putting a function pointer on the heap, and stack unwinding works with continuations. This is rather complex technical stuff, but it looked fine. The only problem it had was that there was a crash, but it only happened with like a thousand API clients and only after a while of running it. After a while we figured: well, the memory is corrupted, maybe it's exhausted.
H
H
I
think
the
term
didn't
exist
back
then,
but
it
would
have
been
nice
to
have
the
the
feedback
early
on
and
not
deployed
to
production,
not
to
not
release
it,
not
debug
it
with
the
customer
and
fix
it.
Basically
yesterday
so
yeah
this
would
have
been
nice.
Another
story
around
slos
is
recently.
H
I read an article about GTA 5, or GTA Online, and the login was taking quite some time, like 10 minutes or whatever, and the blog post, which is linked at the bottom, explains that there was a kind of config parsing and sorting algorithm applied to a JSON file, and a user reverse engineered the code, found out how to optimize it, and then injected a DLL into the binary being loaded.
H
But what if, maybe, we could mitigate that and implement it into the development process before it reaches the user, who then sits there and says: well, I'm totally blocked, I really want to play this game online on the SaaS platform, but I can't? Measuring the login time, defining application timing points.
H
Monitoring,
like
the
metrics
and
later
on
the
tracing
spans
could
be
an
idea
for
slos
would
be
interesting
to
have
like
a
test
or
staging
environment
which
gets
deployed
from
a
ci
cd
pipeline,
and
we
are
defining
end-to-end
test
scopes
with
like
user
login
and
defining
a
time
until
the
user
can
can
start
playing
and
for
the
slos
saying,
okay,
the
login
time
is
potentially
should
be
under
two
minutes.
But what if we have a high-latency connection between different locations all over the world?
H
Maybe
we
have
a
different
slo
of
five
minutes
and
with
having
the
idea
of
a
quality
gate
and
saying
okay,
if
the
slo
is
not
matched,
I'm
not
allowing
a
deployment
and
or
emerge
into
the
default
branch
and
the
release
is
getting
stopped
or
being
delayed,
which
is
not
awesome,
but
it
should
be
rather
than
introducing
login
times
which
are
not
applicable.
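The per-environment login-time gate described here could look roughly like this (a sketch; the region names and thresholds are made up for illustration, not from the talk):

```python
# Hypothetical per-region login-time SLOs in seconds; values are illustrative.
LOGIN_SLO_SECONDS = {"local": 120, "remote-high-latency": 300}

def login_slo_passed(measured_seconds: float, region: str) -> bool:
    """Quality gate: block the merge/deploy when the login time measured by
    an end-to-end test exceeds the SLO for that region."""
    return measured_seconds <= LOGIN_SLO_SECONDS[region]

# A 90 s login passes the local gate; a 400 s login fails the remote one.
```

The point of the hedged thresholds is exactly what the talk argues: a failing gate delays the release, which is annoying but better than shipping login times that are not acceptable.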
H
Moving
from
sas
platforms
logins
to
dns
problems-
and
this
is
like
a
common
saying-
it's
always
dns.
I've
been
working
as
a
dns
operator
in
in
in
the
past,
so
yeah.
I totally agree. A while ago, like one month ago, Slack was down and I did some digging, and a SERVFAIL was returned when I just tried to query slack.com. And, well, I was like: maybe it's just me, so let's ask another recursive resolver; okay, it works.
Is
it
the
missing
glue
record?
H
Is it IPv6? Something else? It was quite fun to debug it live on Twitter, on the internet, but also to figure out: okay, it's DNSSEC,
which
is
like
another
layer,
different
scope,
and
there
was
the
problem
that
the
dns
key
and
the
ds
records
have
been
pushed
to
dot
com
to
the
parent
zone.
H
They
were
removed
and
but
it
was
cached
and
the
problem
is
everything
which
is
cached
has
the
time
to
live,
which
in
this
case
was
24
hours,
and
so
everyone
had
to
like
wait
for
24
hours
or
use
a
different,
recursive,
resolver
or
just
invalid
invalidate
the
cache
which
means
asking
your
isp
to
do
so.
It
was.
It
was
not
a
nice
situation,
but
in
the
end
there
were
workarounds
and
like
moving
this
scenario
into
into
your
own
environment
and
say:
okay,
maybe
this
affects
me
as
well.
H
I could be running into this problem. If I wanted to deploy DNSSEC, I should be, for example, monitoring five different global locations and have the SLI of DNSSEC SERVFAIL count, and it should always be zero, because otherwise it would be breaking my production environment. And potentially have an isolated test environment where this gets deployed and DNSSEC is being tested, and it never reaches production; or when it reaches production and deployment, it should be rolled back. But then again, caching problems in DNS are not nice, but yeah.
H
Another story around monitoring and SLOs and DNS: we deployed DNSSEC in the .at zone like 10 years ago, and this was like the first steps, or the first ways, to do it with signing hardware, which was a state machine of steps. So you're pulling the zone records, the zone itself, you sign it and so on, and in those steps, on a Friday afternoon, which is typical, a change was made which broke the entire state machine.
H
There
were
no
more
signings,
which
means
not
only
newly
registered
domains
were
not
updated,
but
also
like
dns
updates
for
domain
delegation,
so
different
name,
servers
and
so
on
yeah,
and
what
what
does
did
the
monitoring
do
in
that
regard?
Well,
the
problem
is,
we
had
monitoring
and
it's
good
that
we
had
it.
H
We checked the zone serial, which was a Unix timestamp, with regards to DNSSEC, checking against an offset of one hour. If it's different, the alarm fires: it came at 3 am in the morning on the Saturday, the escalation SMS came at 4 am, from all name servers actually, because we didn't have any dependencies or alert groups or anything. So everything went ding-dong in the morning, and debugging at 5 am is not quite fun.
H
We figured out that the change was persisted in Git and then rolled to production, but well, it would have been nice to actually prevent that. Like, again, saying: I have staging signing hardware, which is like my staging environment for CI/CD, I'm rolling out the changes there with infrastructure as code using Git workflows, and defining the service level indicator, saying the zone serial age is my indicator of when a problem occurs, and then applying it.
H
The zone serial age shouldn't be older than one hour, and if that's the case, the SLO is failing, there is something wrong, and the deployment into production doesn't happen.
Now.
This
was
one
of
the
thoughts
from
like
10
years
ago.
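Since the zone serial in that setup was a Unix timestamp, the staging SLI described above reduces to an age check. A minimal sketch (function and constant names are mine, for illustration):

```python
import time

MAX_SERIAL_AGE_SECONDS = 3600  # the one-hour offset used as the SLI threshold

def zone_serial_fresh(serial_timestamp, now=None):
    """SLI check: the zone serial (a Unix timestamp) must not be older than
    one hour. If this returns False, the SLO fails and the deployment
    into production is blocked."""
    now = time.time() if now is None else now
    return (now - serial_timestamp) <= MAX_SERIAL_AGE_SECONDS
```

Running this against the staging signing hardware before promoting a change is exactly the "prevent it instead of paging at 3 am" idea from the story.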
H
Moving back to the current time: Facebook has had DNS problems as well, which were kind of bound to routing. In that regard, they serve their name servers via anycast, and they didn't announce the AS routes anymore, and the recursive resolvers were pretty much on fire, because all the queries failed and the clients retried immediately, so there was no caching and so on, and it was kind of: I'm kind of unsure what the problem actually is.
H
The interesting part was that there were several blog posts and engineering insights published before and afterwards, and one of the things which sparked my interest was this continuous delivery with a dedicated PHP agent, which has been optimized for speed and sizing, and which has certain testing and deployment workflows and a data center deployment. And from there I was wondering, yeah.
H
Maybe when I'm building, or when we are building, something around that, our data centers with BGP, with OSPF, with monitoring DNS, can we not define an SLO scenario for that? How about monitoring this a little bit more and defining the policy on it? Because this was actually the problem which caused the BGP routes to be dropped. Saying: okay, the SLI is the failed policy pushes, and the service level objective should be that failed policy pushes are zero.
H
This shouldn't be greater than zero; otherwise I need to fix the SLIs. And for production,
The
slo
should
be,
for
example,
the
name
service,
detecting
an
unreachable
data
center,
which
was
a
side
effect
from
that,
so
could
could
work
in
a
specific
sense
of
saying.
Okay,
these
are
like
infrastructure
as
code
slos,
which
I
want
to
change.
H
If we look more into the infrastructure side of things and how this affects our development workflows: last year I talked about Docker Hub rate limits and how this affects, or could affect, what we are doing. The rate limiting was being deployed, and we didn't really know back then what is possibly affected: a CI/CD pipeline which is running a Docker container and doing the pull; cloud-native deployments in a Kubernetes cluster or a different container orchestrator; organizations which are located behind a NAT and using a single source IP address for pulling something; and obviously cloud providers who also depend on pulling images at large scale. Back then we knew about the limits.
H
We created our own monitoring scripts and Prometheus exporters, parsed everything; this pretty much worked out for the known state. The unknown state was still:
Does
it
affect
my
deployments?
How?
How
can
I
monitor
my
ci
cd
pipelines?
H
There might be something like "too many requests" in the logs, and the implication is: well, the deployment worked halfway, the new state of the application with the new pricing is being deployed, and some of the customers see that, but the others don't. So from there,
H
It was like: let's think about this and say, how can we monitor that, and how can we ensure that developers don't need to wait for broken CI/CD pipelines because a docker pull didn't work, and that reviews can continue and so on. The service level indicator is the pull count, the remaining Docker Hub pull image count, and the SLO should be: when it's underneath 10, we should potentially fail the SLO in that regard.
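The remaining-pulls SLI described here can be gated with a few lines. Docker Hub, to my understanding, reports limits via response headers shaped like `100;w=21600` (count; window in seconds) — treat the exact header format as an assumption; the gate logic itself is the point of this sketch:

```python
def remaining_pulls(header_value: str) -> int:
    """Parse a rate-limit header value of the form '100;w=21600'
    (count; window-seconds) and return the count before the suffix."""
    return int(header_value.split(";")[0])

def pull_slo_passed(header_value: str, threshold: int = 10) -> bool:
    """Quality gate: fail the SLO when fewer pulls remain than the
    (arbitrary, environment-dependent) threshold, so no new deployment
    is started that we cannot guarantee will finish."""
    return remaining_pulls(header_value) >= threshold

# remaining_pulls("76;w=21600") -> 76, so the gate with threshold 10 passes.
```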
H
This
is
an
arbitrary
number
depending
on
the
environment
and
when
the
slo
is
failing
in
the
quality
gate,
we
shouldn't
start
another
deployment,
because
we
cannot
guarantee
that
it's
actually
happening
now.
These
were
kind
of
a
lot
of
stories
around
when
something
is
broken,
and
how
can
we
improve
from
that?
How
can
we
learn
from
that
and
think
about,
while
the
using
the
slos
and
monitoring
for
our
own
purpose
and
our
own
benefit
in
a
way
of
like?
H
Let's try to shift it left, and one of the toolings around there is monitoring with Prometheus. Prometheus, for the reason that it scrapes an HTTP endpoint, follows a specified metrics format, and it's easy to add /metrics as an endpoint to your application.
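To see why adding /metrics is cheap: the text format Prometheus scrapes is just lines of `name{labels} value`. A stdlib-only sketch rendering that format by hand (in practice you would use an official client library; the metric name and server wiring here are my own illustration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A single hand-rolled counter, rendered in the Prometheus text format.
REQUESTS_TOTAL = {"method": "GET", "value": 0}

def render_metrics() -> str:
    """Render the counter in the text exposition format a scrape expects."""
    return (
        "# HELP http_requests_total Total HTTP requests seen.\n"
        "# TYPE http_requests_total counter\n"
        f'http_requests_total{{method="{REQUESTS_TOTAL["method"]}"}} '
        f'{REQUESTS_TOTAL["value"]}\n'
    )

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            REQUESTS_TOTAL["value"] += 1  # count the scrape itself
            body = render_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)

# HTTPServer(("", 8000), MetricsHandler).serve_forever()  # scrape :8000/metrics
```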
H
PromQL is another thing to know: it's the query language of Prometheus; it has a specified format and will be used for defining alert rules, which in that case define the SLOs later on. But as a developer, where should I be starting in this whole scenario?
I
know
there
is
like
an
slo
there's
monitoring.
H
There are metrics; there's a key and labels specified in Prometheus, and then there are values, which could be a counter or a gauge, something like that. When starting out thinking about metrics with Prometheus, I would recommend separating between the infrastructure, so like memory, CPU, I/O, which is already there, and there are existing exporters: the Prometheus node exporter could be running on a virtual machine or on a pod in the cluster; for specific services we have the Prometheus exporter for MySQL, for SNMP, for everything else. But then again, for app instrumentation,
H
We
really
need
to
modify
the
code,
and
this
is
something
you
should
look
into
like
on.
H
On a playful basis, for example with Python. I did that a while ago for a workshop, which helped me understand, of course, how it works, but I also tried, in a fun way, not to over-engineer everything and instead create a small sample: building a Docker image, pushing it to the container registry and, as a second step, deploying it into Kubernetes using the Prometheus Operator and the ServiceMonitor custom resource definition, inspecting the metrics with Prometheus and Grafana, and getting a sense of how this app instrumentation actually works, with later on actually adding it into my application, which could be Python, which could be Ruby, Rust.
H
There are so many client libraries out there which you can actually use, and I would recommend really focusing on instrumenting the application, because there is more than metrics. Of course we have metrics; we also have logs, events, tracing, profiling. We are also shifting our applications from a monolith to microservices.
H
There is so much to learn, and please don't try to solve everything in one go: start with app instrumentation and then move along. Because if you're looking at logs, there are so many decisions; I cannot give you a clear direction of saying whether you should be using Elasticsearch or OpenSearch, with specific agents, with a totally new environment to learn how logging works.
H
We have Jaeger tracing available. We have OpenTelemetry as a collector,
so
there
is
a
lot
of
things
going
on
it's
worthwhile
to
learn
more,
but
still
again
continue
with
adding
metrics.
H
H
There are ideas around using continuous profiling also for alerting, which then again defines an SLO, and then again you can add it into your CI/CD pipeline in a certain way. It's also something totally new to learn, super interesting, but yeah. We want to look a little bit more into SLOs and how we can make use of them with Prometheus.
H
We are using the metrics and alerts to actually calculate the SLO, which is a PromQL query defined in the OpenSLO format, and Keptn, for example, is a tool which evaluates the SLOs and defines the quality gate. When the quality gate says the SLO fails, the CI/CD build or jobs fail the pipeline, and nothing gets merged, nothing gets deployed to production.
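The gate logic can be pictured as evaluating each SLI result against its objective and only letting the pipeline continue when all pass. A deliberately simplified sketch (real Keptn uses SLO files with weighted scoring and pass/warn bands, which this omits; names are illustrative):

```python
def evaluate_quality_gate(results: dict, objectives: dict) -> bool:
    """Each objective maps an SLI name to a (comparator, threshold) pair.
    The gate passes only when every SLI meets its objective."""
    checks = {
        "<=": lambda value, limit: value <= limit,
        ">=": lambda value, limit: value >= limit,
    }
    return all(
        checks[cmp](results[sli], limit)
        for sli, (cmp, limit) in objectives.items()
    )

objectives = {"error_rate_pct": ("<=", 0.5), "p95_latency_ms": ("<=", 300)}
# Passing results let the deployment continue; a 900 ms p95 would return
# False and the CI/CD job would fail the pipeline.
```

In a pipeline, the `results` dict would come from PromQL queries run against the freshly deployed test environment.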
H
Keptn itself could be a little complex architecturally, because it provides much more than just a quality gate. But again, you install it into your Kubernetes cluster, you have Prometheus on the side, you do the deployments with CI/CD.
H
You can basically tack it onto your CI/CD system, so you're not vendor-locked into using it exclusively with GitLab, for example; you can really use it where it's needed in your continuous delivery pipelines. From there, it's defining the SLO on the left-hand side using a YAML format, which is commonly known, and it also provides you with a UI to actually see what is going on, what is happening.
H
We
don't
have
so
much
time,
so
I
won't
be
doing
a
live
demo,
but
I
would
recommend
checking
out
the
demo
playground,
which
is
available
publicly,
where
you
can
see
different
applications
being
evaluated
against
the
slos
and
seeing
where
it
fails,
seeing
how
it
goes
and
totally
trying
it
out
yourself.
H
The other thing around shifting left with SLOs and quality gates: continuous delivery with Keptn and Prometheus works, but simulating production for applications is pretty hard. You could write unit tests and end-to-end tests and fuzz testing and so on, but actually, next to SLOs, you also want to add chaos to your deployments.
H
And you also want to treat SLOs as code, so generate everything: you have OpenSLO as a spec, you have Sloth for generators, you have Pyrra for SLO management. For chaos, I recommend checking out Litmus Chaos to fail your infrastructure, see how the app behaves, see if the SLO still matches, and from there define actions and improvements.
H
Last but not least, how can we move now from do-it-yourself monitoring to SLOs and observability? Left-shifting your SLOs means you correlate your SLOs with instrumentation and observability, you make SLOs part of your DevSecOps workflow, you define what is needed. Scaling virtual machines and chaos in container clusters has become much easier to really consume.
H
Also, as a developer, not needing to learn everything, and as a developer also seeing the value in metrics, logs and traces; seeing the value of providing application insights for non-devs, so that an SRE, ops or DevOps engineer immediately sees: hey, is the code the problem, or is it something else in the infrastructure or in the deployment? And from there, use boring solutions. A boring solution means you have immediate success and you have smaller iterations.
H
You start with adding metrics to an HTTP endpoint in your application; later on, think about structured logging, think about ops dashboards and how this whole thing works with Prometheus, Keptn, Litmus. But first have your own success in the five minutes of deploying an application with the /metrics endpoint, then querying it, and later on defining alerts on the other side.
H
Don't do everything yourself; do it as a team. Test coverage might not be possible, but learn about chaos engineering, fuzz testing, security scanning; learn more about observability methods like SLOs; embrace the unknowns in observability; learn, document, educate. So don't think that you own it alone: do it as a team, do it as a group.
H
Do group programming, join meetups, join talks and question everything. A wishlist, for myself or for you: CI/CD tracing, looking more into the CI/CD pipelines themselves; machine learning, maybe, to correlate even more; auto SLOs, more generation, less YAML writing; DevSecOps becoming DevSLOSecOps, I don't know, just kidding. Yeah, and from there,
H
The recap: a lot of things to learn, a lot of things to unpack, but I would totally recommend checking out everything I've shown today, especially Keptn and Prometheus with SLOs, but also chaos engineering with Litmus, and maybe continuous profiling if there is a little time. But yeah, from there: I spoke quite fast. I am dnsmichi, which is d-n-s-m-i-c-h-i, on social media, on gitlab.com, on LinkedIn.
H
A
Michelle
thank
you
very
much
for
this
very
interesting
talk.
I guess just one quick question from me to finish up: where do these tools fit into your CI/CD pipeline, and whose responsibility would it be to implement and maintain these tools?
H
So I think I would recommend doing the same: seeing the benefits as with security scanning, and also seeing the benefit of a quality gate failing when the SLO is being failed.
H
I would recommend that you, as a DevOps engineer, hopefully implement that: start with adding metrics to the application, then install Keptn as a quality gate in your CI/CD pipeline, as a separate job, a separate workflow, or in an integrated workflow, and from there navigate and see how you can motivate your team and your entire company, hopefully, to make use of that and benefit from seeing early when there is a performance regression in your application, or specific other influences you cannot estimate as a developer.
A
Awesome
well,
thank
you
very
much
for
the
insight
again
we'll
be
switching
over
to
the
next
speaker
now,
but
michelle
will
be
available
on
the
slack
channel
and
thank
you
again,
we'll
speak
soon.
Take
care.
A
A
I
can
hear
you
awesome
brilliant
thanks
for
dialing
in
where
are
you
coming
from
today?.
A
Bosnia,
brilliant
awesome
and
you're
going
to
talk
to
us
about
the
topic
of
security
in
ci
cd
pipelines,
and
I
guess
also
about
bridging
the
gap
between
devops
and
security.
Do
you
want
to
give
us
a
quick
overview
of
what
your
talk
is
going
to
be
about
today?.
K
Yeah,
so
that's,
I
think,
the
good
introduction,
because
we
usually
speak
about
security.
You
know
when
we
deploy
application,
everything
works
well,
let's
say,
and
then
we
have
some,
let's
say:
security
teams
that
just
came
there
and
tried
some
penetration
testing
or
someone
hack
us
so
yeah.
I
will
talk
about
how
we
can
secure
or
at
least
improve
security
in
pipeline
and
how
to
secure
that
part
of
software
delivery
process.
You
know
we
can
have
secure
code,
secure
environment,
but
what
if
enemy
is
inside
these
variables?
K
A
K
Yeah, so hi everyone. Once again, my name is Mirzadel Begovic, I'm coming from Bosnia and I'm a senior DevOps engineer. My topic today is security in the CI/CD pipeline: is that myth or truth? Let me see. So, just a few slides about me: I work as a senior DevOps engineer, I'm also a DevOps Institute ambassador, I'm also involved in the cloud community, and I'm really passionate about cloud security and DevOps in general.
H
K
K
So, let's start with the agenda. First, we will talk about what CI/CD is; maybe a lot of us already know it, but let's just start with that. We'll also talk about the software delivery process, security, DevOps, and also pipeline security; that's, let's say, the main topic for today. So what is CI/CD? This definition is from Google, and everyone of you can have a different overview of what CI/CD is.
K
C
K
From source to production in maybe just a few minutes. So let's say that will be some kind of overview of what a CI/CD pipeline is.
I mean, what is CI/CD in general, and as a process. Also, the fact of today's world is that things are moving too quickly, and we are in, let's say, a phase where we need to deploy software as quickly as possible. So we need to have these CI/CD pipelines, and we should also include those practices, the DevOps practices.
K
E
K
It's much more than automation, it's much more than just the cloud; it's the entire, let's say, life cycle of the code, from source to production. Also, what do we achieve with this kind of process? We are speeding up the deployment process.
You
know,
for
example,
I
have
an
example
from
real
world.
You
know
from
my
daily
job
we
had
deployments,
let's
say
every
few
months,
and
that
was
let's
say
old-fashioned
way.
K
You
know
every
few
months
you
get
this
time
slot
for
deployments
and
then
you
deploy
entire
application
in
this
one
or
two
weeks
so
with
devops
and
with
all
of
these
practices,
we
are
speed
up
all
of
these
deployments
in
just
a
few
weeks.
You
know,
for
example,
if
we
deploy
software
a
few
years
ago
in
every
two.
H
K
K
Okay,
as
devops,
we
need
source
code,
we
need
some
kind
of
pipeline.
We
need
production,
maybe
there's
the
cloud
or
it's
just
on
premise,
but
we
have
some
kind
of
environments.
Then
we
can
start
with
creating
pipelines
and
everything.
But
besides
that
developers
are
starting
to
writing
the
code
committing
that
code
to
some
code
repo
and
then
there
is
the
this
part
of
devops,
where
we
are
just
jumping
in
taking
taking
the
code,
build
this.
This
code
deploy
this
code
to
qa,
environment
or
testing
environment
and
then
luckily
deploy
to
production.
K
This is, let's say, the best version of DevOps, but in general it's not like that. When we, let's say, build this code, the QA team should already have prepared manual tests or automated tests, whatever they have: integration tests and also performance tests. And besides that, as DevOps, we should already have prepared some kind of, let's say, infrastructure testing; you know, performance testing, load tests would also be good to include in this kind of phase.
K
After
that
we
deploy
our
code
to
production
and
then
last
but
not
least,
we
create
monitoring
solution
for
that
we
implement
monitoring,
also
not
only
on
infrastructure
layer.
You know, usually when we talk about monitoring, we just think about collecting logs from infrastructure, but I'd say that we need to, and must, collect logs also from the application, so we can see what this application is doing there. And maybe, based on that kind of logs, we can also maintain our infrastructure.
K
E
J
K
As I already mentioned, we have a few phases: writing the code, building this code into some artifacts or Docker images, whatever; deploying this code to some kind of testing environment; then testing the application. After that, briefing the security teams, you know, talking with them about what we have prepared and what we deployed; after that the security team goes to penetration testing and we get security reports.
This is,
E
K
let's say, the old-fashioned way, because, as you see, you lose a lot of time just between these two steps: briefing the security teams and penetration testing. Instead of that, you can include security in your pipeline; you know, a lot of these things can already be done through the pipeline.
K
So
instead
of
waiting
for
security
team
to
make
penetration
testing
to
create
security
reports,
for
example,
it
can
takes
a
week
to
be
honest
because
I
work
in
that
kind
of
environment-
and
I
know
that,
for
this
kind
of,
let's
say,
security
of
your
software-
you
need
two
weeks.
You
know
briefly
security
teams,
after
that
they
create
this
penetration
testing.
K
Then
security
report,
then,
when
you
get
security
report
they
show
you,
okay,
you
have
issue,
I
don't
know
with
this
list
of
problems,
you
fix
that
problems
then
again
write
the
code
fix
that
problems
build
it
deployed,
test
it
and
then
again
briefly:
security
team.
I
mean
it's
the
life
cycle,
but
it
takes
a
week
for
this
kind
of,
let's
say
securing
of
your
application.
K
So, DevSecOps. I mean, today we see a lot of these: DevSecOps, DevBizOps and so on; there's a bunch of different words between dev and ops,
so
in
general
my
personal
opinion
that
in
every
phase
of
software
development
we
should
include
this
security
part.
I
mean
it's
pretty
hard.
If
you
have
only
a
few
guys
that
work
in
security
and
then
you
try
to
secure
your
application
or
your
platform
because
in
general
there
is
not
always
there
is
not
always
the
threat
outside
of
of
organization.
K
Of
course
we
have
denial
of
service
daily,
let's
say
those
attacks.
We
have
different
of
types
of
attacks
to
our
application,
but
in
general
or
application
should
be
secured
from
early
phase
of
development.
So
that
means
that
even
developers
should
be
included
and
involved
in
security
process
of
securing
application.
K
K
I mean, as DevOps you will maybe never be the security expert like someone who spent 10 or 20 years in security, but you can learn, let's say, the basics of security, so you can implement it in your, let's say, daily tasks.
So,
as
I
already
mentioned,
integrate
security
in
every
phase
of
development
from
developers
to
qa
testers.
Even, in my personal opinion, project managers
K
should be involved in this security phase, so they can decide:
Okay, maybe we have a new security feature, or there is a new security tool on the market, so why not try it? So also,
we
should
be
aware
of
security
risks
daily.
We
had,
I
don't
know
thousands
of
different
risks,
especially
in
this
I.t
world.
I
mean
in
cyber
security,
there's
a
bunch
of
different
risks
that
just
pop
up
every
day,
especially
during
this
corona
time,
where
a
lot
of
companies
just
go
totally
remote,
and
that
was
the
really
good
playground
for
hackers.
K
H
K
Maybe
it
will
be
good
to
have
this
kind
of
option
to
create
blue
and
red
red
teams
inside
the
organization
inside
devops
teams.
Even
so
even
devops
can,
let's
say
in
one
point
of
time
or
in
one
point
of
software
delivery.
It
can
be,
let's
say
hacker,
so
he
can
try
to
hack
application
or
infrastructure
in
general.
K
H
K
Single
part
of
security
can
be,
let's
say,
learning
all.
Also,
every
single
part
of
every
single
person
inside
your
devops
team
or
every
single
devops
should
be.
Let's
say
aware
of
this
security,
let's
say
challenges
and
he
should
have
let's
say
some
kind
of
daily
tasks
to
secure
an
environment
where
he
working.
K
So what to do? Today we can implement source code analysis; there are real-time static code analyzers that work while you're writing your code, already checking for security issues inside your code. Pre-commit checks may be good to include in existing pipelines or in new pipelines, and security in the build stage.
K
This
is
also
important
because
the
new
code
is
built
in
most
of
case.
It
is
that
kind
of
application
should
store
artifacts
in
one
part
or
in
some
server.
You
know
for
at
least
few
minutes,
or
maybe
even
longer
so
there's
a
great
playground
for
hackers
to
take
a
look.
What
are
you
building
or
maybe
do
sniff
configuration
files
inside
this
test
phase?
K
There's,
I
think,
a
bunch
of
different
tool
that
you
can
use
and
I
think
that's
the
phase
where
you
can
include
as
much
as
you
want
this
kind
of
security
testing
or
testing
in
general.
So, as I mentioned, you can always include SAST and DAST, hardening checks, also fuzz tests. If I were a tester, I would also like to include testing of the infrastructure here.
You
know
stress
test
and
everything.
K
Deployments
and
scripts
configuration.
This
is
one
part
where
I
would
like
to
stop,
because
I
see
that
a
lot
of
people
make
a
issue
here,
why,
in
general,
if
developers,
trying
to
create
something,
usually
they
are
presented
by
a
project
manager.
So
they
need
to
create
this
kind
of
scripts
as
soon
and
as
quick
as
possible.
K
K
When we have secured everything from the source code to the pipeline, we should also secure the infrastructure. As DevOps, you are deploying your application to some kind of infrastructure, and you need a really good and secure environment so your application will work, let's say, as well as possible.
K
All of this should really be included from the monitoring phase onward. As I already mentioned, the monitoring phase is really important.
K
So let's talk about security in the pipeline. There's a bunch of different tools that can already be used here; if you use some cloud-based pipelines, most of these tools are already included. There is data protection, logging and, I think I already talked about that, IAM access, infrastructure security, compliance validation. All of this should be included inside every single pipeline.
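A toy version of such a compliance-validation gate, assuming hypothetical stage names rather than any real CI system's vocabulary, could simply refuse a pipeline definition that lacks the required security stages:

```python
# Required security stages; these names are assumptions for the sketch,
# not from any real CI system.
REQUIRED_STAGES = {"sast", "dependency-scan", "secret-scan"}

def missing_stages(pipeline_stages: list[str]) -> set[str]:
    """Return the required security stages absent from this pipeline."""
    return REQUIRED_STAGES - {s.lower() for s in pipeline_stages}

if __name__ == "__main__":
    stages = ["build", "sast", "unit-tests", "deploy"]
    print(sorted(missing_stages(stages)))
```

In practice this kind of check runs as its own early pipeline step and fails the build when the returned set is non-empty.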
K
If you have all of this in place, you're on a good way to having, let's say, a 99% secured system. Security of the pipeline is also one really important point.
K
I see that some companies and some teams share this kind of pipeline between multiple clients and multiple projects, and that's not a good idea, because if, let's say, 20 people have access to the build server, they can see everything, and everyone has a root user. So that's the issue, and I will show you what the real issue is.
K
I've noticed this: as you see, you can see the database connection string, username and password. And, to be honest, when I tried to log in as this user, I got full access, I mean admin access, to this database, and I could see everything there. So I think that was the real issue.
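A minimal sketch of keeping such credentials out of deployment scripts, assuming hypothetical environment variable names and connection-string format, is to read them from the environment, injected by the CI system or a secret manager, instead of hard-coding them:

```python
import os

def build_connection_string() -> str:
    """Assemble a DB connection string from environment variables
    so no credentials ever appear in the script itself."""
    user = os.environ["DB_USER"]
    password = os.environ["DB_PASSWORD"]
    host = os.environ.get("DB_HOST", "localhost")
    return f"postgresql://{user}:{password}@{host}/app"

if __name__ == "__main__":
    # Example-only values; in a real pipeline these come from the
    # CI secret store, never from the repository.
    os.environ.setdefault("DB_USER", "app")
    os.environ.setdefault("DB_PASSWORD", "example-only")
    print(build_connection_string())
```

The script then fails loudly (with a `KeyError`) when a secret is missing, rather than silently shipping a baked-in password.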
K
Security is the process, not the final state, so we should think about this in every single phase. Include security in every phase of development and, let's say, learn something new every day about security and about securing your applications. And that's it from my side. Thank you for your attention, and stay safe. If you have any questions, I will be on Slack. So, thank you also.
A
Thank you very much for this exciting and very interesting talk. One question from myself: I come from the SCA side of things, so what we do is a lot of open source security, scanning your dependencies for any vulnerabilities in there. How often do you suggest people should, you know, check their source code and their applications for security? Just at the time of release, or continuously throughout the lifecycle?
K
To be honest, I would like to include that continuously, you know, if you have enough time to check every day, because there's a bunch of small issues that we don't even notice until they become a big issue. So from my point of view, it should be every single day, or maybe even every single day plus every single release.
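Such a daily check can be sketched as below, with a hypothetical hard-coded list of known-vulnerable releases standing in for the advisory database a real SCA tool would query:

```python
# Hypothetical advisory data: (package, version) pairs known to be bad.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"),
    ("otherlib", "0.9.1"),
}

def vulnerable_pins(requirements: dict[str, str]) -> list[str]:
    """Return 'name==version' strings that match a known advisory."""
    return [
        f"{name}=={version}"
        for name, version in sorted(requirements.items())
        if (name, version) in KNOWN_VULNERABLE
    ]

if __name__ == "__main__":
    pins = {"examplelib": "1.2.0", "safe": "2.0.0"}
    print(vulnerable_pins(pins))
```

Scheduled nightly plus on every release, the job fails whenever the returned list is non-empty, surfacing the "small issues" before they grow.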
A
Yeah, but there you encounter another issue that you just mentioned: whether you have the time for it. Because oftentimes, when we have, you know, 100 software developers, there's one IT security person responsible for the whole lifecycle. So what do you think is the solution to that problem? Do you think that developers should carry more responsibility for security?
K
Yeah, definitely. As I already mentioned, every single person on the team should be, let's say, developer plus security. You know, we can always have dedicated security teams, but even developers should be aware of security risks and have some knowledge about that, especially senior developers. Usually they, let's say, maintain the code and review the pull requests, so they should be aware of security risks and of how they can secure their application efficiently. Even QA.
A
Awesome, so it's really about flexibility, people taking on more responsibility for security across the lifecycle. And then, from a security point of view, I guess it's also about communicating the expectations and secure coding practices from very, very early on in the software development lifecycle, yeah.
A
Awesome. Well, Moza, thank you very much for your time. I've really enjoyed the presentation, and I'm sure everyone else has. If there are any more questions for you, people will come to the CI/CD Slack channel. We are going to go into a short half-an-hour break. There is going to be a keynote at the end of the break, at 1 pm UK time, by Derek Weeks, and it's the Call for Code with the Linux Foundation, so I'm very excited to be joining that from my side.