From YouTube: London OpenShift Commons Gathering 2019
Lightning Talk: Dynatrace
Software Intelligence for the Enterprise Cloud
Thanks. So, I'm here from Dynatrace; I basically run what we call Sales Engineering at Dynatrace for the UK. What that means, from my perspective, is the technical side of sales: going in, deploying Dynatrace, helping you get used to it and understand how it's performing. I go into customers using OpenShift on a regular basis and get Dynatrace in there, and we actually have customers here today who are using Dynatrace on OpenShift.
Now, we have been around since about 2006, but in 2012 our CTO decided he was going to reinvent Dynatrace and change the way it works. He took 200 of our R&D people to basically go and reinvent the company. That included the product and the platform, but also the way we did things from a development point of view: how were we going to become more agile and go down more of that DevOps route, to be able to keep up with containerization and keep pace with cloud and multi-cloud environments? So we ended up building a completely new platform, starting in 2012, beginning to bring it out in 2014 and then in 2016.
But in short, it is a software intelligence and monitoring solution. We're getting everything from code-level transaction tracing; we're getting the database monitoring; we're getting the container monitoring and the Kubernetes node monitoring; but we're also getting the monitoring of things like your mainframes and your monolithic applications as well. We're also looking into things like cloud monitoring, virtualization monitoring, network monitoring and log monitoring, and the digital experience that's going on as well: your real users, every button click they make, every page they load.
Even in SaaS environments: even if you're talking about your Office 365s and your Salesforces and everything else, we're still getting that real-user-monitoring perspective. And again, bringing in synthetics, robots looking at that clean-room environment, to understand the performance of your application. Recently we've also brought in session replay, where you actually see a video of what your user is doing and where their mouse is going.
What are they clicking on? Are they having issues because of a design issue rather than an actual performance issue? And then all of this is wrapped around with an artificial intelligence engine to automatically go and find you those problems, automatically understand what is there, and even take data from external sources and feed that in with context to provide that insight. So, as I said, that brings it all into a single solution with AI at its core: it's not separate modules, it's not separate tools; it is all together, all understood.
You get everything out of the box. Now, the AI side of things and the automation side of things all really depend on the quality of the data you get. So what we're doing is trying to get the highest-fidelity data possible. We are capturing every single transaction that takes place, down to method-level data. We are capturing every single user, every single action they take, everything that goes on.
We are trying to understand every event that takes place, whether that's new deployments, restarts, whatever is going on, and we are mapping this end to end: truly understanding what is going on as part of those transactions. What exactly is going on? What are you doing? What services is it talking to? What containers is it talking to? Which other external services?
Whether that's third parties or your own. But we also understand the topology of that: not just the container and what's running within the container, but also the actual underlying infrastructure that it relies on. And, as I said, this all goes into that deterministic AI: not just time-based correlation, but truly understanding what the impact of this is, what the cause of this is, and how many users are impacted by this issue.
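The contrast between time-based correlation and topology-aware analysis can be illustrated with a toy sketch. The service graph, health data, and function are invented for illustration; this is not Dynatrace's actual algorithm:

```python
"""Toy contrast: when three services alert at once, time-based
correlation flags all of them, while following the call topology
downward points at a single likely cause."""

# Hypothetical dependency topology: service -> services it calls.
CALLS = {
    "frontend": ["checkout", "search"],
    "checkout": ["payments"],
    "search": [],
    "payments": [],
}

# All three alert at the same time in this invented scenario.
UNHEALTHY = {"frontend", "checkout", "payments"}

def root_causes(service: str) -> set:
    """Walk the call graph downward; unhealthy services whose own
    dependencies are all healthy are the likely root causes."""
    unhealthy_deps = [d for d in CALLS[service] if d in UNHEALTHY]
    if not unhealthy_deps:
        return {service} if service in UNHEALTHY else set()
    causes = set()
    for dep in unhealthy_deps:
        causes |= root_causes(dep)
    return causes

# Time-based correlation would page for frontend, checkout and
# payments; the topology walk narrows it to one service.
print(root_causes("frontend"))  # {'payments'}
```

The point is the shape of the reasoning, not the specific graph: the dependency model lets an engine report one cause and treat the other alerts as downstream impact.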
Basically, we're trying to get rid of those alert storms you often get with many of the traditional monitoring tools out there, where something goes wrong and you get a flood of 100 emails coming in. This brings it all together to say: this is your problem, this is your business impact, this is your cause, and this is what you need to go about and do.
It starts with freeing up resources. That may start with things like tool consolidation, because we have a lot of customers, especially the larger enterprises, where you have 50, 60, 70 tools out there all doing different forms of monitoring, and we can help clean that up, narrow it down, and understand exactly what's going on. So you can remove resources not just in terms of the services you've got supporting it, but also in terms of the people you've got maintaining it.
Then there's your continuous delivery pipeline: basically, again, having Dynatrace throughout that, so that the question is not just "did my unit or integration tests pass?" but "has the performance got worse because of this change? Have there been ten more database statements? What exactly is going on?" The real beauty around that is that it means you can understand it far earlier on, and you can prevent these issues and make these changes far earlier, because you know when something takes an extra ten database statements.
In reality it's going to perform absolutely fine, but when you scale that up and put it through load, suddenly, if you've got ten times the amount of database statements, your database is probably going to fall over. So it's about giving you that understanding and being able to break that pipeline as soon as possible, so that the change can go back round and start again without you having to get all the way to production before you realize that something's wrong.
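A pipeline quality gate of the kind described here can be sketched in a few lines: fail the build when a key signal, such as database statements per transaction, regresses against the previous build. The metric names, threshold, and function are hypothetical illustrations, not Dynatrace's actual API:

```python
"""Sketch of a CI quality gate: compare the current build's metrics
against a baseline and report violations that should break the pipe."""

def quality_gate(baseline: dict, current: dict, max_increase: float = 0.1) -> list:
    """Return a list of violation messages; an empty list means the gate passes."""
    violations = []
    for metric, base_value in baseline.items():
        cur_value = current.get(metric)
        if cur_value is None:
            continue  # metric not reported for this build
        if base_value > 0 and (cur_value - base_value) / base_value > max_increase:
            violations.append(
                f"{metric}: {base_value} -> {cur_value} "
                f"(> {max_increase:.0%} increase)"
            )
    return violations

# Example: ten extra database statements slip into a change. The
# response time barely moves, but the statement count explodes.
baseline = {"db_statements_per_txn": 5, "response_time_ms": 120}
current = {"db_statements_per_txn": 15, "response_time_ms": 125}

problems = quality_gate(baseline, current)
if problems:
    print("Breaking the pipeline:")
    for p in problems:
        print(" -", p)
```

In a real pipeline the returned violations would fail the build step, which is exactly the "break that pipeline as soon as possible" idea: the regression is caught on a small test load, before production traffic multiplies it.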
And then, continuing on to changing culture and that sort of enterprise automation side as well: this is where things become key. Providing you with that business impact and that understanding, and then being able to hook it up to things like Ansible Tower to do the automation, the self-healing and remediation that takes place. It means that when an issue happens, say at 2:00 a.m., do you get somebody up to actually go and look at what's going on and understand it, or do you try to get the automation to self-remediate that issue first? Fair enough, you may not have the ability to self-remediate everything, but at least the low-hanging fruit, in many ways, you will be able to get rid of that way.
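The 2 a.m. decision described here can be sketched as a simple dispatch: try an automated playbook for known, low-risk problem types first, and only page a human when no playbook applies or remediation fails. The problem types, playbook names, and function are hypothetical illustrations:

```python
"""Sketch of remediation-first incident handling: automate the
low-hanging fruit, escalate everything else to a human."""

# Problem types we trust automation to handle (the low-hanging fruit),
# mapped to remediation playbooks (e.g. Ansible Tower job templates).
PLAYBOOKS = {
    "disk_full": "cleanup-temp-files",
    "service_down": "restart-service",
}

def handle_problem(problem_type: str, run_playbook, page_human) -> str:
    """Attempt self-remediation; fall back to paging the on-call."""
    playbook = PLAYBOOKS.get(problem_type)
    if playbook is None:
        page_human(problem_type)      # no safe automation: wake somebody up
        return "paged"
    if run_playbook(playbook):        # e.g. launch a Tower job and wait for it
        return "self-remediated"
    page_human(problem_type)          # automation failed: escalate
    return "paged"

# Example usage with stubbed-out integrations.
result = handle_problem(
    "service_down",
    run_playbook=lambda name: True,
    page_human=lambda p: print("paging on-call for", p),
)
print(result)  # self-remediated
```

In practice `run_playbook` would call the automation platform's API and `page_human` the alerting system; the structure stays the same: automation gets the first attempt, humans get the residue.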
And again, it helps bring that change of culture, because it's giving you that information democracy: you are getting visibility from your infrastructure all the way up to your real users and the business side of things. So, we're over at table 4, which is just on the right-hand side, and happy to give you guys a demo. But basically, we do have over 550 developers and we are releasing every two weeks.
So we have 26 releases a year, and it is major functionality that comes out each time. We are trying to push development forward with changes as quickly as possible, but also as securely and as stably as possible. And as it says there, we have been in things like the Gartner Magic Quadrant for the last eight years, and we are the number-one market-share leader. So come and see us. Thank you.