Description
Most of the challenge of digital transformation comes down to people and culture. But in the midst of all that, your technology choices can either make things easier or harder. Applications are at the heart of this transformation and performance is a must-have. Yet with so many stakeholders involved in designing, building, and running these applications—each with their own perspectives and data—is it any wonder transformation can sometimes feel like a slog? Join us as we discuss the AIOps must-haves for getting transformation done...or at least off to the Day 2 races because it’s never really done, is it?
A
Yes, yes, it will, and we can take the front slide down. If you want — perfect. Yeah, I really enjoyed doing our dry run the other day. I mean, Turbonomic is probably a company that I have worked very closely with over the last—
B
First, thank you — that is incredible, and yes, it is. I wholeheartedly agree. I say I'm living the dream, because my job is my hobby and my hobby is my job. So it's kind of a beautiful pairing.
A
And so, Turbonomic — let's get the table stakes taken care of here. What is Turbonomic?
A
Yeah — no, honestly, you know, my team works with a lot of different software companies that test and certify their products on OpenShift, Red Hat Enterprise Linux, and Ansible, and I would have to say that Turbonomic is hands down one of the coolest, most fun companies to work with. We have done various scavenger hunts — remember, Eric, before these challenging times, when we actually used to be able to travel around the world and go to really interesting places?
B
It
was
it's
hard
to
imagine
I've
I've,
like
literally
for
the
last
year,
when
I
see
a
commercial
with
people
in
a
crowd.
You
start
to
get
a
little
skeeved
out
and
like
I'm
like
it
feels
like
we're
we're
going
back
but
yeah.
I
remembered
like
standing
on
top
of
a
booth,
literally
with
a
millennium
falcon,
a
lego
millennium,
falcon
over
my
head
with
about
500
people
around
it.
Yelling
out,
when
I
say
turbo,
you
say
nomik
and.
A
I would actually highly doubt anyone's going to do logoed masks. I know when the pandemic was first getting going, there were people inside Red Hat asking corporate, "Hey, other companies are making logoed masks, we want some too" — and we tended to stay away from that, and I don't think we're going to see any of it. What do you think — KubeCon Los Angeles in October, right?
B
LA, yeah. And I liked the questionnaire for the call for papers — I submitted one as a community presenter as well, and we've got a Turbonomic thing submitted. It asked, "If given the opportunity, would you be there in person?" So I predict that we'll have a bit of a hybrid phase for the next few months, but yeah — KubeCon LA, that'd be kind of cool.
A
Yeah, so it's Turbonomic — but it hasn't always been Turbonomic. Look, when I first started working with you and the other people at your company, it was VMTurbo.
B
I guess I'd be considered OG, because I was pre-rename and pre-rebrand — so yeah, we were VMTurbo. It was almost 12 years ago now that the company was founded. I've been with the company for seven years, which is in itself probably longer than most people would imagine surviving at a startup. I guess we can't even call ourselves a startup anymore — we have 700 people and pretty significant revenue, and of course, as anybody that reads the news knows, we've just announced the acquisition by IBM. Lots of craziness in the ecosystem over my career, for sure — but yeah, VMTurbo, that was old school.
A
You know, you spend millions of dollars on legal reviews, brand reviews, and consulting agencies, and you end up coming up with another logo that's a slightly different shade of red — but you folks changed the entire name of the company and were able to survive that. What was that like, and why? Was VMTurbo a bad name? Were people saying, "Oh, we don't like the founders for giving us VMTurbo, we need to change it"? Why was that?
B
Yeah, that's actually one of the core questions: why would we choose to rename? The first thing is that the VM in VMTurbo wasn't even "virtual machine" — it was "virtualization management." Our platform was built to manage virtual resources using economic principles. So virtualization management was the VM.
B
But of course, a little company started by someone named Diane Greene — you may have heard of them, called VMware — had obviously kind of co-opted VM as the "virtual machine" name. Technically it's virtual management there as well, but people started to say, "Oh, VMware Turbo," and you started to wince a little when you heard it, because we had become synonymous — we did so much in the VMware ecosystem. But at the same time we were also doing stuff with Citrix and with Microsoft, then early stuff with Docker, which ultimately became what we're doing now with OpenShift and the Kubernetes ecosystem, and public cloud and IBM Cloud.
B
And so we said, as a company, we need to rebrand to better represent what we do, and Turbonomic became the combination of "turbo" and economic principles. I'd love to say there's a KISS-style naming story — three guys in the back of a limo who just said, "We'll call it KISS," and everybody said yeah.
B
Yeah, I won't even get into the legal team having to deal with patents and trademarks and all of that. And you want to launch it like a stealth thing, right? So you try to do it as covertly as you can, but it's hard when everything has to be legally registered, and that takes time.
B
So we were actually deciding, in the middle of that particular year, to rename the company, and we were already booking our presence at VMworld, which was going to be the largest one ever — the diamond booth was huge. You've got to submit all your marketing materials months in advance, and we're like, we're going to change the name of the company, but we can't show our hand just yet, because it's going out into their materials. So we show up at the event — and literally about a week before VMworld that year was when we did the rename. It was fun, but a lot of people were looking around at the booth going, "You guys used to be VMTurbo, right? Are you joking?" It was news to so many people, and all of a sudden we're there with 24,000 of our closest friends, trying to explain the story. It was fun, but it was not a simple task, for sure.
B
Yeah — headquarters is 500 Boylston, and we have a presence in White Plains; that's where our engineering office is. We've got some folks in Tel Aviv — we have an engineering team out there. I am the roaming, weird Canadian guy that goes back and forth. I live in New Jersey, so I'm the New Jersey office, I guess, as it were.
B
Yeah, so the Turbonomic product was originally called Operations Manager, and then we chose — ARM is what we actually call it, Application Resource Management — but the most common thing, if you look at it, is of course that it just says Turbonomic in the logo. And that's also an interesting problem, right? It seems like a very simple thing: we have one platform.
B
So now we had, you know, ParkMyCloud, "a Turbonomic company," because it was really hard to rebrand them quickly. And then, about a year and a half ago now, we bought a company called SevOne — network performance monitoring, a really amazing team, an amazing firm, lots of really cool stuff. So all of a sudden we had a couple hundred new staff members, and now — what do we call it all?
A
From my perspective, there are tons of companies out there that, in the world according to Mike Waite, seem to do the same thing. Is application resource management the same as APM — performance monitoring? There are companies like AppDynamics that do something similar, and Instana — IBM actually just bought Instana. So are those overlapping technologies? Does everyone do a little bit of the same thing, or how are you folks different from all the others?
B
Yeah, it's a great question. We say that the M in almost all of those descriptions — whether it's NPM or APM — generally stood for monitoring. Our Turbonomic story was that we are an automation platform and an application resource management platform. So we manage the resources.
B
Monitoring is a side effect — we have to monitor to do it. So for the APMs of the world: obviously, if you look at Instana, they're solving the observability challenge on Kubernetes specifically, and then expanding even further. AppD's primary audience is developers, so they're like, "Hey, Mike and I are building an app, and this app is going to do amazing conferencing, so I'm going to build a tool that just monitors it." But I have 300 other business applications that run me, Mike, and my business, and I'm not going to instrument those with APM.
B
You
know
number
one.
So
we've
got
this
beautiful
area
where
there's
all
these
other
non-monitored
resources
that
we
can
bring
better
performance
to.
But
on
top
of
that,
the
difference
between
arm
and
apm
is
that
apm
says:
hey
mike
looks
like
we
bumped
into
the
threshold
on
this
thing
or
looks
like
we've
got.
You
know
your
your
slos
are
are
going
a
little
weird
on.
You
know
your
your
your
mate,
your
banking
app.
B
Well, now it's Mike's fault, right? You can hand it to the team and say, "Hey folks, look over here — Instana says we're in trouble." And that's super important; operationally, you need that. What it doesn't do, though, is move toward the problem solving — which is where we come in. We actually allocate, assign, and reallocate resources in order to prevent that problem from occurring, and we use monitoring and all that instrumentation to drive that decision through our analytics platform.
B
I am a nerd at heart — as you can tell by the frown lines and the bags under my eyes. I've done 20-plus years in data center operations and management. I worked for Raymond James Financial and Sun Life before this, for a couple of decades — one decade each; I did a couple of tours of duty, so to speak. So yeah, I actually came out of that space.
A
Yeah, there was a reason I wanted to ask that. We've done a lot of these TV shows, and generally someone offers up, you know, a product marketing manager, and they come on and say, "Let me tell you about the features of my product" — but they're really not super interesting. They don't have a $900 microphone in front of them.
A
All right, well, where I was going was: we are streaming live on Twitch, we are streaming live on YouTube, we are streaming live on Facebook, and I want to offer — so people can ask questions. They can put them into the chat on any of those platforms, and our bots will magically pick them up and transfer any questions over here into the chat.
A
So
I'm
going
to
offer
up
a
200
amazon
gift
card
for
the
first
person
who
can
stump
eric
wright
and
have
him
basically
look
uncomfortable
on
on
live
tv.
So
that's
an
offer.
Basically,
of
course
it
can't
be
anything
about
calculus
or
I
don't
know,
you'd
probably
do
pretty
well
with
that
as
well.
I
would.
B
Imagine
I
I'm
a
little
late
in
my
calculus
lately.
I
I'm
good
with
spinner
theory
and
some
stuff
on
advanced
physics,
but
I
said
my
background
to
getting
in
technology
is.
I
was
a
shoe
repair
man.
I
was
a
cobbler
and
a
landscaper,
so
it's
a
like.
I
said
natural.
Fluoride
is
technology,
I'll
use
that
as
my
out,
if
I
can't
answer
a
question,
it's
because
I'm
a
cobbler,
but
if
I
can,
then
I
got
lucky.
A
So
cobbler
landscaper
you
basically
pushing
a
mower
and
running
over
in
a
weed
whacker.
That's
that's
pretty!
That's
pretty
cool
20
years
in
the
in
the
data
center,
and
now
here
you
are
tournament.
A
So
what
do
you
got
like?
Can
you
show
us
something
about
your
technology
and,
and
you
know
how
it
works
and.
A
B
Underneath it all, we're all just YAML operators — that's really what it is. Well, the funny thing is, when we show this — I'm used to doing a lot of analyst demos. I use a platform, and I've got people I can see on the chat with us who are on this. I stand beside amazing people that build this technology, and it's been great to learn with them.
B
How
we
can
do
this
so,
like
I
said
I,
I
ran
the
stuff
that
these
circles
are
made
of
virtualization
platforms,
application
building,
help
to
do
devops,
implementations
in
you
know
before
it
was
called
devops
as
a
shout
out
to
chris
short,
you
know
I,
like
the
devops
ish.
I
I
always
think
of.
I
was
always
a
bit
devops-ish
in
the
way
that
I
was
able
to
work
and
when
we
get
to
you
know
what
does
our
platform
do?
B
A
How would people know when they need to use Turbonomic? You just basically started by saying — all right, so when do people know they want to use Turbonomic? Do they need this? Can't people roll their own, or do the same type of functionality with in-house technologies?
B
They
can
I'm
going
to
hire
them
as
engineers,
because
it's
a
an
intractable
problem
in
effect
right.
We
we
can.
We
can
get
fairly
good
at
certain
things,
but
when
you
talk
about
the
complexity
of
like
any
scale,
is
pretty
difficult
and
the
reason
especially
is
like
just
at
the
virtual
machine
layer
just
at
the
kubernetes
layer
right,
there's
enough
difficulty
that
it
spawned
all
these
other
different
startups
to
solve
specific
problems
across
within
them.
B
And
so
when
I,
when
you
say,
like
who's
who's,
a
great
you
know,
consumer
for
turbo,
every
environment
that
I
deploy
into.
We
can
generally
see
about
30
improvement
in
performance,
and
we
can
do
it
on
about
20
to
50
percent.
Less
infrastructure,
which
is
kind
of
weird,
so
that
whole
thing
of
like
doing
more
with
less,
doesn't
mean
that
you
can
like
throw
away
your
hosts
necessarily
because
that's
hard
to
do.
B
What if I could tell you that I could give you just the resources you need, without having to overprovision? The net result is that it's a lot cheaper — but the real question is, why did we overprovision? Because we had to guess, right? The top capacity management tool in the industry is Microsoft Excel. People just kind of ballpark it — they lick the air, they check the wind, and they say, "Okay, I'm building a SharePoint farm."
B
So that's my measurement of how I guess what it is — but it doesn't account for shared resources, and it certainly doesn't account for the cloud. So Elon from Datadog is 100 percent correct that people are guessing. And if you're going to guess — ask yourself this one, Mike — which side are you going to guess on? You're definitely going to go over.
B
And the funny thing is, we've generally accepted that — we call it the cost of doing business. My friend Randy Bias says the cloud is cheaper as long as you're willing to pay more. The real benefit of cloud was that you could suddenly put an application up without having to buy servers and wait four months. I can immediately spin it up, and I can use adjacent services — platform-as-a-service and SaaS stuff.
B
Like
that's
fantastic,
that's
the
real
benefit
of
cloud,
but
the
dominant
number
of
resources.
Today
in
cloud
I
think
aws
even
alluded
to
a
number.
I
I
I
won't
quote
it,
but
it
says
it
was
like
it
was
a
high
percentage
of
their
real
resources
and
revenues
still
coming
from
ec2
like
old
school
virtual
machines
running
on
the
cloud
and
those
are
going
to
be
run
using
the
traditional.
B
I
t,
ops
pattern
of
I've
got
a
virtual
machine
here,
I'm
going
to
give
it
four
gigs
of
ram
and
eight
cpus,
because
I
think
of
you
know
back
in
the
day.
That's
how
I
used
to
buy
my
servers
but
meanwhile
like
when
we
look
at
openshift
when
I'm
looking
at
my
openshift
environment,
I'm
picking
my
app
like
I'm,
not
thinking
about
number
of
cpus,
I'm
talking
about
millicor's.
B
Now,
I'm
talking
about
megabytes
of
ram
kilobytes
of
ram
for
a
process,
so
I
can't
just
think
of
2
4
8
12
like
which
is
the
itops
sort
of
methodology.
Let's
count
like
we're,
we
live
in
binary
and
we'll
think
in
gigabytes
and
terabytes
and
then
we're
going
to
guess
it's
a
pretty
fantastic.
You
know
formula
for
error.
If
you
really
think
about
it,.
B
So I'll just quickly say — the real reason why we do all this. When I look at this, what is it? It's an application. What is that virtual machine running on AWS? It's an application running on a virtual machine. What's my VMware environment? It's a bunch of applications running on virtual machines on physical servers.
B
And
then
we
understand
the
relationships,
we're
building
these
relationships
so
that
I
know
that
my
mobile
banking
application,
in
this
case
it's
actually
instrumented
by
appd.
You
know
we
talked
about
that
before,
so
I'm
able
to
see
now
the
application
itself,
all
the
components
that
it's
made
up
of
the
real
true
application
response
time,
because
this
is
being
instrumented
right
from
an
apm
partner,
we're
doing
this
within
stana
with
dynatrace
with
new
relic
and
others.
We
can
pull
it
from
prometheus,
which
is
amazing.
B
We
got
a
lot
more
people
that
are
diving
in
from
atheists,
and
so
we're
able
to
see
at
each
layer
of
the
stack
what
the
different
issues
are.
Potentially
that
can
be
relieved
before
they
occur.
That's
the
real
goal.
It's
not
just
like
wait
till
it
goes
wrong
and
then
you
know
tell
mike
hey
mike:
you
need
to
reboot
the
server
or
you
need
to
add
heap
because
it
broke.
B
But
apm
will
only
know
this
host.
It's
like.
I
don't,
like
my
optics,
sounds
a
bit
negative,
but
it's
very
single
focused
in
that
it's
saying
what
does
this
application
need
right
now,
without
an
understanding
of
the
adjacent
effects?
Sorry
mike
go
ahead.
A
So
what
is
it
magic,
meaning
like
I'm,
I'm
pretty
familiar
with
things
like
instanta,
where
you
install
agents
on
a
on
a
server
host
whatever
and
it
and
it
you
know.
Basically,
phones,
home
and
radios
in
and
you
know,
provides
you
know,
information
and
stuff,
but
I
think
I
heard
you
said
that
that
you
don't
use
agents
for
this.
It.
A
Hey, that would actually qualify as me stumping Eric Wright — because when we did the dry run, we spent probably 10 minutes talking about why you were fumbling around with your card deck, and it was actually a pretty cool story. Tell me about how you do this without agents, because I don't understand.
B
But
you
know
you
just
you
actually
answered
in
a
way
yourself
right.
So
instanta
has
agents,
appd
has
agents,
steiner,
traces
agents,
so
if
they
are
already
gathering
all
this
data,
I
only
need
to
talk
to
appd
and
dynatrace
and
instanta,
because
they're
already
gathering
this
so
then
I
just
need
to
consume
their
instrumentation,
their
analytic,
their
data
and
then
use
my
analytics
engine.
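The agentless pattern Eric describes — consuming data a monitoring stack already collects rather than deploying another agent — can be pictured in miniature. The payload below imitates the documented shape of a Prometheus instant-query response; the pod names and values are invented for illustration:

```python
# Sketch: consuming metrics an existing monitoring system already gathers,
# instead of shipping our own agent. The sample payload mimics the Prometheus
# HTTP API instant-query response format; the values are made up.

sample_response = {
    "status": "success",
    "data": {
        "resultType": "vector",
        "result": [
            {"metric": {"pod": "banking-app-1"}, "value": [1620000000, "0.42"]},
            {"metric": {"pod": "banking-app-2"}, "value": [1620000000, "0.87"]},
        ],
    },
}

def cpu_by_pod(response: dict) -> dict:
    """Extract pod -> CPU usage (cores) from a Prometheus-style vector result."""
    if response.get("status") != "success":
        raise ValueError("query failed")
    return {
        item["metric"]["pod"]: float(item["value"][1])
        for item in response["data"]["result"]
    }

print(cpu_by_pod(sample_response))
# {'banking-app-1': 0.42, 'banking-app-2': 0.87}
```

In a live setup the dictionary would come from an HTTP call to the monitoring API; the point is that the analytics layer only reads what is already there.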
B
So if we look at the cloud layer: on the private cloud side we've got traditional private cloud stuff, and at the hypervisor layer, the major hypervisor providers. This is a demo system, so you don't actually get to see Red Hat Enterprise Virtualization or some of the others, but all of the different players are available.
B
We
see
down
underneath
the
physical
infrastructure
stuff,
that's
abstracted
away
like
converged
and
hyper
converged.
So
what
we
do
is
each
one
of
those
layers
has
its
own
management
api
and
its
own
set
of
data.
So
we
take
all
of
that
in
through
their
native
platforms,
and
then
we
use
our
economic
scheduling
engine
in
order
to
relate
the
different
relationships
between
resources
and
then
because
at
the
application
layer
I
can
tap
into
the
apm
or
I
can
even
go
right
to
the
guest
operating
system
itself
and,
like
specifically,
talk
to
resources.
B
Now
what
I'm
able
to
do
is
do
what
none
of
those
platforms
can
do,
which
is
take
all
of
that
data
build
a
continuous,
real-time
topology
of
the
dependencies
and
then
not
just
show
you
what's
going
on,
but
you
know
when
I
looked
at
my
my
banking
transfers
app
as
an
example,
I
can
look
and
I
see
actionable
responses
right.
So
I
can.
I
need
to
provision
additional
storage
because
it
could
be
by
the
performance
or
it
could
be
just
that
it's
literally
physically
running
out
of
space.
B
B
I
need
to
be
able
to
move
virtual
machines,
storage
between
resources,
change
instance,
types
and
sizes
or
skus
in
the
cloud
when
you
get
to
the
kubernetes,
you
know
in
the
openshift
side
of
the
world,
it
gets
even
more
exciting
because
now
I
can
look
at
this
cluster
layer
and
here's
an
example
right.
I
actually
spun
up
just
for
fun,
so
I
I
spun
up
this.
B
But if I look at a bit more of an active one, what I see here is, again, applications all the way down — but now, instead of thinking in the context of virtual machines, I'm thinking pods and container sizing, and the container sizing is going to be much more granular. It's going to be adding 12 megabytes to a container, because it's a smaller process — but 12 megabytes could be the difference between hitting 95th-percentile response time or going over.
B
When
I
look
at
the
controller
layer
at
the
namespace
layer
now
I
can
give
context
not
just
in
what
I
can
do
to
the
applications
in
the
platform,
but
my
audience
right.
So
if
this
is
my
view
as
a
developer,
then
in
turbo,
okay,
no
problem
I'll
give
you
your
name
space.
So
I
can
then
go
to
my
namespace
layer
and
I
can
see
all
the
actions
that
are
there
and
not
just
see
these
actions.
But
if
I
were
to
click
into
them
now
I
have
the
option
to
actually
you.
B
Them
in
the
platform
or
most
importantly,
you
can
automate
them
so
that
if
you've
got
you
know,
midday
resizes
that
you
need
to
do
because
they're
non-disruptive
on
virtual
machines,
then
you
can
say,
go
for
it
add
resources
as
needed
in
real
time
to
meet
demand
and
then
set
a
change
window
so
that
I
can
say,
hey
scale
down,
because
it's
disruptive,
you
know
or
let's
just
say,
they're
using
openshift-
for
what
it's
really
designed
for
right.
They're
using
kubernetes
they've
got
stateless
applications.
B
So
now
I
could
go
to
my
stateless
application
and
I
could
say:
yeah
go
for
it.
Just
I
can
scale
the
pod.
I
can
scale
the
container
but,
most
importantly,
remember,
the
life
cycle
of
containers
is
shorter,
so
they're
going
to
say,
hey
well,
this
is
great,
but
it's
completely
dynamic.
All
those
apms,
they're
gonna,
say:
okay,
cool
I've.
I've
got
my
my
name
space.
I've
got
my
cluster.
They
can
show
you
this,
but
your
application
patterns
have
changed
over
time
and
what's
real
time
matters.
B
B
What
can
affect
the
application,
but
now
I
can
talk
about
the
container
specification
where,
when
I'm
setting
cpu
memory
different
specifications
at
that
container
layer,
I
can
now
define
them
as
a
spec,
because,
let's
just
say,
I
scale
out
the
pod.
If
I
scale
up
the
application,
I
want
it
to
scale
based
on
what
it
actually
needs.
B
So
we
are
tracking
that
both
with
historical
and
real-time
information,
so
that
when
I
look
at
that
layer
now,
what
I'm
going
to
see
is
what
are
the
applications,
the
dependencies?
What
are
the
real?
You
know
like
95th
99th,
what
the
the
observation
periods,
the
percentiles,
that
you
want
to
see
to
show
you
why
we're
making
this
downsize
recommendation
and
what's
the
net
result
once
you
do
it?
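Percentile-driven rightsizing of the kind Eric describes can be sketched in a few lines. This is a simplified illustration only — the observation window, percentile, and headroom factor are invented parameters, not Turbonomic's actual analytics:

```python
# Sketch: recommend a container memory limit from observed usage, sized to a
# high percentile plus headroom instead of a worst-case guess. Simplified
# illustration -- not the product's real algorithm.
import math

def recommend_limit(samples_mib, percentile=0.95, headroom=1.2):
    """Size to the given percentile of observed usage, plus a headroom factor."""
    ordered = sorted(samples_mib)
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    return round(ordered[idx] * headroom)

# Hypothetical per-minute usage samples: mostly ~200 MiB, spiking to 400 MiB.
samples = [200] * 94 + [400] * 6

current_limit = 1024  # what a power-of-two guess gave the container
recommended = recommend_limit(samples)

print(recommended)                  # 480
print(current_limit - recommended)  # 544 MiB handed back to the cluster
```

The recommendation still covers the observed spikes (95th percentile plus 20 percent headroom) while returning more than half of the guessed allocation.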
B
But
on
top
of
this
you
know,
here's
the
I'll
say
the
simplest
possible
example,
which
is
for
the
virtualization
kids.
They
may
know
this
well
right.
I've
got
an
application
and
it's
struggling.
My
application
is
targeted
by
apm.
The
apm
says
you
need
to
size
up
memory.
Okay,
cool
makes
sense,
I'm
going
to
listen
to
the
apm.
Now
I'm
going
to
go
and
look
for
memory.
B
So
if
I
look
at
the
different
application
components
it
could
be
heap,
it
could
be
other
things
that
are
affecting
you
know,
setting
cpu
measures
setting
memory
measures-
I
can't
size
it
down
or
up
because
it
could
affect
the
application.
So
now
I'm
application
aware,
but
then,
on
top
of
that
it's
sitting
on
a
virtual
machine
which
is
struggling
for
virtual
resources,
it's
struggling
for
memory
from
the
physical
host
as
physical
hosts.
A
I do want to ask you — and I'm sorry if I'm completely interrupting your demo and taking you way off track — I want to ask about predictive analytics and the ability to learn, to make recommendations to the DevOps team. But I'm not going to ask that one right now. What I want to know is: you just said that the APM — and you kind of referred to it as sort of the infrastructure czar—
B
All you're going to get is a bunch of red on your monitoring tool saying, "Hey, guess what, Mike, you've got a problem — you've got to go to the gigawatt store and buy a bunch more memory." And you say, that's great, but it's Saturday night, I can't find any more memory, it's physical, and we can't get the stuff on demand. This is really the core of what we did. So imagine that, knowing the applications, you could say—
B
So
now,
if
I
go
scrolling
way
down,
because
this
is
lots
of
performance
problems
right
so
now,
you're
going
to
see
stuff
and
ultimately,
if
you
scroll
down
far
enough
what
you'd
end
up
seeing
and
I'll
make
it
a
little
easier
is
that
you
can
say
whether
it's
an
efficiency
or
a
performance
problem.
So
if
I
just
nail
her
down
to
efficiency
well,
I
can
move
stuff
around
just
to
free
up
resources.
B
So
I
can
actually
better
manage
it's
effectively
like
they
call
it
bin
packing
in
in
the
cube
world
right.
So
I
can
do
this
where
I
can
do
this
automatically.
I
already
I
know
it
needs
to
be
done.
I
can
size
up
and
size
down.
I
just
go
and
say
all
right
cool.
Let's
do
it,
let's
take
it
away
right
boom.
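"Bin packing" here is the classic packing problem: fit workloads onto as few nodes as possible without exceeding capacity. A minimal first-fit-decreasing sketch follows — the pod sizes are hypothetical, and real placement engines weigh many more dimensions (CPU, affinity, disruption cost) than this single one:

```python
# Sketch: first-fit-decreasing bin packing, the textbook heuristic behind
# "move stuff around to free up resources." One dimension only; real
# schedulers consider CPU, memory, affinity, and disruption cost together.

def first_fit_decreasing(workloads, node_capacity):
    """Pack workload sizes onto fixed-capacity nodes; return the node layout."""
    nodes = []  # each node is a list of workload sizes
    for size in sorted(workloads, reverse=True):
        for node in nodes:
            if sum(node) + size <= node_capacity:
                node.append(size)
                break
        else:
            nodes.append([size])  # nothing fits; start a new node
    return nodes

# Hypothetical memory footprints (GiB) placed onto 16 GiB nodes.
pods = [8, 7, 6, 5, 4, 3, 2, 1]
placement = first_fit_decreasing(pods, node_capacity=16)

print(len(placement))  # 3
print(placement)       # [[8, 7, 1], [6, 5, 4], [3, 2]]
```

Eight workloads land on three nodes instead of eight, with no node over capacity — the consolidation effect behind "giving resources back to the cluster."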
B
I just gave those resources back to the cluster without having to go to the gigawatt store and buy more memory, which is pretty cool. Well then, on top of it, you've got the Kubernetes story: people have Prometheus, they've got Instana, they've got a bunch of things; maybe they've got Ansible for initial provisioning, they're dabbling with Terraform, and they've got a home-built bash script that goes in, checks for thresholds, and runs a couple of health checks.
B
The
cube
goldberg
machine
right
they've
got
a
bunch
of
different
tools
that
are
trying
to
come
together
and
call
and
be
automated,
but
for
us
like
being
able
to
see
the
resources
that
serve
virtual
physical
memory
to
the
cube
cluster
and
the
cube
nodes,
I
can
know
that
not
just
current
allocation,
but
I
can
reallocate
resources
to
get
better
performance
without
busting
through
the
threshold
and
waiting
for
the
pod
to
fail,
which
is
kind
of
cool.
So
we
are
literally
creating
this.
Like
autonomous
level,
five
self-driving
infrastructure.
B
Yeah, there can be — basically, think of it as risk levels. Minor risk levels can include things like, hey, I can reclaim resources; but ultimately, the real red, critical risk is going to be an actual performance problem that can be affected. Look, if the whole thing was running at 92 percent utilization across the board, it would be a lot of red.
B
You
know
even
the
reclaimed
resources
there
wouldn't
be
any
yellows,
because
there's
no
free
space
to
kind
of
steal
back,
but
if
it's
at
92
percent
we
know
how
to
keep
it,
bring
it
back
down,
because
even
with
there
we're
going
to
be
able
to.
You
know
dodge
thrust,
parry
all
those
different
resources,
because
of
the
way
that
our
engine
works.
A
But
doesn't
this
so
I
mean
it's,
it
sounds
to
me
like,
so
all
the
all
the
all
the
instantanes
and
data
dogs
and
dyna
traces
of
the
world
do
all
the
phone
home.
They
send
all
the
information
in
in
the
world
according
to
mike
wait.
What
you
folks,
then
have
is
something
that
just
makes
it
easy
for
the
people
running
the
infrastructure
to
affect
changes
in
a
distributed
multi-cloud
world,
but
they
still
have
to
do
the
work.
They
still
have
to
go
in
here
and
they
still
have
to
say.
A
Oh,
I
need
to
go
look
for
that.
I
got
to
click
on
the
red
thing
and
tell
the
thing
to
go
to
the
gigawatt
store
and
go
get
some
more
memory
it.
How
come
there's
no
predictive,
analytics
capabilities
here
which
would
allow
the
overall
czar
system,
if
you
will
to
start
making
recommendations
for
these
poor
people
that
have
to
manage
their
ait
infrastructure.
Why?
Why
do
they
have
to
sit
there
and
go
looking
for
problems?
Why
can't
turbonomic
just
predict
them
in
advance
and
and
allow
people
to
work
smarter.
B
Well, you're not going to win the gift card on this one, sir, but I'll tell you—
B
What I can do is this: I show you these actions, and they have a checkbox for a reason, because I can actually take a set of actions and execute them right here in the UI. That's generally the first comfort level: okay, cool, I believe what you're doing — you're telling me why you're going to do this, you're showing me the results. I'm going to go for that; I'll check that checkbox and say do it.
B
But
then
you
just
go
over
at
the
same
time
and
we
create
a
simple
automation
policy,
we'll
call
it
a
virtual
machine
policy.
This
is
the
equivalent
of
me
diving
in
and
doing
the
cube
cuddle
and
as
I'm
going
to
call
it
cube
cuddle,
because
I
hate
that
phrase.
But
I'm
going
to
say
it
because
I
know
other
people
hate
it
as
much
as
I
do
so.
Let's
imagine
using
cube,
ctl
or
cube
cuddle
whatever
you
want
to
call
it.
This
is
my
vm
version
of
the
demo,
so
I'm
going
to
call
it
resize.
B
I think you get a gift card just for that. So imagine that I want to do a size up — CPU size up, and let's go with memory size up. These are non-disruptive changes, so I can literally say, okay, I'm going to take these. I can give it a schedule in which we're allowed to do it, or just let it run anytime, and then I can say, okay, cool.
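The policy Eric builds — automate non-disruptive resizes, hold disruptive ones to a change window — is easy to picture as a small gate. A toy sketch only; the action names and the window are invented stand-ins for what the product configures through its UI:

```python
# Sketch: an automation-policy gate. Non-disruptive actions may run anytime;
# disruptive ones wait for an approved change window. Toy stand-in for the
# scheduling policies configured in the product UI; names are hypothetical.
from datetime import time

NON_DISRUPTIVE = {"vm_memory_size_up", "vm_cpu_size_up"}
CHANGE_WINDOW = (time(1, 0), time(4, 0))  # hypothetical 01:00-04:00 window

def may_execute(action: str, now: time) -> bool:
    """Allow an action immediately, or defer it to the change window."""
    if action in NON_DISRUPTIVE:
        return True
    start, end = CHANGE_WINDOW
    return start <= now <= end

print(may_execute("vm_memory_size_up", time(14, 30)))  # True: safe anytime
print(may_execute("vm_scale_down", time(14, 30)))      # False: wait for window
print(may_execute("vm_scale_down", time(2, 15)))       # True: inside window
```

The same gate could be fronted by a ticketing approval, as described next, instead of a fixed time window.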
B
Let's
make
these
automated,
and
I
know
that
this
particular
schedule
is
it's
non-disruptive,
so
I
can
just
say:
go
for
it
right
completely
automate
it.
I
can
use
our
native.
You
know,
platform
scheduling.
I
can
also
trigger.
Like
a
servicenow
workflow,
I
can
write
a
ticket
to
servicenow.
I
can
go
and
use
actually
servicenow
automation
and
approvals,
so
I
can
create
a
ticket
wait
for
an
approval.
The
approval
comes
back
to
turbo
turbo
says
all
right
mike
said
cool.
B
I
got
no
choice
right
so
if
I've
got
a
physical
cluster
and
it's
physically
out
of
resources,
well,
every
you
know
I'll
say:
vcenter
vrops,
whatever,
whatever
the
tools
are,
whatever
the
v
tools,
are
you
got
they're
just
going
to
tell
you
hey,
you
know
it's
going
to
have
a
little
graph
going
up
into
the
right
like
every
startup
revenue
should
and
it's
going
to
say.
You
ran
out
of
resources
here
and
you're
on
path
to
run
out
of
resources,
even
more
so
and
you're
like
well,
I'm
already
out
of
resources.
B: What do I do? All it does is show you that you've got a problem. Well, the difference for us is that we show you how to fix that problem. We show you when you're going to run out of resources at the host level and what you need to provision. And this could include things like, hey, this is a RHEL environment with RHEL servers on it, so they're licensed, and I can create a policy that says I need to scale my RHEL cluster, but because these are licensed assets, I don't want to just scale out.
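The "when you'll run out and what to provision" idea can be sketched as a simple linear projection. The numbers, growth model, and function names below are all invented for illustration; real capacity planning would use richer forecasting than a straight line.

```python
import math

def days_until_exhaustion(used_gb: float, capacity_gb: float,
                          growth_gb_per_day: float):
    """Project when a host/cluster runs out of headroom, assuming
    linear growth. Returns None when usage is flat or shrinking."""
    if growth_gb_per_day <= 0:
        return None
    return max(0.0, (capacity_gb - used_gb) / growth_gb_per_day)

def hosts_to_provision(projected_need_gb: float, capacity_gb: float,
                       host_size_gb: float) -> int:
    """How many additional hosts cover the projected shortfall."""
    shortfall = projected_need_gb - capacity_gb
    return max(0, math.ceil(shortfall / host_size_gb))

print(days_until_exhaustion(900, 1024, 4))   # 31.0 days of headroom left
print(hosts_to_provision(1400, 1024, 256))   # 2 more hosts needed
```

This is the difference the speaker is drawing: not just the up-and-to-the-right graph, but a concrete answer to "when" and "how much".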
B: I want to steal a host from another cluster and put it into this one, because it's underutilized over there, and I can take that host and attach it to an existing cluster. So we can literally move all of the bits underneath in this beautiful sort of Jenga data-center-and-cloud consolidation scenario.
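A minimal sketch of that "steal a host" move, under assumptions of my own: pick the cluster that would remain least utilized after donating one host, and never drain a single-host cluster. The data structures and the 75% utilization ceiling are hypothetical, not Turbonomic's actual placement logic.

```python
def pick_donor(clusters: dict, needy: str, max_util_after: float = 0.75):
    """Return the name of the best donor cluster, or None if no cluster
    can safely give up a host. `clusters` maps name -> hosts/capacity/used."""
    candidates = []
    for name, c in clusters.items():
        if name == needy or c["hosts"] <= 1:
            continue  # can't donate from the needy cluster or a lone host
        per_host = c["capacity"] / c["hosts"]
        util_after = c["used"] / (per_host * (c["hosts"] - 1))
        if util_after <= max_util_after:
            candidates.append((util_after, name))
    return min(candidates)[1] if candidates else None

clusters = {
    "rhel-prod": {"hosts": 4, "capacity": 1024, "used": 990},  # constrained
    "dev":       {"hosts": 4, "capacity": 1024, "used": 300},  # lots of slack
    "test":      {"hosts": 2, "capacity": 512,  "used": 400},  # too tight
}
print(pick_donor(clusters, "rhel-prod"))  # dev
```

Here "dev" wins because even after giving up a host it sits around 39% utilized, while "test" would be pushed past capacity.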
B: So that's the big difference. Again, with APM, I hope I never sound like I'm detracting from what they're doing; they do fantastic stuff that solves a very specific problem. But there are also times when you don't have it: I've got Instana, but I don't have Instana on every application, or AppD on every application.
B: So now I can go through, and here's an example. This application is a neat little one called Turbonomic: I've actually instrumented my own system, and I've done it with my own platform. We call it APEX, or Application Performance Extensibility, which is really hard to say fast, right?
B: So I can actually understand this topology, I can define these resources and this relationship, and then it shows me the risk at the top. So now, I'm not necessarily getting response time out of here, because I may not be pulling that data across, but what I am getting is the relationship. Because, when I ran IT operations for a long time...
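The "risk at the top" idea, where trouble on a host or VM surfaces on the application that depends on it, can be sketched as a walk over the supply-chain topology. The graph shape and risk scores below are invented for illustration.

```python
def propagate(entity, supplies, own_risk, memo=None):
    """An entity's effective risk is the worst of its own risk and the
    risk of everything that supplies it (host -> VM -> app)."""
    memo = {} if memo is None else memo
    if entity in memo:
        return memo[entity]
    risks = [own_risk.get(entity, 0)]
    risks += [propagate(s, supplies, own_risk, memo)
              for s in supplies.get(entity, [])]
    memo[entity] = max(risks)
    return memo[entity]

# Hypothetical topology: the banking app runs on two VMs on two hosts.
supplies = {"banking-app": ["vm-1", "vm-2"],
            "vm-1": ["host-a"], "vm-2": ["host-b"]}
own_risk = {"host-a": 0, "host-b": 3, "vm-1": 0, "vm-2": 1, "banking-app": 0}
print(propagate("banking-app", supplies, own_risk))  # 3
```

Even without response-time data, the relationship alone is enough to roll host-b's trouble up to the banking app, which is the speaker's point.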
B: Yeah, they say the banking app is down, and you're like, okay. Now you've got a team of 12 people hunting around. Okay, cool, let me take a look at the banking app. Well, it's not down, but there's something going on. What's going on? Well, it looks like it's not just the app itself: statement video download is running slow, but oh, look at this, quick pay. I see a change in the SLOs, right?
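The SLO check the story turns on is simple to sketch: compare observed response times against per-service thresholds and flag the ones that slipped. Service names mirror the anecdote (quick pay, statement video download); the thresholds are invented.

```python
def slo_breaches(response_ms: dict, slo_ms: dict) -> list:
    """Return the sorted names of services exceeding their SLO threshold."""
    return sorted(svc for svc, rt in response_ms.items()
                  if rt > slo_ms.get(svc, float("inf")))

observed = {"quick-pay": 850, "statement-video-download": 2400, "login": 120}
slos     = {"quick-pay": 500, "statement-video-download": 2000, "login": 300}
print(slo_breaches(observed, slos))
# ['quick-pay', 'statement-video-download']
```

The app as a whole isn't "down", but two specific transactions have crossed their thresholds, which is exactly the distinction the speaker draws.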
B: I see a change in those thresholds. So now, quickly: literally, someone said, help, this banking app is down. I go take a look, and it says it's not down, but here's the problem. Look, I can't physically get storage right now, but what I can do is maybe reallocate resources and ultimately get back to health. And now, as an operator, whether it's a Kube operator, an IT operator, or an app operator, I can just look at it at the application layer.
B: All of this risk is affecting my application, and here's what I can do about it, and I can take these actions. So it's quick context, instead of me having to go: okay, the banking app, right, it's four VMs, it's run across two clusters; which cluster is it in? Well, that's the whole idea: we can do cross-cluster moves. Otherwise you've got to go to vCenter, you've got to go to Microsoft...
B: ...you know, SCVMM; you've got to go to like nine different places. Or you come here, and then you say: hey, guess what, I looked at the app, everything looks clean, but there's still a problem. Well, let's head over to Instana, right, and maybe there's slow code, there are long-running SQL queries there. We call it: is it the code or is it the node?
B: But the whole goal is that risk is immediately propagated, there's actionable intelligence to do something about it, even better to automate it, and then a shortened MTTR, which is, I'm sure, a buzzwordy phrase that people use a lot. But I call it "mean time to not me": hey, resources are fine, I think there's an app problem, and then you can head on over to AppD or Instana or whatever it's going to be and say: coolio, you're right, there is a problem going on.
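For completeness, the MTTR the speaker jokes about ("mean time to not me") is just the average of incident open-to-close durations. The timestamps below are made up for illustration.

```python
from datetime import datetime

def mttr_minutes(incidents) -> float:
    """Mean time to repair, in minutes, over (opened, closed) pairs."""
    durations = [(closed - opened).total_seconds() / 60
                 for opened, closed in incidents]
    return sum(durations) / len(durations)

incidents = [
    (datetime(2021, 5, 1, 9, 0),  datetime(2021, 5, 1, 9, 45)),   # 45 min
    (datetime(2021, 5, 2, 14, 0), datetime(2021, 5, 2, 14, 15)),  # 15 min
]
print(mttr_minutes(incidents))  # 30.0
```

Faster triage (ruling resources in or out immediately) shortens every one of those durations, which is how the automation shows up in the metric.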
A: So we sent an email to your marketing people the other day, like, hey, make sure you send us a whole bunch of softball questions, so if we run out of things to talk about we can go, you know, "please tell us about the blah blah blah." I don't think we're even going to get there. I hope they're not going to be mad.
A: I think we should have you back again; I find this really super interesting, but we are running out of time. So again, I'm sure I'm gonna get all kinds of hate mail from your marketers and so forth, and next time we'll make sure. But oh, look at this, we have a little call-to-action slide that was prepared.
B: I would be remiss if I didn't send you to it. You talked about the data clouds report, and that's one of the ones we've been running; we call it the State of Multicloud report. Ascent is going to kill me for this, talk about being a bad marketing team member: I've come here without the number. It's either four years, I think it's four years, that we've been running this, and we've got 800-plus respondents that contribute. It's a really great report; it talks about...
B: ...you know, where multicloud is. I'm the funniest person, because I'm like: multicloud isn't a strategy, it's a thing you're stuck with. There are strategic ways you can leverage it, but this is a cool report that kind of unpacks a lot of stuff. If you want to dig in on the Kubernetes side, we've made it just turbonomic.com/kubernetes, super easy to get to. And of course, just reach out; I'm @discoposse everywhere you go.
B: That's the fastest way to find me, but you can always reach out through anybody at the team at Red Hat; they know how to get ahold of us.
A: Yeah, same here. People can send me an email, like, hey, I really want to learn more about Eric Wright and his patents; you can shoot me an email, it's just waite, dot com, don't forget the e. Speaking of that, I am Mike Waite, and this has been the OpenShift Commons briefings Operator Hours with Eric Wright. I really think we should have you guys back; I think this was really fun. Hopefully it was interesting.
A: You know, Kubernetes and OpenShift: is that synonymous? And is your operator written in Golang, or is it one of those fake Helm charts? We just didn't have a chance to get there, so hopefully you can come back next time and we can talk some more.