Description
Join Andrew Hillier, CTO and Co-Founder of infrastructure optimization analytics software vendor Densify, as he discusses the emerging challenges, and three things to think about, in managing resources and capacity in OpenShift container infrastructure.
A
Okay, welcome to this, this bumpy start here, for this Wednesday's edition of the OpenShift Commons briefings, Operator Hours. I'm Michael Waite, from Red Hat, and today we are fortunate enough to have with us a whole team from Densify. We have Andrew Hillier, the CTO and co-founder of Densify, and we also have Chuck Tatham, their Chief Marketing Officer, and today they're going to talk about three important areas to think about in container resource management. Andrew and Chuck, how are you today?
B
Well, I'm great, it's very nice to be on this today, Michael.
C
And I'm great as well, except for my camera, but thanks for having us, Michael, and Red Hat.
A
You know, we did a dry run of this a couple weeks ago. We talked about what you folks wanted to talk about, and we tested the audio, we tested the video. We made sure that the screen sharing button was familiar, because we use BlueJeans, and, you know, most people are familiar with Zoom and, you know, WebEx and stuff like that. But as much as we tried to make sure there'd be no glitches to getting started, we have just one little video glitch. Anyways, thanks for being here today. We have our hour here, we are live here on our bridge, we have some people on. Oh, look at that, Chuck Taylor, welcome.
B
I'll go ahead and say: my base is just north of Toronto, in Markham, and I'm sitting in Toronto right now, and Chuck here, just north of the city, right?
A
Okay, well, tell us about Densify. You know, you folks have been working with us to build and test your Red Hat certified containers for the Red Hat platform. You folks have an operator that works to help, you know, improve day-two supportability of your software running on OpenShift. You folks are members, long-standing members, of the OpenShift Commons community. So, you know, tell us about Densify.
B
Well, I'll start, and Chuck, you can chime in if I miss anything. We focus on resource optimization; it's really analytics software. You can think of it as pretty deep analytics that looks at the workloads, the patterns of activity, what the workloads are running on, and does a bunch of different types of optimization. And if you step way back, I think if you look at the progression of the industry, you know, we've gone from heavy use of virtual machines.

Of course, people still have these virtual machines, where the style of optimization is more around workload placement, sizing VMs. In the cloud, people focus a lot on the cloud bill, but in our view, a lot of that cost comes from the resources themselves. So if you're using the wrong types of instances in the cloud, there's a big problem there. So we've seen the progression to cloud, and of course containers is a whole new challenge.

Because again, resource optimization is important, and what we're finding is it's quite overlooked, especially initially, in these container deployments. So: resource optimization, kind of deep analytics, pattern recognition, that type of thing, to basically optimize environments.
C
Give him an A. He covered it, he covered it very nicely.
B
Yeah, well, we've been around a while. I'd say if you look at where we really got focused on this problem space, it was around the 2005 time frame. I think that's quite relevant because, as we know, containers aren't new; we were actually working with very early Solaris Zones, that type of thing. So we've been around a while, and a lot of the core foundational analytics, you know, grew out of that.
B
Well, I think it comes down to, the way I view containers is that they do two different things: modern containers are an app delivery construct, and they're a virtualization construct. So I think in the early days there was no delivery construct. If you go way back to these zones, or WPARs, or all these different structures, they didn't provide a way for you to deploy new versions easily. They're not like what Docker provided. So I think that's what really precipitated the rise: developers can now use these.

They also, of course, let you run multiple workloads on one node, which is kind of the virtual side of it, which has been around a while, and, you know, that's obviously another big part of it. But it's kind of that one-two punch of having developers adopt them for their purposes, and then being able to use them to host workloads and operate them. It's really kind of a winning combination now. Sure, okay.
A
So Densify is a predictive... so in looking at your website, you know, you type in Densify and it brings you to, you know, whatever, Wikipedia or your main page. It basically says: Densify is a predictive analytics engine?
B
So what we mean by that is, if you look at the solutions in the market, there's ones that kind of react quickly to short-term demands. Think of monitoring systems, or something getting hot; or in the VMware world you have DRS, which moves VMs on a short-term basis. And then there's longer-term operators. So we kind of look at the longer patterns and say: that thing gets busy every month-end, or every morning at 8 a.m. that thing's busy.

So it lets us do a much deeper level of analysis. So what it means is, when we give recommendations, they're usually ahead of something going wrong, as opposed to reacting. And both are useful, and both are necessary: you want to react to unplanned situations, but we feel that you should get ahead of it and actually do the right analytics. Again, a simple case would be...

So that applies to all types of infrastructure, and I think containers, you know, especially so, because as we see, it's kind of a mystery to a lot of people what their containers are doing. They're so complicated, there's so many moving parts, it's hard to understand. So being predictive is a key part of that.
A
Yeah, and for the people on the bridge, and those that are watching on Twitch and YouTube and other places: if you have questions for either Andrew or Chuck, just drop them in the chat down over there, and we'll make sure to get your questions addressed.
A
So, you know, you said VMs... are VMs dead? I mean, this isn't really something that you need to respond to if you don't feel comfortable, but why did Dell just announce they're going to spin off VMware? It seems like VMware just keeps bouncing around from one owner to another owner for the last, you know, 10 or 15 years. Are they getting out of that business?
B
The majority of the conversations are around containers, so that's kind of where things are heading, but I don't see it going away very quickly. It's kind of interesting, because if you go back to the progression, going back to these kind of zones and that type of older technology, and you replay history, the way I feel it kind of went down was that there were these container-like structures that were good technology, but then VMware kind of eclipsed that with, you know, heavyweight VMs: x86 virtualization, kind of riding on the back of x86 gear, and really took over the market for 10 or 15 years. Huge success, obviously.

And now it's kind of coming back around to containers. I think it's partly because containers are much more efficient. You know, they don't virtualize the device drivers, and they're a nicer thing to run in. But it's almost like that was a necessary phase of the market, so people get used to sharing. So VMs caused people to actually start to share resources, and they didn't like it at first, but they got used to it, and now we're at a place where people are used to sharing resources, and we're kind of seeing a switch back to containers, which are probably a more efficient way to share resources.

But I think the VMs are going to stick around. I don't think VMware is about to go under; it's just, I think, the attention is moving away from, you know, the on-prem, and, you know, the cost associated with running that type of environment. Yeah.
A
So, being able to do predictive analytics about apps, containers, presumably microservices, which is just lots and lots of tiny little containers: what about knowing where workloads run? I mean, as people adopt more and more multi-cloud, you know, you've got clouds all over the place, various different public clouds, you know, hybrid models.
B
I mean, it's definitely made it much more complicated. Even before knowing where things run, just doing inventory now is really complicated. I mean, it used to be you could say, "I had a thousand servers," or "5,000 VMs." Now, if you look at a cloud bill, there's so many things in it; it's just become really complicated as far as what the entities are that you actually even are owning or renting. And you mentioned microservices; now these things are coming and going, you know, very rapidly.

How do you even report that? How do you even tell someone how much stuff they have? It's, you know, even containers, or even just, you know, the elasticity in the cloud: if things are coming and going, I can't tell you "you have a thousand," that doesn't mean anything. You could have had a million microservices run over the course of a day, for 10 seconds each. So even just being able to quantify what you have has become a big challenge.

You know, it almost becomes the area under the curve. It doesn't matter how many things you have: you used this much in CPU-hours, or this much horsepower is what you're consuming. So even before we get to where things are running, being able to describe to someone what they have, and what they're buying in the cloud, or what's running, has really become... it's not impossible, it just needs to take a big shift. And then, to your point, once I can get a handle on that...
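Andrew's "area under the curve" point can be made concrete with a small, hypothetical sketch (the record names and values below are made up for illustration, not Densify's data model): counting instances tells you almost nothing, while integrating usage over time gives a meaningful consumption number.

```python
# Hypothetical usage records: (name, cpu_cores, runtime_seconds).
records = [
    ("web-1", 2.0, 3600),    # 2 cores for an hour
    ("batch-7", 8.0, 600),   # 8 cores for 10 minutes
    ("micro-x", 0.25, 10),   # a tiny 10-second microservice
]

def core_hours(records):
    """Integrate usage over time: total core-hours consumed."""
    return sum(cores * seconds / 3600.0 for _, cores, seconds in records)

instance_count = len(records)  # 3 -- says little about consumption
total = core_hours(records)    # ~3.33 core-hours, however many things ran
```

The same `total` could come from three long-running VMs or a million ten-second microservices; the instance count changes wildly, the area under the curve does not.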
B
Okay, I have things in two different clouds and have different container environments. You know, the placement of those workloads is critical. Just one thing that we, for example, like to talk about is licensed software. Have you thought about compliance, if you have to pay for software wherever you use it? Well, you can't just start spraying workloads around that are using SQL Server instances.

You're going to get a huge bill. So there's many practical things that impact exactly where you run, from a cost perspective, or, like you said, compliance, you know, data residency, all that type of stuff. It's usually challenging. And so, to your part about how we help: we do have a rule engine in our product that dictates that type of thing, what we call "fit for purpose." So where things land has to be based on what it requires, you know, the capabilities it requires.

Do I need a GPU, and do I need complete PCI compliance, or SOC 3? All that stuff doesn't go away just because you're running containers, and, to your point, it becomes more complicated because you're running stuff everywhere. So I think people think that some problems go away when you start to migrate to these newer technologies, but some problems do and some problems don't, and keeping tabs on where you're running things is definitely important.
A
You
know
narrow
that
resource
allocation
to
make
sure
that
various
different
containers
and
workloads
are
getting
what
they
need
without
having
to
pay
for
excessive
amounts
of.
You
know:
resources
from
any
of
the
of
the
providers.
B
Yeah, no, it's a huge focus for us, and we absolutely see that exact same trend, and there's several reasons for it. But I'll go back to: there's a technical reason, and I think there's an organizational reason why it happens. But technically, if we go back again, we rewind history to virtualization:

Overcommit is a very important concept in all of this. The fact that in a virtual environment I can give you two CPUs, and I can give Chuck two CPUs, and they can be the same two CPUs. And so what I'm playing on is the fact that you don't get busy at the same time, back to what I mentioned: if you get busy in the morning and Chuck gets busy at night, I can give you the same CPUs, because you're not using them at the same time.

So that is a very powerful construct to drive efficiency, which you lose as soon as you go into something like EC2, like an Amazon instance. If you just rent an instance, and I run your workload in one and Chuck's in the other, I have to size them to your peak of activity, and I'd size Chuck's to his peak of activity, and I might end up buying twice as much capacity as I would have, because I'm building these little islands of capacity.
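The "islands of capacity" effect is easy to quantify. Here is a minimal sketch with made-up hourly demand profiles (the numbers are illustrative, not from the talk): dedicated instances must each be sized to their own peak, while a shared, overcommitted host only needs the peak of the combined demand.

```python
# Hourly CPU demand (cores) for two workloads with complementary peaks.
you   = [4, 4, 1, 1]   # busy in the morning
chuck = [1, 1, 4, 4]   # busy at night

# Dedicated instances: size each to its own peak ("islands of capacity").
dedicated = max(you) + max(chuck)                  # 4 + 4 = 8 cores

# Shared host with overcommit: size to the peak of the *combined* demand.
shared = max(a + b for a, b in zip(you, chuck))    # 5 cores
```

With these toy numbers, renting dedicated instances buys 8 cores where a shared host gets by with 5, which is exactly the roughly-twice-the-capacity penalty Andrew describes.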
B
So that's kind of... when you go from virtual to cloud, you lose overcommit. You know, the organizational way to look at it is that I can't give you two CPUs and Chuck two CPUs and then, under the covers, make them the same two; I actually have to rent two different systems. So that's kind of created the first wave of resource challenge, and that then carries over into containers, because containers do kind of overcommit.
A
So that sounds like a major drawback of public cloud. Was that by design, was that just an oversight, or... I mean, it almost sounds like, you know, people looking at public cloud like, "oh, this is great," until you start thinking about, you know, Mike wanting four CPUs and Chuck wanting four CPUs, and having to pay for 100% of those resources.
B
Yeah, I mean, definitely, I think that's a big cause of the bills being big when you first get a bill. But what it comes down to, if you step back, is: I don't think the cloud providers were pretending otherwise. I think that's where you get into cloud-native versus legacy apps. So if you build an app to run in the cloud, it doesn't work that way: it just uses 100% of a resource until it's done with it, and gets rid of it.

Ideally. So, you know, things like serverless, or small microservices, they're architected differently. So the cloud is very cost-effective if you use it well: you pay when you're using it, and then don't use it when you don't need it. That's kind of the paradigm. So if you architect an app to work that way, it's going to be quite efficient.
B
If
you
take
a
legacy
app
like
some
banking
app
that
has
transactions
all
day
long
and
ebbs
and
flows
that
can
be
very
expensive
in
the
cloud,
if
you
just
put
it
in
as
it
is
because
again
you
have
to
size
to
the
peak
like
the
month
end
or
something
and
then
the
rest
of
the
time
you're
buying
resources
you're
not
using.
So
I
don't
think
you
can
criticize
the
cloud
providers,
it's
more.
B
So the way that works in a container environment... again, it kind of sits in between the virtual and the cloud. You do ask for resources, and they are dedicated to you, but you can also run multiple workloads on one node. So it does overcommit, but how well it overcommits depends on the settings: the request and the limit values in the containers.

So if you look at the way the operator works: what it does is it just automatically connects to the container environment and pulls back all the data for the pods and the deployments and the nodes, you know, all the supply and all the demand data, and then analyzes the patterns and says: well, you don't need to give your container, you know, 2,000 millicores, like that; it just doesn't need that to operate properly, if you look at the whole system, you look at all the containers that are running.

Because that gets back to the organizational problem: if you're an app owner and you're deploying an app, you want to be safe, so you're going to ask for 2,000 millicores for your web server. And we'll look at it and say: you know what, we've watched this thing over time, it doesn't need that; you'd be perfectly safe with 500 millicores. And so that's where that comes in: people will make decisions based on risk mitigation, which is part of the right decision for them.
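A right-sizing recommendation like "you asked for 2,000 millicores but 500 would be safe" can be sketched as a simple rule over observed usage. This is a hypothetical illustration, not Densify's actual analytics: the samples, the percentile choice, and the 25% headroom factor are all assumptions made up for the example.

```python
# The app owner's safe-side guess for the CPU request, in millicores.
requested_m = 2000

# Hypothetical observed CPU usage samples in millicores over time
# (e.g. as scraped from a monitoring system), sorted ascending.
samples = sorted([120, 150, 180, 200, 210, 220, 250, 300, 350, 400])

def percentile(sorted_vals, p):
    """Nearest-rank percentile of a sorted list (0 < p <= 100)."""
    k = max(0, int(round(p / 100.0 * len(sorted_vals))) - 1)
    return sorted_vals[k]

# Toy rule: cover the 95th-percentile demand plus 25% headroom.
p95 = percentile(samples, 95)       # 400 millicores observed at the peak
recommended_m = int(p95 * 1.25)     # 500 millicores recommended
```

The human asked for 2,000 millicores out of caution; the data-driven number, even with generous headroom over the worst observed sample, is a quarter of that.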
C
And just to add to that: the dynamic of DevOps, and the way functionality is built today, you may have hundreds of individuals in a large entity that are specifying their desires for resources. So it's a much more distributed problem, versus, you know, the older way of managing it in a more central fashion. So that's another element.
B
Yeah, that's a great point. And Michael, we use the phrase "micro-purchasing" sometimes to describe that, where, you know, you've gone from a legacy environment where you buy or release stuff every three to five years, but now a junior engineer can put a value in a Terraform file and something gets purchased, and, to Chuck's point, it can happen...

There can be thousands of these files floating around, made by different people, and so the actual specification of resources has become very, very distributed. And even if everybody's making the right decision, which they probably are, you still end up with tremendous inefficiency, because all decisions are made in isolation. There's not one kind of answer saying: this is how all this should work together, and this is how the resources should be used.
A
Okay,
so
in
your
in
your
title-
and
I
I
kind
of
I'm
not
following
a
script
here,
I
was
just
actually
kind
of
just
bombarding
you
with
questions,
because
I
was
curious.
But
in
your
title
it
basically
says
you
know
three
important
areas
to
think
about
container
resource
management
and
when
we
were
going
through,
our
our
dry
run
the
three
different
areas
you
wanted
to
to
make
sure
we
addressed
during
this
call.
A
Today,
I'm
going
to
just
look
at
my
notes
here
was
containers
behaving
like
vms
in
some
way
the
second
one
was
leaving
resource
specifications
to
manual
and
human
efforts,
and
then
the
third
major
area
was
devops
fin-ops
gap.
Let's
talk
about
the
first
one.
You
know
containers
behaving
like
vms
in
some
way,
but
there
are
critical
differences
when
it
comes
to
setting
resources.
So
you
know
what
is
the
impact
of
incorrectly
setting
resources?
A
B
Well,
it's
it's
kind
of
like
we
just
talked
about
so
the
the
again
in
a
virtual
environment.
You
can
set
resources,
you
can
kind
of
make
them
wrong
and
you
can
over
commit
your
way
out
of
it.
I
I
can.
I
can
give
you
a
lot,
but
I
can
give
those
same
results
to
somebody
else.
So,
there's
a
even
though
the
the
app
owners
are
asking
for
certain
things.
B
The
central
group
can
kind
of
turn
the
crank
and
say
I'm
going
to
drive
up
the
density
of
this
environment
by
by
sharing
those
resources
with
other
people.
So
that's
that's.
What
happened
in
a
virtual
environment
virtual
environments
also
have
a
vmware
environment,
for
example,
have
what's
called
a
reservation
where
you
can
say
this:
vm
needs
to
get
a
cpu
or
two
cpus,
but
people
rarely
use
it
because
it's
so
draconian.
It
actually
then
locks
up
that
resource
for
only
that
vm.
B
So even though it's a feature, it's very rarely used because of what it does to efficiency. Now, containers have requests and limits, and the request is like a hard reservation. If you ask for a thousand millicores, you get a CPU; if Chuck asks for a thousand millicores, he gets a CPU. Once all the CPUs are given out on a node, you know, Kubernetes starts scheduling to a new node. That one's full; it doesn't matter if you're using them.

You own the CPU, and Chuck owns a CPU, and there's only two CPUs in the box, so we move on. So that directly impacts the resourcing: once you give it out, you move on, and it'll just keep on consuming nodes without them actually being utilized; you've just allocated them out to workloads, but not used them. And we see this all the time. We do analysis a lot of times where there'll be a cluster like...

I know an OpenShift cluster where 80% of the resources are given out to the containers, but they're only seven percent utilized; I'm thinking of one recent example. So by making these numbers too big, you directly drive down utilization and cause your organization to buy more gear. It's pretty much a direct relationship. And again, it comes back to the fact that they're not quite like a VM in the way overcommit works: to get it to overcommit, you have to set the resource requests lower.
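The mechanism Andrew describes, nodes filling up on requested CPU regardless of actual usage, can be sketched as a toy bin-packing loop. This is a deliberately simplified model of Kubernetes-style request-based scheduling (the node size and pod numbers are hypothetical), not the real scheduler:

```python
NODE_CPU_M = 2000  # allocatable CPU per node, in millicores (2 cores)

def schedule(pod_requests, node_cpu_m=NODE_CPU_M):
    """Greedy first-fit of pod CPU *requests* onto nodes.

    A node is 'full' once its CPU is allocated, even if the pods on it
    are barely using what they asked for.
    """
    nodes = []  # each entry = remaining allocatable millicores on a node
    for request in pod_requests:
        for i, free in enumerate(nodes):
            if free >= request:
                nodes[i] = free - request
                break
        else:
            nodes.append(node_cpu_m - request)  # provision a new node
    return len(nodes)

# Four identical pods, oversized (1000m each) vs. right-sized (250m each):
oversized_nodes = schedule([1000, 1000, 1000, 1000])   # 2 nodes consumed
rightsized_nodes = schedule([250, 250, 250, 250])      # 1 node
```

The requests alone determine node count here: quadrupling the request quadruples the allocated footprint even if actual utilization never moves, which is how a cluster ends up 80% allocated but seven percent utilized.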
B
Yeah, yeah. So, Michael, there's an interesting... I did this thought exercise at one point where I went through my laptop, and I said: if I were going to put all my, you know, Outlook and Word and all these things in containers, what would I give them for resources? So I said, you know: oh, my email, well, let's give that half a CPU, because it doesn't use much, but it gets busy sometimes. Then Zoom: well, let's give Zoom four CPUs' worth, you know, it gets really busy.

It's kind of that simple. Like, you know, PowerPoint (I think we joked about this in our dry run): you know, how many millicores do you give PowerPoint? I don't know. I've been using it for 15 years and I have no idea how many millicores I would give PowerPoint, but I've seen it get pretty busy, so let's give it at least a CPU's worth, right? But that's the rational thought process to do this, and it's not based on the numbers.
B
When
you
take
all
those
numbers,
then
and
run
it
in
real
world
scenario.
None
of
those
need
anywhere
near
that
amount
of
of
earmarked
resources-
and
that's
that's
kind
of
the
heart
of
the
problem-
is
that
if
we
leave
some
of
the
other
points
is
that
it
shouldn't
be
left
to
humans.
It's
not
humans,
doing
their
best
job
rationally,
we'll
still
not
get
it
right,
because
if
the
answer
is
something
is
in
the
analytics,
not
in
opinion.
A
Okay,
so
how
so,
how
does
it
work?
How
how
how
does
your
magic
work?
Is
there
an
agent
that
sits
there
on
every?
You
know
virtual
instance,
or
node
out
there
and
phones
home
information
to
the
mothership,
or
is
this
a
sas
offering
with
with
an
agent
or
or
people
deploy
it
inside
their
infrastructure?
How
does
it
all
work,
I'm
sure.
A
Said,
given
that
we
only
have
26
minutes
left
on
our
call,
probably
not
can
have
a
detailed
response
to
that,
but.
B
Yeah
well
so
it's
it's,
it
is
agentless.
I
mean
it's.
We
we
piggyback
off
existing
data
collection.
Openshift
is
nice
because
it
bundles
prometheus.
So
our
operator,
just
it's
just
seamless,
you
just
run
the
operator.
It
gets
the
data,
so
the
one
component
that
runs
gets
the
data
from
prometheus.
It
is
a
sas
offering,
so
it
all
goes
up
into
the
cloud
where
it
gets
analyzed
and
and
again,
there's
multiple
levels.
B
So
there's
the
the
container
request,
the
node
level
data
and
the
kube
state
metrics
and
all
that
stuff
goes
up,
and
then
it
also
depends
on
what
it's
running
on.
So,
if
you're
running
in
the
cloud
you're
probably
running
on
a
scale
group
and
that
data
goes
up
as
well,
because
those
have
to
be
married
together,
the
the
the
containers
are
running
on
nodes
that
are
scaling
in
and
out.
So
we
we
get
that
side
from
things
like
cloudwatch
as
an
example.
B
So
in
in
containers
or,
let's
say
openshift
in
the
on
cloud,
we
get
those
two
levels.
If
it's
on-prem,
then
of
course
it
might
be
coming
from
vmware
data
collection,
because
the
nodes
are
vms,
you
know,
or
those
might
be
bare
metals.
So
it
depends
on
the
container
deployment
scenario,
but
it's
all
agent
lists
they're,
just
basically,
basically
interface
with
prometheus
or
interface,
with
the
apis
that
are
available,
pull
it
all
up
into
the
cloud,
the
lights
dim.
B
We
chew
all
the
numbers
and
then
we
basically
come
up
with
recommendations
that
come
back
down
and
they
make
all
kinds
of
nice
reports
like
app
owner
reports
and
it
can
go
to
slack
or
teams,
and
then
it
also
goes
to
things
like
terraform
to
actually
make
changes.
C
We did a webinar last week on integrating with Helm and related frameworks to drive those changes back into infrastructure. That's available as a recording on our site, if people are interested.
B
And
this
is
this
is
where
we
get
back
to
michael
to
the
predictive
part,
because
all
of
that
is
predicated
on
having
the
right
answer.
So
one
of
these
that
we
really
heavily
focus
on
is
we
don't
say
you
might
want
to
make
that
bigger
or
you
might
want
to
make
that
smaller.
We
say:
no,
we've
looked
at
all
this
data
and
all
of
these
rules
and
all
these
different
things-
and
this
is
exactly
what
it
should
be
and
we've
taken
into
account.
B
The
fact
that
you
don't
use
burstable
instances-
and
you
don't
use
that-
and
you
can't
do
this
and
you
have
to
run
that
so
there's
a
lot
of
policy
involved
to
say
when
we
say
do
this:
it's
actually
an
answer
that
you
can
automate,
and
so
we
we
have
example,
customers
on
our
website
where
they,
you
know,
there's
videos
where
they've
been
taking.
You
know
open
shift
environments
using
our
apis
to
get
the
recommendations
out,
bring
it
through.
B
A
Now, you've used the word "OpenShift" specifically at least six times since we started here today. Am I paying you for that? Like, clearly there must be other container orchestration platforms that you folks work with or run on. Is that right?
B
Yeah, for sure. I mean, I think, you know, Kubernetes obviously has risen as the kind of de facto standard, so there's lots of variants of Kubernetes: there's, you know, kind of roll-your-own Kubernetes, there's EKS, AKS, so all those variants, and it's the same. There's also ECS, which isn't Kubernetes, but we also support that.

What we find is that with Kubernetes there's a lot of variations on it that people are using, either just kind of grabbing their own versions, or as a service in the cloud, or OpenShift, and they're all kind of cut from the same cloth. So those are all supported pretty much the same way. Again, the major variation is their supply side, what they're running on: they will look different depending on if you're running it on-prem versus in the cloud.
A
You could have just said, "OpenShift is the only platform we support, period," if you wanted to. I thought I'd give you the chance to say that. But that's fine. Well...
C
So we certainly see it most commonly in our enterprise customers.
B
And people do like it, because it does bundle Prometheus. It's a much simpler thing, it's a much more known quantity. So we find it is a great solution, because it removes a lot of the variables, for the customer and for us. So we really like working with it, because, again, that operator is a clear example: how fast that is to install and get up and running. You can't do that on any other platform that easily. It's great.
A
I forget, it was on a webinar that we did with you (or, actually, I think it might have been with Kong), and we were talking about API gateways and service mesh, and the whole concept of containers getting smaller and smaller and smaller and becoming microservices, and the importance of service mesh in that space. And then somebody chimed in on chat, saying: we're seeing just the opposite, we're seeing the size of our containers getting bigger and bigger and bigger, and measured in (I forget if he said) gigabytes in size, or maybe even larger.
A
How
would
you?
How
would
you
talk
about
what
you
folks
know
based
on
your
analytics
engines
about?
You
know,
containers
turning
into
microservices
when
when
can
we
not
live
without
service
mesh,
to
manage
anything
or
is
it
going?
The
other
way
and
containers
are
actually
just
getting
bigger
inside
production
environments.
B
...that is dispatching the work and doing the work, and microservices are just taking that thread pool and making it outside the app, where everything's running kind of for shorter bursts, scheduled by, you know, the container scheduler instead of inside the app. So it's really whether you expose the internals or not, and some apps benefit from having all the threads have access to the same memory. So again, it's basically an application architecture question. When you blow things into microservices, it's really good...
B
...common data is better to have in the same memory space. So I think it all comes down to: they're kind of all the same thing, just exploded out to a different level, whether your threads become microservices or whether they stay inside the app. And microservices are a funny thing from a resource perspective, because what we find is that just because something runs for a very short period of time doesn't mean you can be sloppy with it.
B
So, for example, we see, you know, a microservice that runs for, let's just say, even for like a minute. Somebody runs it for a minute to do something, and it's given a lot of resources, and everybody says: well, who cares? Because it only runs for a minute, what's it really going to impact? It only runs for a minute. Well, if a million of those things run, it really impacts things, if you see what I'm saying.
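The "a million one-minute runs" arithmetic is worth spelling out. This is a back-of-the-envelope sketch with hypothetical numbers (the run count, runtime, and core figures are made up to match the flavor of the example):

```python
# A short-lived microservice: "who cares, it only runs for a minute."
runs_per_day = 1_000_000   # how many times it runs in a day
runtime_s = 60             # each run lasts one minute

def daily_core_hours(request_cores):
    """Total CPU earmarked per day for all runs, in core-hours."""
    return runs_per_day * runtime_s * request_cores / 3600.0

wasteful = daily_core_hours(2.0)    # sloppy: 2 cores requested per run
frugal = daily_core_hours(0.25)     # right-sized: a quarter core suffices
excess = wasteful - frugal          # waste multiplied out over a day
```

One sloppy minute is negligible; a million of them earmark tens of thousands of core-hours a day, so the oversizing multiplies out rather than disappearing.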
B
It's a false sense of security, that something running for a short period of time doesn't need to be correct. But what we find is the opposite: if the microservices are incorrect, then it multiplies out very quickly, and you need a lot of hardware to run it. It's just the same story; it just isn't intuitive to people. And in that sense, running things as one bigger element is kind of easier, because you can measure it, and you can...
B
You
can
actually
specify
it
more
easily
than
a
million
small
things
that
are
running
for
short
periods
of
time.
So
it's
you
know
it's
it's.
We
see
kind
of
both
things
happening.
I
think
people
will
probably
find
that
the
big
things
might
be
easier
to
manage.
You
know
you
know
than
trying
to
manage
all
the
small
ones
and
aggregate,
but
it
really
comes
down
to
the
the
app
that
you're
deploying.
A
Okay,
well,
I
I
invite
the
people
that
are
watching
on
twitch
and
youtube,
and
even
here
on
the
bridge-
or
you
know,
patrick
or
project
or
whatever
share
your
thoughts.
If
you
have
a
question
or
or
care
to
comment
about
how
you
feel
about
predictive
analytics
and
resource
management
in
a
multi-cloud
world
put
them
in
here,
I
get
the
sense
that
that
andrew
and
chuck
can
just
about
address
any
type
of
question
that
may
came
may
come
up.
So,
let's,
let's
play
let's
play
stump
the
andrew.
A
Let's play, let's play "stump the Chuck" here today. Let me have a video.
A
B
Well,
so
it's
interesting
the
capacity
part,
because
you
know
we
see
a
lot
of
organizations
that
they
have
developed
very
mature
capacity
management
capacity,
planning
processes
when
they're
buying
a
lot
of
on-prem
gear.
So
we
see
some
extremely
sophisticated
stuff
being
done
on
that
front.
You
know
predictively
forecasting,
that
type
of
stuff
back
to
the
cloud.
You
know
the
cloud
disruption.
What
we
found
happened
was
people
kind
of
just
dropped
that
entirely
and
just
focused
on
the
bill.
B
There's
almost
this
gap
that
that
resources,
whatever
it's
magic,
it's
just
we're
just
buying
in
the
cloud
as
we
need
it,
we're
going
to
focus
on
the
bill
and
understanding
the
bill
and
giving
chargeback
reports
and
all
that
kind
of
thing,
and
I
think
and
no
planning
whatsoever,
because
how
can
you
possibly
plan?
This
is
all
dynamic
right.
So
what
we
found
is
that
that
went
on
for
long
enough
that
people
started
to
realize.
Okay,
maybe
we
do
need
to
pay
attention
to
this.
B
For the reasons I mentioned: I have this huge spend and I need to look at the resources, and then containers, of course, make that even more complicated. So we do see a return to the need for that type of function, although it may not look the same. It may not look like people trying to plan out three-year purchasing strategies. It's different; it's more operational. We joke about "CapOps": is there a CapOps kind of function? And we don't really want to coin that phrase.
B
It's not a great phrase, but is there some more operationalized capacity function that needs to occur? Because you clearly need to pay attention to this, and it's falling through the cracks right now. There are people looking at the bill, and there are people writing Java code, but who's looking at this? That's the big gap. Chuck, you work a lot with the FinOps side.
C
Yeah, the initial wave seems to be more around how we allocate out the cost of this infrastructure to the various consumers inside an enterprise. But those efforts generally don't focus on the act of getting the actual resource specifications correct, either from a risk or an efficiency perspective. It's more about: okay, are you paying for your fair share, and how do we account for things that are shared services versus dedicated to a business service?
C
So it's not to trivialize those pure finance challenges, because enterprises have to allocate cost and make sure the right parts of the business are budgeting and paying for things, but it doesn't answer the question of whether my supply chain is introducing risk and/or waste. That's the detailed-level stuff.
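The fair-share allocation Chuck describes can be sketched with a few lines of code. This is a toy illustration only, not Densify's product logic: the usage metric, team names, and numbers are invented, and a real chargeback model also has to handle shared-service splits.

```python
def allocate_costs(total_shared_cost: float, usage_by_team: dict) -> dict:
    """Allocate a shared infrastructure bill to teams in proportion to
    each team's measured usage (e.g. CPU-hours). Purely illustrative."""
    total_usage = sum(usage_by_team.values())
    return {
        team: round(total_shared_cost * usage / total_usage, 2)
        for team, usage in usage_by_team.items()
    }

# Hypothetical $10,000 shared bill split across three consumers:
print(allocate_costs(10000, {"payments": 500, "search": 300, "batch": 200}))
# {'payments': 5000.0, 'search': 3000.0, 'batch': 2000.0}
```

Note that this tells you who pays, but, as the discussion points out, nothing about whether the underlying resource specifications were right in the first place.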
A
Hey, maybe this is a question for Chuck, but Andrew, in his last statement, was talking about chargeback reports, and I took that to mean that Densify provides chargeback reports for the managers, so they can take action on the information that's shared from your technology. Isn't that something that would already come from the cloud providers themselves?
C
I think you need to separate the pure cloud consumption cost from the complexities of containers that Andrew was talking about. The actual, precise representation of what a business service or an application is actually consuming is a much more precise type of answer.
C
That's what's required, and what we deliver, versus a general sense of what something is costing at a high level, which is what we often find from the cloud providers. They tend not to drill down into the detail, and our point of view is that they're not entirely motivated to drive cost efficiency when they're the ones actually selling the capacity. I sometimes compare it...
B
Yeah, I thought this was relevant, so I pulled it up while you were speaking. We view cost as kind of a pyramid: think of the top as how you're buying and the bottom as what you're buying, and I'll just build this out. We saw the cloud market progress through a huge focus on understanding the bill and figuring out who needs to pay the bill. I'm doing my job if I'm just recovering the cost from all the lines of business that are buying those services, spotting any anomalies, and then, of course, handling how you buy: reservations, savings plans. It's like picturing a fleet of vehicles with everybody driving them everywhere, and you think, well, the first step is: let's get a discount program with a gas station; I've got to get a points card or whatever. Well, that doesn't really fix the problem. The problem is actually that everybody's driving everywhere. It only partly fixes the problem.
B
It gives you a better problem. But there's a line you cross where you get into not just how you're buying it and what you're spending, but optimizing what you're actually using. I think this captures the dividing line nicely. We do things on both sides of this line, but we heavily focus on the bottom area, because we're not focused on the taxation and all that kind of stuff of the bill analysis. What we're saying is: no, you're buying the wrong things.
B
You should be buying compute-optimized instances that are half the size, or your containers should all be smaller or different. That's where we draw the line, because that's where we see the bulk of the inefficiency coming from. It's not that you're buying it wrong or don't understand it; it's that you're using the wrong resources. In most analyses we do, and you mentioned somebody did a study on this, most containers are sized wrong.
B
Absolutely, and also most cloud instances are simply the wrong types of instances. We like the cases where customers say we recommended moving onto, say, an i3, and they had never heard of an i3; that's because we look at the whole portfolio of resources. So we find this diagram useful, and cloud providers, to what you were mentioning, Chuck, kind of dabble in the top part of this diagram.
C
Right, and the other dynamic is that in the bottom part of this diagram, the personas there, the engineer and the app owner, generally don't want to take recommendations for infrastructure change from a finance view.
C
It's the number one challenge in a recent FinOps survey. The FinOps Foundation ran a survey back in February, and the number one finding, the number one issue, was getting engineers to actually make the change in order to drive efficiency. A lot of that problem is just the resistance, the sort of talk-to-the-hand, when a bill-generated suggestion for efficiency is brought forward.
B
And as you get into containers, it gets more and more obscure, because there's no way a finance person is going to know, or even have the information to realize, that all the containers are mis-specified.
B
The request values are too big, causing the nodes to inflate, causing there to be more nodes. You don't see that in the bill; you're just getting a big bill. So we see people say, let's make a container chargeback report to figure out where you're spending the money, which is useful but doesn't solve the root cause. The root cause is coming from the actual DevOps toolchain in that case. So that's a disconnect between the DevOps, FinOps, and capacity sides; there's still that gap there.
A
Okay, one final question I have, and then maybe we'll start on some of the script that was provided by Heidi, if we have time; I don't know, I'm sorry. Are there apps that don't jibe well with Densify? There are a lot of apps that are built on Kubernetes, for Kubernetes, that call themselves cloud-native apps, and there are other companies, or other developers, that are trying to forklift-upgrade legacy apps into the cloud, or kind of trying to bring the cloud to them. I don't want to point fingers at any big giant Oracle, excuse me, database vendors; that actually was a slip of the tongue. Are there any apps that you guys can't handle? Are there some workloads that are better suited for Densify, and are there other workloads where people should just stay away and give up?
B
Well, that's a good question. I don't think there are any specific apps, but I'll characterize it as types of apps or types of workloads. The one thing that's very difficult to deal with, and it's not just us, is if you have a lot of churning systems: containers or cloud instances that come and go a lot, say because they're microservices or something else. What we find is that you can see that, and you can actually tell if it's wrong or not.
B
But if it's not tagged, you don't know what to do with that answer. So it's the whole ephemeral-instance picture. If you have a big monolithic app churning along for days, that's easy: you see it, you know what it is, and if you should optimize it, you can identify a recommendation, identify the Terraform file it came from, and say, put this line in the Terraform file and it'll be automatically fixed.
B
If stuff comes and goes, let's say like a grid: we see customers running grids, either in containers or just in the cloud, and nodes come and go, but they're all identified as part of that grid. So that's great; we can say, okay, for all your grid nodes, you should do this with them, and they'd be better optimized. But then you get into stuff that nobody knows what it is, and we still have that problem.
B
It's shameful that we still have this problem where we are in the progression of the IT industry: nobody tagged it, nobody knows what it was, something just blips in and out, and who knows where it came from. So tagging is still a problem; it's always been a problem, right through all this progress we talked about. If somebody doesn't tag it and say what it is, you can look at it and say, well, that's all wrong, but you have no idea what to do with that answer.
B
So that's what I would say trips us up: it's something we give an answer for, but the answer has nowhere to go, because organizationally people haven't identified what these things are. Again, it's like this dark matter that just runs in the cloud, and nobody actually knows what it is except for the person running it, and that still exists. So you can see it, you can quantify it.
B
You can tell if it's wrong, but you can't fix it, because you have no line of sight to where it came from. Chuck, you might have some comments on other challenges, but that one, I think, is a big one. If you don't have the discipline to describe what's running in the cloud, it's like you're running a car without a blueprint: nobody knows what the pieces are, you still run into the problems, and you can't really fix them.
C
I think you've also pointed out that certain applications that are single-purpose, with built-in scaling, architected from the ground up to operate that way (maybe Facebook is a good example), where it's just a native part of the way that software runs, may also not be a case where we could add value.
B
Right, people might hand-tune that, and there might be a level of detail there, as opposed to, again, a bank that has 3,000 apps that are all different; that becomes very difficult to optimize. The ones that are treated with kid gloves still benefit from optimization and from the telemetry, but they usually have people who wrap their heads around them and are more proactive in optimizing them. It's the diversity that creates the problem.
A
So I had here in my list of things to talk about, you know, customer aha moments, and I just had one myself when you were talking about having a car with no blueprint. Customers can use something like Densify to get this type of analytics, but if they're not managing their development chain, and they don't actually have a way to take action on the feedback that your tooling can provide them, how?
B
So we do have a report, we call it a tag compliance report, that will say: here's a bunch of stuff out there that isn't described well enough to get back to whoever created it. That's one of the most effective approaches; you give that to a customer and they'll take it and say, okay, let's now go and make sure everybody tags. We see some customers that will even kill things if they're not tagged properly; we had one customer that had an auto-killer.
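The idea behind such a tag compliance report can be sketched in a few lines. This is a minimal illustration of the concept, not Densify's actual report: the required tag names and resource records are invented.

```python
def tag_compliance_report(resources, required_tags=("owner", "app", "env")):
    """Return a map of resource ID -> list of required tags that are
    missing or empty, so untagged 'dark matter' can be traced back."""
    missing = {}
    for resource in resources:
        tags = resource.get("tags", {})
        absent = [t for t in required_tags if not tags.get(t)]
        if absent:
            missing[resource["id"]] = absent
    return missing

# A hypothetical two-instance fleet, one fully tagged and one not:
fleet = [
    {"id": "i-001", "tags": {"owner": "team-a", "app": "billing", "env": "prod"}},
    {"id": "i-002", "tags": {"app": "unknown"}},
]
print(tag_compliance_report(fleet))  # {'i-002': ['owner', 'env']}
```

An "auto-killer" like the one Andrew mentions would simply terminate anything that shows up in this report for too long.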
B
What happens is, there's a point, and I'll use Terraform as an example, where they're going to make a file that says: run my app, give it a thousand millicores, give it so much memory, and do this. So that's the point right there. They don't need to change what they're writing, but there's the point where they say how to deploy it and what resources it needs.
B
You don't go and change the object, you don't go and change the VM; you don't do that in container environments or cloud environments. You change the code that created the instance or the container. So all the developer does is put a line of code in there to link it, and then they're done; they don't need to do anything more. It just says: when the engine finds a recommendation, insert it here. It actually makes the developer's life better, because they don't want to be thinking about it.
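The pattern Andrew describes, referencing a recommendation instead of hardcoding resource values, can be sketched like this. The names, template, and lookup table here are all invented for illustration; this is not Densify's actual API, just the shape of the idea: the deployment code points at a recommendation, and the engine keeps the value current.

```python
# Template for the resources stanza of a deployment; the developer
# never types concrete CPU/memory values into it.
TEMPLATE = """resources:
  requests:
    cpu: {cpu}
    memory: {memory}
"""

# Stand-in for a lookup against an optimization engine's latest
# recommendations, keyed by workload name (static table here).
RECOMMENDATIONS = {"billing-api": {"cpu": "250m", "memory": "384Mi"}}

# Conservative defaults used until a workload has been analyzed.
DEFAULTS = {"cpu": "1000m", "memory": "1Gi"}

def render_resources(workload: str) -> str:
    """Fill the resources stanza from the current recommendation,
    falling back to defaults for unanalyzed workloads."""
    rec = RECOMMENDATIONS.get(workload, DEFAULTS)
    return TEMPLATE.format(**rec)

print(render_resources("billing-api"))
```

When the engine updates its recommendation, the next deploy picks it up with no change to the developer's files, which is the point Andrew is making.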
B
They've got someone pounding a fist saying, get the Java code done for our release next Friday. They don't care about millicores; they shouldn't be burdened with that. We've seen customers put spelling mistakes in these files. They just shouldn't be doing that; they should just put code in there and worry about their apps.
A
We're about at time here, so I'm going to share my screen. This is our closing slide. How can people find out more, and what do you want them to do? You gave us some QR codes here. Certainly they can go to your website as well, and presumably there are ways to get engaged with Densify. Any final closing words?
A
Normally I ask people, hey, what are the two things that your CMO would want to make sure you talked about but didn't, so you can prevent that phone call immediately after the call here. But given that Chuck is on here, I can't ask that question. Or should I?
C
Well, you would expect to get a decent answer. I think a couple of things to take away would be that our trials, yes, we call them free, but they're also very low-effort as far as seeing what our analytics would tell you about your patient: what would the radiology tell you about the health of your patient?
C
As Andrew said, it's agentless, it puts very little load on your infrastructure, we can stand it up very quickly, and we can give you that radiology report very quickly as well. So that's an open door. And if someone just wants to explore some of these concepts with us, conversationally and consultatively, we love to do that as well.
A
Well, Andrew and Chuck, thanks so much for joining today. If anybody who's watching on Twitch or YouTube or Facebook or LinkedIn or any of those other places wants to get in touch with any of the folks at Densify and doesn't know how to do it, you can always send me an email; it's just waite at redhat.com, and I'll be sure to get whatever connections made that you need. Great, we're done.
A
Chris Short hasn't started yelling at me yet, but I'm sure we're eating into somebody else's time here, so we're going to hang it up for the OpenShift Commons briefing operator hours show for this Wednesday. And we're not going to be on next Wednesday, because Red Hat Summit is going to be consuming every resource.