A
So I think we are all set; now we can just start our wonderful webinar this morning. Hello, everyone, I'm Sasha, and I would like to thank you for joining us at the Next-Gen Observability Using Open Source Monitoring webinar. Today's speakers are coming from OpsCruise: Scott Fulton, the CEO of OpsCruise, and Alok, its CTO.

Before we start, just a few housekeeping rules. The talk will be about half an hour long, followed by the demo part, and then we will have a Q&A session at the end of the talk. As I mentioned, you can submit your questions in the Q&A section below during the webinar, and we will try to answer all of them at the end of the talk. This webinar will also be recorded.

So you will have a chance to review it once we send you a link later on. Thanks again for joining, and I'll let Scott Fulton, CEO of OpsCruise, take it from here. So, Scott, the stage is yours.
B
Yeah, well, thank you, Sasha, and welcome, everybody, this morning, or this afternoon depending on where you're dialing in from. I know there are a lot of distractions in the world today, and everybody's moved to virtual, so just getting any time with folks like you is precious. So thanks for making the time this morning. What we want to talk about is this real shift that's going on in the industry.

Obviously, many of you are on this journey to cloud-native apps. We've seen many changes since the 90s, when we had distributed applications: moving to the cloud and its architecture and benefits, really refactoring those apps into microservices, containerizing them, and whatnot. That shift is in full swing, and Gartner estimates, as you can see in the quote below, that a vast majority of workloads will be containerized in the next three to four years. That agenda, as you probably know, is driven in most organizations by the developers and the business units, and the folks that are in operations or site reliability engineering teams are in many cases struggling to keep pace with that kind of change, because there's just a heck of a lot of new challenges.
B
You have an order of magnitude more components to manage that make up these modern applications, you're releasing a heck of a lot more frequently, and you have many more applications than you had a decade ago. So much of the business runs on third-party services these days: if you're an e-commerce company and you're doing logistics and tracking, that's a third-party service you're calling out to; if you're calculating sales tax, that's probably another third-party service you're taking advantage of; and if you're using geolocation, that's probably a third. So the dependencies are pretty significant in a modern application.
B
And so then, when you think about monitoring those apps: modern apps need a modern approach to observability. The kinds of tools that were popular and worked in the 90s aren't going to be the same tools that are architected to support the current set of environments. What we see is that the tools of the 90s and the 2000s that served the apps of that time were fairly proprietary. You'd go in, set thresholds, and triage things and do resolution by kind of jumping between different screens of different tools. You tended to have one tool for logs, another for traces, another for metrics, and so the net effect was they were pretty expensive to buy and run. That's just not feasible anymore.
B
For the next generation of apps, what we see the industry moving to, somewhat independent of OpsCruise, is an open-source foundation for monitoring. A vast percentage of the R&D spend of those companies on the left, and I worked at several of them over my career, went into the sheer work of making sure you had agents that supported all the platforms, making sure you could aggregate all the metrics and logs in a central place, and having good visualization technology for dashboards. That's where a big amount of the spend was. Today, that foundation layer is much more feasible through the open-source tools.
B
So what's driving this change? Why is open-source monitoring becoming more popular? Well, we have a few key theories around that. One is that the infrastructure tech stack itself is increasingly open source. Think of the databases you use to build modern apps; it might be an Elastic or a MySQL, and those are open source. Twenty years ago, most apps were built on Oracle. Same thing with messaging, same thing with analytics. So the tech stack is open source, and it's just a natural extension that the monitoring will be open source as well. There are also a lot of new instrumentation standards out there. You have something like cAdvisor around containers: it doesn't matter what VM that container runs on.
B
It has the same set of instrumentation that you can depend and rely on, and the community is driving that. I think there are also a lot of changing priorities for these modern apps. It used to be critically important to understand what was happening in the code of a single VM or a single set of binaries; now, with microservices, it's more important to understand what's happening between all those services. So the networking layer, the latency, the response time and whatnot are almost as important, if not more important, than what's happening in a small segment of the code. And then one of the last is that instrumentation can be used for a lot of different purposes. You used to aggregate all this stuff and ship it off to your monitoring tool, and then you'd aggregate
the same stuff, or something very similar, and ship it off to your security tool, and then do it again for your capacity planning. Now, with these standards, there's the possibility that you can collect this stuff once, own it in these tools, and then ship it off for different business purposes, as opposed to every proprietary tool aggregating and collecting this stuff on its own.

So those are some of the key things driving the change that we see, not to mention, obviously, the cost of the proprietary tools that are out there. So what are some of the popular ones? There are many; the ones that we most closely follow are around the CNCF, the Cloud Native Computing Foundation, which has Kubernetes as its kind of anchor and foundation project.
B
That's
you
know
really
becoming
you
know
the
operating
system
for
cloud
applications,
and
so
kubernetes
provides
a
rich
set
of
telemetry,
both
from
a
configuration
perspective
and
a
metric
perspective
and
a
networking
perspective.
You
know
that
can
be
used
and
analyzed
in
in
higher
level
tools
and
then
on
the
top.
We
have
kind
of
four
key
areas:
metrics
through
prometheus
prometheus
was
one
of
the
first
projects
to
graduate
cncf
after
kubernetes.
B
It's
a
time
series
metrics
database,
you
know
very
simple
to
use.
You
just
have
a
metric
and
and
some
key
value
pairs.
It
has
a
very
powerful
sql
query
engine.
You
know
behind
it,
hundreds
of
different
exporters
for
every
database
and
middleware
component,
that
you
can
imagine
it
was
authored
by
julius
volts
who's,
one
of
our
board
advisors.
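To make the data model Scott describes concrete: in Prometheus, a metric name plus a set of key-value label pairs identifies one time series. The toy sketch below is our own illustration, not from the webinar; the metric and label names are hypothetical. It mimics that model in plain Python and renders it in Prometheus's text exposition format:

```python
from collections import defaultdict

# Each unique (metric name, label key-value pairs) combination
# identifies one time series in the Prometheus data model.
series = defaultdict(float)

def inc(metric, **labels):
    """Increment the time series identified by metric name + labels."""
    key = (metric, tuple(sorted(labels.items())))
    series[key] += 1

inc("http_requests_total", method="GET", path="/cart")
inc("http_requests_total", method="GET", path="/cart")
inc("http_requests_total", method="POST", path="/checkout")

# Render in the text exposition format a Prometheus server scrapes.
for (name, labels), value in sorted(series.items()):
    label_str = ",".join(f'{k}="{v}"' for k, v in labels)
    print(f"{name}{{{label_str}}} {value}")
```

The two `GET /cart` increments land in one series while `POST /checkout` creates another, which is exactly the "metric plus key-value pairs" simplicity mentioned above.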
He came out of Google and invented Prometheus at SoundCloud, and he's still very active in that community. But Julius is complemented by some 500 other committers out there in the community.

A close cousin to Prometheus is Loki, which is all about aggregating logs in a multi-tenant system, with some of the same attributes as Prometheus, so you do all your kind of searching and indexing and so forth of logs through Loki. For traces, there's the Jaeger project. Jaeger came out of Uber.
B
I
I
I-
and
it
really
is
at
the
intersection
between
observability
and
networking
and
helping
you
understand-
that
the
trace
path,
latency
errors
between
all
the
microservices
and
containers
that
make
up
a
modern
application.
B
You
know
it
does
sampling
and
and
really
its
primary
primary
role
in
life
is
for
offline
analysis
to
do
debugging.
You
know
the
code,
it's
it's
really
a
favorite
of
the
of
the
development
community
themselves,
so
net
net.
You
know
all
of
these
these
these
four
areas.
They
have
a
powerful
set
of
features,
a
lot
of
contributors,
a
really
rapid
adoption
and
then
the
great
thing
about
these
projects
that
we've
seen
they
haven't
really
been
bastardized
by
all
the
commercial
vendors.
That's just great to see. And just to give you a sense of how fast it's really moving: for those of you that have a little gray hair like myself, you'll be familiar with Nagios. Nagios was probably the most popular open-source monitoring tool in the late 90s and early 2000s, and it's still active; it still has a very active community. But in the scheme of things, relative to all the proprietary monitoring foundations that were out there, it had relatively small market share. Compare that to Prometheus, which is only four to five years old: it has 50 times the adoption if you look at the different metrics around GitHub stars, and it has 10 times more contributors.
If you do a search analysis on Google, it's off the charts. So, in just a short span, much greater adoption than we saw in the prior generation of tools, for many of the reasons that I mentioned on the prior slide.

So, to get a perspective on this, we want to bring in Carl, who's been a Fortune 500 CIO for much of his career and has seen a lot of these trends. So I'll turn it over to you, Carl.
C
Okay, Scott, soundcheck: everything good there? You can hear? We're all good, okay. Well, thank you, and it's great to be here with this crew, great to be talking to infrastructure and operations professionals, because we all know the development community tends to innovate, innovate, innovate, but then they kind of forget about the management, the operations, the monitoring, and all these things that have to happen. I'm speaking as a senior technology executive, one that has seen multiple generations of this.
C
When you look at value, you look at it from three different lenses: addition to the top line, reduction of costs, and then, of course, risk. So let's talk a little bit about each one of those. From a revenue perspective, poor monitoring can impact your net promoter score, and it can impact your customer stickiness or abandonment, depending on how you measure it. It also lowers your agility, and frankly, it can create poor reputation issues, which lead to some risks that you may run. In terms of cost, you know how frustrating it is to have an outage in your modern environment and you're chasing your tail trying to figure out where it is. Is it the VM? Is it the Kubernetes pod? Where is this thing failing?
Like Scott said, the complexity has gone up, the number of components continues to go up, and so it's difficult to figure out where the outage could be or where it could come from. In the case of OpsCruise, what they do is predict some of these things, because they have a model, a machine learning model, that is actually looking at the behavior of the application.
B
No, I was just curious: of those three, and we have a lot of people on the phone that are probably trying to justify these kinds of things to bosses like you, which of these three is the hardest to measure? Any tips?
C
Well, we'll start with the easiest: we all know what our cost structure is, so the easiest one is to take it from a cost perspective. Sometimes the hardest one is revenue, because nobody will look at operations as a source of revenue. But certainly if the system goes down and you cannot manufacture what you make, or you cannot process what you issue, it depends on the business that you're in: if the system is down and you happen to be in health care and you can't take care of patients, revenue is an issue, and reputation is an issue on the risk side. So those two tend to be harder; cost is always easier. But I'll offer a fourth one.
C
Okay, and the fourth one that I'll offer, which is not on this page, is experience. This fourth one could be a wrapper for all three of these. It's very hard to measure, but net promoter score could be something that you could measure, especially when you have your crown jewels in the digital world.
C
So, given all that, the question is: why are we using yesterday's generation of systems monitoring and applying it to today's modern workloads? It's a question that you have to ask yourself. You have a couple, three choices, really: use something from the past that has been retrofitted and deal with intrusive agents, or, worse yet, ask application developers to instrument their code and put telemetry in their code so that you can monitor them. We all know, after 30 years of trying that trick, that it doesn't work.
C
These legacy types of systems monitoring technologies are also very noisy, with lots of data, and they require manual intervention. They tend to be siloed: focused on, you know, the mainframe operating system, or focused on the distributed database side with Oracle or whatever you may be using. They're also proprietary, so you're locked in. So trying to take a legacy systems management technology that has been extended to, quote-unquote, manage Kubernetes is not the answer: they're intrusive and they're very expensive. Which leads to my next point. Scott?
B
Yeah, can it go the other way? Can the modern tools manage the legacy apps, or is that too much to ask?
C
Not likely. It is not likely that the modern tools, say, for example, in the case of you all with OpsCruise, are going to invest in managing CICS on the mainframe, right? You're going to leave that to IBM; they did it 30 years ago. That's not going to happen.
C
A potential integration with logs or messaging, maybe. But the pipe dream of the single console, the single intergalactic console that gives you everything you need from all of your systems: we tried that with HP OpenView in the 90s, and we all know where that led, right?

B
Right, yeah.

C
So this is additive, and why wouldn't you? You have modern workloads; you manage them with modern systems management technology, with observability and prediction of the behavior of your application.

B
Got it.
C
You got it, okay, good. Which leads me to my next point, which is: why would you use proprietary, closed-source systems (on the next slide, Scott; right there), which typically lag in innovation in the operations and infrastructure space? Instead, I would want to use the power of the community: the hundreds of developers behind Loki and Prometheus and all the things that Scott talked about, and it's going to continue to grow. All those frameworks, they're going to continue to advance them. They're going to use the power of the innovation that comes, typically, out of Silicon Valley, and you should expect nothing but continued innovation over the proprietary, closed-source types of systems from your traditional vendors that exist out there. Now the question is how you'll integrate these and how you make that work.
C
That's something that we're going to learn more about. And then, finally, think of what OpsCruise is doing as the commercialization of this CNCF, or Cloud Native Computing Foundation, stack. Think of them as commercializing what is delivered through the open source. So instead of you having to roll your own, which we have done for past technologies, right? Everybody remembers trying to roll your own distribution of Linux: downloading those distributions and then figuring out how to make them work. Well, Red Hat made that easy. Same thing with a vendor like Cloudera: they made Hadoop easy, as opposed to downloading five things and then making version 7.6.2 work with version 3.4.7.
C
That work is being done for you with this commercialization of the CNCF systems management, observability, and the other families of software that exist within what Scott just talked about. That's what they do, and so you can try it yourself. And with that, Scott, I'm going to turn it over to Alok.
D
Thank you. I'm just trying to get my screen back up here; I'm going to take myself off video so I can share my screen, guys. All right. So, picking up where Carl and Scott left off, let's talk about what the OpsCruise solution looks like. Think about it: once we have this open ecosystem, with the microservices, with the Kubernetes primitives, and all the collectors, what do we need to do? We have a very dynamic environment, and we need to be able to stay ahead of it.
D
As Carl mentioned, we need to be more predictive rather than reactive. So what OpsCruise does is build on top of that open-source framework for monitoring that comes in these Kubernetes environments and essentially understand the application, which we'll talk about in a minute. Once we understand how the application is changing, we detect issues, because we build a behavioral model of every component that comprises the application. That allows us to go on to fault detection and isolation, because we understand the distributed structure, and to causal analysis, because, at the end of the day, that's what you want to get to, right? And the whole idea, of course, is to reduce the possibility of outages and stay ahead of the curve. Now, something contrarian that we do, as we have mentioned: we don't think ops or SRE teams can be guessing what metrics to look at. There are so many components; you can have hundreds and thousands of containers and services, and trying to guess all the metrics is hopeless.
D
There are tens of metrics in each, and trying to figure out what each threshold should be is not feasible at all. So one of the advantages here is learning an ML-driven behavior model contextually, and then understanding when those behaviors have changed or there's a problem. And we want to do this, of course, without instrumenting the code or touching the application. Of course, we also avoid having to maintain this open-source tool framework, because that's already there. Why are we doing this? As we said, being predictive reduces outages.
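To illustrate the contrast Alok draws between hand-set thresholds and a learned baseline, here is a minimal sketch of the idea. This is our own illustration, not OpsCruise's actual model: derive the normal range of a metric from its own history and flag only strong deviations, so no per-metric threshold has to be guessed.

```python
import statistics

def deviates(history, sample, k=3.0):
    """Return True if `sample` lies more than k standard deviations
    from the mean of `history` (a learned baseline, not a
    hand-picked threshold)."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return sample != mean
    return abs(sample - mean) / stdev > k

# CPU% history for one container: stable around 40.
history = [38, 41, 40, 39, 42, 40, 41, 39]
print(deviates(history, 41))   # within learned normal behavior
print(deviates(history, 95))   # flagged as a behavior change
```

A real system would do this per metric, per container, with context and seasonality; the point is only that the boundary comes from the observed data rather than from an operator guessing a number.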
D
So then let's talk about how this works. If you think about the left-hand side being all those open-source frameworks that we've already talked about: we essentially, to use a term that one of our customers coined, use the "digital exhaust." We pull that data in and use it both to understand the application and to build out the intelligence about it, both its behavior and its structure. Then, in the background, for every component or service that comprises the application, we build a predictive model.

That allows us, once we know that, to find out when there are deviations. Again, as we said, you don't have to look for metrics or search for what to look for; a change of behavior across the whole Kubernetes estate will tell us when there's a problem. Because we know the structure, we can then run fault isolation. And if you can isolate the fault, we have enough detailed information from our ML model, which can also provide you explanations of where the model shifted and why those shifts happened.
D
That points to the potential problem, and it allows us to narrow down the area where we need to focus and then find the corrective actions, because we've narrowed it down to a granular level. Now, if you can automate that, as we can in some specific Kubernetes cases, you can close the loop. The reason it cycles back is that applications change, infrastructure changes, scale-out and scale-in happen, code changes are made, so this has to happen on a continuous basis. But essentially, what OpsCruise wants to do is help your ops teams and SRE teams be on top of it on a continuous basis, and keep running in the background to help ops teams stay ahead. It's important to understand, given all the different metrics and telemetry they're using, how we move toward this. One of the primary things for us to be proactive, and for ops teams to be able to handle that, is to start with the real-time metrics.
D
It's
like
watching
your
self-driving
car,
looking
at
what's
happening
to
see
whether
shifts
are
happening,
as
opposed
to
which
has
happened,
which
is
looking
at
events
and
logs
of
failures
and
then
working
backwards.
That's
like
csi
for
your
applications,
so
using
the
metrics
that
we
have
and
you'll
notice,
there's
one
other
entity,
we've
added.
We
also
have
to
look
at
the
cloud
infrastructure
right
so
using
these
open
store
framework
and
the
cloud
infrastructure.
As
we
said,
we
understand
the
application
structure.
D
So
that's
how
we
tie
and
log
in
this
sequence,
and
then,
if
you
see
on
the
bottom
of
the
screen,
many
times
failures
and
problems
happen
because
the
chain
is
introduced,
the
change
could
be
because
of
an
application
code,
change
and
infrastructure.
Something
happens
kubernetes
when
that
happens,
because
we
can
also
capture
continuously
what
the
structure
and
the
application
topology
is
using
something
called
time
travel
for
taking
snapshots.
We
can
now
tie
those
changes
back
into
that
and
kind
of
close
the
loop.
D
Now, this whole sequence of how we do it, and how we integrate all of this holistically, is key for us to be proactive and predictive. And remember the big advantage we have in the Kubernetes ecosystem: we don't have to worry about having multiple specific tools, whether it's for logs or metrics or tracing. It all comes in within that whole Kubernetes ecosystem, so we are leveraging all of that in one place. This is the beauty of working with this open Kubernetes framework.
D
How
do
we
deploy?
As
we
said,
we
are
not
in
the
application
space
right,
so
we
don't
have
to
be
invasive.
We
can
just
sit
in
the
monitoring
plane
and
essentially
collect
data
from
those
environments
and,
as
you
can
see,
we're
collecting
information
on
the
cloud
infrastructure,
the
kubernetes
configurations,
the
real-time
metro,
supremeties.
We
collect
flows
that
essentially
are
the
flows
at
both,
therefore,
and
layer,
seven
between
every
service,
para
services
that
comprise
the
application
and
finally
collecting
logs
into
the
framework.
D
So
we
can
confirm
just
using
all
of
these,
and
these
are
deployed
as
open
pods
in
your
monitoring
plane.
We
collect
those
compress
it
and
send
it
down
to
our
sas
controller,
where
we
do
the
processing
and
then
feed
it
back
directly
to
the
operations
for
real-time
viewing,
alerting
tying
it
into
their
existing
incident
management
and
ticketing
system
right
so
again,
a
full
open
framework
without
being
intrusive,
while
building
and
leveraging
the
intelligence
that
we
have
to
build.
On
top
of
that
framework,.
D
So it's a good time now to switch into kind of a demo mode. Let me do a little setup here. What you're seeing here is a sample, simulated e-commerce application that we have. If we look at these gray boxes, there are six of them; these are standard Kubernetes services. However, two of these, the load balancer at the ingress as well as the database, are also backed by managed cloud services, which is typical, because not everything your application uses lives inside your Kubernetes estate.

We're going to use an Elastic Load Balancer as an example, as well as RDS for the database service on the back end. When we go into the demo, you'll see how we've auto-discovered and auto-built this structure.
D
The top path runs through three services: load balancing, the web service, and the cache and cart management down to the database. The bottom path, the blue ones, are your typical pods; there are eight of them, and this is running on a five-node Kubernetes cluster. Using those collectors, we essentially build out and start providing that capability.

So what we want to do is two parts here. First, we'll talk about how Prometheus enables open-source monitoring. I'm going to spend a few minutes on that because, as we said, Prometheus allows us to collect metrics from pretty much everything, in this case the Kubernetes services, along with the open-source dashboarding capability of Grafana. We leverage that, and we'll show you how that's done.
D
Then the question is: how do we add the observability on top? First, understanding the application and its structure as well as the topology, which we call visibility; that's what we'll show as the first step using OpsCruise. Second, what we call our behavioral analytics, which leverages our ML; we'll give an example of how we proactively detect a problem that leads to fault isolation. And third, tying back with time travel to see what changes may have caused it.
D
Now, just to make this interesting, we will inject a failure mode into the application, on something that is, as we call it, under the radar. For example, it has a cache element; we will change it and reduce the cache hit ratio, to see what the implications are and whether our system detects it. So, without much further ado, let me switch screens here into the live demo.
C
I think, while we're waiting here, it'd be a good reminder for people who have questions to submit them through the Q&A button. We welcome those questions, and we can start parsing through them.
D
We are continuously getting the CPU usage on a per-pod basis. There are about six of those eight services here (some of them are outside the scope of this): cart cache, cart server, DB server, engine, and experiment. Those are the six services we talked about. We can get the CPU consumption at any point going over this, and of course we have the full history of it.

Similarly, for memory usage, you can see the memory usage of each of these. And then finally, as you can see if you go down here, we can also look at the writes; primarily, the writes are happening on the database server, and when I hover on that, you can see the amount of writes happening for all of those. So, right off the bat, once you've deployed Kubernetes, you essentially can pull up all the metrics. Now, what's the challenge here? We know the metrics.
D
What we don't know, if you remember the structure of the application, is how the pieces relate to each other. This is what we need to build. If you were the developer, you would know that, but often SRE teams don't have that knowledge. This is where we come in, and what we will do is show you what it would look like if OpsCruise were there. So what you're seeing here, for example, is exactly that same structure that we showed you before, basically auto-discovered and built from the application itself.
D
So if I hover on this, you can see, coming from the ingress side, it actually goes into the ELB. If I click on it, you can see we've captured automatically, from Kubernetes, that this is the load balancer from the cloud vendor. It goes into the NGINX service, into the corresponding container, and into the one inside the application, and when I hover on that, you can actually see the average response time between the NGINX service and its container.
D
If I look at any of these Kubernetes services, of course, we have all the aspects of this. What about the container that's sitting on it? Remember, the container is not just part of a service; it's actually sitting on a Kubernetes node, on top of the infrastructure. So we provide you that dependency as well. Here's an example: the application tells you the primary metrics that are being used, without your guessing, what's coming in and what's going out on which containers, and then, for this service, what it shares on that Kubernetes node, and then, of course, where that Kubernetes node is sitting in AWS, including its storage. So essentially, you have the full service dependency at the top layer, as well as the underlying dependency on Kubernetes and the cloud vendor.

So let's just go into that briefly. As you can see here, this gives you the visibility of all those five Kubernetes nodes, and if I can just move this out here and scroll down (it's a little tricky)...
D
But
you
can
see,
there's
one
two,
three
four
five,
and
if
you
notice
that
ip
address
it
tells
you
exactly
what
the
kubernetes
node
is
all
the
allocation
and
how
does
it
allocate
across
the
containers
that
sit
on?
In
fact,
we
want
to
give
you
a
real-time
view
of
that,
and
so
this
is
an
example
where
you're
trying
to
understand
how
much
has
been
consumed
in
real
time
across
all
the
key
services,
as
you
can
see
for
the
database
on
the
cache,
how
much
cpu
and
how
much
memory.
D
Why is this important? If you did not set the right limits, and one of them is not set properly, Kubernetes can decide that an existing guaranteed pod should take more of the resources, and your pod can be evicted. This allows us to be proactive: by using this data we can make sure pods are moved or rebalanced across nodes. It also tells you what the sizing should be, based on that.
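The "limits" Alok refers to are the standard per-container resource requests and limits in the pod spec; a pod whose requests equal its limits is placed in the Guaranteed QoS class, which Kubernetes evicts last under node pressure. A minimal, hypothetical fragment (names and values are illustrative only, not from the demo):

```yaml
# Hypothetical pod spec; requests == limits -> Guaranteed QoS class.
apiVersion: v1
kind: Pod
metadata:
  name: cart-server
spec:
  containers:
    - name: cart-server
      image: example/cart-server:1.0   # placeholder image
      resources:
        requests:          # what the scheduler reserves on the node
          cpu: "250m"
          memory: "256Mi"
        limits:            # hard cap enforced at runtime
          cpu: "250m"
          memory: "256Mi"
```

Pods with requests below limits (Burstable) or with none at all (BestEffort) are evicted earlier when a node runs out of memory, which is why unset or mismatched limits can get a pod evicted by a neighboring Guaranteed pod.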
Now, let's go back to the application view that we have. Of course, we can look at metrics, right? So if you look at any of these metrics, remember, all of that is coming in directly from Kubernetes, and you know exactly what it's consuming over whatever time frame you choose. You'll notice we don't have to guess, because we know what metrics are important and what's not; for example, there are no writes going on for this component. This allows us to build that behavioral model.
D
So let's switch to that piece: how do we understand problems? The way we understand problems, as we said, is that instead of guessing, we have built behind the scenes a predictive model, collecting data as the application runs, for every container and service that comprises it. You'll notice there are two containers that look red here. What that means is that a behavior change was detected. As I mentioned earlier, we reduced the cache hit ratio. What's the implication of that?
D
Well, you and I know that means I'm going to have to pull more data out, because it's not in the cache, and it's going to go downstream, even though the total requests coming into the e-commerce service have not changed. Can we detect it? The answer is yes: if, for the same number of requests, the cache hit ratio is no longer the same, the system will say, let me tell you what I found. I found that I'm transmitting more data out, pushing out more on what we call our supply side, while the total amount of incoming counts is unchanged.
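The check described here — a dropped cache hit ratio under an unchanged request rate — can be sketched as a simple baseline comparison. This is an assumed illustration of the idea, not the actual OpsCruise model (which the speakers describe as an ML behavioral model):

```python
from statistics import mean

# Illustrative sketch: flag a behavior change when the cache hit ratio
# falls well below its learned baseline even though the incoming request
# rate is essentially unchanged. Thresholds here are arbitrary examples.

def behavior_change(history, current, ratio_drop=0.1, rate_tol=0.2):
    """history: list of (request_rate, hit_ratio) samples from the
    learning window; current: one (request_rate, hit_ratio) sample."""
    base_rate = mean(r for r, _ in history)
    base_ratio = mean(h for _, h in history)
    rate, ratio = current
    rate_stable = abs(rate - base_rate) / base_rate <= rate_tol
    # Only a ratio drop at stable demand signals changed behavior --
    # a drop during a traffic spike might just be "more traffic".
    return rate_stable and (base_ratio - ratio) > ratio_drop

history = [(100, 0.95), (105, 0.94), (98, 0.96)]
print(behavior_change(history, (102, 0.70)))  # True: same demand, fewer hits
print(behavior_change(history, (101, 0.94)))  # False: normal operation
```

The key property is the one Alok calls out: nothing has failed, yet the relationship between demand (requests in) and supply (data out) has shifted, and that shift alone is the alert.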
D
So this is going on continuously and being managed, and we can provide insights into this even if the cache component hasn't failed, right? So this provides the detail on that, and of course we can tie it to the application's metrics history, which will go back and show when that change happened. We know exactly when it occurred, how much there was, and also the amount of CPU it took to process.
D
So that gives you a proactive view, starting to look at when things happen that might cause a problem. You'll notice there's a problem here on the database side; it went red. So if I click on that, it says: hey, I've got more CPU processing going on, probably because I have more data to process. If I go to the insights again, I can tie it back to the metrics history, and sure enough. The question an ops team asks when they see this is: why did it go up?
D
So one of the things we can do is look at the metrics across those, right? I'm going to increase this to about an hour, and because it's going to be kind of small, I might change the screen and go to a detail view. It's hard to see the detail here, although you can make it out, so what I'm going to do is show you the detailed view of how they are tied together. This is essentially taking the path from the cache to the cache management of the database.
D
Looking at the metrics and understanding the behavior and the structure allows ops teams to be ahead of time, to say: hey, if it's invalidated, can I act on it? It allows us to do fault isolation without trying to do, you know, detailed offline tracing. Finally, how does this tie into what change happened? One of the things that we can do is monitor, because we do something called taking snapshots. We can look at when these changes are happening at any one time.
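The snapshot idea — capture deployed state at intervals, then diff the snapshots around the moment a metric went red — can be sketched as a plain dictionary diff. This is a hypothetical illustration of the concept; attribute names like `cache.maxmemory` are made up for the example:

```python
# Illustrative sketch: diff two configuration snapshots to surface what
# changed between the time things were healthy and the time they went red.

def diff_snapshots(before, after):
    """before/after: flat dicts of attribute -> value."""
    added   = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys()
               if before[k] != after[k]}
    return {"added": added, "removed": removed, "changed": changed}

t0 = {"cache.image": "redis:6.0", "cache.maxmemory": "512mb"}
t1 = {"cache.image": "redis:6.0", "cache.maxmemory": "128mb"}
print(diff_snapshots(t0, t1)["changed"])
# {'cache.maxmemory': ('512mb', '128mb')}
```

Pointing an ops team at the one attribute that changed, timestamped against the metric shift, is exactly the "changes cause problems" workflow described next.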
D
A little bit later, about 26 minutes later I think, when we introduced that change in the cache, if you click back you can see the event. Again, nothing has failed. The question is: what caused this change? And what we provide you, as this is happening, is the ability to say: go look for the differences, because, as we know, changes cause problems. If I click on this, we know exactly at what time those happened, when the database started increasing CPU. And that is the question at this point.
D
Let me quickly conclude on what we have been talking about here. If I go back to this, if I were to just summarize and point out what we just did: what we have essentially done is provided you a view of how we went about it. Let me just go back to the screen.
D
Essentially, we leveraged what we call the frictionless, non-invasive, existing monitoring framework that came with Kubernetes, using that to build the application understanding: structure, behavior, dependencies, all of that, so that we know what's going on. We have this inside view, holistically, of the application. Also, without adding any code instrumentation, you can see live those changes, those flows, this dependency, without adding any additional heavyweight infrastructure, and use that intelligence in a runtime mode.
D
With that, I'm going to pass this over to Scott, and thanks for taking the time.
B
Thanks, Alok. So we'll switch to Q&A now.
B
So we've got some questions that have come in via the chat window and some others that have come into the actual Q&A, so let's check that out. I've answered some of them. This one's probably for you, Alok: application infrastructure behavior itself changes quite frequently from release to release. For example, introducing a layer between two services, like Apigee, will cause increased delay and possibly also CPU and memory usage.
D
Good question, and it's very relevant. So one of the things that we do is, whenever we see a change that we detect from Kubernetes events or any other events — even, I didn't show this, a naming change or an infrastructure change — we pull that up from the cloud layer or the Kubernetes layer. When that happens, we essentially switch from being in a pure predictive mode to our learning mode. We start collecting data and essentially go into this learning mode.
D
So for any component that changed, or a new service that's been introduced, we start collecting data. Technically, you could start getting an understanding of the application behavior in one hour, but really what you want is a wide range of operational behavior, because what our ML model does is actually understand correct regions of operation for different demands and services.
D
So typically, in the default mode, within about 24 hours we get a full understanding of an application and we can get a predictive model. More importantly, because we may not have seen the full range of demands or requests coming in, every time we see a new change or a new demand that we have not seen, we go back in to collect that data and make incremental updates. So if a change happened because you introduced a new service or an infrastructure change, we would detect that and basically kick in to do that learning.
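This kind of incremental learning — maintain running statistics per metric, treat samples outside the learned band as unseen demand, and fold them back in — can be sketched with Welford's online algorithm. This is an assumed, minimal stand-in for the ML model described, not its actual implementation:

```python
# Illustrative sketch: an online baseline that updates incrementally as
# new samples arrive (Welford's algorithm for running mean/variance).

class OnlineBaseline:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Incorporate one new sample without storing the history.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def in_range(self, x, k=3.0):
        # True if x falls within k standard deviations of the baseline.
        if self.n < 2:
            return False  # not enough data yet: keep learning
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) <= k * std

b = OnlineBaseline()
for sample in [100, 102, 99, 101, 100]:
    b.update(sample)
print(b.in_range(101))  # True: within the learned region of operation
print(b.in_range(250))  # False: new demand -> collect data, update model
```

A sample that falls out of range is not necessarily a fault; as Alok notes, it may simply be a demand level the model hasn't seen, which is the trigger to go back into learning mode.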
B
Okay, good, Alok. And then I see a second one here: are you able to see the exact data in the data flow, given the lack of encryption? Maybe either you or Sridhar.
D
Yeah, I think Sridhar is our resident expert on this. Right now we are collecting the data at the host level, so, assuming that the app is not encrypting it, we can collect the data, because that's what Prometheus does to pull the data out.
D
So if it's coming up from there, we can see the data in the clear, and by the way, that's where the industry is going. But I'm going to let Sridhar comment on that, because he's tracking this much more closely. Sridhar, do you want to add to that?
E
Sorry, sorry, I had my headset on mute. I trust you can hear me properly. Regarding that question: it depends on the environment we are set up in. In certain cases we get visibility into the clear text, in which case we can look at certain details like the HTTP headers and do some level of packet-inspection-style analysis. If we are operating in a service mesh environment, we have specific configurations that we have done for Istio.
E
That will give us what we need. And in cases where we absolutely don't have any access to the clear text, then we would be operating at Layer 4, but we still discover all the connections, discover all the interactions and who's talking to whom, and we are able to wire all of that into the rest of the information that we get from Kubernetes and the cloud. I hope that answers your question.
B
Okay, good. Thanks, Alok and Sridhar. Another one in via chat: other monitoring vendors have started to integrate with these same CNCF tools. Is that any different from what OpsCruise is doing? So I can take that one.
B
Yeah, it's quite different. From the ground up we've architected on top of these tools; we're embedding these tools and we're providing support and distributions for them. Most of the traditional, legacy monitoring vendors that are out there have just taken the data from these tools and pushed it up to their cloud or their central platform. They haven't re-architected their stack to sit on top of this stuff, and so it's quite a different model: you'll be on your own to support those tools and maintain them and upgrade them and so forth. And then further, in the case of our deployment, the central repository for this kind of data is those tools themselves, so that you can use it, as we talked about earlier in the presentation, for more than just monitoring: you'll be able to use that same data for capacity planning and security and whatnot. The legacy vendors tend to suck up all that data and store it in their cloud or their central platforms, and you pay for that.
C
You're taking some of the lessons of the past, right, and you're applying them. I mean, these were conversations that we had: if we were to do this all over again, instead of having, you know, BMC or HP or Dynatrace, or whatever, pick your legacy package, you had an opportunity to do it all over. And so here we are with something that you all have created that is for the modern world, next generation. Exactly, exactly.
C
And for the application development crew, right. Because, you know, as a senior leader in IT, you're hearing both the voices from infrastructure and operations, which can sometimes conflict with application development, and you've got to listen to both and say: folks, we all have to get along.
B
Yep, absolutely. Good one. One other one in chat, Carl, that might be a fit for you. When you've embraced these types of open source tools in your organizations, what's been the primary driver? Is it a cost dimension? Innovation?
C
Innovation, innovation, innovation. I mean, certainly there's a cost element to it, but nothing is free. Nothing is free. If you think you can just download the open source package, do the free thing and build right on top of it, then you become the integrator. So it does make sense to bring in some commercialization, which OpsCruise does for this CNCF set of frameworks. But at the same time, to me, in this day and age, it's innovation.
C
It's leveraging the power of the community. There's a cost issue; there are also legal and security ramifications. Right, you've got to make sure your lawyers are on board, you've got to make sure your CISO is on board, because just downloading open source and running it, you know, you may get into trouble. I know it happened to me maybe five, six years ago, where we got a lawsuit from a company that was a patent troll, because we downloaded a little itty-bitty piece of software that all the developers downloaded and embedded in their stuff.
C
So yeah, as always, you've got to involve all the parties: your attorneys, your compliance people, your vendor management people, in addition to the IT professionals.
B
Got it, got it. Good insights, Carl. Okay, and then there's another one here on chat, maybe for you, Alok or Sridhar: how scalable are these tools like Prometheus and Loki? I've seen them pretty pervasive in startups, but are they really ready for large enterprise environments?
D
Sure, I'll take a first shot at that. The good news is that in the last two years, scalable options have actually emerged around Prometheus. They're also CNCF projects; the two dominant ones are Thanos, which gives you horizontal scaling as well as the ability to collect more data, and Cortex. And we have actually had customers, Scott, if you remember, that were using Thanos to scale. There was one customer, I recall, that had about 29, no, 39 sites they were collecting data from and actually federating it, and they were using Thanos, and we've come across more of them.
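One practical reason Thanos (and Cortex) scale Prometheus so cleanly is that their query frontends expose the same Prometheus HTTP v1 API, so existing queries and dashboards keep working when you federate many sites; only the base URL changes. A minimal sketch (hostnames and ports below are illustrative examples, not values from this webinar):

```python
from urllib.parse import urlencode

# Illustrative sketch: build a Prometheus range-query URL. The same
# helper works against a single Prometheus server or a Thanos Querier
# federating many of them, because both speak the same HTTP API.

def range_query_url(base, promql, start, end, step="30s"):
    return f"{base}/api/v1/query_range?" + urlencode(
        {"query": promql, "start": start, "end": end, "step": step})

# Standard cAdvisor metric exposed via the kubelet; namespace is an example.
cpu = 'sum(rate(container_cpu_usage_seconds_total{namespace="shop"}[5m]))'

# Same call against plain Prometheus...
print(range_query_url("http://prometheus:9090", cpu, 0, 3600))
# ...or against a Thanos Querier that federates many sites:
print(range_query_url("http://thanos-query:10902", cpu, 0, 3600))
```

That API compatibility is what lets a 39-site deployment federate into one query endpoint without rewriting any of the queries built on top of it.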
B
I'd just add that those tools were invented, incubated, in some of the largest IT infrastructures in the world, like Google and Uber and whatnot, and you'd be amazed: the largest brick-and-mortar retailer in the world is using these platforms.
B
Yeah, I mean, you can go to opscruise.com and just sign up and register on our portal. There are a few adapters you'll install if you have an existing Prometheus or Loki environment, or you can take the full package and install that in your environment, and you'll start getting visibility in about 30 minutes, and the analytics will start producing results in about a day.
B
So yeah, also feel free to jump in. And the last and most important thing: we've got a short drawing here, which is what all of you are really here for, this Oculus VR headset. I've scratched all the names onto little pieces of paper and I'm going to draw them out of my secret mug.
B
Matt, from... well, he's a lucky one.
B
I think he is, because I checked at the beginning and screenshotted those names, and so Matt is our winner this morning. Congratulations, Matt. I'd like to thank everybody for attending. We will post the webinar online on our YouTube channel over the next couple of days, and if you have any other questions, feel free to drop us a note. Thanks again, Carl, for making the time this morning. Thank you.