Description
The CNCF End User Technology Radar is a quarterly overview from the CNCF End User Community which shows what end users really use and recommend in cloud native. Join Kunal Parmar (Box), Marcin Suterski (NY Times), Jason Tarasovic (payitgov.com) and Jon Moter (Zendesk) as they discuss which projects and trends they chose for Observability. Moderated by Cheryl Hung (CNCF).

Speakers:
Kunal Parmar, Director, Software Development, Cloud Native @Box
Marcin Suterski, Software Architect @The New York Times
Jason Tarasovic, Principal Engineer @payitgov.com
Jon Moter, Senior Principal Engineer @Zendesk
Cheryl: Hi everyone, I'd like to thank everybody who's joining us today. Welcome to today's CNCF webinar, which is on the End User Technology Radar for September 2020, on observability. I'm Cheryl Hung, the VP of Ecosystem at the CNCF, and I will be moderating today's webinar. We'd like to welcome our presenters today: Kunal Parmar, who's Director of Software Development at Box; Marcin Suterski, who is Software Architect at The New York Times; Jason Tarasovic, who is Principal Engineer at payitgov.com; and Jon Moter, who's Senior Principal Engineer at Zendesk. A few housekeeping items before we get started: during the webinar you're not able to talk as an attendee, but there is a Q&A box at the bottom of your screen. Please feel free to drop your questions in there and we'll get to as many as we can at the end.
Cheryl: All right, so as I've already said, today's webinar is going to be about the End User Technology Radar. I'm Cheryl; basically, I work with end users. End users are companies who are not selling any cloud native products or services, and I help them get active and involved with the open source community and get engaged with meeting each other.
Jon: Hello, I'm Jon Moter, I'm a Senior Principal Engineer at Zendesk. I work in what's called the Foundation organization, or Foundation Engineering organization, in Zendesk. We provide compute, storage, infrastructure, all the technologies and tools needed for the rest of the Zendesk engineering org to deploy, run and update their applications. I've been working with Kubernetes and cloud native technologies for probably about five years now, right around the time that Kubernetes first came out, and I've been part of the CNCF end user group for, I don't know, probably three years now, I think. So yeah, nice to meet you.
Kunal: Hey everyone, my name is Kunal, I'm a director at Box. I'm in the back end organization, where my team is responsible for the platform that the rest of the engineering team uses to run all of our applications on, so we're responsible for everything from Kubernetes to service mesh to observability. Box, and I myself, have been involved with the CNCF and the tools in this space from the very beginning. We were very early adopters of Kubernetes, and we've been part of the CNCF end user community, I think, from the very beginning as well, so I've been involved in all the end user meetings and everything.
Jason: Hello, I'm Jason. I started the platform engineering organization at PayIt, and up until recently led that team, and we were responsible for our Kubernetes infrastructure, which we…
Cheryl: Awesome. I want to thank all four of you for joining me today, and for working with me on this radar to represent the whole of the CNCF end user community.
Cheryl: So this is the second time that we've run the CNCF End User Technology Radar, where we survey the different companies in the end user community and ask them to report what solutions they're using, and whether they would effectively recommend them to other people. There are three levels. One is Adopt, meaning we clearly recommend it: we've used it for a long time and it's stable. Secondly is Trial, which is: we've used it with some success, so we can recommend it, but maybe it's only applicable for certain use cases, or we only use it in certain ways. And then the third level is called Assess: we've tried it out, we think it's promising, and we recommend that you take a look at it. Each technology radar is accompanied by some themes, which is anything that the radar team thought was interesting or unexpected or noteworthy about what they saw.
Marcin: I think it's really difficult to run any business or organization without knowing how it's doing. Having insight into the processes, the products, and the engagement with users, for example, is essential to being successful. Given that we are all running systems and providing features and products to users, it's important for us to provide them in a reliable way and understand how we're doing. And as you know, the landscape, including CNCF products, has sort of exploded recently; there are many, many things to look at and be interested in. Starting with metrics, logs and traces, there are many things to analyze, to collect and to measure, and there are also many different areas within observability, from tools to different protocols to different processes to different ways of collecting a lot of those things. So I felt this is something everyone could benefit from, knowing how other people are doing it, and we chose to talk about it and give it a try.
Jon: Well, yeah, just for everyone watching: this started when Cheryl invited us to be part of this radar team, and it's not like she, or anyone, came to us with a topic or an agenda or anything like that. We just started with a conversation about, all right, what would be both interesting to us and what we think would be interesting to the larger community. We bounced ideas back and forth, tried a few things, and observability seemed like something that, like Marcin said, is universal. A company can't run and be successful without in some way observing the state of their servers, their users, and that sort of thing. It's something that we all need to struggle with, and there's also been a lot of change and development, like when I first started at Zendesk about five years ago…
Kunal: Yeah, I think, similar to what Marcin and Jon have mentioned: from my perspective, there's been a kind of rapid increase in the cloud native space, a lot of adoption of the new tools, the new technologies, and the new kind of paradigm in which developers are writing code and operating their code. A lot of them are choosing a microservices-based architecture; that's kind of what's become the norm, and in this kind of massively distributed system, observability is really, really important. I'm sure everybody here also feels the same way. So from that perspective, having a better understanding of what the landscape looks like for observability, and understanding what our peers in the end user community are using and finding helpful for their own needs, is very compelling for us to know, so that we can understand how to chart our journey forward and what tools and technologies we can take advantage of.
Cheryl: Cool, thank you Kunal. Jason?
Jason: Yeah, I think the topic of observability made a lot of sense. It seemed timely; I think this is something that is top of mind for a lot of organizations. It seemed like there were a lot of projects, open source and closed source, standards, vendors, like software-as-a-service type solutions, so it seemed like we really could…
Cheryl: These are the companies, sized by the total number of employees. You can see that it's probably about 50/50, maybe a slight bias towards the 1,000-plus employees, so mid to large size companies, and the companies that were represented were across a range of different industries. "Software" is a bit… I'm not quite sure what that means, but you can see the rest is quite a wide spread across different industries. Jon, did you want to add anything just before I move on?
Jon: Yeah, just one note as we start to get into some of the actual numbers here. We canvassed 32 companies. We sent, effectively, a survey, a spreadsheet, so we did not have in-depth scientific interviews with super deep analysis here. And there was a larger number of technologies that the companies brought up than are represented here, so you might be asking yourself, hey, why isn't such-and-such here? The lack of something here does not necessarily mean it's not used, or that we don't like it, but we needed to winnow down to an interesting and useful set of things to have opinions on. So this is by no means attempting to be an exhaustive, authoritative view of the entire industry; 32 companies is not the entire industry. But from the data we did see, we're highlighting a couple of interesting bits of information.
Cheryl: Yeah, thank you for adding that, Jon. You make a really good point that this is not supposed to be a hundred percent objective; I don't even know if you can be 100% objective on this. But hopefully you can see, at least from this, roughly what kinds of companies and what size of companies are represented. This tech radar is really intended to be a guide for people as well; it's not supposed to say there is exactly one technology stack that's going to be perfect for you.
Jon: Well, the first thing I want to point out is that the observant might notice that OpenTelemetry is effectively a format, whereas Thanos and Kiali are products, software systems that you install and use. It points out another interesting bit about the observability space: there are SaaS providers, there's open source software you can run locally, there are data formats and methodologies. We decided to make it fairly broad, asking companies what they're using and leaving it a fairly open question, just to see what people came up with. So you might notice some interesting comparisons, like, okay, how do you compare OpenTelemetry to Thanos? That's a bizarre thing, but again, this is more just about what people are interested in using. First of all, these are all relatively new systems; Kiali, I think, has maybe been around for a year, Thanos too. So what I posit, at least from our experience, is that newer open source projects in particular draw interest, and companies are checking them out, but we didn't see a whole lot of people who would put all their chips on them and commit, like, "this is a key part of our entire infrastructure."
Kunal: What we noticed, of course, was that some of these tools got a significant number of contributions from the votes that came in, and a big fraction of those were actually people who successfully went on to adopt them. That's kind of what helped move these tools into a phase where you feel they're more in the Trial phase with the end users who are using them. Some of these names are quite common and popular, and most people know about these tools as well. So there's a good number of tools here that people have experience with using successfully for their observability needs.
Jason: Yeah, and I think we're seeing tracing as a little bit of a newer thing, and you can really see that here, with OpenTelemetry, Jaeger, LightStep, things like that, being a little bit more towards the bottom, as organizations are starting to experiment with these tools and really get on board with them.
Marcin: I think if you're running on AWS, CloudWatch is something that you have to be at least familiar with, and many, many organizations have already sort of assessed what it can do and what it's capable of. And Cloud Logging and StatsD are things that have been available for a long, long time, so people are also familiar with both of them.
Jon: Yeah, so we see some more maturity in these as well. So, apparently Cheryl has just lost her audio, and she's asked one of us to say something, so we can move on to the next one. Can you advance a slide? Sure.
Marcin: Okay, so let me talk about it a little bit. The Adopt category is where we see products and technologies that are actually pretty well established, so things like Prometheus and Grafana and Datadog and OpenMetrics; those all have been around. When we look at those, those are the tools, I think, that actually solve problems for people in reasonably good ways. Compared to that, the products and things we have in Assess and Trial seem to me like things that people are looking for solutions for, in many cases, and are trying out those tools to check if they can actually solve those problems for them, and are hoping that they will.
Cheryl: Well, awesome, thank you. I think my audio is back, so I appreciate the summary that you just gave; I think that's a really good summary. There were a lot more solutions that people gave answers for that are not listed here. I think we had more than 30 in total, and we had to sort of choose how many we could actually fit onto one radar, just to not be overwhelming. So Jason, I'm going to ask you: how did you find creating the radar?
Jason: Yeah, the sheer proliferation of tools and vendors and projects in this space made it challenging, so this dovetails right into what you just finished talking about. I think we looked at that as a blessing, like, oh, there's a lot of tools, that will be really helpful. But because there were so many tools, there were projects that we know CNCF end users are using where we didn't get any respondents who were using them, and so we can't make a judgment about a tool that no one who responded is using, or where there were just very few respondents using it. So it was both a blessing and a curse, unfortunately. That was, I think, the hardest part, and I think it made it a lot harder than we were anticipating going into it.
Marcin: It's pretty common everywhere, or almost everywhere, but there are tools that have almost 100 percent adoption, just on a smaller scale. That was sort of interesting to me, to see that there are tools that seemingly solve problems really well but are not as popular or widely adopted as things like Prometheus or Grafana. And it was difficult for us to then judge where they should land on the radar, because we didn't necessarily want to, let's call it, promote something that is good but not widely adopted yet.
Jon: One of the experiences I had is that we got information from the other end user companies, but in a lot of cases it just ended up raising more questions for me. There were several companies, Zendesk included, that said they were adopting multiple, arguably competing products, with Prometheus and Datadog and Splunk all as tools they've used. And I was wondering, okay, is one of them a legacy that you're moving on from to the new one? Is it different teams or groups in the organization that have different use cases? So there's this wondering about all the various stories and the in-depth detail, while in the radar we're trying to flatten everything into kind of a two-dimensional grid of Adopt, Trial, Assess: wondering about all the stories involved here and the reasoning for things, but still trying to converge on a useful story.
Marcin: Yeah, I sort of came to this evaluation with an open mind, because we, as an organization, went through a pretty extensive process of evaluating what we actually want to do for the future. We did PoCs with open source tools, but in the end we decided to go with a SaaS provider, and I was very curious about what other organizations do, how they do it, and what kind of tools they're adopting for all those different use cases and data points that we now have to sort of keep track of. The number of tools was sort of surprising to me, like how many tools there are that I was not aware of. We decided that we'd rather focus on building our own business, or helping our own business, rather than learning all the things that other people already know and are experts at.
Jon: So now the entire engineering organization needs to interact with observability tools: individual teams are the ones getting paged or alerted, looking at their SLOs and that sort of thing. So the scope of who this needs to work for, and the use cases, has changed dramatically, and our tooling has needed to evolve to match.
Cheryl: All right, so now I'm going to talk about the themes: what things did you find interesting or noteworthy about what you saw? The first one that the radar team came up with was "the most commonly adopted tools are open source", and when I saw this I thought, well, duh, right? Of course this is open source, because everything is open source in this world, right? So Jon, maybe you can comment on why this was interesting.
Jon: Oh yeah, I mean, like you said, it's kind of unsurprising insofar as the end user group is a set of people of whom almost all are running Kubernetes, either managed or self-run. So we've all kind of bought into the idea of open source, community-supported, cloud-provider-supported technologies, so it kind of makes sense to use other ones as well. But at least in our experience at Zendesk, once you get to a certain scale of data and company, it takes a lot of effort and time to actually run a lot of these open source tools at scale, even if it's easy to spin one up in a weekend following a blog post, that sort of thing. So it's interesting to see that even at fairly large scales, a lot of these companies are investing the time and energy to run their own Prometheus clusters or Grafana, and manage the complexity there, in many cases rather than using a SaaS provider or paying someone to handle that stuff for them. Now, to be fair, some of the companies like Datadog and Splunk were in the upper range of commonly used. So while open source is most common, even amongst this set of 32 companies there's a variety of approaches, financial trade-offs, and work trade-offs that they've all taken.
Marcin: Right, so it was actually very surprising to me that so many organizations are running those open source tools, like Jon said, probably at a bigger scale, because it's actually the opposite of what we did, or how we evaluated our situation. Or maybe those organizations didn't yet get to the point where they had an opportunity to evaluate what they actually want to do, and they just went with the flow: they started with, I don't know, Prometheus and Grafana, and as they are growing, they expand the deployment of those tools. That was very, very surprising.
Jon: Right, I mean, we are a SaaS company, so we're all like, yeah, SaaS is a good idea, everyone should do that. But take logs, for example: way back in the day, we just had a log server that people used grep on, and then eventually we moved to a system where we were pushing logs to Kafka, and Kafka fed into an Elasticsearch cluster.
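The pipeline Jon describes, services shipping logs onto Kafka and a consumer indexing them into Elasticsearch, can be sketched in miniature. In this toy version a `queue.Queue` stands in for the Kafka topic and a dict stands in for the Elasticsearch index; all names are illustrative, not from Zendesk's actual system:

```python
import json
import queue

log_bus = queue.Queue()  # stands in for the Kafka topic
index = {}               # stands in for the Elasticsearch index

def emit(service, level, message):
    """Producer side: ship a structured log record onto the bus."""
    log_bus.put(json.dumps({"service": service, "level": level, "msg": message}))

def drain():
    """Consumer side: pull records off the bus and index them by service."""
    while not log_bus.empty():
        record = json.loads(log_bus.get())
        index.setdefault(record["service"], []).append(record)

emit("checkout", "error", "payment timed out")
emit("checkout", "info", "retrying payment")
drain()
# All "checkout" logs are now queryable by service field
# rather than grep'd off a log server's disk.
```

The point of the real architecture is the same as the toy's: producers never talk to the search cluster directly, so indexing can lag or be scaled independently.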
Jon: But then that Elasticsearch cluster was getting bigger and bigger, and we realized, okay, we either need to hire several engineers just to keep it up and running and tuned and scaled, and that's expensive, or we should try to find a SaaS provider to handle logs for us. We looked at the cost of hiring people, the opportunity cost, and the fact that we figured a good SaaS provider would probably do a better job of managing logs than three or four or five engineers would on our team. So we decided, okay, let's take that money and give it to a provider; we use Datadog in this case. But there was a fair bit of back and forth in trying to determine, okay, do we do it in-house, versus admitting that's not our core competency and having someone else do it.
Marcin: The other problem that we ran into was that we want engineering teams to be independent, but that came with the cost of them deploying and maintaining their own observability infrastructure, which in turn caused another problem: there was almost no transparency across the organization about how those systems perform, or where those metrics are, where those logs are. One of our first goals was to consolidate everything, and with that, the next step was to use a SaaS provider to just give people tools and processes to adopt the platform. It was easier than managing the infrastructure for all those teams, especially since they had different use cases and different requirements.
Kunal: Yeah, this one was actually very interesting from my perspective. What we noticed was that a large number of companies had actually given opinions on a large number of the tools mentioned here, which means that they have actually tried, and have experience with, many of these tools: five or more of the tools mentioned here, which is a lot of tools for observability. As the radar team was looking at the data and trying to understand it, one of the things we realized was that the cloud native space is a very thriving community. There's a lot of interesting innovation happening here, and so there are a lot of new tools coming in that are looking to solve some of the problems that arise as people build more cloud native systems. As these new tools come in, people are looking at them to try and understand how to use them. I think that's what gives a lot of people at least some experience with these tools, enough to give an opinion on them.

Kunal: But I think the interesting thing we also noticed was that a large number of these tools are actually being used on an ongoing basis, and part of the reason we think that's the case is that observability itself is a very interesting art. You will often hear people talk about observability in the sense of logs and metrics and tracing, so you're basically looking at a lot of data from a lot of different angles, and a lot of these tools have their strengths in one, or maybe a couple, of those, but not necessarily in all of the dimensions in which you're interested in understanding your data. That's probably a contributing factor to people having to choose more than one tool in order to understand all of the data that's coming in and to be able to make decisions based on it.

Kunal: And then finally, one thing that a couple of us on the radar team had experience with, which I think contributes to this as well: a lot of us are not in the business of building observability tools ourselves; our core businesses are somewhere else. So often, once you make a choice of a tool and invest heavily in adopting it, it becomes very hard to move to a different tool. The cost of moving completely from one system to another is pretty high, and often there isn't enough ROI to want to make that investment. So that's one of the contributing factors: once you adopt a tool, you tend to stay with it, even though you might introduce another tool, or give one a shot to see if it solves another problem.
Marcin: Yes, and sometimes it can feel that every month or every quarter there is a new way to do things, a new way to deploy your infrastructure or workloads; there are new platforms to deploy to. We went from VMs to containers to cloud functions. All those things require different ways to observe your workloads and your infrastructure, and that comes with the cost of adopting yet another tool to do those things for you, and yet another protocol or pattern. It feels like a natural thing, because the technology progresses, and it requires us to try and assess things constantly, and there are just more and more things showing up on the market.
Kunal: And I think that's probably one of the reasons why, and this kind of ties together both the first and the second theme you'll see here: when you choose an open source format, it actually makes it easier for you to experiment with other tools, or move on to a different tool, at least from that perspective.
Marcin: We did; even though we adopted a SaaS platform, we still stayed with OpenMetrics for metrics, because we do want the flexibility to migrate somewhere else if we ever need to, even if it comes with a higher cost of just running those systems.
Jon: Yeah, I would say that on consolidation, even with my experience at Zendesk, there's a constant conversation going on about the relative value and use cases of different tools. Metrics, for example, like StatsD-style metrics, don't give you a whole lot of granular detail about exactly what happened; logs give you a lot more information, but they're more expensive and take more space, that sort of thing; and then there's distributed tracing. So we kind of have multiple ways of monitoring stuff which are close to each other but have their own quirks. So what's the relative value? What is most useful to teams trying to monitor a system, to know about errors, to be able to troubleshoot things if an incident does occur, to look at historical trends? They all have these trade-offs. So we end up running a couple of different tools for different use cases, and I think we're still experimenting. And you can see, I think Slack came out with a blog post the other day about their new tracing system, and Netflix has their Edgar. So it's a constantly churning domain, with lots of really interesting technologies constantly popping up.
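The cheapness Jon attributes to StatsD-style metrics comes from the wire format: each measurement is one tiny plain-text line, typically sent over UDP, at the cost of carrying almost no context. A minimal sketch of that real line format, `<name>:<value>|<type>[|@<rate>]` (the helper and metric names below are illustrative):

```python
def statsd_line(name, value, metric_type, sample_rate=None):
    """Render one metric in the StatsD plain-text wire format:
    <name>:<value>|<type>, with an optional |@<rate> sampling suffix."""
    line = f"{name}:{value}|{metric_type}"
    if sample_rate is not None:
        line += f"|@{sample_rate}"
    return line

# A counter increment, a gauge, and a sampled timer:
print(statsd_line("page.views", 1, "c"))           # page.views:1|c
print(statsd_line("queue.depth", 42, "g"))         # queue.depth:42|g
print(statsd_line("db.query_ms", 230, "ms", 0.1))  # db.query_ms:230|ms|@0.1
```

Compare a ~15-byte counter line with a structured log record or a trace span carrying request IDs and timestamps, and the granularity-versus-cost trade-off the panel describes is visible directly in the payloads.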
Cheryl: Oh, and I see we have a comment from, I guess, one of the founders and maintainers of OpenMetrics. I think this was after Marcin's comment: "That's in part why I started OpenMetrics, to force metrics into a label system and then let the best one win." So he loves to see that it's being used and thought about as such, which, thanks, is a great comment.
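The "label system" that comment refers to is the Prometheus/OpenMetrics text exposition format, where every sample is a metric name plus key/value labels. A toy renderer of that sample line shape (the function, metric, and label names are illustrative; a real exporter would also emit # TYPE and # HELP metadata):

```python
def exposition_line(name, labels, value):
    """Render one sample in the Prometheus/OpenMetrics text format:
    name{key="val",...} value, with labels sorted for a stable output."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

print(exposition_line("http_requests_total",
                      {"method": "GET", "status": "200"}, 1027))
# http_requests_total{method="GET",status="200"} 1027
```

Because any backend can parse this label model, standardizing on it is what gives end users the migration flexibility Marcin mentioned.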
Cheryl: Yeah, any other thoughts on these first two themes? We have one more to discuss. I also actually found this second one interesting compared to the first radar that I did, where there were fewer projects and it was sort of easier to choose. I felt like this one had a lot more projects, and it was much less clear which levels they should fall into. And I've been corrected: he's the founder, so, sorry, founder of OpenMetrics. Okay, third theme: Prometheus and Grafana are frequently used together.
Marcin: They come hand in hand: if you're deploying Prometheus, you're essentially going with Grafana. Sometimes there is a mention of Graphite, but those two come together as a bundle, and even if you look at things like Helm charts or different deployment patterns for those systems, they are both bundled. It may be that they just work very well with each other and provide people with what they're looking for, and because the pairing is so widely adopted, it is now easy to deploy and maintain them as a bundle, essentially.
Jason: No, I think it makes a ton of sense, but it was really striking. You may not be able to see it in the radar, in the way the data is presented there, but looking at the responses, it was almost 100 percent overlap: everyone that was using Prometheus was also using Grafana, which was interesting. I don't think it was exactly 100 percent, but it was close enough. It was very close, yes.
Jon: Oh, I just want to point out that at Zendesk we've got like a thousand engineers, we have a foundation team, we've got a vendor review board. But at the end of the day, usually when we are trying something out or doing a proof of concept, there is some engineer who is googling and reading blog posts to figure out, okay, how do I get this up and running on a test cluster, or something like that? So the process it goes through initially is the exact same as for a hobbyist or a three-person startup or something like that. So I think that with a couple of tools that fit together nicely and make it really easy to see value, it's a lot easier to get to the point of "oh, this is cool, I'm going to go tell my boss about this" and work it up the chain.
Kunal: Yeah, so we got into the observability game right around the time when we were starting to get into the Kubernetes side of the world as well, and when we started on that journey, Prometheus actually wasn't around, and so we made our bet on a different tool. We instrumented all of our systems, and today we have millions of metrics being emitted, and we have over 400 engineers in the company who are completely trained on the tool and understand how to use it. We have hundreds of dashboards and thousands of alerts set up, and so at this point in our journey, kind of to the theme number two that you see there, it's really a lot of investment for us to move from our existing solution over to something like Prometheus or Grafana. And that's not just the cost of redoing all the work; it's also, keep in mind, having to train 400 engineers on the new tool. There's going to be a period in time where we're probably going to live in two worlds, the existing world and the new world, and going through that whole transition just seems like a lot of work, and we don't see enough ROI in making that investment at this point. And again, it's not contributing towards our core business; it doesn't really buy us anything in terms of where our business wants to go. So that's kind of what's holding us back. If I were to start from scratch today, I'd probably go with Prometheus and Grafana, which are the popular choices and where end users are today, but given where we are, and the investment we've made so far, it's just a very hard sell to make.
Cheryl: This is basically what the final radar looks like. In Adopt we have these five projects: Prometheus, Grafana, Elastic, Datadog, OpenMetrics. In Trial we have six, and then three in Assess. And I'm going to ask each of our panelists for just one thought or takeaway or something that you learned from going through this exercise.
B
I think it was nice and somewhat validating to discover that so many companies use so many tools. It's not just us that has three or four different observability tools running at the same time. And we were all sharing thoughts and ideas with the other people on the team, like, oh yeah,
B
this is something that we all struggle with. I think the value of talking to your peers, whether to get advice, commiserate, or just have someone who has an opinion on this that isn't biased, that isn't trying to sell you one thing one way or the other, is really valuable to me.
E
Yeah, this was a really fun process, and it really wouldn't have been possible without Cheryl and Julie's coordination, so big thanks, mad props to them for that. As for my thoughts on the radar and the process: again, I want to reiterate that there are a lot of tools out there, and unfortunately, because of the subset of data that we have, we can't make judgments about tools that that subset of CNCF end users didn't use widely. We know that there are a lot of good tools that didn't make the cut because they didn't have the votes, so we couldn't make a judgment about them. That's not a reflection of the quality of the projects or tools that aren't on here.
D
I really enjoyed the process and the collaboration, and I'm happy that I was able to learn how other organizations do things. As for the radar, I read it as: there are things that solve problems really well, there are things that solve problems well for some people, and there are things that people hope will be solved for them in the future. When I think about OpenTelemetry, it's something that we are excited about and waiting for.
A
Awesome, I'm glad that you enjoyed the process. And Kunal, last word to you.
C
Yeah, so I will echo what the other panelists have said and thank Cheryl and Julie and the entire CNCF team for helping shepherd this whole thing. I think this is a very valuable effort. I also want to thank all the fellow panelists; it's been a lot of fun having these conversations, learning what everybody thinks about what's happening, and trying to come up with some way to wrangle all of this data together to present it in some meaningful way.
C
I'm super excited to see the large number of tools, the various kinds of problems they solve, and how end users are using them. For me, it really reflects some of the challenges in running a distributed system in a cloud-native way. I'm super happy to see the amount of interest and investment happening in the industry, which is leading to newer tools coming up that are looking to solve some of the problems that
C
we as end users are facing in trying to build this kind of an architecture. So for me, a big takeaway is that this observability landscape is pretty large, with lots of people looking to solve interesting problems. I would encourage the people who are building and creating these tools to continue all the hard work that they're doing, and the end users to share all of their feedback with CNCF
C
and with the creators of these tools, so that they can continue to iterate and make these tools better, and we as end users can benefit from that.
A
That's a great summary. I also want to say thank you to all of you for working with me on this. I've actually enjoyed the process a lot as well and learned a lot from all of you.
A
So the last thing to mention: we have a new website, radar.cncf.io, where you can go and see all of the information that we've just run through today. You can find the previous radars there as well.
A
We are pretty much out of time, so I'm sorry for not being able to have time for questions. But again, you can put your questions on the GitHub issue, and I will go and check it out and answer afterwards.
A
Thank
you
so
much.
I
really
appreciate
everybody's
time
chatting
today
and
just
the
last
things
to
wrap
up.
I
want
to
say
thank
you
to
all
of
our
presenters
for
coming
and
joining
today,
and
the
webinar
recording
and
slide
will
be
posted
online
later
today.
We
look
forward
to
seeing
you
at
a
future
cncf
webinar.
Thank
you
and
have
a
great
day.