From YouTube: Keynote: Using the OpenTelemetry Collector to Empower End Users - Constance Caramanolis
Description
Don’t miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe 2021 Virtual from May 4–7, 2021. Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
Keynote: Using the OpenTelemetry Collector to Empower End Users - Constance Caramanolis, KubeCon + CloudNativeCon North America 2020 Co-Chair & Principal Engineer, Splunk
https://sched.co/eoEq
Hi again, everyone. I'm Constance Caramanolis, and today I will be talking to you about the OpenTelemetry Collector and how it empowers end users.
OpenTelemetry grew out of two successful projects, OpenTracing and OpenCensus. They were working toward very similar goals, and the decision was made to merge them into one project to unify resources. OpenTelemetry is a collection of tools, APIs, and SDKs, and yes, that actually means there is support in several languages: actual libraries that you can use and adopt in your applications today. Today, though, we'll be focusing on tooling rather than the instrumentation side.
For more on the instrumentation side, I highly suggest that you watch last year's keynote by Liz Fong-Jones and Sarah Novotny introducing OpenTelemetry, and also some of the KubeCon EU talks.
We need to better understand our systems, and so it is time for a project that focuses on merging multiple telemetry formats: metrics, traces, and eventually logs. It is all about prioritizing end users, and as you'll see on our landing page, we talk about being vendor agnostic. Unfortunately, when it comes to telemetry and the resulting backends, vendor lock-in has happened. It depends where you are in your cloud-native journey: if you are lucky, or an early adopter of cloud native, vendor lock-in might not be a strong story for you.
But I can guarantee that for many of you watching today, vendor lock-in is a pain, and it actually hinders your ability to choose the proper backends for you, because vendors will provide tooling and instrumentation that works better with their own backend. That isn't the way of the future; that is not the point of cloud native. And so OpenTelemetry is strongly addressing this by making sure to provide vendor-agnostic instrumentation and tooling. Now, you're probably thinking: great, another migration. And I don't blame you.
It is annoying to think about another migration. You're probably already weighing things like: do I adopt a service mesh, or Kubernetes? What about containers, what about security? There are so many things competing for that space. Now, I am here to tell you that adopting and migrating to OpenTelemetry isn't all or nothing. There is a way for you to start adopting OpenTelemetry, and specifically the Collector, without changing much of your existing infrastructure. And so the path forward is the OpenTelemetry Collector.
It supports several popular open source protocols. I want to call this out because OpenTelemetry does have its own protocol, with a spec and APIs for metrics and traces, but we do acknowledge that there have been other protocols for metrics and traces before, and so successfully adopting OpenTelemetry doesn't mean that you have to use our protocol right away.
There are two ways you can deploy it: as an agent or sidecar, or as a standalone binary instance. One example of why you might want to run it as an agent is if you want to make sampling decisions, or if you have a high-throughput service and you want to offload the telemetry as soon as possible to some other binary, so you don't create a bottleneck in the application.
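As a minimal sketch of the sidecar option (not shown in the talk), here is roughly what a Kubernetes pod with a collector sidecar could look like; the pod name, image tag, config path, and ConfigMap name are all illustrative assumptions:

```yaml
# Hypothetical sidecar deployment: the app offloads telemetry to a local
# collector container. Names, image tag, and paths are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: checkout-service
spec:
  containers:
    - name: app
      image: example.com/checkout-service:1.0     # your service
    - name: otel-agent
      image: otel/opentelemetry-collector:latest
      args: ["--config=/etc/otel/agent.yaml"]
      volumeMounts:
        - name: otel-config
          mountPath: /etc/otel
  volumes:
    - name: otel-config
      configMap:
        name: otel-agent-config                   # holds agent.yaml
```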
Now, the internals of the collector. We've said it before, but there are three parts. The first is getting data in; the formal term is receivers. What receivers do is that once telemetry arrives in a certain protocol, it gets translated into an internal format.
The processors consume this internal format and then, optionally, transform the data (a little spoiler on that in a bit). And then at the end, the exporters forward the data along in whatever format you want.
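To make the three parts concrete, here is a minimal sketch of a collector configuration; the receiver, processor, and exporter shown (OTLP, batch, Jaeger) are stock components, but the endpoint and TLS setting are illustrative assumptions:

```yaml
# Sketch: one receiver in, one optional processor, one exporter out.
receivers:
  otlp:                 # accepts telemetry in the OpenTelemetry protocol
    protocols:
      grpc:

processors:
  batch:                # optional transform/buffering step

exporters:
  jaeger:               # forwards spans to a Jaeger backend
    endpoint: jaeger-collector.example.com:14250
    insecure: true      # illustrative; enable TLS in real deployments

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
```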
Now, when you're thinking of migrations, there are a few complex things to think about. One is that you can have multiple telemetry workflows: different teams might have used different protocols, and they might be sending things to different backends.
It gets a little complicated there. Depending on when the telemetry was added, the quality of the telemetry can differ, and so you want to standardize it. And then there's controlling where the data goes: it's great that you generate all this telemetry, but controlling where it goes in the end is so important.
Now, let's tackle the first thing, managing multiple workflows. This is naturally a simplified depiction of a service application. All these services use only one metrics format and send things to a Prometheus backend, and there are actually two different tracing protocols, Zipkin and Jaeger in this case. And within these services, they might have these values hardcoded.
What makes it really hard, say if you're the observability team, is: how do you manage all of these? If you don't provide a central way for teams to send their traces to some data collection (yes, the collector), it's hard to audit where things are going. So what I'm suggesting to you today is that you can use a collector to maintain all of your workflows. Now, there are a few ways you can do it.
What's really cool about the collector, which I hinted at in the pipelines diagram, is that since it supports multiple receivers, you can say, on one pipeline: I want to support Zipkin and Jaeger. It will convert both of those into the internal data format, and then, once it goes through the rest of the pipeline, export it in whatever format you want it to be. This allows you to have multiple protocols for telemetry data and send it all to one backend, or, you know, maybe something cooler at the end.
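As a sketch, that fan-in is just a matter of listing both receivers on the same pipeline; the backend exporter and endpoints here are illustrative:

```yaml
# Sketch: Zipkin and Jaeger traffic merged into one traces pipeline.
receivers:
  zipkin:
    endpoint: 0.0.0.0:9411
  jaeger:
    protocols:
      grpc:
      thrift_http:

exporters:
  otlp:
    endpoint: backend.example.com:4317   # the single backend of your choice

service:
  pipelines:
    traces:
      receivers: [zipkin, jaeger]   # both converted to the internal format
      exporters: [otlp]
```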
Now, how does this work out if you have these protocols and a new service comes along, and these service owners are super excited about OpenTelemetry and want to use the OpenTelemetry format? Much like what we did with the Zipkin and Jaeger trace pipeline, we can do the exact same thing: we can add the OTel protocol to the receivers for the metrics and traces pipelines and just send that data along. This means that your existing service owners don't need to change protocols.
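The delta is small: add the otlp receiver and list it on the existing pipeline. A sketch (a fragment; processors and exporters stay exactly as before):

```yaml
# Sketch: new OTel-native services are accommodated by one added receiver.
receivers:
  zipkin:
    endpoint: 0.0.0.0:9411
  jaeger:
    protocols:
      grpc:
  otlp:                 # new: accepts the OpenTelemetry protocol
    protocols:
      grpc:

service:
  pipelines:
    traces:
      receivers: [zipkin, jaeger, otlp]
      # processors and exporters unchanged
```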
So you can control the data. Yes, it does require changing where the data is exported to, but it also means that you don't have to force everyone to upgrade to the latest instrumentation, the latest library or protocol. This gives you the power to choose: hey, maybe only new services use the latest protocol; or, you know, this one service needs to be rewritten anyway, so let's do that one there. This gives you the power to choose when you adopt a new protocol.
Now, let's talk about standardizing the data. We've got all our data flowing through, but now we're seeing that the quality of the data isn't so great. There are a few causes for this. Different sources produce different quality of data; it can be different languages with different naming conventions. Maybe earlier data isn't as rich as newer data, because as we play around with systems for longer, we learn to ask better questions.
Meanings evolve: a term that we might have used four or five years ago doesn't make sense anymore. Maybe you've added more regions, where before there were no regions, and so you used a default. And accidents happen, as best as we try; sometimes, you know, PII leaks, something like that, and we need a way to catch it.
Yes, as you may have guessed, the collector does all of these things. It's called the attributes processor, and in this example I'm going to be operating on traces. Say I'm missing an attribute: I have an earlier version of the application that wasn't setting the environment, because before we just had one environment. Now we actually have testing, staging, and production, and so using this processor I can say: let me add this attribute, environment: prod, and now my data will look similar to the newer stuff.
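In collector terms, that fix is one action in the attributes processor; a sketch using its standard insert action (the key and value mirror the talk's example):

```yaml
# Sketch: backfill environment=prod on spans missing the attribute.
processors:
  attributes:
    actions:
      - key: environment
        value: prod
        action: insert   # adds the key only where it does not already exist
```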
Then I realize, looking at this a little bit more: oh, oops. You know what, I got into a little joke, saying that the west coast is the best coast, and so I set region to "best coast". It wasn't as much of an issue back then, because we were only deployed on the west coast, but now we actually have the east coast, and yes, the east coast is really cool too; it can be the best coast. Really, they're the best coasts together.
So I want to change "best coast" to "west coast". And then I'm looking at the data a little bit more and, oops, sorry, I leaked your name in setting these attributes, and that's not okay. So I'm actually going to delete anything with username, for my span to look like this now. This gives you just a few examples of how you can tweak existing data, and this is all done in the collector.
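Those two clean-ups map to the processor's update and delete actions; a sketch with the same illustrative keys:

```yaml
# Sketch: correct the joke value and scrub the leaked PII.
processors:
  attributes:
    actions:
      - key: region
        value: west-coast
        action: update   # overwrites the value where the key exists
      - key: username
        action: delete   # removes the attribute entirely
```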
I haven't put any YAML on the slides here, because I think we're all a little bit tired of YAML at the moment, but this is all done with a few lines of it. It doesn't require your application to change anything. It allows, say, the central observability team to control the quality of the data without pushing changes onto the applications. And the last part: forwarding your data along. We've got data in from multiple receivers, we've enriched and standardized it, and now we want to send it onward.
In this example here, we're only sending to one backend. But what happens if, maybe, you had a hackathon, and two teams came up with two different backends, and you want to compare those backends side by side? Well, what's great is that, just as the collector supports multiple receivers,
it also supports multiple exporters. And so with, once again, a few lines of YAML that I'm saving your eyes from at the moment, you can tell the collector: hey, send all these traces to backends one, two, and three. And this has absolutely no implication for your application. Yes: implication for your application, absolutely none!
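For the curious, those few lines might look like this sketch; the backend names and endpoints are illustrative stand-ins for the hackathon entries:

```yaml
# Sketch: fan the same traces out to three backends at once.
exporters:
  otlp/backend-one:
    endpoint: backend-one.example.com:4317
  otlp/backend-two:
    endpoint: backend-two.example.com:4317
  zipkin/backend-three:
    endpoint: http://backend-three.example.com:9411/api/v2/spans

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/backend-one, otlp/backend-two, zipkin/backend-three]
```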
Now, the statement I want to make, and I'm going to make an even stronger statement, is that this gives you the power to choose whatever backend you want. This partially targets vendor lock-in: if you can control how you collect all your data, you can also control where your data goes. And this actually holds; I know it's funny coming from me, because I do work for a vendor, but this holds us vendors accountable for providing a great backend for you.
So, to reiterate some of the things we saw today, going from complex operations to easier operations: instead of very disjointed ways of managing multiple telemetry workflows, with the collector you can have multiple pipelines and multiple protocol support, and you can do that with one binary, one deployment, and a few lines of YAML. You can standardize the quality of your telemetry using the attributes processor, once again with no code changes. And you get to control where your telemetry goes.
There are a lot of really cool features in the collector that I highly suggest you look into if you're trying to collect and manage all of your telemetry. It also provides a way for you to adopt new telemetry protocols and only change what is needed, say by having one pipeline that accepts multiple protocols.
You can then say: everything before a certain date keeps using the legacy protocol, and everything going forward uses the OpenTelemetry protocol. This gives you more power to choose. It's also easily extensible: anything that is missing, add it. If you have a custom protocol, it's easy to add a new receiver and exporter to support it. And processors do more than modify attributes.
There are quite a few processors that can enrich the data and also control how the data flows through. So I'm hoping that you're a little bit less annoyed by the call to migrate, and that this maybe provides a path forward for you to look into OpenTelemetry, the collector and also the instrumentation, without it having to be a huge migration, only potentially updating the small things that need to be updated. So maybe you're a little happier with this concept.