From YouTube: Cloud Native Classroom feat Open Telemetry
A
Hello, everybody, welcome to episode two of Cloud Native Classroom here on Cloud Native TV. My name is Kat Cosgrove, and before we get started, I have to warn you that this is an official CNCF live stream event, which means that the CNCF code of conduct is in force here. So be nice, don't say anything crude, and, you know, just generally behave. But we are watching the Twitch chat.
A
So if you have questions for me or for my guest, or, you know, just in general about whatever, let it rip. We will see them and we will be happy to answer you, if we can. I am joined today by Ted, one of the founders of OpenTelemetry, who's here to help me understand what observability means and also how OpenTelemetry can help. How you doing, Ted?
B
Doing great, nice to meet you, Kat. Great to be on the show.
A
I don't know if you're doing any better, as somebody who actually, like, supports one of these things. But so, like, what does observability actually mean in a Kubernetes context? Like, what problem are we solving here when we talk about all of these different observability tools?
B
Yeah, so observability means monitoring, all the hype aside. Okay, that part hasn't changed. It means when you're running your system, you're going to have problems, and then you're gonna have to fix those problems. Sure. And so observability is the piece between hearing... well, it's hearing that there's a problem, investigating the problem, and then you've got to go and solve it.
B
So it's the hearing-about-the-problem and investigating-it part. In order to do that, you need to have some kind of signal coming out of your system to see what it's doing. And I say it hasn't really changed because, you know, running your system hasn't really changed. The bugs are the same bugs there ever were, so in that sense, observability is not new. But there is some new tooling available, and that's actually what OpenTelemetry provides.
B
So OpenTelemetry is part of an observability system. You can think of observability as two parts: there's generating the data and sending it somewhere. That part is telemetry!
B
Okay, like, look up the, you know, Webster's definition of telemetry: it's, you know, sending signals and observations about a remote object to somewhere where you can analyze it. So OpenTelemetry is just generating that data and sending it somewhere.
B
It's not analyzing the data; analyzing the data is sold separately. And that's actually a new thing, though, because in the past, usually someone would make a tool, whether it's, you know, a closed source thing like AppDynamics or New Relic, or an open source thing like Prometheus. And it's like: cool, you make your analysis tool, and then you need to offer people instrumentation packages so they can go out and generate metrics and logs and things. And so it's kind of like a unified stack.
B
You generate the data to send it to the thing that analyzes the data, and that creates a lot of vendor lock-in, in a way that's really pernicious, even with open source stuff. It's not just a vendor-wants-money issue. It's that instrumenting your system is what we call a cross-cutting concern. In other words, you take all of those little log API calls and you just sprinkle them everywhere.
B
So you end up with approximately a hajillion logs and metrics calls just all over your system, and then, if you want to use a different tool to analyze that data, it's like: we have to go re-instrument all of that stuff. And so that's kind of like one of the core problems we were looking at. And then the other issue is this sort of siloed approach we've had, because every tool analyzed one type of data; maybe it's a metrics tool that makes metrics dashboards.
B
It's not good, because the reality is we don't use these tools separately. When you're trying to observe a system, you have this cycle you go through, of first noticing, you know... you have, like, an alert, like a metric. Here, I've got... can we share my screen? I've actually got, like, some slides about this. Sure.
B
So the first thing is, you look at your dashboard full of metrics, you kind of squint at them. Yeah. And then you try to figure out which other metrics went squiggly at the same time. Like, in the past, I've literally, like, taken a ruler or a piece of paper or something and just lined it up on the dashboard and been like: what other metrics went squiggly at the same time?
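That ruler-on-the-dashboard trick is really a correlation search, which is exactly the kind of thing a machine can do. A stdlib-only sketch with made-up data (the metric names and the spike heuristic are illustrative, not anything OpenTelemetry ships): flag which metric series spiked in the same time buckets as the alerting one.

```python
# Three made-up metric series, one sample per minute.
metrics = {
    "error_rate":  [1, 1, 1, 9, 9, 1],
    "queue_depth": [3, 3, 4, 50, 48, 5],
    "cpu_percent": [40, 41, 39, 40, 42, 41],
}

def spike_minutes(series, factor=3):
    """Indexes where a sample exceeds `factor` times the series minimum."""
    floor = min(series)
    return {i for i, v in enumerate(series) if v > factor * floor}

# Which other metrics went "squiggly" at the same time as error_rate?
alert = spike_minutes(metrics["error_rate"])
correlated = [name for name, series in metrics.items()
              if name != "error_rate" and spike_minutes(series) & alert]
print(correlated)  # ['queue_depth']
```

Here `queue_depth` spikes in the same minutes as `error_rate`, while `cpu_percent` stays flat, which is the kind of automated lining-up the conversation comes back to later.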
B
This went squiggly, this other thing went squiggly. Okay, those seem to correlate. What might that mean? And you start thinking, well, it might mean this or that, and so you start going through your logs to see, like, well: what are the transactions, what is the chain of events that may have caused this problem? You might start looking at your configuration files and be like, is something misconfigured somewhere? You know.
B
Are these, you know, these Kafka nodes configured differently from these other ones? And so you're trying to take all these different data sources: configuration data, resource data about all the different machines you're running, logging data, aggregate data like metrics. And you're trying to find correlations between all these different kinds of data, and once you've started to find some correlations...
B
...you start to build a guess about what the problem might be, and then at that point you can try to go verify whether your guess is correct. Hopefully it is, and then you go roll out a fix. And what's difficult about this process is that people tend to spend a lot of time trying to find those correlations.
B
It's really labor-intensive. Just finding the logs, for example. Literally: where are they? Where are the logs? When you've got, like, you know, 100 machines and they're all handling thousands of requests at the same time, your logs are just this blizzard of stuff, and even if you have them in a system that can index them, what index are you actually going to use to find just the logs in that one transaction?
B
We do it, but we end up spending a lot of time, when we're observing these systems, actually just trying to find the data and collect it.
B
It's one of those pain points that you get used to, and you don't realize it's unnecessary. This is sort of like when good code formatting tools started to show up. I feel like the Go programming language really kicked this into high gear. You just get used to, like, the IDE just formats your code, you don't think about it, and then you go back to some setup where you have to do it yourself, and suddenly you're like: why am I doing this?
B
This is terrible, I don't want to do this. And so I feel like what you're getting out of OpenTelemetry is the kind of correlations and indexes across these different types of signals, so that you can feed it all into one tool that can cross-index all this stuff. And if you have one tool that can do that, and the data is actually structured into a graph, it's actually, like, properly structured data with a lot of semantic meaning.
B
So you can see: this is an HTTP client request, and this is, like, a Kafka queue, and all of that. Then you can start applying some machine analysis to that data, which means the machines can start finding these correlations for you. And so you can look at some metrics and then be able to say: okay, I see the spike here in my metrics. What are example traces that are associated with this spike in my metric?
B
I just want to see it, rather than try to guess and go figure it out: just show me the actual transactions that were generating this metric, for example. So having all these different data types connected into a proper graph lets you do this kind of automated analysis. And people are going to try to sell this as, like, AIOps, and say it's just going to, like, think for you, and that's not true.
B
It's not going to do that very well, but it will be able to automate a lot of this digging around that currently has to go through a human brain. Yeah. And once you get that off of your plate, it's really liberating, because you can start testing your hypotheses very quickly. You don't have to think: well, if I want to check that, that means I've got to go dig around a bunch; it's going to take me, like, 15 minutes to get all that data together and then grep through it.
B
So, you know, you get a little cautious about, like, where you want to place your bets.
B
Like, you know, making guesses and verifying them. So that's, like, one of the big value propositions I think OpenTelemetry is bringing. And by doing that in an open source way, where we're essentially trying to create a standard, by getting all the big players on board to agree they're all going to generate and consume this data, and doing it in a way that's stable and neutral enough, with, like, the right kind of dependency chain stuff, which I can dig into, but basically we've made it potentially consumable for open source libraries as well.
B
So if you have a library that's going to get shared in a bunch of different systems, like a web framework or, you know, a database client, you can actually instrument that library yourself with OpenTelemetry. And then, when it plugs into an application with other libraries using OpenTelemetry, and the application itself using OpenTelemetry, they all automatically start talking to each other.
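The "automatically start talking to each other" mechanism can be modeled with a tiny stdlib-only sketch (this is an illustration of the idea, not the real OpenTelemetry API): the application and a hypothetical database-client library both call the same shared tracer, which tracks the current span in a context variable, so the library's span automatically becomes a child of whatever span the application opened.

```python
import contextvars
import itertools
from contextlib import contextmanager

# A toy shared tracer. The point is that application and library code
# both use the same one, so parent/child links form automatically.
_current_span = contextvars.ContextVar("current_span", default=None)
_ids = itertools.count(1)
finished_spans = []

@contextmanager
def start_span(name):
    parent = _current_span.get()
    span = {"name": name,
            "span_id": next(_ids),
            "parent_id": parent["span_id"] if parent else None}
    token = _current_span.set(span)
    try:
        yield span
    finally:
        _current_span.reset(token)
        finished_spans.append(span)

# "Library" code (a hypothetical database client), instrumented on its own.
def db_query(sql):
    with start_span("db.query") as span:
        span["sql"] = sql
        return ["row"]

# "Application" code, also instrumented, knowing nothing about db.query's span.
def handle_request():
    with start_span("http.request"):
        return db_query("SELECT 1")

handle_request()
# db.query finishes first; its parent_id points at http.request's span_id.
for s in finished_spans:
    print(s["name"], s["span_id"], s["parent_id"])
```

Neither function was told about the other's span; the shared tracer's context variable did the wiring, which is the lazy-engineer payoff discussed next.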
A
That's actually really rad, because I am, I'm like, deeply lazy, like, incredibly lazy. I'm the flavor of super lazy engineer where I will spend a bunch of extra time at the beginning of a project to wire up things that enable me to do nothing later on, or at least do less busy work, do fewer boring, repetitive things.
A
So this is appealing to the lazy part of my brain, yeah, in a pretty big way.
B
Yeah, I think that aspect is gonna be really helpful, and the whole thing is built on top of what's called distributed tracing, which used to be seen as a niche tool. But basically, what distributed tracing is, is a context that follows your code as it's executing.
B
It's an explicit context that you just have to hand around like a jerk. I'm not a huge fan of the fact that you have to hand it around by hand.
B
I actually have a YouTube video that's, like, a rant about context, that really digs into that particular issue. It is a bummer you have to pass it around by hand, but it is great that there is a canonical context object, and everyone has agreed: this is where you put your stuff. And so that allows things like OpenTelemetry constructs that have to follow your code to just go into the context object.
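What "handing the context around by hand" looks like can be sketched with plain Python (an illustration, not the real API; the function names and the context shape are made up): every function in the call path accepts and forwards a context dict, and anything that needs to follow the code, like a trace ID, rides along in it.

```python
import uuid

# Explicit context propagation: each function takes a `ctx` dict and
# passes it along, so values like the trace ID follow the execution path.
def make_context():
    return {"trace_id": uuid.uuid4().hex}

def handle_request(ctx):
    return query_database(ctx)          # ctx handed around "by hand"

def query_database(ctx):
    return log_event(ctx, "query ran")

def log_event(ctx, message):
    # Any telemetry emitted here gets stamped with the caller's trace ID.
    return f"[trace={ctx['trace_id'][:8]}] {message}"

ctx = make_context()
line = handle_request(ctx)
print(line)
```

The annoyance Ted describes is exactly that `ctx` parameter threading through every signature; the win is that every log line deep in the stack carries the transaction's ID for free.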
B
And those objects, which we call spans, are what generate this graph. So they have what's called a trace ID. I could probably just draw this, but sure, let's see.
A
Let's see, here's some drawing. While Ted's pulling up the drawing app, just a reminder to click the follow button so that you can use the chat and ask us questions. Please do ask us questions, even if they're not related to OpenTelemetry. If you have questions about Ted's virtual background or my hair, that's fine too. "This is a real background. It's my living room." Oh my god, really? "See, look, it's real!" Your living room is incredible. "Oh, thanks!" Wow, it's just so perfect. I assumed that it was a virtual background.
B
Yeah, okey-doke. So distributed tracing is basically, you know... you've got, let's say, two services here, and there are some operations that are occurring. Okay, so you have, like, operation A calls operation B, which calls operation C, and then that makes, like, a little network request here.
B
So this makes, like, a network request to this other service, which then has some more operations, and then, you know, maybe it makes some more requests to other things, and so on and so forth. So you've got this kind of chain of services, and you've got the user coming in here, you know, clicking "buy" or whatever it is that's kicking all of this off. And so in all of these operations here, you have events, which are basically like logs. You have all these little events happening.
B
So let's call that a span, and a span has an ID. And all of these spans are connected to each other, where each span has a parent, so you have a parent ID. Sorry, my handwriting is terrible. No, it's fine. And so that's the basis of your graph: each thing's got an ID, it's got a parent. Right, basic graph. And then this whole overall graph has an ID for the entire transaction, which is called your trace ID.
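The ID scheme Ted is drawing can be written out directly (a stdlib-only illustration, not a real wire format; the IDs and operation names are made up): every span record carries the shared trace ID plus its own span ID and a parent ID, which is all a backend needs to rebuild the call graph from a flat list of spans.

```python
# Flat span records, as a tracing backend might receive them.
# One trace_id for the whole transaction; parent_id links form the graph.
spans = [
    {"trace_id": "t1", "span_id": "a", "parent_id": None, "name": "operation A"},
    {"trace_id": "t1", "span_id": "b", "parent_id": "a",  "name": "operation B"},
    {"trace_id": "t1", "span_id": "c", "parent_id": "b",  "name": "operation C"},
]

def build_tree(spans):
    """Rebuild the parent/child graph from the flat records."""
    children = {s["span_id"]: [] for s in spans}
    root = None
    for s in spans:
        if s["parent_id"] is None:
            root = s
        else:
            children[s["parent_id"]].append(s)
    return root, children

root, children = build_tree(spans)
print(root["name"])                                    # operation A
print([c["name"] for c in children[root["span_id"]]])  # ['operation B']
```

Nothing in an individual record knows about the whole transaction; the trace ID plus parent links are enough to reassemble it after the fact.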
A
Okay, somebody's asking what you're using to draw, by the way.
B
I draw on Procreate on an iPad, that's, like, my favorite thing, but if I'm on desktop, I just... they pay for the Adobe thing, so I use it. Right, yeah, of course, yeah. So yeah, per span, you've got your span ID, your parent ID, and then your trace ID. Got it. And so this blob of data can then be associated with every log.
B
And then anything else you might end up generating. So span attributes are things like the operation, the duration of the operation, the start time, and then a bunch of indexes that you might want to have on all of the different events. So, for example, if you have an HTTP request, there are things like: what's the method, what's the URL, what's the status code that was returned, did this operation succeed, did it fail?
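Those attributes can be sketched as a plain data record (illustrative only; the attribute keys here echo OpenTelemetry's HTTP semantic conventions, but treat the exact names and the dict shape as assumptions of this sketch):

```python
import time

# A span for one HTTP client request, with attributes attached.
start = time.time()
# ... the actual request would happen here ...
span = {
    "name": "HTTP GET",
    "start_time": start,
    "duration_s": time.time() - start,
    "attributes": {
        "http.method": "GET",
        "http.url": "https://example.com/buy",
        "http.status_code": 200,
    },
}

# Every event logged inside this operation can inherit the span's
# attributes, so logs become searchable by method, URL, or status code.
event = {"message": "request completed", **span["attributes"]}
print(event["http.status_code"])  # 200
```

That inheritance is what the next remark means by attributes being "collectively applied" to every event inside the operation.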
B
Those kinds of attributes are collectively applied to all of the events that would occur within that particular operation.
B
So it's attributes all the way down. Likewise, the metrics have attributes all the way down. Yeah. But the point is, once you get tracing set up, then anytime you make a log or anytime you make a metric, it automatically gets these IDs stapled onto it. And so this allows you, if you find one of these things, for example, if you're looking at the log that says this thing blew up, like, maybe it's "kaboom" here, oh no, and you're like, okay...
B
I got this event, but I want to know what happened here, like, what kicked this off? I may not have all the data I need here; I may want to know something that occurred somewhere else. For example, there might be some correlation that's happening, like: this blew up, and you might be noticing that every time this blew up, this thing has, like, a certain project ID...
B
Or latency has gone through the roof, like your Kafka queue is backing up, and you're noticing all of that delay is happening from Kafka node six. That kind of stuff is going to really rapidly help. And I should mention, in addition to these spans, so this is the kind of transaction context, we call this... here we go: this is what we call trace context.
B
So you're able to kind of cross-index. Not just looking at the transactions, which are kind of like: every time this runs, what happens? You're also then looking at: what are the services, what are the resources this transaction was associated with? And so having all of that data together lets you move around a lot. And this includes metrics, right?
B
So if you generate a metric somewhere in here, that metric is going to automatically get associated with, you know, the machine that it was generated from. And then also, every time you count that metric, it's going to get associated with the transaction that caused that count. And so these are what are called exemplars, and that allows you, if you have a tool that will do this, to kind of bounce back and forth between looking at your metrics and then just looking at the transactions that caused it.
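The exemplar idea can be modeled with a stdlib-only counter (an illustration of the concept, not the real metrics API; the class name and sampling rule are made up): each increment records the trace ID of the transaction that caused it, so a spike in the aggregate can be traced back to concrete transactions.

```python
# A toy counter that keeps exemplars: sample trace IDs of the
# transactions that drove the count, alongside the aggregate value.
class ExemplarCounter:
    def __init__(self, max_exemplars=5):
        self.value = 0
        self.exemplars = []
        self.max_exemplars = max_exemplars

    def add(self, amount, trace_id):
        self.value += amount
        # Keep only a bounded sample of the causing trace IDs.
        if len(self.exemplars) < self.max_exemplars:
            self.exemplars.append(trace_id)

requests_counter = ExemplarCounter()
for i in range(100):
    requests_counter.add(1, trace_id=f"trace-{i:03d}")

print(requests_counter.value)         # 100
print(requests_counter.exemplars[0])  # trace-000
```

A real system would sample exemplars more carefully, but this is the bounce-back-and-forth link: from an aggregate value straight to example transactions behind it.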
B
And that's OpenTelemetry in a nutshell; that's the value prop. The other value prop, like I said, is by doing this in an open source, standard approach. And by "standard," I mean we're convincing everyone to use it, and we're developing it in a manner that's really focused on long-term stability: like, we're never gonna ship a 2.0 of any of our stable interfaces once they become stable.
B
That's the sort of time scale of stability and support we're thinking about, and that's what's going to allow open source software to be like: well, you know what, I could instrument myself, rather than having instrumentation come as a plug-in that kind of hooks in, which is how it currently works.
B
You can say: well, I'm going to instrument my database client or my web framework myself, and then I'm going to ship a playbook to my users. So, let them know: I've provided them all these configuration options, let them tune them, and I'm providing them this observability data, and I'm going to give them a playbook that says, you know, when you see these kinds of squigglies, it means you should tune these knobs. Right now, playbooks are something SREs just make for themselves, but my hope is that in the future...
B
...the people who write the software will be able to hand you the playbooks. So rad, yeah. Those are the big, big goals for the OpenTelemetry project.
A
Those are big goals. But so, this is something that, like, anybody can just take a crack at themselves, right? Like, if somebody wants to go wire up their personal project with this, they can just... Yes, let it rip.
B
Yeah, tracing is stable, so it's totally fine. If you look at any client that comes out, once it's hit 1.0, that means tracing is stable. We're working on the metrics API right now, so that'll be stable by the end of the year, and we're also working on logs. So you can essentially log using the tracing system today. Sick.
A
Yeah, this has been really, really great. This has been really helpful, especially for me. I hope it was helpful for the, like, 30-ish people watching us, but for me it was useful, because I didn't really understand what OpenTelemetry did before this, which is the whole point of this show, because I genuinely do not understand any of the projects I'm inviting on here.
A
That way it's more authentic, and I don't have to, like, pretend to ask a stupid question; I just, like, authentically ask a stupid question. But we are running out of time, so before we go: is there anything you would like to shill for?
B
We hang out on Slack, the CNCF Slack, in any channel that starts with otel-. There's a general OpenTelemetry channel where you can say hi, but we work in SIGs, just like Kubernetes, and so all the SIGs have a channel, and the SIGs often meet every week on Zoom. So there's a calendar, and all of that information is in our GitHub org: there's a repo called community, and that has all the info. And for KubeCon coming up...
B
Just, you know, a sneak preview: we're going to try to do a live OpenTelemetry community day. It won't be part of KubeCon, because we want to not require a KubeCon ticket and to keep the cost low for attendees. We still have to work the details out, but it will be very cheap to attend, and it'll be, like, a one-day unconference, basically a big community get-together, because we haven't seen each other because of the pandemic. Have a look out for that; we should hopefully be announcing that soonish.
A
Well, thank you so much for joining me, I really, really appreciate it. Everybody on Twitch who's still watching: the next show on Cloud Native TV is tomorrow at 1pm Pacific time. It's Fields Tested with Kaslyn Fields. She is walking everybody through it while she deploys a personal blog on Kubernetes, because what do we love? Over-engineering simple things. I do, at least. So go see Kaslyn tomorrow. I will be back the week after next, and Ted, we'll see you on Twitter. Thanks so much!