A: Now, this is an official live stream of the CNCF and, as such, it's subject to the CNCF code of conduct. So please don't add anything to the chat that would be in violation of that code of conduct. Basically, be respectful of all of your fellow participants online, and respectful of me too. Friends who are joining us live, please say hello in the chat.
B: Thank you for having me. Hello, everyone; hello, community. I'm super thrilled to be here. It's my first time joining Cloud Native Live, so let's just get started. Just a bit about myself: I do developer relations, and I've been doing it for roughly five or six years now, so I should know what I'm doing, I hope, right?
B: In my spare time, when I'm not doing developer relations, I like lifting heavy things off the floor and getting punched in the face, which basically translates to weightlifting and boxing.
B: Wonderful. Yeah, let's quickly jump into the rough agenda for today. All right, let me pull it up. Here we go. What we're going to be talking about today is based around observability, but not just observability in general: it's going to be based around OpenTelemetry, and specifically I want to demystify the whole thing.
B: It's quite hard to understand what OpenTelemetry exactly is, and also how you can contribute, both to helping the community and to helping the OpenTelemetry demo, which is also a big part of this.
B: The thing is, it's very hard to test. Integration testing is a pain. So that's basically what we're going to be talking about in the next hour or so, and I really want to start from the bare-bones basics. So, what is observability? It's like a buzzword. We know about monitoring, we know about metrics, we know about logging, but what does it specifically mean? The easiest way to explain it, I would say...
B: ...is that it's the way you observe your system. It's the way you figure out all of the unknown unknowns that are going on in your system, to help you troubleshoot any problems in it. The way you do that, specifically, is by emitting signals, and those signals are the holy trinity, or whatever we call them: traces, metrics and logs. The three pillars of observability are traces, metrics and logs. Now, specifically for OpenTelemetry...
B: It does support all of those, but the most important thing, and what we will be talking about during this live stream, is distributed tracing. It's the distributed tracing part that we really want to understand. Now, the basics of distributed tracing are not really easy to wrap your head around, because there are a lot of questions like: what is this context thing? Nobody really knows, so they just kind of wing it, and things like that. I want to help demystify that.
B: The easiest way of thinking about it: everybody knows what logs are. Everybody knows about logging; everybody has done logging (if you haven't, you probably should try it, side note). Anyway, logs are quite literally that: this is a log line. You write a console.log, or whatever log call, in your application; it spits out the log, and you know what's basically happening in your system. Tracing is very similar conceptually, except that instead of log lines you're generating something called a span, and as it says here, a span is a unit of work or operation.
B: But let's key in on this keyword here: distributed. Distributed tracing wouldn't be distributed if you only had spans. The thing with spans is that they connect into a distributed trace. Here's a perfect example: this authn span is just part of this entire distributed trace, and this entire set of spans that are connected to each other is a distributed trace, or a trace per se. This is where the actual power of tracing comes from.
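The idea of spans connecting into a trace can be sketched as a tiny, hypothetical data model. This is not the OpenTelemetry SDK, just a conceptual illustration: each span carries the trace ID plus a pointer to its parent span, and a trace is simply the set of spans sharing one trace ID.

```javascript
// Conceptual sketch only: real spans come from an OpenTelemetry SDK.
// A span records one unit of work; parentSpanId links spans into a tree.
const spans = [
  { traceId: 'abc123', spanId: '1', parentSpanId: null, name: 'HTTP POST /checkout' },
  { traceId: 'abc123', spanId: '2', parentSpanId: '1', name: 'authn' },
  { traceId: 'abc123', spanId: '3', parentSpanId: '1', name: 'charge' },
];

// A distributed trace is just all spans sharing one trace ID,
// which a backend like Jaeger renders as a waterfall diagram.
function buildTrace(allSpans, traceId) {
  return allSpans.filter((s) => s.traceId === traceId);
}

const trace = buildTrace(spans, 'abc123');
console.log(trace.length); // the authn span is one of three connected spans
```

The names and IDs above are invented; the point is only that the parent links, not the individual spans, are what make the trace "distributed".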
B: You don't have just individual log lines, where you then have to figure out what it all does and how it connects between one part of your system and another. You're basically getting a waterfall diagram, so to say, of the whole interaction one request goes through within your system, and that's the power of it. So let's take a step back; I want to show you something really quickly.
B: Jaeger is a tool for distributed tracing (and, whatever, it doesn't really matter which one). What I want to show you here is what you would see in your system if you have something like Jaeger installed. Jaeger is basically a tracing backend, like a data store for your traces: all of the traces you generate in your system you put into Jaeger, and then you can get a really nice waterfall of everything.
A: A couple of questions; one person is asking more than one question, but I guess they're wondering: how do you manage this for Python logging? And then, at which point is it best to use this framework instead of a sidecar in a Kubernetes environment? So I guess this is touching on whether these things are language specific, and on how exactly our traces are collected.
B: That is a lovely question, and the way I want to answer it is to just pull up the OpenTelemetry website. OpenTelemetry handles all of this. OpenTelemetry is a set of SDKs, tools and libraries that lets you generate telemetry in the most efficient way possible. You can generally slot it in in place of whatever logging framework or log library you're using, and use OpenTelemetry instead.
B: As you can see here, it's literally a collection of APIs, SDKs and tools that you use to instrument your code. So, basically, the same way you would call something like library.log.debug in Python, or whatever you want to use there, you use the OpenTelemetry SDKs instead. Now, the best way of explaining that is popping into the official documentation, obviously, and asking: what is OpenTelemetry? You basically get an observability framework to create and manage your telemetry data, which means the same thing as what you were doing for...
B: ...let's say your Python logging: you would slot in OpenTelemetry and say, yep, I want to use it for logs, I want to use it for metrics. But the traces part, most specifically, is the power, because with a trace you can also add log events to a particular distributed trace. So you get context for that log; you're not just sifting through logs. You can actually see that this particular request had a problem, and here are all the logs within that distributed trace.
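Attaching log events to a span, as just described, can be sketched conceptually like this. It is a hypothetical illustration, not the real SDK API; in the OpenTelemetry SDKs the equivalent call is span.addEvent().

```javascript
// Conceptual sketch: in the OpenTelemetry SDKs this is done with span.addEvent(),
// but here a plain object stands in for a span to show the idea.
const span = { name: 'validate-card', events: [] };

// Instead of an isolated log line, the event is stored on the span itself,
// so it stays attached to the distributed trace of one request.
function addEvent(s, message, attributes = {}) {
  s.events.push({ message, attributes, timestamp: Date.now() });
}

addEvent(span, 'card rejected', { reason: 'expired' });
console.log(span.events[0].message); // prints "card rejected"
```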
B: ...if it's invalid, because of any number of different reasons. That is the power of tracing: you're basically observing exactly what's happening. You can even add custom spans within your code, the same as you would with logs, but these custom spans are added into the entire context of that one distributed trace, which represents one API request. That's basically the most powerful thing, I would say.
A: It's way richer in terms of what it collects, but it also plugs into a lot of different frameworks, not just Python logging. For example, it can know about your Kubernetes, it can know about other languages, and so it has a standardized way to collect the telemetry data, regardless of where it's collecting that data from. I want to ask you, because we have more questions rolling in.
A: Do you want to go through the questions for a while, or do you want to tell your story, and then maybe we'll break for questions in a bit?
B: Yeah, let's do a couple more questions before we jump in, because some of the things I want to move on to right now are exactly what you were saying: using different languages, and what the SDKs actually entail, or what they actually offer as possibilities. I would say that's something we can jump into after a few more questions. Okay.
A: Sounds great. So Saeed has a question: can you add traces to black-box (legacy) applications, where you don't have access to the code base? Before you answer that, will you let people know what black box means?
B: It's quite literally where the only access, or actually the only knowledge, you have about the system is what you put into it and what it spits back out to you. So, basically, the request and response: you only know what you have to give it and what you should get back. You don't have any clue what's happening in the inner workings of that system.
B: One thing that I think you could do there depends on where your application is running. Let's say you have it running in Kubernetes. As it even says here on the screen, OpenTelemetry gives you an operator for Kubernetes. You would quite literally set up the operator, run it as a Kubernetes CRD inside of your cluster, and then, on the pods you want, you would specify...
B: ...the auto-instrumentation libraries, which is basically a super magical way of getting tracing enabled in your Kubernetes environment without writing any code. There are no code changes to your code base, just configuration changes, something that your DevOps team, or a platform team, or whatever you call it in your organization, would do. But apart from this operator for Kubernetes, with everything else that I've tried, you basically need to make some sort of configuration changes in code, and that's something I'm going to be showing in a moment as well. You still get auto-instrumentation, but there are some slight code changes you need to make to actually load those libraries and start generating those automatic traces, so to say.
B: That's a good question. I would say that back in the day, you would start with logging, then you would add in some metrics, then you would add in some APM, and then that would all just mesh together in some weird way.
B: If you're doing it from scratch now, I would suggest starting with tracing and going from there. Using distributed traces can, in some sense, be equated with the term APM, or application performance monitoring (I'm not sure who coined it, but it's quite common with certain tools like Datadog and New Relic, all the big vendors in the space). So you could say that if you're used to having a sort of APM monitoring system, using distributed tracing follows the same logic.
B: So I would say: try doing instrumentation with OpenTelemetry first, because that enables you to both generate metrics with OpenTelemetry and add your logs with OpenTelemetry as well. You're getting a non-vendor-locked-in way of generating the telemetry, where you can basically choose whatever backend you want: if you want to send it to Datadog, or if you want to send it to Jaeger.
A: Amazing. And now we're level; we're back to ground zero with the questions.
B: Wonderful, and with that I think it's going to be fine. I mean, I did mention the OpenTelemetry operator for Kubernetes, but the thing I also want to mention here is all of the libraries that the OpenTelemetry project offers. The languages you can choose from are immense: everything from C++ to JavaScript to Python. Basically, for any language you're using, you'll have the SDK available to generate traces from it. And to show you the specifics there...
B: I mean, I'm a JavaScript developer; hopefully nobody calls me a fake developer because I'm using JavaScript (I mean, I am, but let's not go into that). I wanted to actually talk about the instrumentation part. So, what's the easiest way of getting started with a language such as JavaScript? Let's actually pop into the automatic instrumentation and I'll just walk you through. Here, you install some libraries; cool, it's Node.js, so an npm install is perfectly reasonable.
B: It looks too good to be true, and this is the part I want to talk a bit about: the way you need to configure some code edits to actually get it to work. I think the best way of doing that is to pull something in and show you. So let's go ahead and open some code. I specifically want to show you this: on the left-hand side, you can see a service called the payment service, which we have open.
B: It's obviously JavaScript, it's Node.js, and we have a file called opentelemetry.js. Now, I have very conveniently just commented this little section out, and now I've added it back in. So with this, I'm requiring some OpenTelemetry packages, the Node SDK, I'm getting some auto-instrumentation, I'm getting some exporters, yada yada; I'm setting up my auto-instrumentation and setting where I'm going to export my traces. Now, these thirty-odd lines of code are going to auto-instrument my Node.js app.
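A file like this typically looks something like the following sketch, using the public OpenTelemetry Node SDK packages. The exact exporter and options in the demo's opentelemetry.js may differ, so treat this as an illustrative assumption rather than the demo's exact code.

```javascript
// Sketch of a typical Node.js OpenTelemetry bootstrap file.
// Package names are the public OpenTelemetry ones; the demo's file may differ.
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-grpc');

const sdk = new NodeSDK({
  // Where the generated spans get shipped (e.g. an OpenTelemetry Collector).
  traceExporter: new OTLPTraceExporter(),
  // Auto-instruments common libraries: http, grpc, redis, and so on.
  instrumentations: [getNodeAutoInstrumentations()],
});

// Starting the SDK patches the libraries before the app code uses them,
// which is why a file like this is loaded before the rest of the service.
sdk.start();
```

This is effectively SDK configuration rather than application logic, which is why enabling or disabling it is a matter of commenting one block in or out.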
A
Cool
but
that's
it
mic
drop
yeah.
B: And you're obviously looking at me like: this dude is freaking lying, there's no way that's going to be it. But yeah, let me backtrack a bit. I've commented this out: there's no OpenTelemetry instrumentation in my payment service. Cool.
B: Obviously, this test here, I think, is going to be fine. Obviously, this API test here is going to say: yeah, I'm looking for the traces, what not. This, by the way, is Tracetest; this is the testing harness that the OpenTelemetry demo is using. More on that in a second.
B: This is going to start running, and it's going to fail after three minutes, which is the timeout. It's going to fail because I have no tracing in my payment service. It's just going to say: I triggered it, nothing really happened, I didn't really know what to do. Well, let's see if my hypothesis is true, so let's move back to our OpenTelemetry file.
B: Let's go ahead and comment in our supposedly working auto-instrumentation, save that file, and go ahead and rebuild our payment service. We obviously need to stop it, rebuild it and restart it. Once again, this is going to take three seconds, so if we have any questions...
B: ...that is a perfect segue. So, just for reference, I've restarted this payment service.
B: Time to run my test again. Let's do the same API test, pop back into the UI, and reopen the last run, obviously. Let's reopen this little body here, and now...
B: If we go to the trace tab, we're going to see the trace. But one thing I'm going to talk about here: the question was about semantic conventions, a beautiful question. That's something that was also thought about in the community, because one of the big problems when pushing changes is that you don't just break the tests (or, rather, you break the code even though the tests are passing, when they shouldn't be passing); you're also not writing adequate telemetry. You're basically not adhering to the semantic conventions and the rules that the OpenTelemetry community wants you to adhere to. Which brings us to, first and foremost: we have our trace.
B: Now, obviously, we have our trace span, so that one jumble of thirty lines of code actually got us our trace back. With that thirty-lines-of-code magic, let's say, we can actually see our trace. But even more importantly, we can pull up this analysis of our trace and see: okay, our trace is 83 percent good, which basically means I have some semantic conventions that have failed. I can jump in and say: okay, so this one failed.
B
Let
me
go
in
and
pull
up
the
documentation
on
how
to
fix
it,
which
is
an
amazing
amazing
thing
that
our
Our
Community
member
just
mentioned,
which
is
I,
actually
have
some
hand
holding
I.
Don't
really
have
to
make
up
my
own
way
of
doing
this.
I
have
the
open,
Telemetry
semantic
conventions
that
I
can
follow.
If
I
really
don't
I
mean
if
I
don't
really
know
what
the
problem
here
is,
it
says,
attribute
peer
and
I
mean
who
knows:
pull
up
the
documentation.
I
can
see.
Okay,
so
here
are
the
rule.
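The kind of check that produces a score like "83 percent good" can be sketched as follows. The attribute names below follow the OpenTelemetry semantic conventions for RPC spans, but the exact rule set the analyzer applies is an assumption here, and the scoring is simplified to a fraction of required attributes present.

```javascript
// Simplified sketch of a semantic-convention check on one RPC span.
// Attribute names follow the OpenTelemetry RPC semantic conventions;
// the real analyzer applies a much larger rule set.
const requiredRpcAttributes = ['rpc.system', 'rpc.service', 'rpc.method', 'rpc.grpc.status_code'];

const span = {
  name: 'PaymentService/Charge', // hypothetical span name
  attributes: {
    'rpc.system': 'grpc',
    'rpc.service': 'PaymentService',
    'rpc.method': 'Charge',
    // 'rpc.grpc.status_code' is missing, so this span fails one rule.
  },
};

function conventionScore(s, required) {
  const present = required.filter((key) => key in s.attributes);
  return { passed: present.length, total: required.length };
}

const score = conventionScore(span, requiredRpcAttributes);
console.log(`${score.passed}/${score.total} rules passed`); // 3/4 rules passed
```

The value of the conventions is exactly this: the failing rule names a concrete attribute, and the documentation tells you what it should contain.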
B: We can pop back here to talk a bit about this magical, I'm going to say, auto-instrumentation. This code that I'm showing right now, the so-called payment service, is part of the OpenTelemetry demo, which is, I'm just going to be blunt, an amazing demonstration of what OpenTelemetry can do.
B: It's obviously maintained and run by the OpenTelemetry community, and it gives you an overview of the power of OpenTelemetry while also guiding you on how you should be using it. I think the best way of showing that is to pop into the documentation here and actually see what features are in the OpenTelemetry demo. One of the most important features, which I really want to talk about a bit...
B
Is
that
it
features
tons
of
languages,
so
it
doesn't
really
matter
what
your
background
is
if
you're
like
me,
writing
node.js.
If
you're
a
golang
engineer,
if
you
run,
if
you
write
Java
that
doesn't
matter,
every
single
one
of
the
sdks
are
showcased
in
the
in
the
demo
itself,
so
you
can
pop
in
and
actually
read
code
and
see
what's
happening,
you
don't
really
have
to
write
or,
like
sorry,
read
any
documentation
and
kind
of
fumble
through.
B
You
know
exactly
what's
happening
if
you
pop
into
the
code
and
and
look
at
the
code
and
read
the
code
and
understand
it
that
way
and
I
love
that
and
with
that
I
think
there
are.
There
are
a
total
of
11
languages
and
there
are
12
services,
and
you
can
see
here
that
we
have
the
the
payment
service
here
that
we're
going
to
be
looking
at
as
well
and
yeah
I,
just
I
just
think
it's
it's
freaking
amazing.
B: For sure. Let me just open up the architecture here, so we can walk through it.
A: Questions: okay, we have one, which is: what is OpenTracing? And Rimaz's question is: is there a way to maintain compatibility between OpenTracing, where tracing is governed by X-B3 headers, and OpenTelemetry, where tracing is governed by traceparent headers, in a distributed environment at the same time?
B: The short answer would be that OpenTracing and OpenCensus merged into OpenTelemetry in 2019. So you should just drop OpenTracing; sorry to be so blunt, but I mean, you should just move to OpenTelemetry and trash everything else you'd be doing. Obviously, there's a migration guide as well, so it's not that big of a deal. People have thought about it; people that are much smarter than me have written the guides and actually figured out a way of doing it.
B: ...version support. But I mean, if you're quite new to this: if you have existing OpenTracing SDKs or instrumentation in your code, you should probably consider dedicating some time to figuring out how to migrate. Otherwise, if you do not, you should just use OpenTelemetry.
A: Excellent. And we have another question about collecting the telemetry data: can we do tracing with OpenTelemetry as an init container or a sidecar?
B: ...a bit different, because you specifically need to write the instrumentation in your code, or you use the OpenTelemetry operator in Kubernetes that auto-injects these libraries. So it's not an easy question. I would say: use the guides that the OpenTelemetry documentation provides you with, and if you're running Kubernetes, as I obviously see you are, use the OpenTelemetry operator. It's dead simple; I've tried it. There's even a documentation page; well, I can also pull that up.
B
It's
I
mean
it's
it's
if
I
don't
so
it's
that
simple,
you
install
a
cert
manager,
you
install
the
open
Telemetry
operator,
you
configure
a
file
and
you're
you're,
basically
apply
it
and
you're
basically
done.
The
only
thing
that
you
really
need
to
do
is
that
you
need
to
specify
in
your
services,
so
the
deployment
that
you
want
to
have
the
auto
injected
Library.
You
need
to
specify
an
annotation
for
that.
Now
it's
they
have
four
I
believe
they
only
have
four
right
now
that
are
available.
B: I keep saying it's magic; it really is magic, right? So, I mean, to actually believe it yourself, try it. Dedicate half an hour of your day and try adding it. I mean, it's a CRD, so if you don't like it, you can just delete it. Nothing's really going to happen to your cluster.
A
So
I
I
think
you
just
had
the
right
thing.
The
what
language
is
this
question
is:
is
the
kubernetes
operator
available
for
all
the
languages
supported,
so
these
four
languages
are:
are
the
ones
that.
B: ...are supported, exactly. We can even pull that up in the... no, that doesn't really work; we can't pull that up in the documentation here. Let's say we want to go to the OpenTelemetry operator; that would be 'instrumentation operator'. We can pull that up, and we can see here: Kubernetes operator. We can also probably link this in.
B: It's quite strange that they don't have a list. Oh, there we go: they have annotations for injection for Java, Node.js, Python, .NET and Go, but Go has a bit more complicated setup that they need to figure out. And that is pretty much it, yeah. I need to update my documentation now, because Go didn't have that before.
B: Sweet, sweet. Yeah, with that, I think...
B: This is obviously the OpenTelemetry demo, and it is a perfect demo of what a production system at a corporation would look like. And speaking of that: the OpenTelemetry demo, I think, in the last month had 15 or so different contributors and numerous pull requests, and these people are from all over the world, from every imaginable time zone we have on the globe. How do you sync with that? How do you make sure, when somebody edits this one piece, that everything else doesn't just die, right?
B
It's
not
easy,
especially
if
you're
writing
instrumentation.
If
you're
setting
everything
up
and
then
changing
that-
and
you
obviously
don't
know
how
it's
all
running,
because
you're
a
new
contributor-
and
you
have
no
idea
what's
happening
and
then
you
just
I
mean
even
worse.
If
tests
are
passing,
but
it's
broken,
I
mean
that's
it's
a
nightmare
and
then
you
merge
that
and
then
it's
broken
and
it's
merged
I
mean
it's
just
not
fun
at
all.
B
So
I
think
that
was
the
that
was
some
of
the
major
pain
points
that
the
open
Telemetry
demo
had
because
they
had
black
box
tests
already
set
up
with
with
Ava
and
Cyprus,
and
it
looked
fine
but
sometimes
tracing
broke.
Sometimes
Telemetry
was
just
misconfigured,
sometimes
test
passed
when
they
shouldn't
pass,
and
that
was
that
was
basically
one
of
the
problems
that
that
the
community
wanted
to
fix,
and
that
was
to
just
stop
that
from
happening
more
or
less.
B: And that's when the decision to introduce trace-based testing came in. Trace-based testing is exactly what it sounds like: using distributed traces for your end-to-end testing, for your integration testing. And with that, I think the best way of showing it is to obviously pop in and do some live coding.
A: A quick question, oops, yeah: are traces supported on the database layer with OpenTelemetry?
B: Whether you use Postgres, Redis, whatever you can think of: yes. I'm going to show that in a bit as well. So if you're patient for maybe another 15 minutes, I'm going to show some interactions between an API (a gRPC API, actually) and a Redis database. So, cool.
B: This is the Docker Compose file that is part of this demo, and you can see we have tons and tons of lines of code. I can see the accounting service; you can see all of these services. We're just going to pop down to the payment service, because that's the one we will be changing and editing. What's happening in the payment service is quite simple: it's building from a file, it's loading some environment variables, and it's just starting. I mean, nothing really magical is happening there.
B: What's happening here is just a Node app: I'm loading the source from the payment service, I'm loading the proto file as well, and I'm just running it. A super simple Node.js service. Now, one thing I do want to note about the actual Docker Compose file, which is quite important if you want to contribute to the OpenTelemetry demo, is something called profiles.
B: Profiles in Docker Compose are just a way of selecting which services are going to start by default versus which services you specifically tag when you're starting Docker Compose to spin them up. The profiles that are named with 'test' are obviously going to run the frontend tests, the integration tests and the trace-based tests, and then obviously the Tracetest server, which is the testing harness that we're using; also Postgres, because it's a requirement, but that's less important.
B
One
more
thing
that
I
really
want
to
to
note
as
well,
is
that
I've
added
in
this
Dev
profile
to
the
Tracer
server,
just
because
I
want
to
have
that
up
and
running
when
I'm
doing
my.
My
red
green,
actually,
when
I'm
editing
code
when
I'm
testing
the
code
and
writing
the
instrumentation,
so
that
my
software
software
development
lifecycle
is
actually
up
and
running.
So
I
want
to
do
that.
B
So
when
I
actually
pop
back
into
my
terminal
window,
I
have
the
demo
running
here
and
I'm
running
it
with
Docker
compose
profile
Dev
up,
so
this
is
going
to
start
my
My
Demo.
You
can
see
here
I'm
running
my
open,
Telemetry
demo.
All
of
the
services
are
there
up
and
running
perfect
now,
one
other
thing
that
if
you
want
to
continue
contributing,
there's
a
really
Nifty
little
file
called
make
file
and
in
the
make
file
you
have
all
of
the
commands
set
up
for
you.
B
So
if
you
want
to
run
tests,
you
can
basically
run
the
tests
and
it's
going
to
trigger
all
of
the
the
test
containers
same
if
we
want
to
only
run
the
trace
the
trace
based
tests,
you
can
do
that
as
well
and
then,
obviously
you
can
also
just
run
start
and
and
just
run
all
of
the
all
of
the
services.
That
way,
now
one
thing
that
I
really
want
you
to
to
check
out
a
bit
is
that
this
command
is
going
to
trigger
the
trace
based
test
now.
B: And perfect. Now, what did I want to say? Yeah: the payment service. This is going to trigger all the tests that are currently in the demo for the payment service. So it's quite simple to actually get started, if you want to contribute to the demo; I mean, I would definitely suggest it.
B: Obviously, in the terminal, once the tests are finished running, you get either passed, or red as failed, for the test specs as well. So this is just the usual way of testing that everybody's used to; it's very natural. One thing I do want to show, though, for this particular step, is the way this is getting run. So let's actually dig into this trace-based tests container.
B: No, it's not... it's this Dockerfile. We can see that the only thing that really happens is that it's running this trace-testing run bash script. What's happening in that script is that it's pulling in the CLI to actually run these tests in an automated fashion, and it's generating something called a variable set. It's quite literally loading in all of these environment variables. Now, which environment variables, you're...
B
Actually
asking
obviously
the
environment
variables
for
for
the
entire
demo
and
the
things
that
basically,
let's
say
the
important
things
like
these:
the
actual
addresses
for
the
services
and
the
ports
for
their
services,
and
this
is
because
we
really
want
to
pull
that
in.
So
we
can
run
the
tests
so
we're
pulling
those
in.
So
we
can
run
the
test
and
then
what's
happening.
B
Is
that
all
of
these
all
of
these
environment
variables
are
getting
loaded
into
the
to
the
test
files
and
then
the
test
files
are
going
to
get
run
that
way
and
with
that
I
think
we
can
move
on
to
yes,
the
important
part
I.
A: Okay, we do have someone who wants to follow up with questions later. Do you have a good way to contact you?
B: Yeah, for sure. You can either do my email, or, I mean, you can jump into the CNCF Slack and just Slack me there. I'm in there; you can probably find me. It's the easiest way to find me.
A
And
then
we
have
a
question
here
seems
like
database
tracing.
Is
clients
based
or
have
the
database
tools
like
redis
postgres
Etc
built-in
server-based
Hotel
support
already,
or
is
it
not
required.
B
No
I
mean
if
so,
the
the
code
we
added
in
in
the
open
Telemetry
file
for
the
node.js
SD
case.
It's.
A: Cool. And then someone asked just kind of generally about Helm: how does Helm relate to OpenTelemetry?
B: Sweet, yeah. So, with that, this was just a basic run-through of how you can get started with contributing and also with running the trace-based tests. One cool thing as well, if we pull this up: I did obviously run this test by passing in the payment service. If I wanted to run all of them, I could just trigger it like that and run all of them, but for the purposes of this demo, I mean, we don't really need to do that.
B
What
I
think
is
going
to
be
more
fun
if
we
actually
jump
into
the
code
a
bit
more,
more
specifically
jump
into
the
code
of
the
of
the
payment
service
itself.
Now
we
did
jump
back
in
here.
So,
let's
open
up
the
open
Telemetry
file,
so
we
did
jump
in
here
and
and
add
in
add
in
our
instrumentation
one
more
thing:
I
really
want
to
show
you
before
we
move
forward
is
I
want
to
explain
what
the
the
actual
code
of
the
payment
service
does.
B
So
we
can
understand
what
it
what
it
does
and
how
to
write.
The
instrumentation
for
it
as
well,
which
is
what
we're
going
to
be
doing
in
the
next
few
minutes
and
I,
think
the
best
way
of
doing
that
is
first,
let's
pull
up
the
the
Proto
file.
So
let's
do
like
that
and
in
regards
for
the
payment
service.
Specifically,
we
have
these
four
things
that
we
need
to
look
at.
B
So
the
payment
service
has
a
charge
function,
it
takes
a
charge,
request,
object
and
it
returns
a
charge
response
very
simple:
the
charge
request
has
a
money.
Amount
parameter,
also
has
a
credit
card,
and
it's
this
specific
object,
and
then
it's
returning
a
string
and
that's
basically
it
so
just
understand,
what's
happening
here,
and
if
we
pop
back
into
the
the
service
itself,
let's
say:
first
the
index
file.
B
You
can
see
that
that's
pretty
much
the
same
thing
that's
happening
here.
We
have
a
charge
service,
Handler
we're
putting
in
the
call
request.
So
the
request
object
is
getting
passed
into
the
charge
function.
The
charge
function
is
getting
called
here,
as
you
can
see,
charge
function
and
we're
setting
we're
actually
grabbing
the
values
from
the
credit
card
that
gets
passed
in
we're
getting
the
details,
we're
validating
the
card,
we're
throwing
some
errors
if
it's
invalid
and
then
with
that
all
done,
we
specify
the
request
amount
and
we
generate.
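A stripped-down, hypothetical sketch of what such a charge handler does follows; the card check and transaction-ID format below are invented for illustration, and the demo's real payment service is more involved.

```javascript
// Hypothetical sketch of a charge handler: validate the card, then charge.
// The validation rule and the ID format here are invented for illustration.
function charge(request) {
  const { amount, creditCard } = request;
  const number = creditCard.creditCardNumber.replace(/\D/g, '');

  // Toy validation: real services use a proper card-validation library.
  const valid = number.length >= 13 && number.length <= 19;
  if (!valid) {
    throw new Error('Credit card is invalid');
  }

  // On success, return a transaction ID, mirroring ChargeResponse's string field.
  return { transactionId: `txn-${number.slice(-4)}-${amount.units}` };
}

const response = charge({
  amount: { units: 25, currencyCode: 'USD' },
  creditCard: { creditCardNumber: '4432-8015-6152-0454' },
});
console.log(response.transactionId); // txn-0454-25
```

The point of the sketch is the shape of the flow (validate, throw on bad input, return an ID), because that is exactly the behavior the spans and test specs later assert against.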
B: What we want to do now: you can see that we have a bunch of code that's commented out, and we're going to add some of this code in, step by step, to show how the trace changes and how we can change the test specs against this particular service. With that, I think the first thing we can do is just start adding in some OpenTelemetry instrumentation.
B
So, just to refresh our memory, let's pop back into the test suite and look at the last run. Let me just find it... okay, let's just rerun the whole thing and not think about it too much. Let's rerun our tests really quickly and pull this one up, just to refresh our memory on what the trace looks like right now. This is with only the auto instrumentation.
B
B
That's what I just showed you in the code. Now from here, I can go in and say I want to make sure that my status code is equal to zero: it should return status code zero. I can go ahead and save that test spec, and now every time this particular test runs, it's going to expect my RPC span to return status code zero. Sure, that's cool, but it's not really that specific. We can do much better, and here's what I would say we can do.
B
B
So let's say severity message: request. I'll also obviously need to go ahead and end the span; I always need to make sure that if I'm starting a span, I end it, and I also want to add a status to my particular span. Now, because this current service also has a charge function, I want to pop into the charge function and do the same thing.
B
So I want to go up, and again I'm getting the active span; the active span is going to be my RPC span, and I want to add some span attributes. Let's go in and add some span attributes. And here's the kicker: because we have a valid value here, since we're obviously validating the card and getting that value back, I want to add that as an attribute, so I can run a test spec against that particular value, which is actually pretty cool.
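A minimal sketch of that idea: grab the active (RPC) span and attach the business data as attributes. The tiny stub below stands in for the `@opentelemetry/api` `trace` object so the sketch runs standalone, and the attribute keys are illustrative, not copied from the demo.

```javascript
// Stub span + tracer standing in for @opentelemetry/api, so the
// instrumentation pattern below is runnable without the real package.
const activeSpan = {
  attributes: {},
  setAttributes(attrs) { Object.assign(this.attributes, attrs); },
};
const trace = { getActiveSpan: () => activeSpan }; // stub for trace.getActiveSpan()

function recordChargeAttributes(request, cardValid) {
  // The active span here is the auto-instrumented RPC span.
  const span = trace.getActiveSpan();
  // Anything recorded here becomes assertable in a trace-based test spec,
  // e.g. "card valid should equal true".
  span.setAttributes({
    'app.payment.amount': request.amount,
    'app.payment.card_valid': cardValid,
  });
}

recordChargeAttributes({ amount: 100 }, true);
```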
B
B
B
It's up and running, so let's go ahead and run the same test once again. Let's pull up our exploratory test and open it in the UI once again.
A
B
I agree, I agree, but yeah, moment of truth. Let's see... nice. So now we have the app payment amount, we have the app payment card type, and then we also have the card valid attribute. So.
B
All of these custom attributes are added, and even the span events here: you can see that we have the log, so I can even run some test specs against the events as well, which is, I mean, freaking cool. If this isn't cool, I don't know what is. And that's the cool part: let's say I want to actually run a custom test spec against my card valid value.
B
True, and I want to say it should return valid equals true, like that. Save that up, let's go ahead and save that whole set of tests, and here's what we get. So now, if this card is invalid, I'm immediately getting that value from the code itself, and it's part of my distributed trace, meaning it's part of my test harness, which means if this changes, I know something's wrong.
B
This is as white-box as testing can be, and it's actually pretty cool. But let's not just stop there; let's do some more cool things. Specifically, this setup doesn't really give me a good point of view of what's happening: it's just one RPC span, and I'm not quite sure what else is going on. I have some custom attributes, which is cool, but we need to take it further.
B
A
B
Here at the beginning, I'm just going to comment out this part and put in this part instead. What's happening here: again, we're doing the same thing, getting the active span, which is the RPC span, but we're setting it on the context and creating another span called charge-service-handler. Okay, cool. So we have another span that's a child span of our RPC span, but.
A
A
B
Function. So let's say we pop in here and do the same type of thing: we're getting the active span, but this active span is not the RPC span anymore... oh, wait, actually no, I'm wrong. It is still the RPC span, because this will create a sibling span of the span I was just mentioning.
B
A
A
B
A
B
Sibling spans that are part of the gRPC span here. What I particularly like about this is that I can say I want to make sure all of my gRPC spans are okay, cool, just targeting my gRPC spans, but then I also want to target these individually, where I want to say.
A
B
B
I have a charge-service-handler somewhere, and I know there's a charge function getting run somewhere as well. But we can take this a step further by doing it the right way: this charge span should actually be a child of this span, because that just makes sense; it's triggered from the service handler, not from the gRPC call specifically. Okay, so let's take another stab at that. Let's stop the service, and whilst that's stopping.
B
Let's just change that up in a second here. Let's comment this back up, and I want to use something called startActiveSpan instead. What that does is give you a callback function, and everything that happens inside that callback is going to be nested within that span. So it's going to create child spans within that nested callback.
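The startActiveSpan nesting just described can be sketched like this. A minimal stub tracer records parent/child links so the behavior is visible without the real `@opentelemetry/api` package; the span names are illustrative.

```javascript
// Stub tracer mimicking tracer.startActiveSpan(name, callback): any span
// started inside the callback becomes a child of this span.
const spans = [];
let current = null;

const tracer = {
  startActiveSpan(name, callback) {
    const span = { name, parent: current ? current.name : null, end() {} };
    spans.push(span);
    const previous = current;
    current = span; // make this span active for the duration of the callback
    try {
      return callback(span);
    } finally {
      current = previous; // restore the previous active span
    }
  },
};

tracer.startActiveSpan('charge-service-handler', (handlerSpan) => {
  // Everything in this callback is nested under the handler span, so the
  // charge span below ends up as its child rather than a sibling.
  tracer.startActiveSpan('charge', (chargeSpan) => {
    chargeSpan.end();
  });
  handlerSpan.end();
});
```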
A
B
Let's move that a bit, like that, and just clean it up a bit more, and also that bit there. If I've done this correctly, you can see this callback function... actually, we need to change this up, because we don't really need the context anymore; we're running this function inside that callback, within the active span.
B
So what's going to happen now is that this charge span right here is going to be part of the active span, which is now the service handler span. With that, let me go ahead and rebuild our payment service, restart it, and run the same test once again. Let's go ahead and do the... oops, no, start, like that. So.
B
Yeah, it's taking a while, but we're getting there. Now, if we rerun the test this time, it should ideally say the number of spans collected is four. Just give it a moment to iterate through them, and it's going to show us a nice visual, linear view of what's actually happening in our code. Then we can go ahead and run our tests against that and add specs, logically I would say, against those spans.
B
A
B
A
B
With the CLI command, it's just convenient as hell. It doesn't really get any simpler than this.
B
Doing that in the CLI and in code editors is really simple. If we pull up the test itself, let's say we do it here: we have the valid credit card test, where we want to just check if the card is valid; we have our protobuf file, the demo proto I was showing a moment ago; and then we obviously have our request.
B
This is the amount and the credit card we want to pass into the payment service, and we can trigger this and add in our test specs. Now, the magic of the test specs is that they're quite literally part of this file, and the way they work is through the selector here. It works on the same principle as I was showing in the UI, where you can basically go in and say I want to run a test against.
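Such a test file, with the trigger and the embedded specs living side by side, might look roughly like this. This is an illustrative sketch only: the field names approximate the trace-based test definition format being demonstrated, and the address, method, selector, and assertion values are assumptions, not copied from the demo repository.

```yaml
# Illustrative sketch of a test definition with embedded test specs.
type: Test
spec:
  name: valid-credit-card
  trigger:
    type: grpc
    grpc:
      address: payment:50051             # assumed service address
      method: oteldemo.PaymentService.Charge
  specs:
    # The selector picks which spans in the trace the assertions run against.
    - selector: span[tracetest.span.type="rpc"]
      assertions:
        - attr:rpc.grpc.status_code = 0
```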
B
B
B
B
Like that: app payment card valid should be equal to true. That works perfectly fine. Let's save that up and go ahead and run this test. Instead, if I go ahead and point to that file in my CLI... actually, let's just delete that part, and let's say it was called valid-credit-card. Whoops.
B
Obviously that happens when you're not attentive enough; we need to add it in, since it's a list. So let's pop back in there and run that again. This is going to ask me for, and this is what I like calling an ad hoc test, it's going to ask me for environment variables, which is super cool, because remember, maybe 30 minutes ago, I was talking about the environment.
B
A
B
Adding the environment in the run bash file, and here's how that gets loaded any time I want to run this from an environment that's already set up. I can see the variable set here; here's my variable set. I can also do that in the CLI as well, where I want to say it like that, and I think the flag is vars.
B
Cool, so I'm pointing to the environment, or rather the variable set, that I have already configured. I could obviously just write a file for the variable set and have it that way, but yeah, let's pop back into the UI so we're not waiting for the terminal to get back to us.
B
A
B
Whatever you prefer doing. One cool thing is that when you're running in CI/CD, you can generate all of the tests by hand if you want to actually see what's happening, and you can export the files and set them up to run automatically with the CLI in any CI/CD process that you have.
A
Amazing. We have one more question in chat, and then I think we're about finished with time, but let's go: how is privacy handled in OpenTelemetry? Is that left to the trace store, or can features like masking be leveraged out of the box?
B
B
So the collector is basically a piece of software that acts as the middleman between your system sending traces and the trace data store, and you can do pretty much anything in the collector. You have different extensions, and different processors that you can add in: with processors you can do tail sampling, head sampling, basically a bunch of different things. And for this specific thing, I think it's called masking, though I'm not quite sure what it's called.
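One way that kind of scrubbing is commonly done is with the collector's attributes processor, which can delete or hash individual span attributes before they reach the trace store. The config below is a sketch under that assumption; the attribute keys and pipeline component names are illustrative.

```yaml
# Sketch: scrub sensitive span attributes in the OpenTelemetry Collector
# before export. Keys shown here are illustrative examples.
processors:
  attributes/scrub:
    actions:
      - key: app.payment.card_number
        action: delete   # drop the attribute entirely
      - key: app.user.email
        action: hash     # replace the value with its hash

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes/scrub]
      exporters: [otlp]
```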
B
A
Yeah. Is there anything else you'd like to add to your presentation before we say goodbye today?
B
I think this is super cool, so I could probably be talking about this for another hour or so, but I don't really want to waste anybody's time. If you have any more questions: I'm going to try to add all of the examples that I showed today, and even more, to the fork that I'm maintaining of the OpenTelemetry demo, and then share it with everybody.
B
A
I'll actually try to make sure, because the official OpenTelemetry demo has this little part here at the bottom: there are different demos from different tools and vendors that maintain their own forks, just to show people how it's run. I'm going to try my best to get a version in here from the examples that I showed today, just to have it so everybody can pop in and try it if they want to.
A
Awesome. So we have the GitHub URL now to see the demos. If you want to get your own hands dirty with everything we saw Adnan do today, please go for it. This has been super informative and super fun, and you're super impressive. I appreciate you sharing your time and your expertise with us, and I appreciate everyone for coming and sharing your time with us. Everyone's time is so valuable; it's such a gift that you're giving us some of yours.
A
So thanks, everyone, for joining today's episode of Cloud Native Live. It was great to have Adnan Rahić here, teaching us about the power of traces and why OpenTelemetry embraced trace-based testing. As always, I really loved the interaction and questions from chat; y'all are the best. Here at Cloud Native Live, we bring you the latest cloud native code on Tuesdays and Wednesdays at noon US Eastern, and we'll be doing another show tomorrow. So thanks for joining us today, thanks to those who watch the recording, and we'll see you again soon. Bye.