A
Hello everyone, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie Talvastow, I'm a CNCF ambassador as well as a senior product marketing manager at Camunda, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies.

A
They will build things, they will break things, and they will answer all of your questions, so you can join us every Wednesday to watch live. This week we have a really special program, because we had a last-minute cancellation, but gracefully and thankfully we had speakers step up. So thank you so much to our speakers this week for stepping up on late notice; it's even more amazing to have them with us.
A
B
Well, thank you, Annie. Hi everyone, thanks for joining us. This is Aloke from OpsCruise. So today, as you probably found out as a last-minute surprise, I'm going to talk about OpenTelemetry, but the specific topic is tracing. Most of my talk, if you've seen it, is about how one uses tracing on the ops side of things, because tracing has always been used elsewhere — so, just to set the stage.
B
A
We can see the screen — all good.
B
Perfect. Okay, so, all right. So again, it's going to be all about OpenTelemetry tracing, and you'll notice the specific term that I'm using: real-time observability. That's the key thing I'm going to talk about; hopefully that gets your interest. Let's get this out of the way — and I think I'm probably preaching to the choir, because you are all practitioners.
B
The biggest challenge for cloud native is complexity, dependencies, dynamism — you all know that. The good news is you can get all the data you want. The question is: do you have adequate insight at that scale? And tracing is by no means simple — it's one of the biggest, most complex beasts. So I'm going to jump in and tell you a little background on what we do and how we came into this. OpsCruise, if you're not familiar with us, is probably the one observability company that builds entirely on open telemetry, meaning all open source or CNCF projects.
B
You know, even Grafana Tempo and OpenZipkin, which is what we'll talk about; flows, like Istio or eBPF; and of course Kubernetes and its changes. And of course we also have to capture cloud metrics. So if you look at the bottom: metrics, logs, flows, config and changes, all of them being collected from open source CNCF agents running, of course, in the control plane, in the Kubernetes plane and the monitoring plane — not in the I/O plane.
B
So what we do to help SRE and DevOps teams is pull all of that telemetry together by collecting it from the native collectors such as Prometheus — if you have cAdvisor as a DaemonSet, or node exporter, and so on and so forth — and combine it contextually, so you get the full dependency map, both service to service as well as service to the Kubernetes layer to infrastructure, and then understand what is happening by using machine learning to learn the behavior of every service. And then we also do automated causal analysis.
B
The whole idea — the message I'll give you — is that there's no reason to go with proprietary agents, etc., when CNCF and OpenTelemetry are providing all of that. This is the way the future is going, and this is where you want to be: where the puck is going to be. So, given we are doing this, the specific area that we are looking at is this: we know about metrics and logs and events, right?
B
We
can
see
them,
we
can
capture
them,
but
the
biggest
question
is
then:
what
about
tracing
itself
and
the
reason
I
bring
that
up
is
okay,
one
more
thing
before
I
do
that
this
is
gonna.
Give
you
that
same
thing,
what
I
just
talked
about
more
of
a
workflow,
as
you
can
see,
on
the
left
hand,
side
is
all
the
open
source
cncf
projects
for
telemetry.
What
we
actually
do,
as
I
said,
was
we
build
out
that
dependency?
B
We
check
the
behavior
model,
using
contextual
knowledge
and
machine
learning,
and
then
using
that
behavior
model
and
deviation.
We
detect
problems,
whether
it's
events
or
predicted
from
the
ml
and
then
use
information
from
there
to
be
able
to
get
to
events
or
logs
looking
at
changes,
and
as
part
of
that,
of
course,
we
want
to
look
at
traces
as
well
bringing
them
all
together
using
a
decision
tree
for
causal
analysis,
so
this
is
kind
of
the
larger
framework.
B
To
give
you
an
idea,
the
biggest
challenge
that
we
we
know
that
ops
teams,
you
know
whether
you're
on
the
sre
side,
tech,
ops,
you
know
devops
ops-
is
tracing
with
distributed
tracing,
primarily
because
it's
complex
right
so
I'll
give
you
two
examples
where
tracing
it
becomes
hard,
and
I
don't
know
if
you
can
see
this
on
the
right
hand,
side
I'll,
hopefully
walk
through
one
of
the
biggest
challenges
about
trying
to
find
a
different
trace
is
most
of
the
time
when
we
tag
them
right.
We
usually
start
with
the
root
span.
B
So
if
you
look
in
the
right
hand,
side
here,
this
is
the
front
end.
This
is
actually
an
example
that
I'll
show
later
this
is
online
boutique
from
google,
you
know
being
used
for
tracing
front
end.
Has
an
operator
called
receive
cart
right?
This
is
for
the
e-commerce
application
on
google
boutique.
Now,
when
you
notice
the
requests
come
in,
they
can
make
calls
to
any
of
these
services
right.
B
You
can
go
to
get
code,
get
cart,
convert
you
know,
get
supported
currencies
all
these
services
are
done
when
you
are
basically
are
making
a
request
to
fill
out.
Your
cart
for
e-commerce
and
then
after
that,
of
course,
you
can
get
a
coat
and
product
all
these
spans
that
will
go
between
them.
This
has
been
collapsed.
Would
all
start
from
the
root
span
receive
card.
B
On
the
other
hand,
you
can
also
have
a
receive
card
that
can
come
from
the
front
end
same
route
span,
but
it
might
be
going
to
add
item
or
get
the
product
information
and
get
the
specific
information
all
that,
so
there
will
be
traces
that
will
traverse
through
this
with
the
same
root
span.
So
if
you
want
to
tag
this
by
root
span
in
order
to
differentiate
different
races,
you
have
to
start
adding
very
complex
number
of
tags
across
all
of
these
or
odd
queries.
B
So
same
entry,
point
different
path
and,
as
you
can
imagine,
the
performance
of
these
kind
of
transactions
going
through
these
paths
here
from
this
set
of
services
was
very
different,
so
trying
to
find
a
problem
trace
which
starts
with
the
same
root
span
and
taking
all
these
different
paths
becomes
very
hard
and
imagine
you
have
thousands
of
transactions
per
second
or
thousands
of
transactions
per
minute
even
or
hour.
Trying
to
find
that
so
this
is
a
non-trivial
problem,
so
you
know
as
you're
probably
aware.
B
Then
second
part
is
imagine
you
already
found
a
specific
trace
and
you
wrote
smug
queries
to
it.
How
do
you
know
there's
a
problem
usually
trying
to
do
a
manual
search
for
query,
because
the
volume
of
data
that's
coming
at
you
right?
It
could
be
not
only
that
it's
a
bad
code,
but
it
could
be
in
its
running
on
kubernetes
and
kubernetes
has
caused
a
problem
on
the
underlying
controller
container
that
implements
this
service.
So
these
two
broad
problems,
while
it
sounds
simple
meaning
at
the
at
the
high
level
headline
level,
is
non-trivial.
A
So, two audience questions by the way, or comments as well.
A
A question on whether this is only for Kubernetes, and then there was a request to get the link to the slides at some point — obviously you might not have that right now.
B
Right, we can send that, absolutely, yeah. Okay. For the second question — sorry, on the first question — Kubernetes is just one example. No, we can use this at least from the tracing-application side: if you're using OpenTelemetry anywhere — anything using OpenTelemetry, like Jaeger, OpenZipkin or even Tempo — the approach that we are talking about will work. Hope that answers the question. I'm not looking at the chat, I'm looking at my screen, so if there are any other follow-ups I'll be glad to answer.
B
Okay,
so
so
this
is
the
two
big
issues:
there's
also
an
issue
about
how
you
store
and
retrieve
traces,
you
know
and
and
what
has
happened
in
the
past,
with
propriety
tools,
as
opposed
to
using
say,
open
telemetry
was
the
traces
were
usually
somewhere
that
you
had
to
pull
in
right
with
jaeger,
having
an
open
back
end
where
you
can
persist
them,
but
it's
a
shared
repository
on
the
cloud
or
yours
or
you
put
it
on
your
own
account
or
on
your
own
storage.
It
doesn't
matter
and
now
they're
accessible.
B
So
when
you
want
to
pull
in
a
trace,
we
can
get
that
so
one
of
the
big
advantages
of
having
open,
telemetry
as
accessibility,
open,
accessibility
of
capturing
different
traces,
adding
things
to
it
and,
of
course
accessing
it.
But
the
two
problems
remain:
how
do
we
discriminate
the
trace
of
interest
given
the
complexity
and
the
way
that
different
paths
take
place?
And,
second,
how
do
we
detect
problems?
So
that's
what
we're
going
to
talk
about
today
so
to
set
the
stage
I'm
going
to
use,
what's
called
trace
path
and
to
jump
ahead.
B
So
let
me
give
you
one
of
the
things
that
we
already
do,
and
there
are
some
other
folks
might
be
using
like
flow.
So
if
you're,
using
istio
or
using
ebpf,
which
we
do
you
have
aggregated
metrics,
let's
say
from
service
a
to
service
b
to
service
c
to
service
d,
so
requests
accounts
that
come
into
service
b
response
time
errors.
You
can
do
that
now.
These
will
be
aggregated
on
the
scraping
interval.
B
Now
contrast
this
when,
if
you're
running
a
trace
that
service,
the
actual
request
may
be
from
the
operations,
let's
call
it
a1
might
be
calling
service
b,
but
it's
calling
operations,
b1
or
service
a1
might
be
a
up
around
by
calling
b2
and
b2
could
be
calling
c1
and
they
could
be
multiple
b1
called
bc2
and
c1
might
be
calling
d.
So,
as
you
can
see,
you
can
have
many
different
paths
right.
I
can
go
a1
b2,
c1,
d1
or
a1
b1,
c1,
d1,
right
or
etc,
etc.
B
Right
so
the
possible
combination
becomes
complex
and
of
course,
there
might
be
multiple
calls
between
them
as
well.
Right
repeated
calls
different
times,
so
this
at
the
transaction
level
is
a
lot
more
detail
in
trying
to
see
that
in
real
time
and
visualize
it
when
many
of
them,
because
it's
not
really
aggregated,
is
hard
so
think
of
it,
as
this
is
an
aggregate
snapshot,
this
real-time
performance,
but
you
have
to
go
to
a
specific
trace.
You
know
look
at
one
and
pull
that
data
and
look
at
the
typical
flame
graph.
B
The
problem
is,
you
can't
watch
oops,
sorry
didn't
mean
to
go
there
going
to
a
wrong
screen,
so
you
can't
go
to
the
full
detail
versus
the
aggregate.
What
what?
What
does?
If
you
want
to
use
traces
and
won't
detect
problems
at
this
detail
level?
What
we
really
want
is
alerts
in
real
time
and-
and
I
think
there
was
a
question
about
kubernetes-
you
don't
know
whether
the
underlying
infrastructure,
whatever
is
managing
the
services
that
implements
these
operation
services
in
your
application,
whether
that
has
a
caused.
B
What
also
wants
to
do
is
detect
the
problem
they
like
to
know
drill
down
and
basically
have
this
operational
view
as
opposed
to
go.
Let's
go
search
service
is
done,
someone
calls
you
and
we
typically
know
this
right
user
calls.
You
and
saying
my
requests
are
failing.
I
can't
get
this
transaction
done
and
then
you
go
and
start
searching,
hopefully
with
the
request,
etc.
It's
after
the
fact.
How
can
we
make
this
proactive?
B
That's
where
tracepath
comes
in,
so
let's
dig
into
what
tracepath
really
does
so
think
of
tracepath
as
aggregating
at
the
service
operations
level.
Remember
we're
talking
about
business
flows,
so
remember
the
paths
that
we're
talking
about
service
a
can
call
service
b
go
to
service
d.
There
might
be
another
one
that
requests
are
coming
through
here.
B
There
are
finite
number
of
paths
that
might
go
from
service
operation,
one
oops,
sorry
to
let's
say
service
operation
d,
so
imagine
representing
in
the
trace
path
of
your
application,
the
service
operation
as
the
vertex
and
these
edges
that
you're,
seeing,
for
example,
service
a
operational
service
b
operation,
one
being
called,
is
the
edge
in
that
vertex
so
trace.
Back
now,
of
course,
you
can
have
different
traces.
B
You
know
go
through
these
operations
right,
for
example,
I
can
come
here.
Request
might
come
from
here
on
t
two
hop
one
and
go
to
t
to
hop.
Two.
A
request
might
come
from
t
one
hop
one
go
to
t
three
hop
two
and
then
go
down
right,
so
the
same
service
operations
can
be
on
different
trace
paths
right.
So
it's
not
a
exclusivity.
B
So
the
way
we
want
to
get
aggregation
is
we
now
will
aggregate
the
request
coming
from
the
service
operations
of
one
to
service
operation,
another
one
and
aggregate
the
flow
metrics
request
rate.
You
know
response
time,
error
count.
So
here
is
an
example.
Here's
a
trace
t
one
that
I've
shown
you
here.
One
t
one
op,
one
g,
one
hop
two
d:
one
hop
three
d:
one
hop
four!
Hopefully
you
can
see
this
right.
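To make the grouping concrete, here is a minimal sketch in Python of how spans could be reduced to trace paths. It is illustrative only — the span shape, the collapsing of consecutive repeated hops and the statistics kept per path are my assumptions for this example, not OpsCruise's implementation.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Illustrative span shape; real spans would come from Jaeger/OTel
# (trace_id, service name, operation, timing, error tag, ...).
@dataclass
class Span:
    trace_id: str
    service: str
    operation: str
    start_us: int
    duration_us: int
    error: bool = False

@dataclass
class TracePathStats:
    count: int = 0
    errors: int = 0
    total_us: int = 0
    max_us: int = 0
    trace_ids: list = field(default_factory=list)  # kept for later drill-down

    def add(self, trace_spans):
        self.count += 1
        self.errors += any(s.error for s in trace_spans)
        dur = (max(s.start_us + s.duration_us for s in trace_spans)
               - min(s.start_us for s in trace_spans))
        self.total_us += dur
        self.max_us = max(self.max_us, dur)
        self.trace_ids.append(trace_spans[0].trace_id)

def path_key(trace_spans):
    """Order spans by start time and keep the sequence of (service, operation)
    hops, collapsing consecutive repeats so loops and retries don't explode the key."""
    hops = []
    for s in sorted(trace_spans, key=lambda sp: sp.start_us):
        hop = (s.service, s.operation)
        if not hops or hops[-1] != hop:
            hops.append(hop)
    return tuple(hops)

def group_into_trace_paths(spans_by_trace):
    """spans_by_trace: dict of trace_id -> list[Span]; returns path key -> stats."""
    paths = defaultdict(TracePathStats)
    for trace_spans in spans_by_trace.values():
        paths[path_key(trace_spans)].add(trace_spans)
    return paths
```

Run over a stream of completed traces, a grouping like this collapses thousands of traces into a handful of keys, and the stored trace IDs are what make the later drill-down from an anomalous path to individual flame graphs a direct lookup rather than a search.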
B
I just walked through one of the trace paths, which is listed here. For that path, the aggregated average duration is 275 microseconds, and the maximum is this. I can label it. I can create another trace path, because I'm watching the frequencies of the traces: T2 goes from this hop to here, then from here to here, so it's only three service-pair connections. Now, how do we get this?
B
We
are
looking
at
all
the
transactions
that
going
through
stepping
back
and
grouping
them
on
these
common
routes,
so
think
of
this
as
finding
the
most
common
routes
from
the
entry
point
to
whatever
the
end
officer,
traces
and
grouping
them,
because
if
you
think
about
it,
most
traces
will
follow
certain
patterns.
That's
why
we
call
it
a
business
pattern.
B
So
if
you
step
back-
and
you
can
collect
and
build
this
right
over
a
period
of
time,
you
will
get
a
set
of
bubbled
up
trace
paths
based
on
these
common
traces
that
can
be
grouped
in
it.
Of
course,
I
can
change
a
service.
Different
request
comes
and
the
track
path
can't
change,
so
so
the
trace
parts
by
definition,
cannot
be
static.
It
will
be
dynamic.
You
have
to
update
as
new
requests
type
coming
in
and
it
calls
different
operations
right.
B
So,
by
doing
this
aggregation
we
can
keep
a
live
view
of
the
different
trace
paths
dynamically
and
be
able
to,
and
from
there
go
down
to
the
traces
that
basically
are
running
through
the
trace
paths.
Now
I've,
given
example
of
performance
metrics,
like
duration,
you
know,
response
time,
errors
etc.
I
can
even
get
request
counts,
but
I
can
also
add
to
the
trace
path:
some
specificity.
For
example,
if
the
request
the
t1
is
user,
let's
say
payment
service
and
it's
high.
I
can
tag
it
and
specify
and
create
a
tag
for
that.
B
So I can search for the trace paths that are of higher importance to me because they have a bigger impact, or for trace paths that use a specific service or container that I'm worried about — say a shared database — and I can tag those as well. When there's a problem, I can pick those out. Now, how do we aggregate thousands of traces into on the order of 1, 10, to 100 trace paths? What we've done here, as you can see, is that we're not showing you all the calls that go between them.
B
We aggregate the repetitive spans and loops that occur — a request might go, if you watch my finger, hop one, hop two, come back, make another call, etc. — but we're not showing all those actual spans. You're not seeing the flame graph; you're aggregating the service operations and consolidating on each service operation. That gives us this big reduction: mapping all the traces that I'm seeing into a few trace paths that I can now monitor and capture aggregated metrics on.
A
B
Yes — so again, just to summarize, and I'll give a couple of examples of where we collect trace paths: as someone pointed out, no, this is not specific to the infrastructure. This can work anywhere. It could be running on serverless, on your bare-metal machine, in a container, on a VM — it doesn't matter. We are collecting the unique patterns of traces that represent the business flows.
B
So
coming
back
to
this,
instead
of
showing
all
the
different
paths
we
will
base
our
trace
paths
here
will
have
these
services,
and
maybe
only
these
two,
the
blue
and
the
red-
might
be
the
two
dominant
trace
paths
that
can
essentially
capture
all
the
different
traces
that
are
traversing
through
these
operations.
Okay,
so
as
an
example,
I
could
have
500
traces
a
of
an
hour.
B
I'll
only
see
this
aggregated
view
without
understanding
which
operations
and
I
can
map
those
transactions,
this
traces
into
just
two
trace
paths
as
an
example
here.
So
that's
why
operations
can
now
look
at
those
trace
paths
and
look
at
the
performance
there.
So,
let's
take
an
example,
example
graph.
So
how
would
we
collect
the
data?
So
here's
a
typical
example
right,
the
overall
flame
graph
right,
a
you
can
request,
comes
to
a
calls
b
b
called
c.
There
might
be
more
requests
here.
B
C
calls
d
d
call
z
and
goes
back
as
an
example
of
the
four
span.
So
at
each
time
spam,
if
I've
calculated,
if
to209
you
can
see
this
is
my
trace
path
span
a.
I
can
look
at
the
counts
that
are
coming
in
over
that
interval
right.
The
last
interval
the
errors
and
response
time
and
then
on
each
one
of
the
spans
that
combine
these
right.
So
the
first
three,
the
three
spanzine
a
b
c.
Let's
say
there
was
nothing
going
on
in
d,
so
in
that
case
I
won't
get
any
data.
B
Let's
say
d
was
not
called
so
in
that
first
time
stamp
I'll
get
for
that.
Those
three
spans
on
this
trace
path,
I'll
get
this
aggregated
metrics
and
the
next.
Let's
say:
I'm
scraping
at
30
seconds
same
one
I'll
get
these.
So
essentially
we
are
finding
out
and
then
in
those
three
in
that
30
second
interval.
I
can
also
find
out
what
are
the
traces
that
actually
went
through
that
contributed
to
the
error.
The
count
request
count
the
errors
and
the
response
time.
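Continuing the same illustrative sketch — again an assumption about shape, not the actual pipeline — per-interval aggregation could bucket each trace by its path key and a 30-second window, remembering the IDs of the error traces so the drill-down described next is a lookup rather than a search.

```python
from collections import defaultdict

def bucket(ts_us, interval_us=30_000_000):
    """Floor a timestamp to its scrape interval (30 seconds here, as in the talk)."""
    return ts_us - ts_us % interval_us

def window_metrics(spans_by_trace, interval_us=30_000_000):
    """Per (trace-path key, interval): request count, error count, total duration,
    plus the IDs of error traces so ops can jump from a spike to the offenders."""
    out = defaultdict(lambda: {"requests": 0, "errors": 0, "total_us": 0, "error_traces": []})
    for tid, spans in spans_by_trace.items():
        key = path_key(spans)                      # from the earlier sketch
        start = min(s.start_us for s in spans)
        end = max(s.start_us + s.duration_us for s in spans)
        cell = out[(key, bucket(start, interval_us))]
        cell["requests"] += 1
        cell["total_us"] += end - start
        if any(s.error for s in spans):
            cell["errors"] += 1
            cell["error_traces"].append(tid)
    return out
```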
B
So
this
is
the
data
that
you
want
to
process
and
what
you
want
to
show
them
here
are.
The
two
trace
paths
here
is
what
the
aggregate
metric
is.
Oh,
you
want
to
go
down
into
the
specific
traces
that
you
saw
in
the
first
30
seconds.
Okay,
here
are
the
two
choices,
so
imagine
your
error
count
went
high.
B
Then
you
can
go
drill
down,
saying
hey,
find
the
trace
that
contributed
error.
So
it's
a
two-step
process
right.
I
found
the
trace
paths,
look
at
the
aggregate
trace
path,
metrics
on
performance
and
then
drill
down
from
there.
When
I
see
a
problem,
it
meets
some
issues
at
the
trace
path.
Let's
say:
go
find
the
trace,
that's
a
problem,
as
opposed
to
looking
for
the
trace
that
someone
told
me
and
then
trying
to
figure
out
if
it
was
bad
or
not.
B
I may have 20 trace paths, but three are the top ones — oh, and these one or two started having higher response times, to be determined whether the high response time was a problem or not, or more error counts. And because we are collecting all the open telemetry — I just want to point this out — with these traces running on those servers, I can still be combining them with logs, flows and changes, bringing them in contextually. So for operations you basically have a real-time view that brings in a contextual, overall, holistic understanding of all of this.
B
So
now
operationally
you
can
look
at
things
in
real
time.
Just
like
you
do
log
alerts
metrics
from
prometheus
flows
that
you're
doing
you
can
still
do
flows
changes
from
kubernetes
or
whatever
orchestration,
using
as
well
as
traces
through
trace
paths.
The
advantage
now
is
just
like
we've
done
before,
with
metrics
or
flows.
I
can
now
set
capabilities
to
detect
anomalies.
Now
I
can
detect
anomalies.
Are
they
aggregate
level
on
traces
on
these
trace
paths
and
then
use
it
for
causal
analysis,
because
I
have
linking
all
of
them?
B
A
B
So this is just to set the stage, because the demo is going to rely on the environment that we have. Our deployment architecture, if you'll notice — and yes, I am using Kubernetes, and this is a CNCF open source environment — so imagine you have Kubernetes; of course you can have VMs as well. The blue and the green are your pods: the ones that implement the services for your application, with the blue being all of the open source instrumentation.
B
So,
for
example,
if
you're
using
prometheus
for
metrics
c
advisor
as
a
domain
as
a
daemon
set
node
exporter,
if
you're
using
loki
prompter
across
them,
if
you're
using
jaeger,
which
we'll
talk
about
today,
there
is
jaeger
already
installed
and
we
are
capturing
that
which
means
your
code
that's
running,
and
these
I
already
have
the
jaeger
libraries
enabled
to
capture
those.
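The speaker's point is that these native collectors already expose everything needed. As a generic illustration — the endpoint is a placeholder, not the demo environment — pulling an aggregate straight from Prometheus's HTTP API can look like this:

```python
import requests

# Hypothetical in-cluster Prometheus endpoint; adjust to your setup.
PROM = "http://prometheus.monitoring.svc:9090"

def instant_query(promql):
    """Run an instant query against the Prometheus HTTP API and return the result list."""
    r = requests.get(f"{PROM}/api/v1/query", params={"query": promql}, timeout=10)
    r.raise_for_status()
    body = r.json()
    if body.get("status") != "success":
        raise RuntimeError(body)
    return body["data"]["result"]

# Example: per-pod CPU usage over the last 5 minutes, from the standard
# cAdvisor metric scraped by Prometheus.
cpu = instant_query('sum by (pod) (rate(container_cpu_usage_seconds_total[5m]))')
for series in cpu:
    print(series["metric"].get("pod"), series["value"][1])
```

The same pull-based pattern applies to Loki for logs and to the Jaeger query service for traces; nothing here requires a proprietary agent.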
B
So
we
have
logs
sorry
metrics,
logs
and
traces.
We
capture
them
through
one
container,
one
pod,
one
for
each
telemetry
in
the
cluster
one.
So
essentially
we
are
sitting
passively
in
the
kubernetes
node,
not
touching
anything
and,
of
course,
with
jaeger,
you
don't
have.
We
don't
have
to
touch
the
code
right
and
then
we
also
capture
kubernetes,
to
capture
the
state
metrics,
to
get
changes
so
effectively
and,
of
course,
if
you're
running
on
cloud
we'll
also
get
the
cloud
gateway.
B
I
think
that
that's
what
this
is
so
basically
using
these
five
parts
to
collect
data
from
the
open,
telemetry
or
cloud
we
can
capture
everything
we
need.
We
can,
of
course,
do
the
same
with
cloud
metrics
and
vms
as
well,
and
then
push
it
to
the
controller.
That's
what
we
will
be
doing.
Okay,
so
without
much
should
you
actually?
Let
me
go
back
and
there
are
no
questions
jump
into
the
demo
all
right,
please!
Let
me
know
if
you
can
see
this.
B
It
is
very
small
I
can
zoom
in
so
what
you
are
seeing
and
is
essentially
think
of
it
as
every
container
pod
nodes
being
captured
here
and
the
flows
between
them.
So
as
an
example,
if
I
go
in
here,
you
can
see
the
the
pod
has
metrics
events
logs
connections,
so
this
is,
and
you
can
see
the
direction
of
shipping
service
here
as
an
example
going
from
here
to
here.
B
In
fact,
if
I
highlight
that
you
can
see
the
connections
and
in
fact,
as
I
am
looking
at
it,
you
can
even
see
the
data
flowing
through.
This
is
what
can
be
done
by
pulling
data
in
from
those
collectors.
We
actually
build
this.
This
service
map
directly
from
those
met
those
elements,
collectors
that
we
talked.
A
Of course — and there's already an audience question.
A
Is there correlation between span A, B and C?
B
Oh, we are back on this. I think they're talking about this — correct? Is the question related to that?
B
Yeah
so
yeah,
so
the
example
here
is:
if
this
is
the
span,
a
is
this
is
the
kind
of
the
the
major
span
span
b
is
when
the
request
went
from
here
to
here
request
goes
from
b
to
c,
etc.
This
is
almost
like,
showing
you
the
the
flame
graph.
If
I
were
to
kind
of
go
in
there
later
on.
Let
me
see
if
I
can
show
you
an
example
here.
B
Yeah,
so
in
fact
I'm
jumping
ahead
guys,
so
here's
the
trace
path,
summary
that
we'll
show-
and
if
I
just
pick
one
at
random
this
will
actually
get
into.
Let
me
just
pick
front
end,
so
if
I
look
at
the
traces
so
essentially
what
you're
seeing
is
as
I'm
jumping
ahead.
This
is
that
aggregated
service
operations
to
service
operations,
you
can
see
that
right
services
front-end
the
operations,
the
catalog
services,
front-end,
the
operations
ad
service.
So
that's
the
aggregate.
This
is
not.
B
And
then
I
think
this
is
where
I
can
jump
to
the
trace
and
here's
your
familiar
spans
right,
I'm
jumping
ahead
and
already
giving
away
all
the
details
of
the
demo,
but
here
so
root
right.
So
this
is
the
root
a,
I
think,
that's
what
he's
talking
about
and
then
the
span
goes
from
here
to
here,
etc.
Right!
So
that's
what
that
was
it's
being
aggregated,
because
the
service
was
this.
These
are
the
corresponding
service
operations,
but
nest
doesn't
necessarily
show
all
the
detailed
spans
which
we
can
break
down
further
here
right.
B
We can look at its metrics, events, logs and connections. We already have the overall map; in fact, as I said, we can even figure out where that application is running, what it's using, what node it's running on, what infrastructure. I'm not going to go into that, but it's interesting: now we know where that service is, what node it's running on and how much it's consuming.
B
You
know
whether
those
services
are
healthy
or
not,
etc,
and
we
will
detect
it
right
there.
So
you
can
get
all
that
detail.
What
we
are
not
showing
is.
This
is
not
at
the
same
level
as
the
trace
path
right.
So
that's
what
I
want
to
go
back
to
next
and
so
going
back
here
where
we
were
on
the
app
map.
That's
where
we
were
so.
The
app
map
gives
you
that
structure,
but
this
is
again
not
at
the
trace
level
right.
This
is
scraped
and
built
at
the
aggregate
level.
B
So
what
you
were
seeing
as
an
example
as
we
are
zooming
in
this
was
at
the
flow
level.
So
the
flow
level
gives
us
the
connections
right
who's
talking
to
whom
right
the
front
end
is.
If
you
look
at
here,
requests
are
coming
in
inbound,
inbound,
inbound
and
then
goes
out
to
this
guy
load
generate
as
an
example,
and
you
can
capture
aggregated
metrics,
like
latency
average
max,
but
this
is
at
that
service
level
right,
so
the
service
level
being
between
these
services
right.
B
So,
if
you
zoom
in
you
see
that
it
says
server
front-end
talks
to
this
front-end,
it's
sending
so
much
byte
and
that
response
that
28.648
it's
from
flow,
which
is
ebpf,
aggregated
we're,
not
seeing
all
the
transaction
that's
going
through
between
these
requ,
these
two
services
right
or
between
these
coming
from
the
server
to
this.
So
what
we
want,
as
I
was
pointing
out,
and
I'm
going
to
jump
back
again,
if
you
don't
mind.
A
Yes — Jimmy asks: is anything instrumented out of the box? Do you need to write it yourself, or is it with, like, OpenTelemetry?
B
Thank
you.
I
will
revisit
this
so
if
you've
for
yeager,
if
you've
already
opened
up
the
libraries,
the
jaeger
native
library,
let's
say
c
python
whatever
and
you've
enabled
those
libraries
to
send
through
jaeger,
we
will
capture
those
you
can,
of
course
customize
which
ones
you
want
to
send
right.
So
when
we
say
jaeger
here
we
are
assuming
that
jaeger
is
instrumented
and
although
you
don't
have
a
collector
here,
our
gateway
is
going
to
basically
go
and
talk
to
the
jaeger
repository
and
get
that
data.
B
That's
coming
in
so
instrumentation
is
think
of
it
as
almost
like
auto
instrumentation,
enabled
by
the
libraries
as
far
as
the
other
things
goes
like
metrics,
logs,
etc.
Like
prom
tail
c
advisor,
we
can
help
you
in
enable
sorry
install
these
using
a
simple
helm
chart
for
the
logs
metrics
and
events
eager.
You
turn
on
jager
and
our
gateway
collector.
Welcome
like
that.
Hopefully,
that
answers
the
question.
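To make "the libraries are enabled" concrete with something generic — not OpsCruise's gateway and not the exact Jaeger-client setup used in the demo — here is a minimal OpenTelemetry SDK setup in Python that exports spans over OTLP, which recent Jaeger versions can ingest directly. The service name, endpoint and span/attribute names are placeholders.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Placeholder service name and collector endpoint -- adjust for your cluster.
provider = TracerProvider(resource=Resource.create({"service.name": "frontend"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://jaeger-collector:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Each request handler creates (or continues) a span; the back end -- Jaeger,
# Zipkin or Tempo -- persists it, and downstream tools can read it from there.
with tracer.start_as_current_span("ReceiveCart") as span:
    span.set_attribute("app.user_id", "demo-user")
    # ... handle the request ...
```

The same idea applies whichever back end persists the spans, which is why the trace-path approach is not tied to any one of them.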
A
Yeah — and if not, Jimmy, please elaborate if you want to ask more.
B
Yeah,
let
me
follow
up
with
us,
absolutely
would
love
to
talk
to
you
yeah.
There
are
so
many
different
aspects
here
and
I
think
this
way
we'll
focus
on
you
know,
so
I
mean
here's
a
different
view
of
it
by
namespace.
You
can
see
here.
This
is
our
online
boutique
application.
This
is
our
collector
right
by
namespace.
B
A
B
We don't make any assumptions about the workload — remember, and I'll explain. Going back to the issue here: it doesn't matter what your workload is. If you have anything traceable — I gave the example of an e-commerce app like Online Boutique because it's publicly available — anything that's using tracing will generate traces, and they will have trace paths, and you want to detect problems: whether things are slowing down, having errors, or services are dropping. So this is orthogonal to the specific workload, just like collecting metrics doesn't depend on the workload.
B
The
idea
is
that
you
can
collect
all
the
metrics,
but
in
real
time
as
we
as
we
are
given
examples,
the
logs,
etcetera
and
bringing
them
together,
however,
and
know
the
structure
of
the
application.
But
how
do
we
detect
a
problem
at
the
trace
transaction
level?
That's
the
focus,
so
we
don't
really
care
what
you're
running.
Obviously
you
care,
because
let's
say
you
have
a
customer
facing
application
could
be
an
iot
application
could
be
you've
got.
B
You
know,
machinery
coming
and
you
want
to
detect
whether
it's
coming
in
a
security,
application,
monitoring
something
else
and
they're
sending
requests
at
a
large
volume,
and
you
want
to
know
whether
you're
capturing
or
there's
a
problem
and
you're
not
missing
it
right.
So
we
are
capturing
traces
and
trying
to
make
understand
how
ops
can
even
track
traces.
If
there's
a
problem.
C
B
On industry — that's a fair point, Chris. I think the demo itself, what we are showing here as an example, and that's interesting, is that we did use an open source e-commerce application called Online Boutique.
B
So
we
implemented
this,
and
this
is
an
e-commerce
application,
because
you
know
e-commerce
is
probably
one
of
the
most
interest
to
retail
in
other
industries.
But
you
know
you
could
have
replaced
this
with
any
other
type.
Application,
collect
traces
and
still
detect
problems
right,
hope
that
helps
yeah.
It
could
be
telecom,
it
could
be
iot,
it
could
be
manufacturing
whatever
right
security,
etc.
B
Okay,
great
so,
let's
go
into
tracepath.
So
the
way
we
go
in
trace
path,
as
you
saw,
this
was
the
app
map
level
that
we
just
showed.
This
is
the
host
map
this.
This
is
the
infra
map.
I
didn't
even
go
through
them,
but
similarly,
what
we've
created
is
a
trace
map
view
and
trace
map
view.
We
use
something
called
trace
paths,
of
course,
because
we
want
the
real-time
snapshot.
B
We
are
listing
them
by
the
kind
of
like
the
top
ten
and
a
top
five
idea.
What
is
the
top
five
trace
paths
by
request
count
in
the
last
15
minutes?
Of
course
I
can
change
this
to
anything
you
want
that,
will
give
you
a
visibility.
You
can
do
all
that,
but
for
now
what
is
the
top
five
as
an
example?
What's
the
response,
time
errors
so
problems
and,
and
then
specific
services
also
because
the
trace
paths
involve
services
and
requests
right
and
then,
where
are
the
errors?
B
So
so
that
means
this
gives
you
the
aggregate
view.
So,
if
you're
the
ops
and
you
want
to
get
the
holistic
picture,
all
the
traces
going
on.
This
is
one
way
to
look
at,
as
we
said
group
them
by
trace
paths
and
find
issues
or
problems
able
to
get
the
high
level
snapshot
go
down
as
opposed
to
hey.
Let
me
search
for
traces
that
went
wrong
which
one
so
as
an
example.
Here
all
the
trace
paths-
and
this
can
go
on
and
on
right.
A
Yeah
and
there's
a
question
as
well:
can
we
get
trace
of
gcp
resources.
B
Well,
the
question
is:
are
you
are
using
open
telemetry
connecting
traces
from
what
so?
Can
you
clarify
that,
when
you
say
gcp
it's
running
on
a
cloud
platform?
What
are
you
instrumenting,
what
application
that
generates
the
trace?
So
maybe
I'm
not
understanding
the
question.
B
A
Yeah, let's see — let's see, Muhammad: if you say yes or no, we'll know. But then there's another one... yes.
B
A
B
Let me go through the demo and then I can take live, interactive questions. So here's an example of a summary of all the trace paths, without grouping. Now, what I'll point out — in fact, I'll go to the previous one — is that we get an aggregate view and can see which ones have errors, latency, etc. This is your top-level view. Okay: red means there are errors, so there are a number of paths having errors that we are capturing, and in fact I can go back and check.
B
Is
that
true
in
the
last,
let's
say
four
hours
and
it
should
group
because
I'm
collecting
statistics
and
if
I
go
to
all
trace
paths
last
four
hours
we
got
a
few
more
so
you
can
see
the
volume
of
error
went
down.
Okay.
So
let's
go
back
to
most
recent
15
minutes
that
we
are
collecting
traces
because
there's
some
errors
there.
It
generates
and
updates.
B
If
I
look
at
that,
you'll
notice,
something
here
remember
discriminating
traces.
I
want
to
point
this
out.
Do
you
see
this
front?
End
receive
cart,
receive
cart,
so
at
the
top
level,
if
that's
the
label
using
discriminating,
but
look
at
this.
B
There
are
a
lot
more
traces
on
this
path
than
this,
even
though
I
had
the
same
front-end
name.
In
fact,
if
I
go,
if
you
hover
it,
you
can
see
this
has
seven
plus
one
plus
one
services
involved.
This
only
has
two
so,
for
example,
if
I
go
to
receive
cart,
I
can
look
at
the
trace
path,
and
you
can
see
here.
This
is
gonna
populate
as
we
speak,
bring
it
in
what
it
says
is
front.
End
coming
receive
cart,
split,
hold
on
hold
on
okay.
B
Here
we
go
so
this
basically
makes
calls
to
get
code.
Get
card.
Convert
currencies,
get
the
supported
currency
list,
obviously
not
in
this
order
right
and
then
after
that
goes
to
shipping
service.
After
that,
this
one
calls
this.
These
are
the
services
and
the
corresponding
operation
and
in
fact,
as
you're,
seeing
it
on
this
path,
we're
also
seeing
on
that
edge,
the
corresponding
response,
time,
errors,
etc.
B
If
I
go
back
so
this
was
the
first
received
card
right,
more
services.
If
I
get
to
the
second
receive
card,
there's
only
five
service
operations
involved,
let
it
populate
it
again.
There
we
go
and
you
know
what's
interesting:
there
are
no
errors
on
this.
One
same
front
end
so
able
to
discriminate
without
going
and
trying
to
figure
out
how
to
tag
everything
on
those
services.
B
That's the whole point: differentiating traces. That's one thing we want to automate; there's no way anyone can do that by hand. So, all services — here is an example. Again, as you can see, the errors are all related to the front end, so that's where we have to figure out what's going on. Going back to the highlights, let's take a look at the topmost error. If I look at the topmost error, here's one — and in fact I can even see which one has the highest latency.
B
You can see from the summary that we are aggregating as we process that the front end — which has, as you can see, one, two, three, four, five, six, really seven services — receives calls to these other six, and then the next service is the product catalog, which has this GetProduct operation. So these are the ones involved — again, we're not showing the spans, remember, we aggregated — but we know that this front end has problems, and in fact this is specifically on the error.
B
So
if
I
want
to
go
down
from
front
end
and
now
detect
what
happens,
I
can
click
on
that
right.
Sorry
before
that,
let
me
just
show
you
on
this
one:
what
is
the
inventory?
As
I
said,
there
are
only
those
two
services
right
front
end
which
has,
if
you
remember
six
one,
two,
three
four
five
six
and
then
product
catalog
only
had
one
get
that
product
right
and
where's
the
problem.
Aha
hipster
ad
service
is
having
higher
amount
of
errors.
B
So
already
I'm
seeing
it
now,
the
question:
can
I
get
to
those
traces
so
at
the
aggregate
metric
level
I
can
even
find
out
the
metrics
are
aggregate
and
I
can
see
the
errors
are
being
consistently
high.
So
if
I
go
back
what
what
does
ops
want
to
do
find
me
those
traces
that
are
causing
a
problem,
give
me
that
trace
so
here
there
are
multiple:
remember
they
are
looping
through
the
spans
are
being
aggregated,
so
they
are
actually,
as
you
can
see
almost
you
know.
I
forget
the
exact
number
here.
B
I
don't
know
where
it's
listed
below
somewhere,
I'm
not
seeing
it,
but
you
can
see
there's
about
10
of
these
that
happened
in
the
last
15
minutes.
So
if
I
pick
one
at
random
here,
the
same
service
that
pulls
up
your
familiar
flame
right
gives
you
the
down
and
saying:
where
is
the
problem?
So
we
are
highlighting
that
because
we
know
that
service,
it's
the
hipster
app
service,
and
now
there
are
the
tags
right
description.
What
is
the
type
there's?
A
transient
failure
here,
error
while
dialing
this
connection
refused
again.
This
is
from.
B
Again, as someone pointed out, if it's not Kubernetes, well, then it's whatever it's running on. So I want to pause, because I really want to leave at least 10 to 15 minutes for questions, but hopefully that gave you some background — and please follow up, post on chat, if you want to dig around in this.
B
As
I
said,
the
whole
idea,
if
I
were
to
summarize,
is
to
make
distributed
treaty
where
you
don't
have
to
go
custom
coding,
custom,
instrumentation
or
proprietary
agents,
hey
open,
telemetry
and
open
source
cncf
is
the
way
to
go
in
order
to
figure
out
the
problems
on
traces
for
operations.
What
we
want
to
do
with
trace
path
is
able
to
automatically
group
and
identify
those
traces
again.
B
As
I
said
dynamically
once
we
have
that
we
can
bring
in
all
the
contextually
linked
things,
and
now
we
can
detect
problems,
go
down
to
the
specific
traces
so
skipping
through
this.
Just
as
a
background,
if
you
want
to
try
us,
you
can
go
to
upscrews.com
forward,
slash
free
forever
or
reach
us
at
info
obscurus.com.
B
You
can
check
out
our
website
I'll
I'll,
be
glad
to
take
questions
now.
So
with
that.
Thank
you.
So
much
for
chiming
in
feel
free
to
ask
questions.
I
will.
A
B
A
We have a few questions here, starting with the question that we kind of touched upon, but we have some more info. Regarding the question about getting a trace of Google Cloud Platform resources, the asker continued: "Yes, using the OTel collector. I got an update that we need to pass the current context only to the Google Cloud methods. Suppose we have a Google Cloud Pub/Sub queue, and I want to collect the spans of the operations that happen at the Pub/Sub queue."
B
I
at
the
top
of
my
head
mohammed,
I
would
say,
if
you
do,
then,
at
a
minimum
we
can
start
tracking
those,
because
you've
already
instrument
that
specifically
how
it
ties
with
everything
else
would
be
great
to
have
another
that
dive
with
you.
B
Maybe
you
can
ping
us
and
send
us
an
email,
send
chris
an
email
chris
obscures.com,
because
this
could
be
very
specific
to
your
environment,
but
on
the
top
from
what
you
told
me,
if
you
are
instrumented
it
doesn't
matter
what
the
the
the
target
that
you're
trying
to
monitor
the
light
racing
and
capture.
I
think
we
should,
but
we
should
follow
up.
A
Okay,
great
so
perfect,
there
we
had
a
question
earlier
or
in
addition
to
the
aggregate
view,
can
we
support
for
threshold
average
time
being
said,.
B
Sure
sure
I'm
not
sure
I
I
didn't
that
I'm
not
sure
are
you
talking
about
average
response
time?
The
question
sorry,
is
that
what
the
question
was.
A
B
I forgot — okay, am I sharing now or not?
B
Yesterday,
all
right
so,
but
you
can
see
here
if
you're,
seeing
my
alert
window,
this
is
something
I
didn't
talk
about
outside
the
scope
of
this
doc.
B
We
do
capture
solar
breaches
by
looking
at
response
time
using
flows
when
it
goes
high
as
an
example
just
just
a
wild
example
right
so
and
we
do
this
using
ml
methods
and
there
are
multiple
ways
of
doing
that-
you
can
go.
You
know
different
ways
of
solving
the
problem,
looking
at
the
whole
path
when
it
goes
up,
etc.
We
do
root
cause
analysis
and
the
the
way
we
actually
do
this.
There
are
a
couple
of
different
ways
to
do
this.
B
One
is
you
can
let's
take
front
end
as
an
example,
the
service
level?
Let's
take
the
service
level,
if
you
see
here
where's
the
I'm
trying
to
find
where
we
can
set
slos
first.
B
What
I
was
trying
to
go
is
see
if
I
can
show
you
an
example
here
we
go
so
this
is
a
slightly
different
application.
A
shopping
cart
on
this.
If
you
can
zoom
in
this
nginx,
I
clicked
on
it,
you
can
add
slo
and
you
can
see
what
it's
doing
and
I
didn't
get
the
name
sorry.
So
by
looking
at
it,
we
are
constantly.
We
use
a
statistical
method
for
doing
that,
but
you
can
also
set
your
own
slo.
If
that's
one,
that's
a
threshold
at
the
ingress
level.
B
However,
as
I
said,
we
use
ml
techniques
to
also
look
at
the
data
and
and
do
that
beyond
that,
so
we
are
doing
the
same
approach
when
we
are
looking
at
back
to
the
trace
map
and
you
want
to
say
let's
say
I
take
a
trace
path
like
this.
You
can
say
response
time
and
you
can
say
hey.
I
can
set
response
time
on
the
front.
End
receive
checkout
card
checkout
and
set
an
slo.
We
are
not
enabling
that
you
can
see
already
someone's
already
tracking
anomalies
it's
higher
than
so.
Here's
an
example.
B
B
Send an email to me, aloke at opscruise.com, or follow up with Chris at opscruise.com — absolutely.
A
B
Let me go to this: when transactions change, this list gets updated, because as the transactions come in we are running the grouping on the back end. And in fact, we're not only looking at the grouping as services are added or dropped — trace paths, by definition, have to be dynamic — so we're not only discovering new services and services that drop, but also the service operations being called. Remember, a trace path is between service operations. So the answer is: absolutely, the highlights will change.
C
A
Yeah, we are starting to near the end of our session.
A
There are still five minutes to go if anyone is typing away — we'll see if anyone's typing away and about to send a question in soon.
B
And
I'm
going
to
keep
this
yeah
up
there,
so
in
case
people
want
to
get
hold
of
us
or
you
want
to
try
besides
info
or
op
screws.
As
I
said,
you
can
reach
out
to
me
a
locator
screws,
or,
I
would
say,
go
to
chris
he's
tracking
this
better
than
I.
So
it's
just
chris
right,
chris
c-h-r-I-s
at
obscurus.
C
A
C
B
Yeah, we do have — I'm glad you mentioned that; I should have brought it up. There is an ebook we just published, in fact just about a week or two ago, Chris, that we can send out. It has a lot more detail. In fact, if you don't mind, I'm going to jump over and see if I can find it for you and show you what it looks like, since we've got a minute.
C
B
C
B
Well, we do have some customers who are using it, but if they don't use it, then of course we don't run it, because it's a resource on their site. So I will now, at the risk of sharing my whole screen, show you what this ebook looks like. Can you guys see that?
B
So
this
was
published
and
basically
the
details
on
this
ebook
and
what
we
do
with
samples,
how
it
and
the
specific
problems
etc.
I'm
just
going
ahead
because
I
had
a
hand
in
this,
so
this
is
available
within
the
screenshots,
some
of
the
stuff
you've
already
seen
and
what
we
do
etc.
So
so
that
is
there
this
ebook,
you
can
just
ask
us,
for
it,
send
us
an
email,
etc,
and
chris
will
be
glad
to
get
hold
of
this.
A
Perfect — Chris has a lot of emailing to do.
A
That's perfect, yeah. So, final, final call for questions if there's anything that pops up — but thank you so much; it's been really lovely for everyone.
B
Great — and thanks for letting us host this and talk about trace paths, something we really believe in, and it's OpenTelemetry. So, guys, if there's one message I'll give you: OpenTelemetry, CNCF — you've got everything you need. It's like a buffet out there; you don't need anything else. And now, with OpenTelemetry and Jaeger, you're able to bring in traces in real time.
B
You
don't
have
to
wait
for
some
poor
guy,
searching
and
and
ops,
be
second
to
class
citizens
to
developers
and
apps.
So
perfect,
it's
for
everyone.
So
all.
B
Appreciate it — look forward to hearing from you guys.
A
Yeah, perfect — reach out to these guys; it's going to be great. So, as the final kind of wrap-up here, as you can see from the screen, the Q4 calendar for online programs is now open, so book your sessions, such as this one, from there. It's open till the end of the year, so there are plenty of chances for amazing content like this to be showcased.
A
So
looking
forward
to
that,
but
as
always
thank
you,
everyone
for
joining
the
latest
episode
of
cloud
native
live.
It
was
great
to
have
a
session
about
using
hotel,
distributed
tracing
for
real
time
observability,
and
also
we
really
love
the
interaction
with
the
questions
from
the
audience.
Thank
you
so
much
and
as
always,
we'll
bring
you
the
latest
cloud
native
code
every
wednesday
all.