Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from 18 - 21 April, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A
Hello and welcome, everyone. Thank you so much for joining us today for our webinar. We are going to be exploring a very interesting and pertinent topic in today's time: we'll be looking at boosting engineering efficiency through OpenTelemetry, Keptn and Tyk. We are specifically looking at how insights can drive efficiency in organizations, and we'll be looking at different sides and aspects of that. I am Buddha, your host, product evangelist and developer advocate here at Tyk, and I'll be your host, co-presenter and facilitator for today's session.

A
I'll take you through the entire journey of this conversation, picking up your questions as we go along as well, so that's going to be my role. Hopefully we'll have a really good time together. Joining me on this journey today are two amazing panelists, starting off with Andreas, or Andy. He is the DevOps activist at Dynatrace, as well as the developer advocate at Keptn. Super cool titles, all of them. So hello and welcome, Andy. It's really good to have you with us.

B
Hi.

A
Andy it is, then; I think we'll keep it that way. So thank you so much, Andy, for joining us. Also joining us is our very own Sonia. She is the group product manager here at Tyk but, more importantly, she's also the subject matter specialist, the subject matter expert, as well as the driving force behind Tyk's OpenTelemetry work, here with us. So thank you so much for joining once again; really, really looking forward to sharing more about all things OpenTelemetry.
C
A
Indeed, I am looking forward to it as much, as I said, as everybody else here. So just a few housekeeping things: if you've got any questions at any point of time, do feel free to post them in the Q&A section below. We will be taking questions as we get towards the end of the session, but if there is something that specifically pops up during the discussion as well, I'll keep an eye out for that.
A
We are live streaming right now on YouTube, so I'm keeping an eye out on the comments there too. If you've got anything, any feedback, anything that comes to your mind, feel free to add those as well. So what do we have in store for you today? Beyond the introductions, we're going to be looking at GitOps. Again, it's a very pertinent topic.
A
When we talk about efficiency, specifically from a DevOps perspective, GitOps comes up pretty frequently, and I think we want to look a little bit more into the what, how and why of GitOps. We'll then look at the DORA metrics, what their importance is, and how you measure DevOps efficiency through them. We'll introduce you to the concepts of OpenTelemetry and the benefits around it, followed by observability of deployments using Keptn.
A
This is where Andy will come in, and he'll do a bit of a demo showcasing how you can have better insights when you are doing deployments and make that more efficient. We'll be following that up with Sonia, who's going to be telling us all about API observability with Tyk, demonstrating how you can troubleshoot your APIs and get better insights for your APIs with Tyk and Dynatrace. So all very, very exciting things to come.
A
Last but not least, of course, we've got the Q&A segment, where we'll be having a bit of a discussion and taking questions from the audience, as well as perhaps questions from me, because I'm a curious mind here, so I'm learning as I'm going along. So with that being said, I'm going to go over to our topic for today.
A
So, where do we stand in today's world? We have pandemics, we have a war, we have economic uncertainty to a point where there is a panicked market, and we have regulatory failures in certain cases, which has all led to an environment where things are not really as predictable anymore. It is quite uncertain.
A
It's led to organizations needing to take drastic steps; it's led to them letting go of a few people, and it's all been a little bit messy, to say the least. And you know, we don't exactly know how things are going to progress in the near future, so things may stay this way for a little bit longer, or get better, hopefully sooner rather than later. But all of that is to say that the conversation has now shifted to: okay, what can we do with what we've got today?

A
How do we do more with what we have? And the conversation immediately shifts towards the idea of efficiency. How do we get more efficient? How do we do things better?
A
Today, what we're going to be looking at is specifically the engineering efficiency side of things, specifically from the perspective of DevOps. So with all of this being said, obviously you start thinking about: how do you get more efficient? You start putting plans together. You start strategizing and executing, of course, as quickly as possible, and all of that is fantastic, and all of that is great.
A
But while you're strategizing and executing a plan, how do you figure out whether it's working? I think, ultimately, the whole conversation around efficiency and effectiveness comes down to getting better metrics, getting an understanding of what is happening, and that is where today's discussion becomes very, very important. We want to know what is going right. We want to know what is going wrong. How do you make better decisions for your product? How do you make better decisions for the tools that you're using?
A
How do you make better decisions for your business as a whole? All of that is encapsulated in the ideas of observability and telemetry and, hopefully, as an extension, OpenTelemetry, which we'll explore later. But before we go there, like I said, we're going to be looking at it from the DevOps perspective. So let's look a little bit at the DevOps lifecycle, so to speak. It's a bit of an infinite loop, or a feedback loop as I would call it; an iterative loop would be a better way of putting it.

A
It starts off with the planning, coding, building, testing and then releasing: the developer side of things, the development side of things. Then we move on to the operational side, with deployments and operations and monitoring, and then it feeds all the way back into planning, and the cycle continues. Like I said, it's a very iterative process, and what we want to explore with the idea of efficiency is how to make this cycle go faster.
A
Make these iterations smoother and more efficient, for lack of a better word, and whatever percentage gains you can get through this is what is going to contribute towards that whole efficiency strategy that you might have: maximizing your return on investments, doing more with less. I think the entire conversation is hinged on that. So when we talk about this, obviously the idea of automation comes in, and in the DevOps world the idea of automation is driven by GitOps.
A
You would probably have heard this in the context of CI/CD or infrastructure as code, but really GitOps is a set of practices for managing infrastructure and application configuration in a declarative and version-controlled way, and ultimately it's an enabler for infrastructure automation. I'm specifically looking at API infrastructure automation, because that's where the context of today's conversation is going to go, but GitOps can be applied to your entire application stack, so it can be applied to different aspects.
A
It is a set of principles, not a set of tools. You can use tools to be GitOps-ready or to enable your GitOps journey, but GitOps by itself is more of a framework and guiding principle than anything else. It tells you how you can approach things for them to be more efficient and automated. I like this quote from someone who described GitOps as just "infrastructure as code done right", and we'll look at what being done right means.

A
With GitOps there are a few key principles that you need to be aware of, four primary principles, and when you think about those principles you look at how the system behaves and what your system needs to be. The first is that your system needs to be defined declaratively.
A
What that means is that you're describing what the desired state of your system is going to be. Instead of describing or defining a set of instructions, you look at what the end result, the outcome, the end state is going to be, as opposed to how you get there step by step. So that's the first principle. The second one is versioning.
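To make the declarative idea concrete: a Kubernetes Deployment manifest is a common example of desired state. The sketch below is purely illustrative (the service name and image are made up); it declares what should exist, not the commands to create it.

```yaml
# Hypothetical, abridged example: we declare the desired end state
# (three replicas of a pinned image version), not the steps to reach it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service        # made-up service name
spec:
  replicas: 3                  # desired state: three running copies
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
    spec:
      containers:
        - name: payment-service
          image: example.org/payment-service:1.2.0  # versioned, immutable image
```

A reconciler (the cluster, or a GitOps agent) is then responsible for working out the step-by-step actions that take the live system to this declared state.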
A
The system needs to be versioned and immutable. This is handled through versioning of your entire code base in Git, and this is where your infrastructure-as-code conversation comes in. It's all versioned in Git, which means that it is a lot easier for you to roll things back, because your state is maintained as different versions.
A
It's immutable, which means that, again, if you're looking at audits, you can go back in history and look at how things have evolved and gotten to the point that you are at right now. So rolling back and recovering systems becomes really, really easy and simple with this. Your system, again, needs to be pulled automatically, which means that your approved changes are automatically applied to the system, which in turn comes from a host of automated testing and checks that you would have in place.
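The "pulled automatically" principle is essentially a reconciliation loop: an agent continuously compares the declared state from Git with the actual state of the system and converges them. Here is a toy sketch of that idea in Python; it models the loop conceptually and is not the API of any real GitOps agent.

```python
def reconcile(desired: dict, actual: dict) -> list[str]:
    """Return the convergence actions needed to move `actual` toward `desired`.

    Toy model of one GitOps reconciliation pass: `desired` comes from Git,
    `actual` from the running cluster. Values here are version strings.
    """
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")          # declared but missing
        elif actual[name] != spec:
            actions.append(f"update {name} -> {spec}") # drifted from Git
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")           # running but not declared
    return actions

# Example: Git declares v2 of the frontend; the cluster still runs v1
# and also has an undeclared debug pod.
desired = {"frontend": "v2", "backend": "v1"}
actual = {"frontend": "v1", "backend": "v1", "debug-pod": "v1"}
print(reconcile(desired, actual))
```

Running this prints the two actions the agent would take: update the frontend and delete the undeclared pod. Real agents run this loop continuously, which is also what makes divergence detectable.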
A
And finally, if there is any divergence, the system should reconcile back to the desired state. These would be your principles of GitOps, which we've gone through quite quickly, but ultimately this is the gist of what GitOps is built on. So then, why is this important? What is the benefit of actually using this? Principles are great, but what's the actual benefit? Moving on to the benefits: we've got five key things that I've distilled down, and I think different people might have different ways of looking at it.
A
My objective here is looking at it from the spectrum of productivity: faster, more efficient, more frequent deployments made easier. If you've got a pipeline that is defined, that makes all of this easy without a whole lot of human intervention; with the right checks and balances in place, this becomes a lot, lot easier for you to manage. Cost efficient, of course, because you're reducing the downtime probability and you're managing your resources a little bit better; again, human intervention is reduced.
A
That means things are moving a lot quicker, a lot faster, reducing the amount of time and effort required. Reliability, because again we spoke about things being versioned in Git: it's immutable, it's easy to roll things back, and it makes life a lot easier from an errors perspective as well. You're less prone to errors because you've got the right checks and balances in place already. Compliance and security: it makes for simplified auditing and access control; you've got all that managed already.
A
You've got credentials management taken care of a lot more easily as well, especially in a cluster or orchestration environment like Kubernetes. And then, finally, we've got developer experience: instead of having to deal with a whole lot of tools within these pipelines, you really work with Git, something that most developers are familiar with and have worked with. That makes for consistent and familiar practices and tools, which again make life a lot easier and more productive.
A
So with that being said: the benefits are great, but how do you measure success off of that? Which is what brings us to: how do you measure the value of GitOps, specifically the business value of GitOps? And this is where, well, enter DORA. When I talk about DORA, this is not Dora the Explorer per se, but perhaps Dora the metrics person in this case. DORA stands for DevOps Research and Assessment.
A
This is essentially a way for you to measure DevOps efficiency, and there are four key things that you look at under the DORA metrics. We look at deployment frequency: how often does an organization successfully release to production? We look at lead time for changes, where what matters is the amount of time it takes for a commit to get to production.

A
Then there is the change failure rate, where you look at the percentage of deployments causing a failure in production, and then time to restore a service. I think this is again quite important, because how long it takes an organization to recover from failure is an important metric for you to understand, again from a GitOps perspective. All of these are really, really important for you to track.
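The four metrics just described can be computed directly from a deployment log. The sketch below uses made-up data purely to illustrate the arithmetic; real tooling (such as the Keptn dashboards shown later) derives the same numbers from traces and events.

```python
from datetime import datetime, timedelta

# Hypothetical deployment log over one week:
# (commit time, production deploy time, caused a failure?, minutes to restore)
deployments = [
    (datetime(2023, 3, 1, 9),  datetime(2023, 3, 1, 15), False, 0),
    (datetime(2023, 3, 2, 10), datetime(2023, 3, 3, 10), True, 45),
    (datetime(2023, 3, 5, 8),  datetime(2023, 3, 5, 20), False, 0),
    (datetime(2023, 3, 7, 9),  datetime(2023, 3, 8, 9),  True, 90),
]
days_observed = 7

# 1. Deployment frequency: releases per day over the observation window
frequency = len(deployments) / days_observed

# 2. Lead time for changes: average commit-to-production time
lead_times = [deploy - commit for commit, deploy, _, _ in deployments]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

# 3. Change failure rate: share of deployments causing a production failure
failure_rate = sum(failed for _, _, failed, _ in deployments) / len(deployments)

# 4. Mean time to restore service, averaged over failed deployments only
restore_minutes = [m for _, _, failed, m in deployments if failed]
mttr = sum(restore_minutes) / len(restore_minutes)

print(frequency, avg_lead, failure_rate, mttr)
```

With this sample data, the team deploys about 0.57 times per day, averages 16.5 hours from commit to production, fails on half its deployments, and restores service in 67.5 minutes on average.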
A
You also need to know when to course correct, and what is going right or what is going wrong. That difference between the desired state and the actual state is what you need to know, and that is the question that observability, in this case, needs to answer or speak to. You need to understand what is going on within your system and whether there are deviations. If it's all working well, that's fantastic: double down on that.
A
But if there are deviations, then how do you make sure that you're not drifting too far before you find the things that need to be changed? That brings us to OpenTelemetry, which is what we're going to be discussing today. OpenTelemetry is an open source observability framework for collecting, processing and exporting telemetry data, essentially to help you gain better visibility into the performance of your distributed systems. As you can see, it is supported by a whole lot of different tools in the industry.
A
It's already the second most popular open source project in the CNCF right now, after Kubernetes, which most of you might be familiar with already. It has a really, really big, active community today, and it's only growing in popularity, with more and more tools adopting this open standard. So why is this beneficial for you? Considering it is almost a no-brainer: it gives you better monitoring capabilities. It gives you better insight into service health, response times and errors; better insights is what we are looking at. It also provides you with a common language.
A
It's vendor neutral, it's open source. It gives you an open standard, a common language for you to integrate with external tools, and its support for multiple tools enables you to add all of those things to your stack, have specific tools that look at the same data differently, and get better insights and monitoring. And then, finally, all of that will help you make better product decisions, through better product insights and better usage metrics; product issues are highlighted earlier, and you have a more data-driven approach to decision making in your organization.
A
So with that, I bring it full circle back to our DevOps lifecycle, and once again this is important, because this is what we're going to be talking about. Andy will be looking at it specifically from the deployment side of things and how to get better on that side, whereas Sonia is going to be looking at it more from the API lifecycle side of things, and how you can troubleshoot your APIs a little bit better and get more insights from them. So I bring it all back.
A
The objective of today is to make this cycle more efficient and help you get started with that journey, or be better at the journey that you are already on today. So with that, hopefully that gave you a bit of an introduction into what we're going to be discussing today, and some background on the different concepts that we are touching upon. And with that, I'm going to head over to Andy. I have mentioned Andreas here, but I am going to say: over to you, Andy.
B
Thank you so much, Buddha. Thank you so much for having me, and yeah, let me share my screen. I hope I can just take away your sharing rights here. Here we go. Just quickly, in case you want to follow up with me later: I know information was probably being sent out anyway, but who am I? I was introduced in the beginning, but in case you joined late: during the day I work for Dynatrace, and the rest of the time I work for Keptn.

B
Well, obviously, I also work for Keptn during my day: I'm a DevRel and a maintainer for the open source project. Today I also want to quickly mention that, besides OpenTelemetry and Keptn, I want to highlight OpenFeature, another open source project in the CNCF space, because OpenFeature will also make it easier for you to deploy new features in a less risky way. So in case you're into feature flagging and you've never heard about it:
B
Maybe OpenFeature is something you want to look into as well. But Buddha, you started with this slide here, and you also ended, before you passed it over, on the DevOps infinity loop. As you correctly said, the DORA metrics are really here to give us insights into how well we are doing, how well we are pushing out changes from code all the way through building it, testing it and releasing it.

B
Then, on the right side, there are change failure rate and time to restore services, which are metrics that show you how mature you are in terms of operating your software, how fast you can recover from a problem. So I want to focus specifically now, in my presentation, on observing the DORA metrics from a build-to-deploy perspective, and then, Sonia, you will be covering the operational aspects: using the monitoring data to do the troubleshooting and all that. So I want to first focus on: how do we get stuff into production?
B
Buddha, you also mentioned GitOps, and I am a big fan of GitOps, and I would say GitOps is not just infrastructure as code done right; I think it's full stack as code done right, because it obviously spans from the infrastructure to the application. I want to highlight first, though, that GitOps, as great as it is, also requires a new approach to observing and measuring DevOps efficiency, and here's why.
B
The way I see the world, and please feel free to correct me or challenge me: in the classical monolithic world that we used to live in, and some of us still live in, the complexity of building monolithic applications was really all on the dev side, because this is where you had a joint repository of code, where multiple teams had to figure out how all of the code works together. You had to invest a lot of time in continuous integration.
B
We had to figure out how all the different components actually make up an app; this is where you did your validation and security checks, and then we had tools like Jenkins that were able to produce, I would say, a simpler kind of construct, like a monolith, which then allowed a little bit more simple operations to deploy, observe and operate a well-defined app. And I think with the emergence of GitOps we also saw, obviously, the move towards breaking the monolith into smaller pieces in the cloud native world.
B
We definitely ended up in a place where we have engineering teams developing individual services rather than building big monolithic applications. Obviously, there's a debate on when it makes sense to build a monolith versus when it makes sense to build services. However, I think the move was great, because we made development simpler; we were boosting, I think, efficiency on the output of individual development teams. However, what did this really mean?
B
We were shifting right the complexity of actually running applications, applications that are now built from multiple capabilities and multiple services, towards operations. There's the whole app composition problem: what is really an app? What set of services, in which versions, running on which infrastructure and which cloud services, really makes up version one or version two of your app?
B
If you measure DORA in the new world, in the cloud native world, only on individual services, you all of a sudden see a lot of new services being deployed all the time, but it doesn't really mean that you are deploying new features or new apps to your end users, right? So this is the challenge with app composition.
B
What does now make up an app? This is the problem that we try to address with the open source project Keptn: bringing visibility into the new cloud native world as you're building microservice-based applications, but also bringing application awareness into the observability that we provide. So what does this really mean?
B
What is Keptn? Keptn is an automated, app-aware observability toolkit that gives you visibility into all of your GitOps and Kubernetes deployments. What you need, or what you hopefully already have, is your GitOps tool of choice, your Kubernetes clusters, and your observability tool of choice. Sonia, I tried to do a good job and put in all of the logos that you had.
B
If I missed any of the logos, don't be offended: any observability tool that can deal with OpenTelemetry, Prometheus or any of the other open standards should be placed up here, but there was limited space, so I put the tools on there that I see on a day-to-day basis. Now, from a Keptn perspective, the only thing you need to do is install Keptn. We call the latest iteration of Keptn the so-called Keptn Lifecycle Toolkit.
B
You just install it on your Kubernetes cluster. There are a couple of things, which I will show in the demo later on, where you can instruct Keptn on what to observe, what not to observe, and what to do. But once you do that, your developers push code changes into Git, and then your GitOps tool of choice (I will use Argo later on as my GitOps tool of choice) does the deployment.

B
From that point on, Keptn will automatically give you insights, using both Prometheus and OpenTelemetry, so that you automatically get dashboards that show you the key DORA metrics: how often you deploy, how long it takes to deploy, how many deployments fail. And the beauty of it is, because we are basing this all on open standards, you can look at this data in the tool of your choice.
B
How does this technically work, in case you're interested? Technically, Keptn is a Kubernetes operator that you install on your Kubernetes cluster. To give you a quick example: if you have an app that is made out of three services, front end, back end and storage, in different versions, then what our operator does, if it's instructed to do so through some annotations on your Kubernetes deployment manifests, is automatically measure the time each individual service takes.
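The annotations Andy refers to look roughly like the sketch below. The keys follow the Keptn Lifecycle Toolkit conventions as of this webinar's timeframe, but may differ between versions, so treat this as illustrative and check the current Keptn documentation; the app and workload names are made up.

```yaml
# Abridged Deployment showing where the Keptn annotations go:
# on the pod template of the workload you want observed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  template:
    metadata:
      annotations:
        keptn.sh/app: demo-app        # groups several workloads into one application
        keptn.sh/workload: frontend   # the workload Keptn should observe and trace
        keptn.sh/version: "4.0.2"     # version reported in traces and metrics
        keptn.sh/pre-deployment-tasks: notify   # tasks run before this workload deploys
        keptn.sh/post-deployment-tasks: notify  # tasks run after it deploys
```

The `keptn.sh/app` annotation is what gives Keptn the application context discussed below: several annotated workloads sharing one app name are measured together as a single application deployment.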
B
What do we do with this? We are measuring it; that means we're creating metrics and also traces for every single deployment: front end, payment, storage, including the time span, obviously. How long does it take? Now, we are also introducing an application concept, because, as I told you earlier, the big challenge is not knowing that you deployed individual services.
B
The big challenge is: how long does it really take to deploy an update to an application? Because an update could be updating one service or five services. So we are allowing you to define the application context, and then, when you're making an update to one or multiple components of your application, we also automatically measure the complete time: when the whole update starts, and when the individual updates of the services, or the workloads, are done.
B
We also allow you to execute tasks, so you can also use Keptn to orchestrate actions pre-deployment and post-deployment. Now, why is this important? One of the metrics in DORA is failed deployments versus successful deployments, so we can use Keptn, after a deployment is done, to execute a task within your Kubernetes namespace and have it, for instance, figure out: is my app really up and running? You can execute some tests, you can validate it, or you can validate your SLOs.
B
By going back to your observability platform, we can also use pre-deployment tasks to validate whether it's actually a good time to deploy the application right now, because maybe your external dependencies are not there, or maybe your environment is currently in quarantine, and we can actually stop the Kubernetes scheduler from deploying your pods. All right, so this is what we technically do: we are an operator that traces, monitors and observes the complete end-to-end deployment of your application.
B
That can consist of one or many workloads, and we can even execute pre- and post-deployment tasks. Now, how does this look when we look at the trace that we generate? So when you, or your GitOps tool, are applying the changes... this is not a visualization of how it looks in Jaeger; I'll show that in the demo as well, but just to make it easier to visualize: you're deploying an app, in this example version 301.
B
As you can see here, we then automatically start creating a trace, and not a single-node trace: we're creating a full application deployment trace. That means it actually consists of the pre-app-deployment tasks that we can execute, with checks before the app should be deployed. Then, for every workload, your front-end service, your back-end service, your database service, whatever service, Keptn will create spans on that trace.

B
They show you how long it took to do the pre-checks for the workload, how long the actual deployment of that workload took, and how long the post-deployment checks took. Remember, post-deployment checks could be checking whether the deployment is really successful by executing some tests, and we do this for every single workload: the front-end service, the back-end service, the database service, whatever it is. Once all of them are done, at the end we also execute the post-app-deployment tasks, and we measure those as well.
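The trace structure described here, an application-level trace wrapping per-workload pre-check, deploy, and post-check spans, can be modeled with a small toy sketch. This mimics the shape and timing arithmetic of such a trace, not Keptn's actual implementation; all names and durations are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """Toy span: a named operation with its own duration and child spans."""
    name: str
    duration_s: float = 0.0
    children: list = field(default_factory=list)

    def total(self) -> float:
        # End-to-end time: this span's own work plus all nested work,
        # assuming children run sequentially, as in a deployment pipeline.
        return self.duration_s + sum(c.total() for c in self.children)

def workload_span(name: str, pre: float, deploy: float, post: float) -> Span:
    # Each workload contributes pre-check, deploy and post-check child spans
    return Span(name, children=[
        Span(f"{name}/pre-checks", pre),
        Span(f"{name}/deploy", deploy),
        Span(f"{name}/post-checks", post),
    ])

# One application deployment trace: pre-app tasks, three workloads, post-app tasks
trace = Span("app-deployment v3.0.1", children=[
    Span("pre-app-tasks", 2.0),
    workload_span("frontend", 1.0, 12.0, 3.0),
    workload_span("backend", 1.0, 20.0, 3.0),
    workload_span("database", 1.0, 30.0, 3.0),
    Span("post-app-tasks", 2.0),
])

print(trace.total())  # end-to-end deployment time in seconds
```

Because every step is a span, slow phases (here, the database deploy) stand out immediately when the trace is rendered in a tool like Jaeger or Dynatrace.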
B
You could look at this in Honeycomb; I use Dynatrace, because that's what I do during my day-to-day life. What I have here is my app's Git repository and, as Buddha said, what's really nice: if I want to change something, and I have a sample app here (it's not very fancy), if I want to update the version of my app, the only thing I need to do is this.
B
If I have a new container image, the only thing I need to do, and let me just commit this, is deploy the version 402. Oops, not the sharpest, but that's it. If I commit it, then what I have here is my Git update. This could obviously be application configuration as code; this could also be infrastructure as code. But what I'm basically instructing my tool to do now... my GitOps tool of choice is Argo.
B
You can use Flux or anything else, I don't really care, but Argo is now updating and pushing my changes out. And what is happening? If you can see this here, we have some annotations where I actually instruct the Keptn operator that this is really a workload it should look into, it should observe and trace, and here are also the pre- and post-deployment actions that we can execute. What this does, I'll explain in a second.
B
Argo is pushing out these things; Keptn is now working in the background, and what Keptn actually does is automatically start tracing, monitoring and observing the deployments that happen in my system. Now I can look at this data, because it's all based on open standards, in different places. Here I have Grafana installed and I can see what's currently happening in my environment. I can also look at it elsewhere, because this is an open standard, so I'm using Dynatrace here to also visualize the workflow.
B
So earlier today I deployed versions one, two and three, as you can see, and if I refresh I should also see version number four as a trace coming in, hopefully any second. Let's see... version number four is coming in already. Perfect. So every time a deployment happens, I get a trace. Let me actually quickly open up this trace here, version three; version number four is probably still ongoing, but if I look into this trace I can see:
B
This is the deployment that I did earlier. By making my Git commit, I basically said I want to change the desired state. Argo was applying the desired state to my Kubernetes cluster, and what I see here is an end-to-end trace of how my deployment actually made it to Kubernetes, including my pre-deployment tasks, my pre-workload tasks, and the actual deployment with timings, also with additional information: if I click on these individual nodes, we also have so-called attributes.
B
When you are creating these OpenTelemetry spans, like we are doing with Keptn, you can add as much metadata as is needed. So I know exactly what type of app and what version is actually being deployed; it's all here. And if I now go over... you see, version number four is already almost done; the pre-deployment tasks are coming in, a full end-to-end trace. Now, why is this important from an efficiency perspective? Because I want to know how long it takes to get a deployment done.
B
How long did deployments take over time? And just to really show that you can do this in any type of tool: you can do this in your observability tool of choice, as long as that observability tool understands and supports these standards. All right, last but not least in the demo that I want to show: if I look at my deployment YAML, you saw earlier that I have pre- and post-deployment tasks. It says "notify" here for a pre-task and a post-task.
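For reference, a Keptn task of the kind demoed here is declared as a resource containing an inline JavaScript function that Keptn runs as a short-lived job. The field names below follow the Keptn Lifecycle Toolkit API as it stood around early 2023 and may have changed since, so consult the Keptn documentation before relying on them; the webhook URL is a placeholder, not a real endpoint.

```yaml
# Sketch of a "notify" task definition: an inline JavaScript function
# executed by Keptn pre- and/or post-deployment. Illustrative only.
apiVersion: lifecycle.keptn.sh/v1alpha3
kind: KeptnTaskDefinition
metadata:
  name: notify
spec:
  function:
    inline:
      code: |
        // Hypothetical Slack-style notification via an incoming webhook
        let text = "Keptn deployment event for demo-app";
        fetch("https://hooks.example.com/placeholder", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ text: text }),
        });
```

Referencing `notify` from a workload's pre- and post-deployment task annotations, as in the demo, then produces one notification before and one after each deployment.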
B
What does notify really do? It is actually executing... where are my tasks... here are my Keptn tasks. One of the options Keptn gives you is to write a JavaScript function that we then execute; it's kind of like a serverless function, and this function here is actually sending out a Slack notification. So, through a declarative way, declared on the deployment:
B
Basically, in my Slack, here's my Slack, I now see that my version 4 got deployed, and I got two notifications, because I said I want to call this function pre- and post-deployment. All right, so there's more that I could demo, but considering the time, I want to bring it home and then pass it over to Sonia. I just want to end with one thing here: I showed you the whole thing on a single cluster, but really the end goal with this is bigger, because you probably have multiple different environments.
B
So that means you're not constrained to a single environment with what we're doing here, with what you saw. What we really want with Keptn, and with getting the observability in (I'm just drawing this out here), is to give you end-to-end traceability from the first Git commit, all the way from development to production. This is what we're aiming for, and I think:
B
This is what OpenTelemetry and open standards enable, so that in the end we can really get to a world where we enable developers to more easily develop their individual services, but also reduce the complexity and give operations the insights, so that they know what they are getting is actually there to stay and is good. I hope this was insightful, but now it's time, I think, to pass it over to Sonia, who will focus more on the operate-and-monitor piece of the whole thing.
C
Thank you, Andy, that was really interesting. Every time I see this demo I think we need to contribute to the demo with an API Gateway and an API definition that we can also push through the stages — exactly, and using OpenAPI, so one more standard added to it. I think that would be really great, so something for the future for us.
A
Yeah, while you're doing that, Sonia, just a quick question. Andy, there are a couple of questions that have come through from Gabriel — thank you so much for the questions. He asks: is there a demo with Flux CD? That's the first question. I don't think we're doing that today, but is there a demo that he might be able to reference?
B
I don't have a demo with Flux CD, but it's the same concept, right? I mean, you can just install Keptn on your Kubernetes clusters. If you use Flux, you get the same thing — I'm just using Argo. And as for tracing, again, these are OpenTelemetry traces, and yes, I showed Dynatrace, but you can see here, this is the same trace in any other tool.
C
Okay, and now we are here, so we are shipping things to production. And yeah, that's not typically the way it should be, but it's still how most of it goes: developers do something, commit it, and then it runs to production. And what happens then? What's the next step? If we go back to our circle, we are now at the point where, once we have deployed and we are operating, we start to monitor, because you want to know what's happening in production.
C
Everybody knows you can test your application and your services as much as you want, but when it's in production, customers will start using it in some different combination that you haven't tested, with all the services interconnected in different versions. So you will have new learnings, and this is what we are going to talk about in my part of the presentation. I've called it learning from production: once it's in production, what can you learn from how your systems are running and how your APIs are doing? And there are kind of two different learnings.
C
There is one that's the really technical part, where you want to know: are there any errors that I need to act on now? Is anything going really wrong? Do I have much more traffic — is something happening, do I need to autoscale? Or can I scale down and save some money? Is there any misconfiguration of resources?
C
Do I need to act — if possible automatically, without somebody having to be woken up in the middle of the night? And then there is more: continuous improvement on the product side. As a product manager, I want to learn how users are using my application and my services. Where can I improve, how can I provide better services? Maybe: what are they not using?
C
So we can also deprecate some things, to really reduce and be more efficient in the services that we're offering. And all of this you can learn from production using observability data and OpenTelemetry, because it's vendor-neutral. As we were discussing, it's one standard, one protocol format, that you can send to many different observability vendors and open-source platforms, and you need to do the instrumentation only once. This is why we are working on it and why we have it in the Tyk Gateway, because it's super valuable.
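The "instrument once, send anywhere" idea can be sketched without any particular SDK — this is a toy illustration with invented names, not the real OpenTelemetry API, which achieves the same decoupling via OTLP exporters:

```python
# Toy illustration of vendor-neutral telemetry: the service is instrumented
# once against a neutral interface, and exporters for different backends are
# swapped in without touching the instrumentation. All names are invented
# for illustration; the real OpenTelemetry SDK works analogously.
class Span:
    def __init__(self, name, attributes):
        self.name = name
        self.attributes = attributes

class Exporter:
    def export(self, span):  # the one method every backend implements
        raise NotImplementedError

class ConsoleExporter(Exporter):
    def __init__(self):
        self.lines = []
    def export(self, span):
        self.lines.append(f"{span.name} {span.attributes}")

class InMemoryExporter(Exporter):
    def __init__(self):
        self.spans = []
    def export(self, span):
        self.spans.append(span)

def handle_request(exporters):
    # the "instrumentation" happens once, regardless of backend
    span = Span("GET /products", {"http.status_code": 200})
    for e in exporters:
        e.export(span)

console, memory = ConsoleExporter(), InMemoryExporter()
handle_request([console, memory])
```

Swapping or adding a backend only means registering another exporter; the service code that creates the span never changes, which is the efficiency point being made here.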
C
Also, when you have APIs, you typically don't have a front end: you just expose your APIs to your external users, so you don't have session monitoring, and you don't have a UI to watch to see what users are doing or where they are clicking. So you really need those insights from the gateways — the first thing that the traffic from your customers hits — and this is where you can observe what is going wrong, what is going well, and what you can learn.
C
What's even better is if not only the gateway sends data, but also all the other services, because then you really get this nice end-to-end trace, end-to-end visibility, to be able to understand how much time is spent in the API gateway. If there's an error: is it a misconfiguration on the gateway? Is it something that is happening much later in the upstream services? And at which point does which team need to act to solve that issue?
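How much time is spent in the gateway versus upstream can be read off a trace by subtracting child-span durations from the gateway span's own duration — a small sketch, with invented span data in milliseconds:

```python
# Sketch: compute the gateway's "self time" in a trace by subtracting the
# time spent in its child (upstream) spans from its own duration.
# Span names and timings are invented illustrative data, in milliseconds.
spans = [
    {"id": "gw",  "parent": None, "name": "api-gateway",  "start": 0,  "end": 120},
    {"id": "up1", "parent": "gw", "name": "checkout-svc", "start": 10, "end": 80},
    {"id": "up2", "parent": "gw", "name": "cart-svc",     "start": 85, "end": 110},
]

def self_time(span_id, spans):
    me = next(s for s in spans if s["id"] == span_id)
    total = me["end"] - me["start"]
    children = sum(s["end"] - s["start"] for s in spans if s["parent"] == span_id)
    return total - children

print(self_time("gw", spans))  # 120 - (70 + 25) = 25 ms inside the gateway
```

If the gateway's self time dominates, the misconfiguration or bottleneck is in the gateway; if a child span dominates, the owning upstream team needs to act.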
C
This is an example of what an end-to-end trace looks like. You've also seen one in Andy's presentation, which was more about the deployment; here it's really a trace of an HTTP request. You can see it hitting, in that example, first the front end — the front end is hosted on its own — then the gateway, and in the Tyk Gateway there are different middleware operations, like checking the version, so you could have different versions of your APIs and redirect to the right version.
C
You could have a cache, rate limiting, and then the request being sent to services that could be gRPC, GraphQL, REST, whatever — and then everything that's going on in those services, and then the call being successful. For this demo, the OpenTelemetry demo: the OpenTelemetry community has been working on a demo application. It's a shop, you can run it in Kubernetes or in Docker, and there are different products.
C
You can click on products, add things to the cart, buy things, and all of this is instrumented across the different services. What we have done at Tyk is change it a little bit to add our API Gateway into the mix, so all the API calls go through the API Gateway, which then does the redirection — the forwarding — to the upstream services that are using gRPC. And when we send the traces, the data, to Dynatrace, we can get a really nice overview of how the gateway is doing.
C
The typical service metrics: the response time — it's all running on my computer, so no network, so it's pretty fast, and there's not that much traffic. You can see the failure rate, which looks pretty decent, and you can see the throughput. Then you can go over to the traces. This is one view that already gives you information about the infrastructure, so this is the view you would use for autoscaling, to see: oh, there are a lot of requests coming in, we need more gateways, we need to scale.
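A scaling decision like the one described can be a simple rule over the observed throughput — a hypothetical threshold policy for illustration, not Tyk's or Kubernetes' actual autoscaler logic; the target of 100 requests/sec per replica is an invented number:

```python
# Hypothetical scale decision driven by observability metrics.
# Thresholds and field names are invented for illustration.
def scale_decision(requests_per_sec, replicas, target_rps_per_replica=100):
    # desired replicas so each one handles roughly the target load (ceil div)
    desired = max(1, -(-requests_per_sec // target_rps_per_replica))
    if desired > replicas:
        return ("scale_up", desired)    # traffic spike: add gateways
    if desired < replicas:
        return ("scale_down", desired)  # quiet period: save some money
    return ("hold", replicas)

print(scale_decision(450, 3))  # ('scale_up', 5)
print(scale_decision(80, 3))   # ('scale_down', 1)
```

In practice the same metrics that feed this rule come straight from the trace data, which is why having the gateway emit telemetry enables automatic action without waking anyone up.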
C
Or: oh, there are too many errors — do we need to wake somebody, or do we need to act and do something automatically? But that's not all there is to API observability, because that's really the infrastructure side, the gateway runtime. When you're dealing with APIs, you're also interested in learning about the usage of the different APIs, at a more granular level — and you can do all that.
C
If you have the data, you can create some nice dashboards and visualizations, kind of like a BI tool, to look at it. So I created a dashboard with the metrics based on the traces, which in turn are based on the telemetry data that Tyk is exporting, and here I can see: okay, which are the most popular API requests? I see that the product
C
catalog service is the most used one — the one that gets most of the requests — and then the cart service. I can see all my services, and I can see the response codes. Most often it's successful 200 responses, but here I see the errors. That's good: there are some errors coming from my checkout service, and I already have some information that the errors are coming from the middleware part in Tyk that is responsible for rate limiting.
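The dashboard views described here — most-requested endpoints and response-code counts — are simple aggregations over span attributes. A sketch with made-up span data (the endpoint names echo the demo shop but are illustrative):

```python
# Sketch: aggregate span attributes into the numbers a dashboard shows
# (top endpoints, response-code counts). The span data is invented.
from collections import Counter

spans = [
    {"endpoint": "/product-catalog", "status": 200},
    {"endpoint": "/product-catalog", "status": 200},
    {"endpoint": "/cart",            "status": 200},
    {"endpoint": "/checkout",        "status": 429},  # rate-limited
]

by_endpoint = Counter(s["endpoint"] for s in spans)
by_status   = Counter(s["status"]   for s in spans)

print(by_endpoint.most_common(1))  # [('/product-catalog', 2)]
print(by_status[429])              # 1
```

The same counts, split by the middleware that produced the error, are what point the investigation at the rate-limiting middleware rather than the upstream service.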
C
So maybe let's take a look — first, let's check if there's even really a problem. I'm going to try here: checkout, place order... yeah, that doesn't work, okay. So we need to check that, and I can look at the distributed traces. Oh yeah, see, there are some traces, some transactions, that are having errors with that API, with the checkout. And this is the beauty of the distributed traces, the end-to-end traces: at just one look I can say, okay, there's an error
C
that's coming from the rate-limit part in the checkout service. So, the checkout service API, and: oh, API rate limit exceeded. Okay, so either somebody is making too many requests, or I have misconfigured something. I can go to Tyk, to my API definition, and check it. Let me check... yeah, I have my rate limits. Oh, I have only one request per minute — so somebody made an error.
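In a Tyk classic API definition, the API-level rate limit is expressed as a number of requests allowed over a window in seconds; a minimal fragment sketching the misconfiguration described (one request per 60 seconds) — field names follow Tyk's classic schema as I understand it, so treat this as illustrative rather than a complete definition:

```json
{
  "name": "checkout-api",
  "global_rate_limit": {
    "rate": 1,
    "per": 60
  }
}
```

Raising `rate` (or lengthening `per`) is the one-line fix, and keeping this file in version control is what makes the GitOps-style, automated change possible.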
C
So let me increase that. Obviously, here I'm doing it in our API manager UI, but that's something that you would do as code: the API definition is something you would put through the whole GitOps process and just automate. But here it's easier to show it to you like this.
C
I have updated my API definition, and now, if we refresh a little bit and wait, at some point we'll see that the errors are no longer coming in. Let's check if I can place my order... yeah, I could place my order. And that's really the beauty of OpenTelemetry: you see the error directly — is it a misconfiguration in the gateway, or is it something in the upstream services?
C
And sometimes it's not an error per se — the technical engineers may know that was not an error, that was how it's supposed to be, this cannot happen. But as a product manager, I can go there and see what errors my users are having, and if many users are having errors with rate limiting, with authentication, with one path, then I can look at the documentation that I have for my APIs.
B
And I want to add — first of all, awesome demo — I want to add again, especially for people like Gabriel: I know we showed the visualization of these traces in Dynatrace, Sonia showed it, but you can view this in any type of tool that understands OpenTelemetry. And what I like about your demo is that you showed a very common use case in distributed systems, where you have API gateways or service meshes or whatever
B
it is between services. If somebody makes a configuration mistake and therefore stops API calls from going from A to B — stopping something critical like an order transaction — then you want to be notified as fast as possible, and you want to have the data that shows you: the problem is there, it's a rate limit, or whatever else it is. Yeah, this was really, really nice.
A
Absolutely, it was a really good demonstration from both of you, actually. And I think, just on that last point, there is a true business implication of something like this, where it's not just about having the best possible product experience — and obviously that is very important; that is what people probably pay for. If they are using a paid product, they want the best experience possible.
A
They want their users to be able to use their product better, but at the same time it's also about, you know, uptime commitments and SLAs, and I think there is an entire commercial aspect to supporting a particular product. The faster you can respond, identify an issue, and, hopefully, resolve it —
There
is
True
commercial
implications
of
that
the
support
packages
that
go
along
with
it
all
of
that
comes
down
to
True
business
implications
of
what
we
have
looked
at
so
at
me,
at
the
face
of
it
is.
It
is
a
fairly
technical
thing
that
we
we
looked
at.
We
are
looking
at.
You
know
githubs
you're,
looking
at
open
Telemetry,
but
there
are
true
business
implications
of
everything
that
you
you
are
seeing
here,
the
end-to-end
ability
to
look
at
things
from
an
end-to-end
perspective.
What
is
going
on
internally,
what's
right,
what's
wrong,
how
do
you
improve?
B
And one thing to add here: I know that many organizations in the past have developed their own tools and their own standards for how to collect data from different parts of the infrastructure, and maybe written it to their custom databases or wherever. Don't do this anymore, because we have OpenTelemetry with Open Standards. If you're missing observability in a particular piece of your infrastructure, in particular processes, then don't implement something custom and proprietary — use OpenTelemetry. There are SDKs for pretty much every language out there.
A
A hundred percent, and I think that's the beauty of it again: Open Standards in general, OpenTelemetry in this case specifically. The idea is that you don't have to think about hooking into specific systems that have their own language, their own requirements. You don't need to do that anymore. You can have this common language that is spoken across different systems and solutions, and, like Sonia mentioned, once you've set that up inside your systems and inside your solution,
A
it is going to be applicable to any other integration in the future. Even if it's not today — if you want insights later on, you would still have the ability to integrate with those solutions a lot more easily than having to write a middleware, a plug-in, or some kind of hook to connect to each system in its own language, and having to maintain five or six or maybe ten different systems altogether. So that's the beauty of it; that's the productivity.
A
Hopefully the efficiency part of things comes out a little bit better there. There's a question again from Gabriel: will we share the slides? In fact, I'll do you one better — we will be sharing the entire presentation. This entire video recording will be made available to everyone here, so you shouldn't have any issues with that if you wanted to follow along.
A
From a questions perspective, I think there is one thing that you touched upon, Andy, that I wanted to clarify and re-emphasize a little bit. A lot of the time when we think about OpenTelemetry, it is looked at from a very transactional perspective, whereas what you mentioned today was very much from an end-to-end application side of things. So, if you could, maybe just clarify that a little bit, or reinforce it in the minds of everyone who's listening.
B
Yeah, thank you so much for the question. Obviously OpenTelemetry, I think, was born out of the necessity and the need to trace end-to-end transactions in business-critical apps, but we can use OpenTelemetry for any use case — whether it is the deployment use case that I've shown, from git commit all the way into production, or, you can also think about business process
B
monitoring, right? If you have business processes that span multiple systems, where you even have wait time in between — think about an order process — there are also ways in OpenTelemetry to create traces and spans that are then linked together. So you can really think about tracing end to end, from the first time you reach a customer and get their interest until you ship the product.
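Linking spans across systems into one trace rests on propagating context between them; the W3C Trace Context `traceparent` header that OpenTelemetry uses for this can be parsed in a few lines (the header value below is the example from the W3C specification):

```python
# Parse a W3C Trace Context "traceparent" header, the mechanism
# OpenTelemetry uses to tie spans from different systems into one trace.
# Format: version-traceid-parentid-flags.
def parse_traceparent(value):
    version, trace_id, parent_id, flags = value.split("-")
    assert len(trace_id) == 32 and len(parent_id) == 16
    return {
        "version": version,
        "trace_id": trace_id,    # shared by every span in the trace
        "parent_id": parent_id,  # the span that caused this request
        "sampled": flags == "01",
    }

hdr = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
ctx = parse_traceparent(hdr)
print(ctx["trace_id"])  # 4bf92f3577b34da6a3ce929d0e0e4736
```

Each system reads the incoming header, starts its spans under that trace ID, and forwards an updated header downstream — which is exactly what makes a multi-system business process appear as one end-to-end trace.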
B
I mean, even that's possible, and that's why OpenTelemetry is not constrained to the classical tracing use case of tracing business transactions in business-critical apps.
A
Absolutely, thank you so much for that response. The next question that we have — this is probably over to Sonia in this case. Typically, when we think about OpenTelemetry — and obviously, working on the API management side of things, there are different API styles that we have to deal with at the API gateway level — a lot of the conversation tends to go towards REST APIs today, because that is obviously the most popular standard. But equally, could something like Open
A
Telemetry help us get better insights with, say, a GraphQL API, or perhaps a gRPC API? All of these work a little bit differently: they provide different insights, they have different operational models internally. So could OpenTelemetry be extended to give us better insights into those?
C
Yes. So, as you mentioned, with REST it's just easy and straightforward; we know all the fields. There is a semantic convention with OpenTelemetry that kind of defines standard fields that everybody uses — you know, HTTP response code and so on — and that's really helpful, because in the tool, when you want to aggregate data from different sources, you can use the semantic conventions to create filters. But there are some things that are still missing, for example for GraphQL, because: what is a GraphQL error? With GraphQL, you could have
C
a query calling two different services and only getting data from one, and then the response of the GraphQL call would still be an HTTP 200 — so OpenTelemetry would interpret it as something that was successful, even though you are missing some part of the data. That's something that we are also looking into, and I will have a talk about it at KubeCon with a co-speaker.
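The "HTTP 200 but partially failed" problem described here comes from GraphQL reporting errors inside the response body rather than via the status code; a sketch of the kind of check an instrumentation would need (the response payloads are invented, but the top-level `errors` field is what the GraphQL specification defines):

```python
# Sketch: a GraphQL call can return HTTP 200 while the body carries errors,
# so span status cannot be derived from the HTTP status code alone.
# The response dicts below are invented illustrative payloads.
def graphql_span_status(http_status, body):
    if http_status >= 400:
        return "ERROR"
    if body.get("errors"):   # per the GraphQL spec, errors live in the body,
        return "ERROR"       # possibly alongside partial "data"
    return "OK"

ok = {"data": {"cart": {"items": 2}}}
partial = {"data": {"cart": None},
           "errors": [{"message": "cart service down"}]}

print(graphql_span_status(200, ok))       # OK
print(graphql_span_status(200, partial))  # ERROR despite HTTP 200
```

A semantic convention for GraphQL would standardize exactly this kind of rule, so that every tool classifies partial failures the same way.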
A
Thank you so much — this has been fantastic. I think we are exactly one minute to the hour, so thank you, everyone, for joining us. Thank you so much, panelists; it has been an incredible conversation. I have thoroughly enjoyed learning all about OpenTelemetry, Keptn, Tyk, Dynatrace, and the entire workflow that goes into making the application life cycle more efficient. So thank you, everyone — it's been a pleasure.
A
Thank you, all right, everyone, until next time. We've got another webinar coming up next week, where we'll be looking at a declarative approach to API management in Kubernetes — next week, on the 23rd of March as it stands — so do make sure that you join us for that. We'll go a little bit deeper into some of the concepts that we discussed today, and stay tuned for more as we go along. So thank you so much, everyone. It's been a pleasure. Until next time, take care and have a lovely day ahead. Bye.