From YouTube: InfluxDB + Telegraf Operator Easy Kubernetes Monitoring
A
Hello everyone. As Caitlyn mentioned, today we're going to be explaining InfluxDB together with Telegraf Operator and how to use them to monitor Kubernetes workloads, and we're going to be showing some examples. A lot of these are based on what's in the telegraf-operator repository. Maybe we'll start by introducing ourselves.
B
Cool, so I'll go ahead and introduce myself. I'm Pat Gone, I'm an engineering manager here at InfluxData. I manage the deployments team, and we are responsible for all the plumbing that is in place: this whole CI/CD pipeline for our Cloud 2 SaaS offering. So we'll kind of focus our talk on the space that we know, which is Kubernetes and InfluxData.
B
So first I just really want to say: InfluxData is the remote-first company behind InfluxDB. I think most of you probably know more than I do about how to use our product, but I'll give a little bit of an overview. Don't hesitate to ask questions along the way, and we'll get to them at the end.
B
So InfluxDB is the platform for building time series applications. I wrote all these really good words and now I'm having to read them, but really, at the heart of it, it's an open source time series database. It's purpose-optimized for time series data, whether that is sensors or, say, one of those doorbells where you can see the person: there's time data there.
B
So wherever there's time-based data, InfluxDB is a perfect platform for you to develop applications around that data. You can start from the UI, or you can skip right past it and use the raw code and the APIs. We've got APIs and client libraries in several of the most popular programming languages. So, Telegraf, if you're not already familiar with Telegraf:
B
If you have, say, that Ring device over there, and you want to get your data somewhere, Telegraf has the input and output plugins to allow you to get your data from your device into a database. Of course my preference is that you put it into InfluxDB, but we've got plugins for other types of things.
B
It's an open source agent, and I think it has a really healthy open source community. It's maintained by InfluxData, and there are over 300 different plugins that allow you to, like I said, get your data in, manipulate your data on the way in, and manipulate your data on the way out.
B
It's a really powerful tool. Today we're going to focus on talking about it in the Kubernetes space, but I know there were several talks, I think at InfluxDays North America, where they actually talked about Telegraf: I think there was a beginner session and some other things. So check it out, because it's a really powerful tool.
B
So now I'm going to tell you a little bit about the Telegraf Operator; I wrote some notes ahead of time to prepare for this. The Telegraf Operator packages the operational aspects of deploying a Telegraf agent on Kubernetes. So this is about having a Kubernetes sidecar: it's a sidecar container, based on annotations, and it provides the Telegraf configuration to scrape the exposed metrics, all defined declaratively.
B
It allows you to define common output destinations for all your metrics, so you can send them to InfluxDB, or you can also send them elsewhere. And I'm going to pause there, because I want to let Wojciech finish setting the stage for his demo and I don't want to take it all. So actually, Wojciech is going to take it from here and do a demo. But I'll let you also finish.
A
Right, so thank you, Pat. As you've mentioned, Telegraf Operator is meant to be running alongside your workloads.
A
Okay, yes, so I noticed that there is a question about APIs for InfluxDB, so I'll just share this real brief and keep it open. So we have documentation about all of the APIs. There are also clients, and I'll show them in a bit, so it's well documented. On top of the REST APIs there are query languages called Flux and InfluxQL that can be used to get the data, and writing the data is relatively simple. But going back to Telegraf Operator.
A
But it's also really useful in development, and we'll just use the exact same setup that we use when we develop it. So, what I did in advance, because it takes around one to two minutes: I ran a make kind-start command, which basically just creates a kind cluster on my computer, and it deploys a few things. But we're going to deploy InfluxDB version 2, because that's what we want to demo.
A
We also have a UI that shows how to write data from a lot of places. Say you're a Golang developer: it'll give you ready-to-use snippets. Obviously you would want to replace the token and some other things, and over time parameterize this, but this is a really good way to get started with just putting data into InfluxDB. But anyway, right now what I really want to do, in order to be able to write, is set up my organization.
A
What I'm telling it is: in my cluster there is an influxdb2 service, which is what we were just talking to in the browser, in the influxdb2 namespace, and the port it listens on is 8086, which is the default port. I'm just going to tell it my organization is demos, which I just entered; the bucket is demo; and now I'm just going to copy my token. Because it's my local machine, I'm fine sharing it, because I'll just kill the cluster later on.
A
I will also copy it to... I believe we will be using the default one as well, so I'll just click this. So right now I'm configuring the classes, meaning that when we want to monitor some workloads, we will need to specify what the class of that workload is, or it will be using the default class if it's not specified.
A
Okay, thank you so much for noticing that. So right now I'm going to go back to my terminal and I'm just going to deploy the examples classes. Okay, so the example is already committed, and the example shows how to use it with things like InfluxDB v1, because from a development perspective we keep on using version one for that, which is something we should improve. But it's just...
A
...just how Telegraf Operator has been since it was created. So right now I deployed my classes, my configuration. I can update it in the future, and there is live reload, so I can change it; but right now we deployed that. So what I'm going to do next is deploy telegraf-operator, and it can be deployed in multiple ways.
A
Okay, so right now it's on GitHub: telegraf-operator.
A
That's available if you just install our InfluxData Helm charts source, and then you can install it, or just use upgrade --install, which will either install it or upgrade it, depending on whether it's already installed or not. And this is a preferred way of getting production environments running. But because we're using kind, and because all of the examples are based on this, I'm just going to follow this and not the Helm-based installation. But I can, right now...
A
I can just go and see what's running in my cluster, and you can see that telegraf-operator is running; it's ready to handle the new deployments coming in and to add the Telegraf sidecars. So now, the way Telegraf Operator works... maybe I'll just open one of the deployments to explain it. This is just a definition, a very simple definition, of how to run Redis. It is a StatefulSet, but it doesn't really even include volumes; in real life...
A
This would be a more complex StatefulSet, but this is an example of how to use Telegraf Operator to monitor things. The way Telegraf Operator works is: for each pod that gets created, it checks the annotations, and if there is a telegraf-operator annotation in it, it will inject the sidecar.
A
So right now we can see there is just one container, called redis, that's just using the default redis image; but we can also see that we have the annotation telling Telegraf Operator that it should be connecting to localhost on the standard Redis port, using the redis plugin. This is one of the plugins that Pat mentioned, and maybe I'll explain this a bit more. So, actually...
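For reference, the Redis example being walked through looks roughly like this. The `telegraf.influxdata.com/*` annotation keys match the telegraf-operator repository; the names, labels, and image tag here are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  serviceName: redis
  template:
    metadata:
      labels:
        app: redis
      annotations:
        # class selects which output definitions the operator appends
        telegraf.influxdata.com/class: app
        # inputs is raw Telegraf TOML injected into the sidecar config
        telegraf.influxdata.com/inputs: |+
          [[inputs.redis]]
            servers = ["tcp://localhost:6379"]
    spec:
      containers:
      - name: redis
        image: redis:6.2
```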
A
Yeah, it is open source, and it also includes an extensive README on how to get started with development and with deploying; it points to the Helm chart. So if you want to rerun what I'm showing today, I think the easiest way is to clone it. I'm basically using a lot of the make targets and just applying some of the things that we also mention in the documentation, except you can see that we're deploying this through GitHub URLs rather than locally.
A
Thank you so much for this; that is a very good point. Because I am so deep into the repository, I sometimes forget to explain things that may be my day-to-day things, but for a lot of people they may be new, so it's good to mention it. So, going back to the configurations: I may have skipped explaining some of these things. So, the way Telegraf Operator works:
A
It combines the Telegraf configuration, which the Telegraf sidecar will be reading, from multiple sources. One of the sources is the classes that I mentioned, which is just a vanilla Kubernetes Secret with the definitions of all the classes. Usually this would include outputs, or some of the tags, or some of the general things that would be applied to all the metrics related to this, let's call it, class of applications that we want to monitor.
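A minimal sketch of what such a classes Secret can look like. Each key is a class name whose value is Telegraf TOML that the operator appends for pods in that class; the Secret and namespace names, URL, and token below are placeholders modeled on the demo (organization demos, bucket demo, port 8086):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: telegraf-operator-classes
  namespace: telegraf-operator
stringData:
  # pods annotated with class "app" get this output appended
  app: |
    [[outputs.influxdb_v2]]
      urls = ["http://influxdb2.influxdb2:8086"]
      organization = "demos"
      bucket = "demo"
      token = "REPLACE-WITH-TOKEN"
  # pods with no class annotation fall back to "default"
  default: |
    [[outputs.file]]
      files = ["stdout"]
```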
A
So in this case we added the output to it, which means that everything with the app class will be writing to our InfluxDB v2.
A
So one of these is that we're adding inputs.redis, which means: use the redis input plugin. And previously we were using the influxdb_v2 output plugin. So we're telling Telegraf: talk to Redis on this port, get some of its standard metrics, and send them out to InfluxDB v2 on this specific URL.
A
We could also tell it: send it to my cloud instance, send it to some of the on-prem instances of InfluxDB, or send it to one of the very, very large set of outputs that we support. It could be sending it directly to Kafka or some other output plugin, or writing it to a file. But basically we tell it: this is the input.
A
These
are
the
outputs
that
they
that
are
in
the
secret
in
the
classes,
and
then
they
get
concatenated.
So
my
redis
definition
tells
this
is
how
you
should
gather
metrics
for
my
red
is
my
classes
that
I
disable.
You
should
be
writing
this,
and
it
also
tells
it
by
the
way
this
is
the
app
class,
meaning
that,
whatever
I
put
in
my
app
class
in
the
classes,
definition
is
where
the
data
goes
it.
We
can
also
specify
settings
for
memory,
requests
and
limits
for
the
telegraph
sidecar.
This
one
is
invalid
and
it
will
be
ignored.
A
This is more of a development test case, but the CPU limit will be set on the Telegraf sidecar. So anyway, that's that; let's just go ahead and deploy this. So this was the examples redis: deploy the example, redis, okay. So now, if we go back to the watch, we can see that we only specified one container within the pod spec.
A
We can see it's actually running two containers. If we just pick the pod, if we go ahead and describe it (let me just do it this way), we'll see that there is the redis container we defined, and there's also the telegraf container that was injected by the telegraf-operator. And we can see that the CPU limit is set to 750 milli, so 0.75 of a single CPU core. We can see the volumes it's mounting for Telegraf, and we can see below that it's using a Secret that was generated by telegraf-operator.
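What kubectl describe reflects for the injected sidecar is roughly the following pod-spec fragment (a sketch: the image tag, mount path, and generated Secret name here are illustrative):

```yaml
containers:
- name: telegraf                # injected by telegraf-operator
  image: telegraf:1.19
  resources:
    limits:
      cpu: 750m                 # from the pod's limits annotation
  volumeMounts:
  - name: telegraf-config
    mountPath: /etc/telegraf
volumes:
- name: telegraf-config
  secret:
    secretName: telegraf-config-redis-0   # generated by the operator
```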
A
So,
basically,
when
the
pod
was
about
to
be
created,
telegraph
operator
combined
the
whole
telegraph
configuration
put
it
in
that
secret
and
started
running
telegraph
operator,
and
it
also
tells
told
telegraph
operator
that
it
should
be
monitoring
that
configuration
to
allow
hot
reloading
which
I'll
explain
in
a
bit,
because
that
is
a
an
interesting
feature
of
telegraph
operator.
But
anyway,
at
that
point
I
believe
the
pods
are
already
running.
A
So
what
we
could
also
do
is
right
now,
I'm
just
telling
er
I'm
asking
for
actually,
let's
use
something
more
visual,
we're
going
to
run
a
tool
called
k9s,
which
is
a
nice
console
based
ui
for
a
lot
of
things,
kubernetes
related
and
it's
much
better
than
what
I
was
doing
before
that.
So
I
think
that's
going
to
be
more
visible.
A
So
this
is
my
board
with
the
sidecar
included.
I
can
take
a
look
at
the
logs
of
this
telegraph
sidecar
and
I
can
see
that
because
we
told
it
also
logs,
lock
all
the
metrics
to
standard
out.
We
can
see
that
we
already
have
the
metrics
in
here
and
the
metrics
are
in
line
protocol,
which
is
what
influx
db
fields
are
on
top
of,
but
this
is
basically
just
because
we
told
telegraph
to
write
the
standards
output
and
we
didn't
tell
it
to
use
any
other
protocol.
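Each line printed by the file output is InfluxDB line protocol: measurement name, comma-separated tags, fields, and a nanosecond timestamp. A representative, made-up sample for the Redis input:

```
redis,host=redis-0,server=localhost:6379 used_memory=873512i,connected_clients=1i 1633024800000000000
```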
A
I could probably also see a lot of other metrics, but because there's not really anything happening, I can also just have it show all the metrics. So we see some metrics changed over time, but not a lot of them. We can see that there are a lot of metrics; I can just show the update, and you can see that there's a lot of data that we have and that Telegraf is reporting.
A
So
I
can
with
this
okay.
What
would
only
be
so
say?
Let's
go
back.
Let's
remove
this
a
bit.
I
could
say
that
I
will
start
by
just
filtering
data
coming
from
my
applications
and
then
I
can
go
back
and
say:
okay
and
now,
let's
take
a
look
at
all
the
fields
they
have
right.
So
we,
for
example,
we
have
another
another
thing
that
we
could
deploy,
which
is
which
is
an
example
of
deploying
nginx.
A
This will deploy the nginx example. You can see it's being deployed; we can see it's slowly starting to run now. If I go to the logs, right, it's mentioning that it can't really scrape, because nginx is not listening on those ports, and also our nginx is not running any application that would expose the metrics. But because we also enabled the internal metrics, we can see some basic metrics that Telegraf is reporting. So right now, if we go back...
A
This is not exactly telegraf-operator specific, but let's just show how I could basically just go and say: okay, I just want to see used_memory for Redis, right? And then I can just save it, and I have my dashboard. That would be an easy way to just move from having my workload in the cluster to basically being able to visualize it in InfluxDB.
A
And we can see the metrics: if I go back to the Telegraf plugin that writes them to the logs of the Telegraf sidecar, we can see the data keeps on coming in. So one other thing that I wanted to mention or show, which is really interesting: as I mentioned, we also support reloading of configuration. So I could just start adding a new tag, let's say newtype equals application, and for the other one we could say newtype equals default.
A
Okay, so once I deploy these classes... so the only thing I'm deploying right now is a change to a Secret that telegraf-operator is using. But if we take a look at the logs (and this should take around one minute for telegraf-operator to notice; we want the logs from around that time, because this is how much it takes for Kubernetes to reload a Secret mounted inside a container), in around one minute telegraf-operator will pick it up and will say:
A
What we would really want, and we have it right now, is the ability that once we change the settings, telegraf-operator is smart enough to detect that and then decide which are the things that really need to be updated. So you can see that:
A
It decided that we don't really need to update the secret for the nginx pods. We can see that it decided: let's not update the secrets for the nginx pods, because nothing changed in there, because we didn't change the basic class; but let's update the secret for redis, because the class in there was updated. So if I go back (and this is the mistake I've made), if I go back and also add this to the basic class tags as well, if I do that, then we should see, in around one minute...
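Concretely, the reload amounts to re-applying the classes Secret with the extra tag added to each class; no pods are restarted. A sketch, with the existing outputs elided and the tag names following the demo:

```yaml
stringData:
  app: |
    # ...existing outputs for the app class...
    [global_tags]
      newtype = "application"
  default: |
    # ...existing outputs for the default class...
    [global_tags]
      newtype = "default"
```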
A
The Telegraf sidecars would be injected into those, and because changing the annotations on the pod would mean that the pod gets recreated, whenever we add the annotations, the new pods will get created and they will start getting the Telegraf sidecars included.
A
But, as you move into day two of operations, sometimes you need to change some of the settings, and this is an important aspect of this. Or sometimes you need to, let's say, rotate your tokens, which I assume would not be manual; it would be some automated process, but that would be something that should be happening. So let's say you generate a new token.
A
This
means
that
all
of
the
workloads
would
have
to
be
restarted,
or
at
least
the
telegraph
cycles
have
to
be
restarted
with
the
hot
reload
functionality
in
place,
telegraph
operator,
and
then
the
telegraph
sidecar
would
take
care
of
this
automatically
and
the
data
operations
are
much
easier
with
the
with
this,
what
reload
functionality
being
available?
The.
A
For example, we may want to change the frequency at which we get some of the data, because we want to increase or decrease the amount of data we're storing; or we want to move some of the data to other places. Like, we may be monitoring some data in our internal systems, but we also want to be moving some of the data to the production systems, because we want these to be in the same place that our customers use. So we can also use it for that.
B
So having that hot reload functionality, which was added earlier this year, is fantastic. And also I wanted to say, as you mentioned, we're using this in-house, so yeah, it was definitely kind of a frustration point when people would make a change and then they'd look for the change, and it would take a little bit; basically it would have to wait, I'm going to say, Wojciech, until it naturally got restarted, which is kind of a funny use of the word "naturally".
A
So right now, if I reload this, I can see the new tag, the field I added, and I did not go and restart anything. So this is the thing we talked about: it's difficult to show, because it takes a few minutes for all the Kubernetes mechanics to kick in and change the underlying secrets, and then for the underlying watch mechanism to notice this. But in the Kubernetes reality, waiting a few minutes for these changes to get deployed to, like, hundreds or thousands of Telegraf sidecars...
A
This
is
very
acceptable,
as
opposed
to
the
thing
we
mentioned,
which
being
it
would
be
a
matter
of
days
or
weeks
before
the
state
that
is
visible
so
right
now,
I
can
go
in
here
and
see
my
internal
metrics
as
well,
so
this
is
so.
This
is
a
huge
improvement,
and
this
is
this
is,
I
think,
a
really
nice
feature
of
telegraph
operator.
A
And,
like
I
said
we
could
we
could,
for
example,
one
other
thing
we
wanted
to
show,
because
if
we
were
to
see
the
logs
of
say,
redis
and
the
operator
here,
we
would
not
be
able
to
find
the
the
message
that
the
logs
were
restarted
because
we
keep
seeing
this
this
data
flowing
in.
But
if
I
were
to
say,
remove
the
outputs
file
and
deploy
that
and
then
wait
a
few
minutes.
While
we
perhaps
do
something
else.
A
I'll
also
see
that
now
I
no
longer
will
see
my
my
data
being
written
to
standard,
which
is,
which
is
also
a
pretty
interesting
feature.
A
We were just using a file, okay; so we were just using the file and InfluxDB outputs. So we were just using this plugin, and then we can see its README file, and we were also using the influxdb_v2. You were using v1, but that's kind of interesting. But basically we could configure a lot of things; like he said, it has outputs, and we could be filtering things at the output level. As you mentioned, it's pretty powerful to be able to do that.
A
But technically I would be able to disable one of the outputs and, let's say, if it wasn't able to write to another output, it would be smart enough to realize this is the same output and just keep on reusing the same buffer. So we could... and I mean, we just did change the configuration. We can see that right now it just reloaded, and it stopped.
A
It
just
stopped
writing
outputs,
but
the
nice
thing
is
like
we
can
do
all
of
these
and,
like
I
said
in
kubernetes
world,
where
sometimes
we
don't
want
to
restart.
Like
I
don't
know,
we
have
deployments
where
we
have
our
deployment
stakeholder
sets
and
other
types
of
workflows,
but
we
have
workflows
for
a
single
type
of
a
micro
service.
We
would
have
hundreds
of
pods
and
then
restarting
all
of
them
just
because
we
want
to
tweak
a
single
setting.
A
So it's really neat that we can just specify the port and the path, and telegraf-operator will just generate configs out of that. But also, if you know that you're running something that Telegraf knows how to scrape, then you can just use one of the many, many plugins: you just inject this small snippet, and telegraf-operator will glue it together with what it should be outputting to, and you can also have some additional settings in the classes.
A
So
it
is
really
easy
to
manage
and
from
our
experience
we
have
large
clusters.
We
have
dozens
and
dozens
of
those
classes
we
have
to
manage.
It
is
really
useful
to
be
able
to
do
that.
One
of
the
community
contributed
features
that
I
think
will
be
showing
in
the
next
release.
That's
happening
really
soon,
and
I'm
really
excited
about
is
ability
to
also
reference
other
conflict
maps
or
the
secrets
and
be
able
to
be
able
to
reference
some
of
the
metadata.
A
So
if
I
would
want
to
get
some
of
the
kubernetes
metadata,
I
can
expose
it
as
an
environment
variable
in
the
in
the
annotation.
I
believe
the
annotation
is
something
like
and
field
ref,
and
I
can
say
that
this
then
my
variable
name
is
like
namespace
name
and
it
will
just
be
metadata
space
and,
like
I
said
this
is.
A
I
could
also
expose
that
this
is
useful
in
some
cases,
but
we
really
want
to
tie
this
back
to
some
of
the
fields,
but
I
could
also
get
the
like
the
ip
address
of
the
part,
which
I
could
then
use
to
filter
things,
but
I
could
also
do
something
like
secret,
key
ref
and,
like
token-
and
I
could
say
for
my
secret,
which
would
be
my
token
secret
dot.
This
would
be
the
key
name
dot.
Let's
say
token
right.
So
with
that
I
could.
I
could,
for
example.
Well.
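As a sketch of how those references could look as pod annotations. To be clear, the speaker says this feature ships in the next release, and the annotation names below are illustrative guesses patterned on the description, not a confirmed API:

```yaml
metadata:
  annotations:
    # expose pod metadata as an env var for the Telegraf sidecar
    telegraf.influxdata.com/env-fieldref-NAMESPACE_NAME: metadata.namespace
    # pull a value out of an existing Secret (my-token-secret, key "token")
    telegraf.influxdata.com/env-secretkeyref-TOKEN: my-token-secret.token
```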
A
I could reference that, and then Kubernetes would load this as an environment variable for the Telegraf sidecar, and I could use it in the configuration, so I wouldn't have to inject it in other places. So this is useful if, for example, you're using other tools to manage the secrets, or the secrets are just managed by another application, because then the Telegraf sidecar would get it.
A
This
is.
This
is
not
where
cut
reload
would
work
because
of
the
way
it's
it's
working
because
of
the
kubernetes
internals.
Maybe
that's
something
we
we
could
extend
in
the
future,
but
this
is
still
a
pretty
nice
piece
of
functionality,
because
if,
for
multiple
reasons,
we
have
some
data
in
some
other
secrets-
and
we
just
want
to
reference
it-
it's
much
easier
than
having
to
hardcode
it
in
the
annotation.
A
It's great to see people using the tool, and that people are willing to spend their time extending it, so we're trying our best to help whenever anybody contributes in any way. Even if someone just opens an issue: like, we've had people open an issue because, when they ran it, we had forgotten to create the namespace, and we were fixing those kinds of things. And that's also great, because it means someone took the time to give it a try, and if something was broken, you let us know, so we could fix it for other people.
C
Well, thank you for that, Wojciech. I know live demos are always fun. So I know, Wojciech, you already sort of answered this, but how does a newbie get his or her arms around the APIs? I know you showed the docs link; is there anything else that a community member can do to get some help?
A
I think going to InfluxDB... so the first thing is just getting onboarded with InfluxDB. I think the easiest option is to go to cloud.influxdata.com and play with the SaaS offering, because there's a free tier that provides most... okay, I guess my typing was off. So, basically:
A
There are binaries that you can just grab and run; there are multiple ways to run InfluxDB. And then, when you go to the UI, there's a way to get started with most languages. We also provide ways to get Telegraf configurations, because that is a slightly longer process; but basically, there are multiple ways to get the data in.
A
You can also directly use the API, but I think we try to do our best to just get people started with whatever it is that they need to do, right? So I could just say: monitor my system data, and it's going to basically generate a whole config for me. This is just a Telegraf configuration I can save; I can run Telegraf on my machine and it's going to start writing data to InfluxDB.
B
Well, and I would like to... so, let me tell you what I do to go figure out anything on InfluxDB: I go and find blog posts from the fabulous Anais. Like, she has one that's "TL;DR InfluxDB Tech Tips: creating buckets with the InfluxDB API". I am completely biased, but I think her blog posts are fantastic for a newbie, and I think they're also really good for someone who is not a newbie. So I would...
C
I think, also, you already answered this, but can you share this data, or can you share this code for us to test it locally? I'm assuming it's all in the repo, yep.
A
And the Makefile is also a good starting point, because it provides easy-to-use make targets. Like, the kind start target deploys a lot of things; the kind test target basically deploys most things, and it's even deploying redis and showing you at the end that redis has the sidecar container.
C
And you touched on this briefly, but how is InfluxData using the Telegraf Operator internally? It sounds like it sort of was developed from an internal pain point as well.
A
It was developed, I think, for both internal and external users. But when we started deploying workloads, and we were thinking about being able to handle our set of Kubernetes clusters and large workloads, we were just discussing how to do this, how to get all the data. And given that we already had Telegraf as a very successful project with a long history, we wanted to use Telegraf, and we were just wondering how to do that, and Telegraf Operator is just a natural way of doing this. So we use it.
A
We use it a lot, for most of our workloads, meaning that one of the first things we deploy in our clusters is telegraf-operator, which is obviously automated, but that's one of the first things we deploy. And then, for all the workloads we have to monitor, we just add the same annotations I've shown. They may be slightly more complex than the examples we're showing, but it's still annotations that we use; and for a lot of the code we write internally, we just expose them as Prometheus metrics or expose them in other ways.
A
We should go back to why we're using the sidecar containers, as opposed to DaemonSets, because we do get this question a lot, and I'm actually surprised this question hasn't come up. So: we deploy Telegraf as a sidecar, and this means that if we have lots of workloads, then there are a lot of Telegraf sidecar containers and a lot of processes, which could be solved by a DaemonSet. And yet we chose to use the sidecars, because we noticed that a Telegraf sidecar is more successful at getting the data, and at being able to buffer it if things ever go wrong temporarily.
A
So it's much more reliable if we monitor a single pod. But we're also trying to figure out ways to do something in between running a Telegraf sidecar for each pod and running it as a DaemonSet monitoring all the nodes.
A
So, yes, we're trying to tackle this, because that's one of the things that could be helpful for us internally as well, and I'm sure a lot of people have this issue: a DaemonSet for all the pods on one node means too much data together, and then a sidecar for every single pod means too many resources being used, for sometimes really small microservices that don't get a lot of load.
B
In terms of, like, the perspective of the Telegraf Operator, I think Wojciech covered it. But just generally, in general, we're just going to continue to make InfluxDB, our Cloud 2 SaaS product, screaming fast, and my team is working to continue to make it so that our developers can deliver sweet, sweet software to the users more quickly.
C
Awesome. Well, thank you both. I feel like there's gonna be lots of people checking out this webinar, and they might come bug you in the community Slack with follow-up questions.