From YouTube: 2023-03-16 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
A
If you have a question for Nikolai or Josh: are you shipping the OpenTelemetry operator as part of your product?
B
Wait a second... okay, there we go. Can you hear me, by the way? Yeah, audible. Are we shipping it as an option right now? Yes, mostly for auto-instrumentation, but very soon... I'm talking about the overall small Helm chart for Kubernetes, which does all the things. We're going to be replacing Prometheus with it, and then we're going to be more serious operator users, because to achieve the sharded Prometheus scrape config setup, you essentially need the operator.
B
Well, we ship what's in the upstream directly; I don't think we have it. Some of the things that the operator currently does, we do on our own anyway, because we actually use OTel for nearly everything in our Helm chart right now. But we started doing it before the operator was, you know, mature enough. So basically, some of the stuff that the operator added later, we ended up just doing with Helm manually, and we might...
B
There's also one more use case that we're planning to transition, actually, now that I think about it: we're users of the Telegraf operator, essentially mostly as a kind of bridge for getting from Telegraf metrics to Prometheus metrics. There's some weird stuff in the Java world, like JMX, God knows why, and one of the ways of getting a Java application that exposes JMX metrics into the Prometheus format is to have a Telegraf sidecar for it. There's a hope that we can also use OTel, and the operator by extension, to do that.
B
There are actually quite a few cases like that with the Telegraf operator in general. Telegraf has a lot of plugins that can do certain, let's call them legacy, things and expose them as something a bit more modern. So there's a hope that we can use OTel to fill that niche as well.
C
Oh yeah, so we're deploying the operator alongside the other Helm charts that we already have, by using the Helm chart for the OpenTelemetry operator, and we're mainly only using it for auto-instrumentation. Right now we're working on Java and .NET, with support for the other instrumentation libraries to come. We don't use the operator, though, to manage the Collector at this point; we're still using Helm charts to do that. So we're only using the operator for auto-instrumentation.
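For reference, an auto-instrumentation-only setup like the one described here typically amounts to deploying an Instrumentation resource and annotating workloads. A minimal sketch (the resource name, endpoint, and image tags are illustrative, not the actual values used by the speakers):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation                 # illustrative name
spec:
  exporter:
    endpoint: http://otel-collector:4317   # assumed collector service
  java:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest
```

A workload then opts in with a pod annotation such as `instrumentation.opentelemetry.io/inject-java: "true"`.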
A
Or is it just not a good fit? Any comments on why?
C
We have several discussions going on about that. The architecture of the way the collectors are set up through the operator is different from what we already have, so it would be a sizable migration for us to move all of our content over to work with the operator's Collector API.
B
For the record, on our end this is probably not a very big migration; it's more that there's not really that much benefit to it right now. For the static uses of the Collector, we achieve what we need to achieve with Helm, so while we could port it to the collector CRD, there's no actual reason to do it. Because, you know, if you have a DaemonSet that, let's say, collects container logs...
B
...by having them mounted on the node: whether you do that by using the operator CRD or by provisioning your DaemonSet directly at that level, there's not that much difference, if you know how to do it already. There's not that much difference.
B
I can kind of imagine that for someone who doesn't know how to do it, and who isn't really an expert in Kubernetes or something, it probably would be valuable to be able to say something like "I want to get container logs" and not worry about the details of how that actually happens. But for us it doesn't really make any difference.
B
I kind of think the major use cases in Kubernetes for that are what I just said: container logs. This is what anybody who wants logs from Kubernetes wants, in a sense, and they don't want to care about, for example, how to actually configure the receiver, or what you need to mount, or whether the container runtime is dockershim or CRI-O or containerd. They just want to get their logs. So that seems valuable.
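As an illustration of the details being abstracted away, a hand-rolled container-log pipeline usually boils down to a filelog receiver reading the node's pod log directory (a sketch assuming the contrib collector; the host path must be mounted into the DaemonSet pod):

```yaml
receivers:
  filelog:
    # /var/log/pods is a hostPath mount in the DaemonSet spec
    include: [ /var/log/pods/*/*/*.log ]
    start_at: beginning
```

Parsing the runtime-specific line format (containerd/CRI-O vs. the old dockershim JSON) is exactly the part users would rather not configure by hand.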
B
I think the other major use case is Prometheus metrics. Basically, you want to get the metrics that your applications, or the Kubernetes components, expose via Prometheus, but you don't actually need Prometheus itself. We're in that kind of situation, for example: we want to collect Prometheus metrics, but we don't actually need most of Prometheus's features; those features are a drag on us. We don't need querying, or the database, or alerts, or anything.
B
...Vector and Fluentd? I don't know about Vector; about Fluentd, a bit. Well, it's less performant than Fluent Bit, but honestly it doesn't matter: it is much more reliable. I don't want to get on a soapbox about Fluent Bit, so let's just say that Fluent Bit has been a source of pain for us for a very long time. As for the collector, the performance is mostly okay; I think we've had some...
B
We had one internal use case, actually, because we dogfood this stuff in our own infrastructure, and there we do have one place where it seems to eat a lot of CPU. I think that's the reason we haven't investigated it that hard: we consider that use case to be kind of degenerate on our end.
B
We have some applications which just log tons and tons of data on a schedule, and at that point the filelog receiver setup can eat quite a bit of CPU. But I kind of suspect that's not a problem for most normal cases. You have the typical DaemonSet problem, right, where you have a DaemonSet and you can't really scale it.
B
You can say how many resources it can use, but it's the same on every node, unless you start creating different DaemonSets for the different node types, which is also really annoying to do. And if you make your DaemonSet as thin as you can and you still have a performance problem, that's kind of a problem. But that's not serious enough, at least in our experience right now, to seriously worry about. Basically, it's good enough.
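The per-node cap being described is just the container resources on the DaemonSet; with the operator it would look roughly like this (a sketch, values and names illustrative):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: logs-agent        # illustrative name
spec:
  mode: daemonset
  resources:
    limits:
      cpu: 200m           # the same limit applies on every node
      memory: 256Mi
```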
A
Well, let's talk about the auto-instrumentation: what languages do you support? And as well, I'm curious whether you let customers use the upstream images or you have your own distributions, and whether you might need any customization of the Instrumentation CRD to maybe support some of yours.
B
Okay, I don't have a lot of comment on instrumentation; it's not normally my ballpark. If you actually want detailed feedback on instrumentation, I can get someone here, say two weeks from now, who will have a lot of opinions and who also contributed, I think, some stuff to the operator in the past. So I don't actually know off the top of my head.
B
I can tell you which ones we actually support: I believe it's definitely Python and .NET, and I don't remember off the top of my head what else. I can pass the baton to Josh while I figure it out quickly.
C
Yeah, over at Splunk we have instrumentation teams for pretty much all the normal languages that are supported, so .NET, Python, Java, Go, PHP, and we're planning on using our own distributions of these instrumentation libraries for auto-instrumentation.
C
One issue we were facing is that we were trying to use the Docker images that we have in the operator project as the base images we use to inject auto-instrumentation, and they use BusyBox. BusyBox comes with a lot of extra dependencies we don't want, so it means more scanning for vulnerabilities on our side.
C
It means there's more risk for our users than they really need. The only reason we need BusyBox is that, in order to do container injection, we use a cp command to just copy the binaries over. So we're hoping that in the long term we'd come up with something different, like using volume mounts to do container injection. A more Kubernetes-native solution like that would be more plausible.
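The cp-based injection mentioned here is essentially an init container copying the agent into a shared emptyDir volume, along these lines (a simplified sketch of the pattern, not the operator's exact generated manifest; the image name is a placeholder):

```yaml
initContainers:
  - name: opentelemetry-auto-instrumentation
    image: autoinstrumentation-java:latest        # placeholder image
    command: ["cp", "-r", "/autoinstrumentation/.", "/otel-auto-instrumentation/"]
    volumeMounts:
      - name: opentelemetry-auto-instrumentation
        mountPath: /otel-auto-instrumentation
volumes:
  - name: opentelemetry-auto-instrumentation
    emptyDir: {}
```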
B
Yeah, for the record, for us it's exactly Python, .NET, Java, and Node.js (with some caveats) that we currently support, and for the images, I think these are just the upstream images; we don't customize them, from what I'm seeing right now.
A
Going back to the collector fork question: the reason I asked whether you have a fork is that I was thinking we should reconsider how we structured the repository. I was thinking about moving most of the code base into an internal package and having public APIs only, maybe, to install the controller, the hooks, and, you know, the main parts.
B
I actually have one thing. Correct me if I'm wrong about any of this; I am relatively new to the operator code base in general, so there might be something I'm missing. But like I said, our current big project is to move off of Prometheus. So we want to use the Prometheus receiver, and we're going to use the target allocator. From what I've tested, it seems to work...
B
...fine, although I will say that the documentation is lacking: I haven't found any documentation that basically tells you what you need to put in, in order to have the Prometheus experience, which is, you know, you use all the relevant Prometheus operator CRDs, which are ServiceMonitors and PodMonitors, I think, and what kind of configuration you need to put in there for that to actually work. It's not super difficult to figure out...
B
...but there's also not one place where that is explicitly spelled out. The other part is something that I expected but that wasn't the case: I expected the operator to use the target allocator integration from the Prometheus receiver. With the Prometheus receiver, you can just set the target allocator in the config, and that's everything you need to do; you don't have to put in any Prometheus configuration or anything, that's enough. But that's not really what happens with the operator. What happens is...
B
It tries to take every job definition, every scrape config, and then pass an HTTP SD config linked to that job. It does some slightly weird things, and even if I just enable the target allocator, it doesn't actually inject the service name of that target allocator into the collector config; I have to do that myself. That was kind of a surprising thing for me.
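The receiver-side integration being contrasted here looks roughly like this in collector config (a sketch; the endpoint assumes the service name the operator would create for the target allocator):

```yaml
receivers:
  prometheus:
    config: {}                 # no static scrape jobs needed
    target_allocator:
      endpoint: http://my-collector-targetallocator:80
      interval: 30s
      collector_id: ${POD_NAME}
```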
B
Basically, what I expected was for this to happen: I enable the target allocator for a given collector CRD, and then, in the configuration, I have a Prometheus receiver that maybe doesn't have anything in it, because I just want the Prometheus CRD part; I don't care about defining any static jobs for it. What I expected to happen was that the operator would inject the configuration stanza for the Prometheus receiver...
B
...that would use the target allocator. And, you know, the operator knows internally what that domain should be, right? The domain is the service name, dot namespace, whatever; it knows, and it doesn't actually do that, so I had to put it in myself. That seems like something that should happen anyway, because right now there's kind of a gap there. All right, so maybe it's like this:
B
If we decided to only support the Collector versions where that is implemented (and the target allocator support in the Prometheus receiver has been implemented since, like, 0.62), then the way it could work is essentially that we just replace the whole configuration for the Prometheus receiver with "use the target allocator" plus the name, and that's it; we don't care about anything else. I don't know if this makes sense to you.
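On the CRD side, the enablement being proposed is roughly (a sketch with illustrative names):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: my-collector      # illustrative name
spec:
  mode: statefulset
  targetAllocator:
    enabled: true
    prometheusCR:
      enabled: true       # watch ServiceMonitor / PodMonitor objects
```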
B
If you're not really into the metrics part, I can create an issue. I actually read the notes from two or three meetings ago, where there was that "future of the operator" discussion, and there were references to this there, so I thought it might already exist. But if it doesn't, I can create it.
A
And yeah, the issue would be appreciated. Jacob from Lightstep is mostly doing the target allocator work.
B
Yeah, to me at least this seems like an increasingly mainstream use case. Prometheus is a de facto standard in the Kubernetes world and outside of it, so the use case of "we'll use OTel, but we have tons of Prometheus metrics exposed everywhere" is going to be a very frequent thing.
B
So to me it's another candidate for the kind of higher-level config that would just do this automatically, where your use case is basically: we're using Prometheus now, we want to drop in OTel and have the same things happen, roughly.
B
Are there any concerns about that? We buffer it anyway in OTel, so far. Right now, the way our metrics pipeline looks is: there's Prometheus, which scrapes the metrics and then forwards them using remote write to an OTel StatefulSet that also does a bunch of additional metadata enrichment and processing, and that forwards them to the remote back end. That OTel StatefulSet already buffers into persistent volumes, so it doesn't actually make any... yeah.
B
No, it's implemented in the upstream collector. You can use a file storage extension and a persistent queue: the exporter helper has a persistent queue, which is now stable (well, as stable as everything else in there), and you can absolutely enable it. You can put it on a volume; it works fine.
B
Yeah, on the exporter, if you're using the sending queue, you basically just set storage to some storage extension that you have defined, and it's going to use it. Okay.
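Concretely, that wiring is along these lines (a sketch; the directory and endpoint are placeholders):

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/queue     # point at the persistent volume
receivers:
  otlp:
    protocols:
      grpc: {}
exporters:
  otlp:
    endpoint: backend.example.com:4317    # placeholder backend
    sending_queue:
      enabled: true
      storage: file_storage               # references the extension above
service:
  extensions: [file_storage]
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlp]
```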
B
And the storage part is a pluggable contract, because the queue can basically use any storage extension that you give it. I think the one used by almost everyone who does this is the file storage one.
B
But the way this gets wired up in Kubernetes is: you have a StatefulSet, and your StatefulSet has a persistent volume claim template, so you have a persistent volume. You configure a storage extension which you point at your persistent volume directory, and then you use a queue: in your exporter, you set the storage to the name of your extension. It works.
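The Kubernetes side of that is the standard StatefulSet volume-claim pattern (a sketch; names, image, and sizes are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: otel-metrics                # illustrative name
spec:
  serviceName: otel-metrics
  selector:
    matchLabels: { app: otel-metrics }
  template:
    metadata:
      labels: { app: otel-metrics }
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:latest
          volumeMounts:
            - name: queue
              mountPath: /var/lib/otelcol/queue   # matches the file_storage directory
  volumeClaimTemplates:
    - metadata:
        name: queue
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```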
B
I would even cautiously say that it is now reasonably production-ready, as we've already fixed a bunch of bugs in it in our production use; so now it's reasonable. It used to have a bunch of interesting behavior related to the underlying technology, but now it's all right.
B
I don't know if I've ever used anything else, to be honest, so I don't know. At the very least, EBS disks on AWS work fine; there's no problem. Generally speaking, the way the queue works is that every single record that you have passes through it. So literally every single record, or every single batch, that you get gets written to disk, and then, on the other side, the consumer thread takes it and sends it over.
B
So everything passes through disk, which makes the disk speed matter a little bit, but in reality it's all kind of a single memory-mapped file underneath all of it, so it's reasonably performant. I've also heard other ideas on how to do this kind of persistent buffer layer in Kubernetes; they're a little bit more exotic, a little bit more elaborate. This one is, in my opinion, very simple, conceptually at least.
B
At the very least, under the target allocator section in the readme, there should be a reference, probably a link to somewhere else, saying how you actually use the thing at the end of the day. Right now this is kind of not discoverable, as in: if you know that the operator can do this, and so you know what you're looking for, you can find it.
B
But if you don't know, then it's not really discoverable.