From YouTube: 2023-01-19 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
A
B
C
C
F
Yeah, sorry, I forgot to put my name on some of these things, yeah. So there are two things to discuss. The first one — I'm just going to put my name on this so I don't lose track.
F
The first one is around some of the CRD functionality in the target allocator and how we currently return the scrape config to the Collector from the target allocator — basically how the dynamic job discovery works, right. The issue that we've seen reported, and that I sort of knew was coming with this, is that the collector is sort of constantly scraping the target allocator for these new configs. Right now we're transmitting it via JSON, and the problem is that we can't directly marshal to JSON, because Prometheus doesn't have JSON fields, or, like, JSON marshalling capabilities, in their code.
F
I've opened an issue there and the guy's basically like: if you want to do it, go ahead, but it's going to be a pain in the ass, to be frank — and so that doesn't seem good. The other option is we can copy over all of their structs into our code and then add our own JSON encoding on top of it. And then the last option is that we can just transmit it via YAML. My problem with transmitting via YAML is that YAML, with its whitespace —
F
It can be a lossy format, and I've seen malformed issues before, but maybe that's just too anecdotal — I don't have enough evidence to provide. So if we think that going with the YAML is fine, and no one else has any other, like, harder evidence for why that's a problem, we can just go ahead with that and that's okay. And so that's sort of the first part of this.
A
I would prefer to change to YAML. I don't think I have the same concern as you do regarding whitespace, because this is generated by a machine and read by a machine. There's no human in the loop; nobody gets a chance to mess with it, and if we're dropping bytes that mess up the YAML whitespace, we're probably also dropping a bit or other information that's going to screw things up anyway.
F
The thing that we need to be careful of is that there is potentially user-generated YAML from the scrape configs file that someone can provide, because I think that might be transmitted as well — but I'm not positive on that — and then also within a PodMonitor someone might transmit their own weird YAML. But I guess if it's malformed there, it would be malformed everywhere, so it's not the end of the world. Cool, so YAML transmission, yes. Christina and — sorry, Mate, Mate or Matish?
D
F
Fine. Christina and Monte have both been working on sort of similar things in this area, and there will need to be two PRs to coordinate this — one in the target allocator and, I guess, three — one in the target allocator to transmit YAML, another in the Collector to read that YAML.
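A minimal sketch of the YAML option being discussed, assuming the target allocator keeps a map of Prometheus scrape configs keyed by job name; the handler and variable names here are illustrative, not the actual target allocator code. Because Prometheus' config structs already carry YAML tags and custom (un)marshallers, the endpoint can marshal them directly, with no copied structs or hand-written JSON encoding:

```go
package main

import (
	"log"
	"net/http"

	promconfig "github.com/prometheus/prometheus/config"
	"gopkg.in/yaml.v2"
)

// scrapeConfigs would be maintained elsewhere by the target allocator's
// CRD/job discovery, keyed by job name.
var scrapeConfigs = map[string]*promconfig.ScrapeConfig{}

// ScrapeConfigsHandler serves the current scrape configs as YAML instead of JSON.
func ScrapeConfigsHandler(w http.ResponseWriter, _ *http.Request) {
	out, err := yaml.Marshal(scrapeConfigs)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/x-yaml")
	_, _ = w.Write(out)
}

func main() {
	http.HandleFunc("/scrape_configs", ScrapeConfigsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The collector-side PR would presumably do the inverse: unmarshal the YAML response body back into the same Prometheus types before handing the jobs to the receiver.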
E
E
F
Oh great, so we could just transmit YAML and it all should work. Oh awesome, that makes life — yeah, hopefully — well, that does make life easier. So Christina, given you already have, like, a similar PR open, would you be good to own that stuff? Because I know you were also writing tests for these endpoints as well.
F
Cool, is that okay, Mate?
F
So that was the first target allocator thing. The second thing that I wanted to bring up and have a discussion about was how this is done at all. Currently, the Collector is on a loop querying the target allocator for these scrape config files to add to its list, to then go and scrape, right.
F
F
When talking to some of these users that are experiencing some of these memory problems, it seems that when they turn on the CRD functionality they start seeing higher memory usage — like, much higher memory usage — because it's sort of doing this query constantly and there's a lot of marshalling. So I think maybe we should have this discussion after Christina makes this change, because maybe not having to do both of these marshalling actions will be better on both ends.
F
I don't know, but the idea that I had was we could do something closer to what the Prometheus operator does, which is: the Prometheus operator is reading for PodMonitors and ServiceMonitors and then it edits the configuration for Prometheus. In this case we would edit the configuration for the Collector, and then we would restart the collector's Prometheus scraper, which, with the setup that we have, shouldn't be too difficult to do.
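For illustration only, and assuming a controller-runtime based reconciler (the function, ConfigMap key, and names here are hypothetical, not the operator's actual code): the Prometheus-operator-style approach would have the operator render the discovered scrape configs straight into the Collector's config and update it, then trigger a reload or restart of the Prometheus receiver.

```go
package operator

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// applyScrapeConfigs rewrites the Collector configuration stored in a
// ConfigMap with freshly rendered scrape configs. A real implementation
// would also have to trigger a config reload or rolling restart so the
// Prometheus receiver picks up the change.
func applyScrapeConfigs(ctx context.Context, c client.Client, ns, name, renderedConfig string) error {
	var cm corev1.ConfigMap
	if err := c.Get(ctx, types.NamespacedName{Namespace: ns, Name: name}, &cm); err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["collector.yaml"] = renderedConfig // hypothetical key
	return c.Update(ctx, &cm)
}
```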
G
E
Even with marshalling in a nicer, more performant way, there's still the issue of the panics in the target allocator, which I think your configuration fix also solves, because right now we're trying to marshal values that could potentially be written at the same time, and it causes a panic.
F
Yeah, that was Mate's PR, to add the lock back in, right? So I think, Christina, in yours we should probably just add that in, because it's like, you know, four or five lines — yeah, the lock. We got rid of the lock initially because of memory concerns, because we were just doing it so often, I think was the reason why. Maybe I forgot — I think I wrote it in a comment on a PR somewhere, yeah, but…
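A rough sketch of the race and the lock being referred to, with illustrative names rather than the real target allocator types: one goroutine updates the job map while the HTTP handler marshals it, which can crash with a concurrent map read/write fault unless both sides take the lock.

```go
package allocator

import (
	"sync"

	"gopkg.in/yaml.v2"
)

type jobStore struct {
	mu   sync.RWMutex
	jobs map[string]interface{} // job name -> scrape config
}

// SetJob is called whenever discovery sees a new or changed job.
func (s *jobStore) SetJob(name string, cfg interface{}) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.jobs[name] = cfg
}

// MarshalJobs is called by the HTTP endpoint the collector polls.
// Without the read lock, marshalling while SetJob writes can panic.
func (s *jobStore) MarshalJobs() ([]byte, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return yaml.Marshal(s.jobs)
}
```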
E
F
Sure, that was it, that was it, yeah. So yeah, that functionality, I think, is just maybe expensive overall, and so I think limiting the number of times that we have to copy the struct is going to be ideal as well. But I think maybe we should wait for that change to go through, then observe the performance after that, and then see if we should look into doing this in the operator instead of from the allocator — which, you know, some people might not like, because they aren't using the operator for this functionality.
A
F
Yes, so the difference is that right now the Collector is constantly scraping the target allocator for these configs, but with the operator it would just load the config once and then use that rather than constantly querying, so we're saving a lot of excess calls and, like, marshalling and unmarshalling on both ends.
F
Yeah, we would have to — well, we would, we could restart the Prometheus receiver in the collector, which I think is a thing coming soon to the Collector. I remember seeing some, like, restart functionality being added in… no?
A
F
Yeah, it would mean that as well, which is added work for the operator, and it also means that you would have to run an operator to get this CRD functionality, which I know some people don't want to do currently.
A
I wonder if an alternative might be to communicate through a file rather than HTTP requests — so have the target allocator set up, or have the operator set up, a volume that the target allocator can write a file into and the Collector can read out of, and the collector's Prometheus receiver can watch for changes to that file rather than building up the network, and then the target allocator just writes out changes as they happen.
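A hedged sketch of that file-based alternative — the path, names, and callback are illustrative, not an existing collector feature: the operator mounts a shared volume, the target allocator writes the scrape configs there, and the collector side watches the file for changes instead of polling over HTTP.

```go
package main

import (
	"log"
	"os"

	"github.com/fsnotify/fsnotify"
)

// sharedConfigPath is a hypothetical mount shared between the target
// allocator and the collector.
const sharedConfigPath = "/conf/target-allocator/scrape_configs.yaml"

// watchScrapeConfigs re-reads the file whenever it is written and hands the
// contents to onChange (e.g. to re-apply the Prometheus receiver config).
func watchScrapeConfigs(onChange func(data []byte)) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer watcher.Close()

	if err := watcher.Add(sharedConfigPath); err != nil {
		return err
	}
	for event := range watcher.Events {
		if event.Op&(fsnotify.Write|fsnotify.Create) == 0 {
			continue
		}
		data, err := os.ReadFile(sharedConfigPath)
		if err != nil {
			log.Printf("read scrape configs: %v", err)
			continue
		}
		onChange(data)
	}
	return nil
}

func main() {
	log.Fatal(watchScrapeConfigs(func(data []byte) {
		log.Printf("scrape configs updated (%d bytes)", len(data))
	}))
}
```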
C
F
We'd have to open something up on the collector end to do that, like accept that config.
A
Yeah, and that was kind of the original design, before the HTTP service discovery was available in the Prometheus discovery manager: for the target allocator to push new targets that were discovered. I think the issue we're at here now is new jobs — so, like, targets are handled by following the HTTP service discovery, but how do we get a new job configuration to it? Pushing via HTTP could work as well.
A
That then means that the collector needs to have something new that's listening. But yeah, I think either writing to a file and having the collector pick up changes to that, or pushing via HTTP, are probably equivalent from the target allocator's perspective — it allows it to then just react to changes and push them out rather than being polled constantly.
F
G
F
So that's what I'll do, but I'll wait for Christina's PR, because hopefully it will somehow be more efficient with not doing all this weird marshalling that we're doing right now — should be great.
F
F
I was talking with Tyler Helmuth over from the Helm chart SIG, and if you look at this issue that's open, there is an existing — there's this problem where you're unable to install the operator and a Collector CRD at the same time, because of the webhooks, I believe. They are the problem because there's some type of — I think it's a race condition — where we're waiting for the webhook to be available, but something is blocking on that, and so it never becomes available.
F
B
G
F
G
C
I think, yeah, there is still an issue where you deploy the operator and it takes a couple of seconds to become fully functional, especially the webhooks. So there is even a script in the operator repo — it's actually Golang — that you can run, and it will check if an instance of the Collector can be created; it will run like an end-to-end test to create an instance and then delete it. The cert-manager does kind of the same thing with the check API function.
D
Yeah, also related to that — oh, go ahead. Go ahead, okay! Well, something that I noticed, because I was the one who had the script, right, is that sometimes the deployment seems to be available — right, that's the problem with the OpenTelemetry operator — but you are checking, you're watching, at the same time, the deployment and also the…
D
F
So is the solution to just disable the webhook on a chart that's trying to install them at the same time? Because, like — my understanding of this is that the webhook blocks the actual, like, custom resource instance from existing in the cluster, whereas if we disable the webhook, the instance would exist and then, when the operator pod is up, it would then be able to reconcile that state, right? So that should solve the race condition, theoretically. Is that a correct understanding, or am I missing something here?
A
Well, certainly it would have solved it — it makes the problem go away, but it probably introduces other problems. If you've got a validating webhook that's supposed to say whether this is a valid resource that can be added to the cluster or not, and you're no longer validating, you can end up with resources that don't belong there, which can cause problems further down the road.
F
Yeah, that is a thing, but the problem with this is that someone might — someone wants to install the operator and an instance in the same Helm installation, and so you wouldn't be able to run the wait in between those things, because Helm just doesn't have hooks — well, they have hooks, but Helm doesn't have the ability to run these, like, wait commands, really.
F
No, the Helm chart for the operator only deploys the operator, but there are people who want a single Helm chart to deploy both, so that you don't need to manage both separately.
A
F
D
F
I don't think so — I think — so you can do Helm dependencies, but that's essentially just, it's like pulling in a package in Go; it's not waiting for that package to be available, it's just saying "I'm going to have this code with your code." Essentially you just have a bunch of YAML files; a dependency is just adding in more YAML files that it needs to go apply there. I don't think that there's a sense of order to any of them.
F
I think this does require some testing on our end, to see if this is still a problem and to understand what the problem is exactly, so that we can begin to diagnose it — especially if, as Benedict said, there have been changes in the past, like, half year to this type of flow.
F
This might be already solved, potentially. Otherwise, I think regardless a readiness probe would be really helpful to have. I don't know how it would fit in with the kube API and the reconcile flow with the webhook, but I do think we need to test this a little bit more. Regardless, I do think the readiness probe should exist.
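As a rough sketch of what such a readiness probe could look like with controller-runtime — assuming the operator's manager exposes health probes on :8081; this is not the operator's current code — the idea is to gate /readyz on the webhook server actually serving, so the Deployment isn't marked Ready while admission calls would still fail:

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		HealthProbeBindAddress: ":8081",
	})
	if err != nil {
		os.Exit(1)
	}

	// Liveness: plain ping.
	_ = mgr.AddHealthzCheck("healthz", healthz.Ping)
	// Readiness: only report ready once the webhook server is serving, so a
	// readinessProbe on :8081/readyz holds the pod back until admission
	// requests can actually be answered.
	_ = mgr.AddReadyzCheck("webhook-ready", mgr.GetWebhookServer().StartedChecker())

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```

A chart or script that waits for the operator could then rely on the same probe instead of running its own end-to-end check.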
B
I'm searching for the PR that I mentioned; once I find it I will link it in the document. It'll ping you.
F
Yeah, that'd be great, Benedict. Thank you. I can take testing this and writing an issue in the operator repo, if it doesn't exist already, around this as well, because it won't take long — I've made this a problem before, I can make it a problem again. Israel, do you think that you could look into adding the readiness probe for the operator?
F
No worries, no worries, happens all the time, unfortunately. Would you be able to look into adding the readiness probe, using that script that you wrote, into the operator, Bob? Oh.
D
G
C
Every operator will have a similar problem if they use validating or defaulting webhooks. So maybe we could take a look as well at the cert-manager, or something — some other operator.
F
B
Yes, this was from me. We have a presentation scheduled, and we plan to show how to use the operator on Kubernetes, and wanted to briefly mention also the parts with the target allocator, and also how to gather logs. But yeah, I had some slides where I was a bit confused how to explain it, and we simply removed them.
B
For now, it was too much into detail. I was trying to read the code a bit from the target allocator, to understand how things work, and yeah, since it was too detailed anyway, it's removed. So now we simply mention that things are there, that there is some work ongoing, and yeah, that's it.
F
Like the event logs that are emitted, or, like, the container logs of the operator or of the Collector?
F
I see — I am not…
B
Yeah, so, for example, with the filelog receiver there are some open questions about how it works with permissions and yeah, all that.
F
I think an issue that I've seen with this, that we need to work on, like, on the Collector end, is that there's not currently an Elasticsearch or — what's it called — Logstash receiver for the Collector.
F
So if someone were using something like Fluentd, they couldn't just forward their logs right now. And because Fluent Bit is something that's installed by default in every GCP cluster right now — that's how they collect all their logs — theoretically one could configure Fluent Bit to forward to a collector that is then receiving them, like Elasticsearch would, and then forwarding that to, you know, your favorite logging destination. But the collector doesn't have an Elasticsearch receiver, which makes that much more — you know, makes it impossible to do currently.
F
I have to write a proposal — one of the many things in my backlog — to add an Elasticsearch/Logstash receiver in — oh, the fluent forward receiver. Thank you, Anthony.
F
I haven't talked with anyone who wants to do that. I have gotten a lot of questions from people who are interested in doing it, but no one who's, like, actively doing it — more of a question of, like, "hey, is this possible, how would I do that?", not "I'm doing this, what's going wrong with it?", so my answer is: not sure.