From YouTube: 2021-08-04 meeting
B: Should I put something up? And, yeah, I need to move this down, I guess.

B: Yeah, I don't know... well, I already discussed my proposal with Bogdan and Tigran, so I guess I can go ahead and present it here and get people's feedback. Let me share. If they have any more feedback, I can sync up with them separately.
B: So, if anybody's not familiar with it, this gives kind of an overview of receiver creator. Basically, it's a way to dynamically start receivers at runtime, based on discovery of, say, a pod or a host port. So let's say that every time you find a pod with a container that has port 6379, you want to start the Redis receiver so you can collect metrics, and you provide the config. Basically, it's similar to scrape targets in Prometheus. Right now it's a receiver that starts child receivers under it, and there are a number of limitations. It's nested one, two, three-plus levels: you have this receiver_creator under receivers, and there you tell it which observers to watch so you can discover endpoints, and then under that you have receivers, which are like the templates. So whenever a rule is matched, it starts the receiver.
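For reference, a minimal sketch of roughly what that nesting looks like today, assuming a k8s_observer extension is used for discovery:

```yaml
extensions:
  k8s_observer:
    auth_type: serviceAccount

receivers:
  receiver_creator:
    watch_observers: [k8s_observer]
    receivers:
      redis:
        # template: start a Redis receiver for any discovered port 6379
        rule: type == "port" && port == 6379
        config:
          collection_interval: 10s

exporters:
  logging:

service:
  extensions: [k8s_observer]
  pipelines:
    metrics:
      receivers: [receiver_creator]
      exporters: [logging]
```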
B: So it's just... especially since it's not a first-class concept, it's just kind of wonky. The second limitation is that there's currently no way to set a global scrape interval. We have the scraper helper, but there's no way to say "default all my scraping to 10 seconds" or whatever; it's all per receiver.
B: The third part... all right, let me come back to the third part. Actually, the fourth part is: every time you start one of these collections, for, say, Redis, you have a timer. Let's say it's 10 seconds; every 10 seconds the timer fires and there's a collection. So if you have a thousand of these, or 10,000 of these, you have 10,000 timers, and you basically start consuming a non-trivial amount of CPU with the timers. There's a dedicated section on that later, so we'll come back to it; but basically there's a limitation around timers.
B: One we've come across recently: you may have, say, Atlas, which is their hosted product, where you want to scrape logs from it every five minutes or so. You want to be able to pull down logs and send them through the pipeline. So it's like scraping, but normally we think of scraping as metrics, and in this case it's logs. So those are some of the deficiencies we want to address. And then, sorry, number three: the other issue with metrics is that the way to filter them is pretty cumbersome.
B: Let's say you have this Redis receiver, and you want to either turn on some detailed metrics or turn off some metrics. You have to have a separate processor, configure the processor, and then hook it into the pipeline, and they're in different sections, right? You have the receivers section, you have the processors section. So you have this configuration in the processor that's probably specific to this one receiver's configuration, but they're completely disconnected.
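A sketch of that disconnect, assuming the contrib filter processor's match syntax:

```yaml
receivers:
  redis:
    endpoint: localhost:6379
    collection_interval: 10s

processors:
  # coupled to the redis receiver above, but defined in a separate section
  filter/redis:
    metrics:
      exclude:
        match_type: regexp
        metric_names:
          - redis\.replication\..*

exporters:
  logging:

service:
  pipelines:
    metrics:
      receivers: [redis]
      processors: [filter/redis]
      exporters: [logging]
```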
B: Number three is not... we could do the rest of this and kind of defer number three, because maybe that should be a second-phase type thing, since it's not trivial. So let me stop there. Any questions about receiver creator or about observers, or just about the current state of the world and the problems this stuff solves?
D: So perhaps we're going to address that later, but is there a place to query what the current state of the collector is when it comes to receivers; like, what are the current receivers that are active in there?
B: No, no. I mean, you have the zPages metrics, right? I think for the sub-receivers you should still be getting zPages metrics.
D: So the context for the question is, from the operator perspective: today we parse the OpenTelemetry Collector configuration YAML, and we have a list of known receivers, and whenever we detect those receivers in the configuration we automatically open a port, like a Kubernetes Service port. If the collector has receivers that we don't know about, we cannot open ports for them in the Service. So the operator would need a way to query what the receivers are and which ports they're listening on.
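Concretely, when the operator recognizes a receiver in the parsed config, it can add a matching port to the Service it generates; a rough sketch, with names and numbers illustrative:

```yaml
# collector config fragment the operator inspects
receivers:
  otlp:
    protocols:
      grpc:
---
# Service port the operator can then expose for it
apiVersion: v1
kind: Service
spec:
  ports:
    - name: otlp-grpc
      port: 4317
      protocol: TCP
```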
D: But no, it's...

D: That's true, and scrapers are really the only type of receiver that we don't care about in the operator anyway. So, correction taken; yeah, that's good. But I guess the main question, or the main point, remains that we need to make it observable: what is the current configuration of the collector, what is the current state of the collector; so, which receivers are running right now?
E: That's something that we have in mind, but right now it's not possible; there are going to be some proposals and such later.

C: Okay, I have some questions, so...
G: It seems like we could handle a lot of this just with the current scraper helper package and enhancements there.

B: Yeah, I think that's a fair point. Sorry, Bogdan, were you going to say something?
E: No; let's go over the document, then, and see some more information on why that may help. But if we disagree on that, it doesn't... I mean, this is a proposal; it's not a final call on the design.

G: For sure, yeah. I'm just trying to understand if there's a fundamental user reason that we need to differentiate. But yes, let's go through the proposal, thanks.
E: The biggest difference for me, from the user perspective, is the discoverability part: the fact that, compared with a normal receiver, we sometimes need to start these scrapers dynamically, when we discover a new target that is a possible target for one of the scrapers that we implement.
B: Receivers would passively receive stuff; it's just that the naming kind of lines up with what Bogdan was saying about usability, right? So receivers would be kind of passive things, most likely, whereas the scrapers section would be things that are actively scraping. The example would be, like I said, host metrics: there's a hostmetrics receiver, right, and the hostmetrics receiver has multiple scrapers within it.

B: So the proposal basically gets rid of hostmetrics and just makes those top level; you'd have cpu, memory, disk there. Those are kind of uninteresting; the more interesting one is, under it you'd have, say, redis... well, we had Redis before, so in this case I'm just using nginx.
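In other words, roughly the following; the hostmetrics config is today's actual syntax, while the top-level scrapers form is a sketch of the proposal and its exact shape may differ from the doc:

```yaml
# today: scrapers nested inside the hostmetrics receiver
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      memory:
      disk:
---
# sketch of the proposal: each scraper promoted to the top level
scrapers:
  cpu:
  memory:
  disk:
  nginx:
    rule: type == "port" && port == 80
```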
B: So one change is to the discovery rules. Before, we had this watch_observers thing where you said which observers to look at for discovering pods or ports or whatever. That option was removed and folded into the discovery rule here, which is just an expression that's evaluated against each discovered endpoint: pod, port, what have you. So that simplifies things a little bit; it just gets rid of that one option, which was...
B: I don't think it's really used in practice. Let's see... this already exists and stays the same: this is how you enhance resource attributes, either dynamically, with something like the pod UID, or with whatever static attributes the user happens to use. I guess let's come back to filtering, since that's kind of a separate side piece. Inside the service section is where you'd have a default scrape interval; that was one of the limitations before, not having a global scrape interval.
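Putting those pieces together, the proposed shape is roughly this; the field names are my reading of the discussion and may differ in the actual proposal doc:

```yaml
scrapers:
  nginx:
    # the discovery rule subsumes the old watch_observers option
    rule: type == "port" && port == 80
    resource_attributes:
      service.name: nginx

service:
  # hypothetical global default; individual scrapers could override it
  scrape_interval: 10s
```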
B: So in this case, if it's a top-level thing, we can put it in service. And then the last change that would be required... there are two options here, because I don't really have a strong preference. Basically, we need to put this nginx scraper into the pipeline, and the two options I see for that are: one, add a scrapers entry similar to receivers, so receivers stay the same and scrapers are put in their own section; option two would be to have only one section, where you mix and match receivers and scrapers.
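The two wiring options would look roughly like this, again with speculative field names:

```yaml
# option 1: a dedicated scrapers entry alongside receivers
service:
  pipelines:
    metrics:
      receivers: [otlp]
      scrapers: [nginx]
      exporters: [logging]
---
# option 2: scrapers mixed into the existing receivers list
service:
  pipelines:
    metrics:
      receivers: [otlp, nginx]
      exporters: [logging]
```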
B: Yeah, so this part I don't really have a strong opinion on, but I think those are the two options I've seen for putting it in the pipeline.

B: And actually, I don't have it in the doc; I'll put it in the doc. I did have a separate example of what it might look like if we don't do a top-level thing and just stick to keeping it in receivers.
B: I think it looks maybe something like this: basically, instead of receiver_creator you have a scrapers controller. We put a default scrape interval there, instead of at the service level; then you have a list of scrapers pretty much like before, and all of that kind of stays the same. And then (I guess I don't have it here, but anyway) this scrapers controller would go into the receivers section in the pipeline.
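A sketch of that alternative, where everything hangs off one ordinary receiver; scrapers_controller is just the working name from the discussion:

```yaml
receivers:
  scrapers_controller:
    # the default interval lives here instead of under service
    scrape_interval: 10s
    scrapers:
      nginx:
        rule: type == "port" && port == 80

service:
  pipelines:
    metrics:
      receivers: [scrapers_controller]
      exporters: [logging]
```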
B: Yeah, and I think that wouldn't require any changes to core. The downside being... it's just not quite as discoverable, right? It's not a built-in feature; it's another receiver, so it's a little less discoverable. I think there are arguments to be made both ways there.
D: On the general architecture, or the general way of configuring: we have a sort of similar problem, or scenario, with the routing processor, and what we did there... So the routing processor, basically, is a processor that allows a pipeline to fan out to different exporters depending on the characteristics of the data in the pipeline.

D: The case that I had in mind was multi-tenancy: for this tenant, go to this exporter; for that tenant, go to that exporter. And basically what we have in the configuration for the routing processor is references to exporters, but the exporters themselves are defined in the exporters node of the configuration, as in this case here.
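For example, a routing processor config references exporters by name while they stay defined under the top-level exporters node; a sketch with illustrative tenants:

```yaml
processors:
  routing:
    from_attribute: X-Tenant
    default_exporters: [jaeger]
    table:
      - value: acme
        exporters: [jaeger/acme]

exporters:
  jaeger:
    endpoint: shared-collector:14250
  jaeger/acme:
    endpoint: acme-collector:14250
```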
E: But I think one of the simplifications that we want to make is to have scraper as a standalone interface, even if it's not in the config, so that only the controller can schedule them. But maybe this is not the right thing; again, we are brainstorming, we are talking about this.
D: Yeah; I'm not sure if you can hear my neighbor, but I hope you cannot. The other thing that we can take from another processor is not making it a processor itself, but a component, and for a component I think the only requirement is to have a start method and a stop, or shutdown, method.
D: That way you could specify the receivers as extensions, and those extensions are then available as part of the component host; and similarly, here you'd just reference the extensions that are part of the pipeline, with the extensions defined elsewhere.

D: Does that make sense? I mean, the only advantage of that would be to reuse the same receiver, or the same scraper, in different pipelines, or in two different configurations, without having to repeat the same code.
E: That's one thing; the second thing is that it's less likely that a scraper will be shared, since usually you don't want to scrape twice. Also, a property of receivers, and of extensions as well, is that you create only one instance, so the only reason to share a scraper is to tie it into multiple pipelines, and you can do that by putting the whole receiver into multiple pipelines.
B: Scrapers... so let me briefly touch on the filtering. I think I'm going to break this out into its own proposal; like I said, it's not on the critical path, but I just want to give a sense of why we want to redo this too.

B: The filtering, like I said, is pretty cumbersome today, done as a separate processor, and the syntax is a little unfriendly in terms of doing matches and whatever. So basically the idea is to have a friendly way that maybe doesn't cover 100% of the cases but works in, say, the 99% case.
B: Basically, for turning metrics on and off: being able to have a set of default metrics, so not emitting all of them by default but just a subset, and then being able to turn them off and on. This would have a syntax where you can match by regex, and you can match by globs. You could match the metric name as well as labels within the data points, so it kind of has its own...

B: It's similar to the Prometheus syntax; it has a lot of similarities, I think, to their filtering, or whatever they call it. So the idea would be to have this inline filtering, so that you have only one place to configure.
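The inline syntax being floated is along these lines; this is entirely illustrative, since this part is being split into its own proposal:

```yaml
scrapers:
  redis:
    rule: type == "port" && port == 6379
    metrics:
      exclude:
        # hypothetical inline filter: globs or regexes on metric names...
        - name: redis.replication.*
        # ...and matches on data-point labels
        - name: redis.commands
          labels:
            state: idle
```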
B: The second reason to have it there (I just realized this, so I need to put it in the doc), the other reason why having this at the scraper level is important instead of having it as a processor, is a case we don't have in OTel today, but at least in the SignalFx agent we have this feature where you could use pod annotations or Docker labels, and you could basically put your agent configuration in those annotations. Then, when we discovered that pod, it would pull the configuration for scraping that thing from the annotations, right? So your Redis pod's annotations would tell it to configure the Redis scraper.
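In the SignalFx agent, that looks roughly like the following pod annotations; the key shapes here are from memory, so treat them as approximate:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  annotations:
    # discovered pod carries its own scraper config
    agent.signalfx.com/monitorType.6379: collectd/redis
    agent.signalfx.com/config.6379.intervalSeconds: "10"
spec:
  containers:
    - name: redis
      image: redis:6
      ports:
        - containerPort: 6379
```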
B: You know: with password foo, whatever configuration, turn on these extra metrics. Before, you'd have to modify your agent config to collect it; instead, your pod annotations could describe how to collect that pod. And if we were to implement that feature, doing the filtering at the processor level just becomes clunky; you need it self-contained, because basically all of this configuration would come from pod annotations, for instance, so it kind of needs to be self-contained.

B: So that's kind of the goal with filtering. I think we'll just make this a separate proposal, but if anybody has high-level feedback on it, that would be great. Just don't assume that we're signing off on this just yet.
G: I think I agree a lot with the goals of standardizing all these things, and, for the most part, with the ways we're doing it. I'm going to think about this a little more and I'll comment on the doc, maybe even make a parallel proposal. But just at an intuitive level, it seems unnecessary to introduce this extra concept to the user when we have this nice, elegant system where receivers ingest all of the data that we ingest. I think we can stick with that, and I see ways to make that work relatively straightforwardly, very similar to this, but by exposing less of the internals to the user, I think.
E: Yeah, if we can do that, I think it's even better, but this is the solution that we could come up with. So, happy to review alternatives or to enhance these proposals, so that we help our users and everything.
B: The Mongo thing was just about the example of log scraping. Normally we think of scraping in terms of metrics, and that's what it is today, but we also have a use case of scraping logs. So basically we just want it to work with any pipeline data type: you should be able to send metrics as well as logs. There's nothing really super special about it being metrics-only.
B: Yeah, so there's a section on timers as well; we won't have time to go through it, but I basically did some simple benchmarking of what the overhead of timers is. For a thousand timers it was two percent of a core, on a MacBook Pro; at ten thousand (it's roughly linear, I guess) it's like twelve percent of a core. It basically seemed to come down to: the more timers you have, the more scheduling the runtime has to do.

B: So I think that's where most of the overhead is, and you basically get rid of most of it by batching the work, right? Instead of interrupting 10,000 times, interrupt just as much as you need, but batch the work. So, anyway.
B: Then, finally, I summarized all this under open questions: things that still feel open in my mind. Some of these I'm fairly sure of, but I wanted some sanity checks. So, one was: do we have any case for dynamically creating a receiver, like dynamically creating an OTLP receiver, or are we certain that we only need to dynamically create scrapers?

B: I don't have a counterexample, basically, so I think this is true, I guess. The second open question is: is the current scraper interface sufficient? It's basically start, shutdown, and the collect cycle. Are there any more complicated cases, like where a scraper would need multiple timers, or things like that?
B: Third: it'd be good to get feedback from other vendors on this. In our pricing model, certain metrics are included and others are not, so you have to pay extra for them; and so in our agent we turn on certain metrics by default, and users can turn on additional metrics.

B: I don't know if other vendors have this same pricing model, but it basically comes down to: do vendors need to agree on what the default metrics are? When you turn on the Redis scraper and you get, say, half of the metrics coming in, if one vendor feels one metric should be a default and another vendor feels a different metric should be, what should those defaults be? And maybe they should be configurable inside the distribution.
B: Yeah, it'd be good to hear what other pricing models are out there in that respect. Fourth: what number of scrapers should we target for performance testing? Is it a thousand? Is it ten thousand? What scale should this be hitting?
B: And then five is about filtering; again, filtering can be broken out, but in the filtering proposal there was no way to filter by resource attributes, only by data-point labels, and the question is whether or not that's sufficient. I think it's not; I think you have to have a way to filter by resource labels. But anyway, that'll get broken out into a different proposal, I think. So, any other questions? I just want to make sure I covered everything.
E: I don't have anything in mind, but I would encourage everyone to look at this and maybe propose alternatives, as Dan suggested. Let's keep the ball rolling and make progress on this.

B: All right, thanks, everybody. Feel free to ping me, you know, email or Slack as well. Thanks.
B: I think there was something else on the backlog; somebody had something else about gRPC.
A: Yeah, so we had discussed, a couple weeks ago, some of the findings that we had on high-latency links for our gRPC receiver, and we also discussed the possibility of combining the gRPC and HTTP endpoints into one port. So I just wanted to share the progress along that line and talk through some of the things we've been working through. I have this prototype PR.

A: It's up, and it's a big mess; I'm going to clean it up. But the idea is to consolidate port 4317 to serve both gRPC and HTTP traffic, and I know that there was some work at some of the other companies where there was a struggle to do this. The way this approach works is that we're combining the gRPC endpoints and the HTTP endpoints into just one server, so the gRPC endpoints end up just being one of the handlers on the server, and the server itself has been modified to do this.
A: So I have a config file here, set up to scrape the OpenTelemetry Collector's own metrics via Prometheus, and I have it set up to export OTLP over gRPC with gzip compression, and OTLP over HTTP. You can see that it's receiving both the gRPC and HTTP traffic; the metrics are coming in pairs here, so it's processing both on the same port.
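A sketch of the export side of that demo, both exporters pointed at a collector listening on the consolidated port; endpoints and option names are illustrative:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-collector
          scrape_interval: 10s
          static_configs:
            - targets: ['localhost:8888']

exporters:
  # both exporters target the same consolidated port
  otlp:
    endpoint: localhost:4317
    compression: gzip
    tls:
      insecure: true
  otlphttp:
    endpoint: http://localhost:4317

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlp, otlphttp]
```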
A: Some customers are picking up the node exporter, which by default sends traffic, I think, over HTTP, so it's using OTLP over HTTP, and we don't currently support that on the same port. So there's just been a lot of confusion: the customer doesn't understand whether they're sending gRPC or HTTP. As far as they're concerned, they just know OTLP; they configure some OTLP exporter and they expect the port and the host to just be the same for both. That's kind of the customer story.
D: I guess the first question is: what about the performance? One thing that I think we had before is that the performance for one of the components, which was using grpc-gateway, perhaps, or something similar, involved some performance compromises that we had to make.
A: Well, I was going to say, here we're actually using fewer stages. There was a performance concern, I imagine, from having the mux in front, where you'd have an additional stage, but here we're just using the HTTP server directly to route to a particular handler. I can run performance testing; I don't think we would see the same degradation as prior approaches.
D: Having an idea of what the compromises are would be good. So perhaps the baseline could be what we have today versus this new approach, and perhaps an additional data point would be cmux, right, which is what we used to have before. The concrete problem that we had there was with TLS handling, but I think your approach here would not suffer the same problem, because there is only one HTTP server, right? So TLS handling is unified for both handlers.
A: I should mention the other benefit here: the configuration is significantly simplified. If you look at the OTLP receiver configuration, I've changed it so that you basically only have to specify configuration for one endpoint, and both gRPC and HTTP are enabled on the same server.
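Roughly, the receiver config collapses from per-protocol endpoints to a single one; the consolidated form is the prototype's, so its exact shape may change:

```yaml
# today: one endpoint per protocol
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
---
# prototype: a single endpoint serving both gRPC and HTTP
receivers:
  otlp:
    endpoint: 0.0.0.0:4317
```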
D: One of the nightmares that people mentioned before was, you know, making sure that we know what is going on when we get bug reports.
A: Yeah, so the other benefit, I guess, of this approach is that the OpenTelemetry HTTP server instrumentation will be used for both the gRPC and HTTP paths in this case. So I imagine it'll be easier to see a comparison of the performance of gRPC and HTTP, since we're using only one instrumentation package.
D: Yep. And I guess my last comment is on something that I mentioned on the ticket that you referred to before: if we end up having this feature, I would prefer to have it as opt-in first, and then, once we have good feedback from users, we can make it opt-out, on by default. And if we decide to make it on by default, then we should care about backwards compatibility.
D: So we'd just expose the same handlers on those two ports for a while, and then we remove one of the ports.
E: Yeah, we're definitely going to do a deprecation. I have a different question: I know a bunch of load balancers may or may not work with HTTP/2 and gRPC. Would this change make it very hard for users to configure load balancers and ensure that everything is working smoothly in that case?
A: This actually makes it easier, I think. On the thing that we're taking advantage of here: we mentioned there might be some trade-offs, and the trade-off is that this will remove the potential to add streaming support for gRPC in the future, if we go down this path. But I think the difficulty around load balancers is with streaming RPCs, not so much the unary side of things.

A: On the unary side, this is pretty much just a regular HTTP server, so any load balancer that can serve or proxy h2 traffic will be able to front the collector.
E: With this receiver... I remember that the ALB was not able to serve gRPC, or to route gRPC. That was one of the examples, yeah.

A: Okay, I can run some tests with an ALB, so we can observe the behavior. As long as the ALB can support HTTP/2 proxying, I think it should be okay, yeah.
D: I mean, the ALB is not the only one that people are using out there, right? There's Envoy on the service-mesh side of things, and nginx is also doing some reverse proxying for gRPC. So there are things that we don't know about, or honestly aren't interested in testing, and we can get user feedback if we get this out on an opt-in basis, right? So I think...
A: We've tested this with the Go exporter, as you can see, and it seems to work okay, and we've also tested it with the Java gRPC exporter. This approach is based on the HTTP protocol as specified in the gRPC library.
A: So, no: all of the handling is now done with these OTLP HTTP handlers. The only thing that gRPC is doing with unary requests is prepending five bytes (the compression flag and message length) onto the beginning of the request, and you can actually see the majority of the implementation in what this approach is doing.

A: We do interpret those bytes; I'm not just throwing them away. They actually get interpreted: they're used for compression, for example, and that's why I was demoing exporting with gzip compression. The server reads those bytes in the proper way.
E: There are some protocol negotiations between client and server about whether some features are supported. For example, one thing that I know for sure is that gRPC talks between client and server, and if they are talking directly, gRPC client to gRPC server, with no h2 load balancer in between that has to respect the h2 protocol, then for bytes they don't use base64 encoding on the header.
E: A header: if you set a header in h2, you cannot send raw bytes; you have to send them as base64.
E: No, you do have this in gRPC. In h2 you have some forbidden characters that cannot be in header values; sure, you cannot put arbitrary bytes in, correct. So gRPC adopted a protocol: it says that if you are sending metadata that they treat as bytes, it's base64, basically, because that's the simplest thing to ensure there are no forbidden characters in the value.
A: Right, you're talking about the custom metadata, correct. That was one of the questions that I brought up in the PR, because currently, I believe, there's no unified way to communicate headers or metadata down through the processor or exporter layer.

A: I think there was a proposal; Tigran mentioned that there's a proposal for this called pdata context. Those headers are not currently read in any way. In other words, I understand what you're saying: there are some gRPC-protocol-specific bits about the metadata that would need to be decoded before being sent downstream, but currently those headers don't funnel into anywhere, so I'm sort of waiting on discussion of that.
E: Yeah, so because of that, we may consider having this as an alternative, a unified OTLP receiver or whatever we call it, that is capable of doing this, at least for a couple of months, because, yes, we need a bunch of testing before we can deploy this as the default.
E: Yeah, and there are a huge number of scenarios, as Juraci pointed out: Envoy, nginx, whatever, and we don't have the resources to test all of them. So I think the best thing we can do is make this an alternative thing, as suggested, and suggest people use it if they want. There was another performance issue that you mentioned, which we may be able to handle by creating our own h2 server and just using the gRPC handler.
A: Yes, that was another prototype that I was planning to do: instead of removing the gRPC library entirely, we can use the ServeHTTP interface exposed by the gRPC library and plug that into an h2 server.
A: My understanding is that that interface is going to remain experimental; they have no plans to change it, but they also stressed that they consider it buggy. There may be some known bugs there; we didn't get too much into the details. But given what they said, I tried to prototype this path first, so I think either way we're going to have some amount of risk.
E: I think we have only two minutes, so let's talk offline more on this. We have our last topic, I think, for today.
H: Yeah, so I just have several questions left about this health check in OpenTelemetry and may need your answers. And I just saw your answer, yeah; let me share my screen.
H: So we are still trying to use obsmetrics with a new view in our new health check extension. Following your idea, we just need to generate a new view with the original obsmetrics, but in that case we still cannot get this new view in our health check extension in contrib, so I'm just curious how we can use it. So the first question is: how is the original obsmetrics used?
E: ...that you right now have. Go to the files.

F: Okay, and check where obsmetrics is used.
H: Yeah, so I read the code around the obsmetrics report, but I'm still not clear about how the original metrics are used. I see some functions that record them, but I cannot see where they are set. So my question is: my original code was, we generate a new view, and the new view registers our new reporter.

H: The reporter is responsible for recording these views, these metrics, and the queue. But how can we do it this way?
E: Yes, it's used by default. You record against that measure, so the measure gets the values; that's not a problem. So, okay, you have the view here...
H: So where... like, you may do it like this.

H: Like this: you use a new view, and the view serves the exporter, yes, okay; and you also use this new view to register our new reporter, generated in our new health check extension, right? Yeah.
E: Okay, so I think we are over time, but please, I will send you a couple of pointers, and if you cannot do it, I will do it for you. I'm sorry, but it's hard for me to explain all the details here. Look at what OpenCensus does, because that's what we are using right now, and how you can have a view installed; and then, on the other side, because OpenCensus is a singleton, you can install an exporter and consume all the metrics that you have.
H: Yeah, so maybe we can have a talk offline. Okay.