From YouTube: 2021-07-06 meeting
A
Thank you, okay. I guess we can start now; it's five minutes after the starting time. Thank you for joining. I hope that whoever had a long weekend did manage to take a break.

First of all, usually we used to do some triaging, but I was going through the issues and they have been triaged already, so so far so good on that side. Then we can go through the actual items. First of all, then: Evo, on initializing and configuring tracer providers.
B
This adds extra friction to the onboarding for the customer, and with this being the only exception it's not that big of an issue. But over time I can see that, if we are successful, then three years from now we're not looking at one or two but dozens of these different things. The tracer provider initializers may all be done differently, and this might become an issue over time.
C
Just for clarity, it seems like a question of who's in control, right? Like you're saying, there are some software packages that people install, and those software packages want to be able to set up the SDK and configure it.
C
But then you also have a human operator who may want to be doing that. And so one question in there is: if we start telling these third parties, these software components, to automatically do that, is that going to create a conflict? I imagine that's sort of at the core of what you're struggling with there.
B
Yeah, well, maybe to shed some light on the current use case: there is a centralized platform team who provides the developer teams with tools and infra, as well as pre-packaged software, and with everything else they will just ship the pre-configured OpenTelemetry agent with auto-instrumentations embedded. And guess what: there is now one exception, with this being the one exception, okay.
B
A bunch of different services are under instrumentation there, most of them by auto-instrumentation, with the Apache Camel nodes or services being the exception: Apache Camel itself provides a component from their side, capturing the telemetry and exporting it themselves, not via the instrumentations maintained by our maintainers.
D
If I may, let me try to explain, to translate your talk to developer... to engineering. I believe the problem Evo is trying to explain is when we have libraries providing OpenTelemetry integration by themselves, like built-in native integration with OpenTelemetry, but those libraries require explicit opt-in: please change your application to say "yes, I want to use the OpenTelemetry module of that library." And now, if you have 15 different libraries in my application, all requiring manual opt-in, it's... it's a lot.
C
Is this the scenario where, and again this might work differently in Java, where you also have an agent, but is it the case where they should just be using the default or global providers when they start? Don't ask the user:
C
"Do you want to use OpenTelemetry or not?" Just grab the providers and go, and give users an option to give you a specific provider if that global provider isn't actually the one they wanted, if they're doing something tricky. Don't ask whether they want it or not: if they want it, they'll configure a tracer provider, and if they don't, then it's all no-ops, so don't worry.
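The pattern being described here, a library that unconditionally grabs the global provider and silently gets a no-op when the user never configured one, can be sketched roughly as follows. The classes and functions below are hypothetical minimal stand-ins, not the real OpenTelemetry API:

```python
# Sketch of the "grab the global, default to no-op" pattern.
# NoOpTracer, RecordingTracer, set_tracer_provider and get_tracer are
# illustrative stand-ins, not OpenTelemetry's actual classes.

class NoOpTracer:
    def start_span(self, name):
        return None  # records nothing, costs almost nothing

class RecordingTracer:
    def __init__(self):
        self.spans = []
    def start_span(self, name):
        self.spans.append(name)
        return name

_global_provider = None  # unset until the application configures the SDK

def set_tracer_provider(provider):
    global _global_provider
    _global_provider = provider

def get_tracer():
    # Libraries call this unconditionally; no opt-in question is asked.
    return _global_provider if _global_provider is not None else NoOpTracer()

def library_operation():
    # A library instruments itself without asking the user anything.
    tracer = get_tracer()
    tracer.start_span("library.operation")

library_operation()           # SDK not configured: silently a no-op
sdk = RecordingTracer()
set_tracer_provider(sdk)      # the application opts in by configuring the SDK
library_operation()           # now the span is actually recorded
```

The opt-in is implicit: configuring the SDK at boot is what turns the library's telemetry on; without it, nothing is recorded and nothing breaks.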
E
This may get to the issue that I raised a couple weeks ago, though, about getting the tracer provider out of the span. If you do have these 15 or 20 components that you have to explicitly provide a tracer provider to, to get them to work, because they don't want to take that dependency on the global tracer provider (say you've got a component that has a policy that "we don't depend on globals, you must inject all dependencies"), then you're back to the situation.
E
That is, the situation Evo is describing, where you've got a bunch of new components that you have to configure explicitly to tell them to use OpenTelemetry. Whereas if they could pull the tracer provider out of the span and say "I'm going to use what's provided to me in this particular request": if it's not configured it's a no-op, and if it's actually configured I'll use it and it will go somewhere.
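The idea of pulling the tracer provider out of the active span, so a component transparently uses whatever pipeline the current request carries, might look roughly like this. All of the classes here are hand-rolled stand-ins; the spec does not expose a provider on the span today, which is exactly the issue being raised:

```python
# Sketch: a component resolves its tracer from the active span's
# provider, falling back to a no-op when no request context exists.
# Provider, Span and _current_span are illustrative, not OpenTelemetry APIs.

class Provider:
    def __init__(self, name):
        self.name = name
        self.recorded = []
    def start_span(self, span_name):
        self.recorded.append(span_name)
        return Span(self, span_name)

class Span:
    def __init__(self, provider, name):
        self.provider = provider   # the span remembers its own pipeline
        self.name = name

_current_span = None  # crude stand-in for context propagation

def handle_request(provider, component):
    global _current_span
    _current_span = provider.start_span("request")
    component()            # the component never sees the provider directly
    _current_span = None

def component():
    # Pull the provider out of the current span; no-op if there is none.
    if _current_span is None:
        return
    _current_span.provider.start_span("component.work")

a, b = Provider("pipeline-a"), Provider("pipeline-b")
handle_request(a, component)   # this request's work lands in pipeline-a
handle_request(b, component)   # same component, different pipeline
```

This is what makes the per-request pipeline possible: the same library code serves two handlers wired to two different providers without being reconfigured.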
E
I think something has to have initialized it, yeah, but I'm not sure in the gRPC context; I don't do a lot of gRPC. But in the HTTP context we could say you've got a handler listening on one of several endpoints, and each one of those could be configured with a different tracer provider; the same library could be used underneath in each of those handlers and it would never know the difference. It would just work.
F
Where Evo has a problem is if the application owner has to initialize that. And I'm not saying that that's what we want, but in Evo's case, if the application owner has to write that specific line initializing this plugin, or installing this plugin, it's an extra step.
D
In the default case it should just work: I configure the SDK, and whoever needs it gets the tracer provider from the current span. What Anthony is talking about is that in the usual, the simplest, case, a library should just grab the global state; if the global state is out of the question for some reason, then that related question is probably already a next step.
C
Yeah, I think what Anthony's saying is that the way we've envisioned OpenTelemetry is that your providers are something that is statically configured. Essentially, when you boot your program up, you configure these things, and there's a global one that is very convenient.
E
If you have many of these that you have to wire your tracer provider through in some manner... Evo, is that the correct description: that Apache Camel is not taking the global tracer provider, and it's not being provided to it some way? You have to get a tracer provider and give it to it before it will function?
D
Yep, exactly. Well, not precisely, by the way: they don't even necessarily ask for a tracer provider. They just want you to say "yes, OpenTelemetry, please use the integration with this Camel module." You can provide your own provider, but they don't even require that; they just require an opt-in: "yes, I want OpenTelemetry in this Camel module."
E
So yeah, okay, but in that case they would then use the global, yeah. So there I would suggest they just use the API and expect a no-op if the user doesn't want it. Okay, I was misunderstanding: I thought this library, which may be down several layers further in the application, required a tracer provider to even try to start tracing.
C
Yeah, Anthony, I really wanna... I mean, maybe we can't, but I want to emphasize that, as much as we can, we should push back on the idea of needing this stuff to be dynamically configurable, and have a way to have it always be statically configurable.
C
It's going to really unravel a lot of things if that doesn't work, so I just want to emphasize that. You're saying, like, just tack on this tracer provider thing and then some of these guys can do it, but that's actually going to just kind of roll back up the chain, I think, in some ways. I guess I'm a little concerned that you might be...
C
You might be pulling the thread that starts to unravel part of this sweater, basically. Because I suspect you're going to be in a situation where those modules that require some kind of dynamic provider are then going to start poking the modules next to them and saying "hey, you have to do it this way," and then that'll eventually bubble back up to instrumentation in general: third-party instrumentation, you know, auto-instrumentation, instrumentation we provide, all having to do it that way.
E
Yeah, I see the concern that the pattern may, you know, have some sort of memetic propagation and everybody starts doing it this way. I'm not sure that I understand the concern with using that approach versus a static configuration, though. I think being able to do a per-request trace pipeline has benefit, and I don't see the downsides to it.
C
Well, it's true that tracer objects and things like that are lightweight, but there are these objects that we want to create somewhat statically, like when you get into metrics. For example, this is some of the instrumentation discussion we've been having in the Asia Pacific SIG: there is stuff that you don't necessarily want to be recreating all over again on every single transaction.
C
So I'm worried it's going to start piling on overhead if there's no way for objects to hold on to a static set of things, like metrics with predefined label sets, some amount of static configuration, for example. And maybe it's the case that that gets statically created somewhere and it's just getting handed around in that context everywhere. But anyway, I'm just a little concerned, it's just...
E
It does seem that metrics are different: you stage metrics once and you use them throughout the life cycle of your application. You don't create a new metric with every request, unlike spans, right? And they're not likely to go to a different place, so they're categorically different from traces. Logs are also different, in that you probably instantiate a logger once per component and you always log to that same logger, but where that logger goes, and what context comes in from it, may also change depending on where that component is instantiated.
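The distinction drawn here, metric instruments created once at initialization versus spans created per request, can be shown in a tiny sketch. Meter and Counter below are illustrative stand-ins, not the real OpenTelemetry metrics API:

```python
# Sketch: a metric instrument is created once at initialization and
# reused on every request; only spans are per-request objects.
# Meter/Counter are hypothetical, not OpenTelemetry's metrics classes.

class Counter:
    def __init__(self, name):
        self.name = name
        self.value = 0
    def add(self, amount, labels=None):
        # Recording against an existing instrument is cheap.
        self.value += amount

class Meter:
    def create_counter(self, name):
        return Counter(name)

meter = Meter()

# Static part: created exactly once, ideally with predefined label sets,
# so nothing heavy is allocated on the hot path.
REQUEST_COUNT = meter.create_counter("http.server.request_count")

def handle_request(path):
    # Dynamic part: per-request work only records against the
    # already-created instrument.
    REQUEST_COUNT.add(1, labels={"path": path})

for p in ("/a", "/b", "/a"):
    handle_request(p)
```

If the instrument instead had to be recreated on every transaction, that creation cost would be paid per request, which is the overhead concern raised above.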
C
They all have at least a little nubbin that gets created statically, where you define "hey, this is the class that I'm reporting on," for example. Potentially there's room for more static configuration to go there, and they're all gonna start getting kind of bundled up.
C
I don't know. I guess what I'm saying is that I'm a little bit nervous about saying we can't have that handle where we statically create this stuff and hand it around at initialization time. So I just want to point out that, if it's possible with the use cases you're seeing to go about this in a different manner so that we can retain that ability, I think that's important, because we're seeing the need for it in general with signals, and I'm nervous about saying "well, for this one signal we're gonna do it differently." It might create difficulty later, especially when creating convenience wrappers.
C
Maybe not. I'm just pointing out that removing that ability to do this dependency-injection approach, where we hand it around at initialization time, is kind of a big deal. So I'm just wondering: is there a way to back up and solve the problems you're looking to solve from a different approach?
F
Nikita, I would like you to talk to Yuri about this, because he was very much against this idea; he had a bunch of arguments against it. Unfortunately he is not joining this meeting, but it would be good to have a conversation with him about his experience with Uber and Jaeger and why nothing was done like that there.
F
Yeah, just ping him, or ask him in an issue or something, because I know he had a lot of arguments a long time ago when we chatted about this. There was this idea of having the global as a first-class citizen and why this is important, and one of my arguments was exactly your point, but he had strong arguments against this.
C
Okay, I think we are... I mean, maybe it's worth it to listen to Yuri. I want to clarify that this is different from Anthony's questions. But what I will say is: there is just a split in the programming community about whether globals are just poison, and for application code it's reasonable to say that requiring a global, having a global be the only way things work, is a problem.
C
But in my observation, having worked in both kinds of shop, it can get a little religious, where some shops just say "no globals anywhere, because that's going to lead to bad behavior," which is true. But when you're talking about cross-cutting concerns like telemetry, it's a little bit different from how you treat your application code.
C
Cross-cutting concerns tend to function differently because they have a different purpose, and they end up not conforming to the kinds of coding paradigms and patterns you normally see with application code. For example, they break encapsulation left and right, because that's the point of them, right?
C
You sprinkle instrumentation calls across your entire code base, so encapsulation kind of goes away, and relying on a global to do that by default is kind of okay. But you still need to make sure that you aren't forcing everyone to use a global as the one and only way to do it.
C
I tried to summarize this in the notes, by the way, and I think, Nikita, for the issue you opened: I think it should be an issue for writing this down as guidance. We should write this down in the spec and put it on the website too, because I think this is pretty critical. If we're starting to see people natively instrument, we need to really put some guidance out there, just so it's clear how we think they should do it.
A
Acknowledged. Perfect, thanks so much for that. Okay, going to the next point: mine is adding an option to limit attributes on metric values as well. The PR is there; Nikita, thank you for poking it.
A
I think that, in general, even though this PR has enough approvals, we always want to try to address all the comments, to not leave people ignored. So, jmcd, I think you're on the call: please review that, and if it gets resolved soon we will merge it. Let's try to make that happen soon.
A
The second one is the OTLP requirement of implementing both gRPC and HTTP/protobuf. This is also related, Jonathan, to the question that you were asking later on in the agenda. I think this is something we talked about last week; please review and approve that. Yuri was also asking about that point, but Nikita explained it to him there, so I hope that's fine. But please, everybody, review and approve that. That's all from my side, yeah.
H
Sorry, about the HTTP transport: as far as I remember, the specification currently says that SDKs may implement it. It's not mandatory, right?
D
Correct. When we discussed this on the previous maintainers meeting, the consensus, as I understood it and as it was written in my pull request, is that an SDK has to implement one of them, either gRPC or HTTP, and may implement both. So we don't require both at the moment, yeah. It was a "should" requirement; sorry for that.
A
Okay, thank you for that. And then, a final issue that I created: I was thinking that it would be interesting to identify what's the source, whether it's an agent or, specifically, a distro. So please just go and add your own opinion or review on that. I think that, based on the feedback, I will be creating a PR myself against the specification to add something there, probably something called telemetry.sdk.distribution, or distro, or something like that.
A
So, if you have something in mind, please just leave a comment there. Going to the next item: Jonathan, scraping support for tracing data, please.
H
Yes. So, we already started talking about this on the OpenTelemetry Collector meeting, I believe. What was brought up was the Spring framework: we will have an endpoint in Spring Boot that will basically give clients the ability to scrape traces in the Zipkin and also in the OTLP format.
H
Is this something that could be interesting for the Collector to implement? This is similar to the Prometheus protocol for metrics, where the client is not pushing the data to the backend; the backend is scraping it.
F
And I think, to clarify a bit: you told us that it is not actually scraping; it is more or less that the client initializes the request, and then you push through that connection, or...
H
Right now it is just an HTTP endpoint. It's not two-way; it's always pulling: the backend calls the client. But I believe that can be changed.
H
It cannot be server-sent events, though, because that doesn't support binary data, so OTLP/gRPC will not work, sorry, or any binary format. Right now, what we have is a draft for this in the Spring framework, and that will give you just an endpoint where you can basically scrape your traces.
F
Okay, so to give everyone a heads-up: what I was telling Jonathan is that we will definitely have a receiver for these endpoints if Spring implements it, once the protocol is stable and we know what kind of request we should send, what kind of response we expect, and so on.
F
But probably the question was: do we want, as the OpenTelemetry community, to offer a pull-based protocol, defined in our specification, as an equivalent to the push that we have?
F
So, okay, Jonathan... oh, Josh, one second. I wanna ask these questions, or you can go ahead: go for your comments and then all that.

I
So I don't see how you would have a reasonable implementation for scraping trace data other than an unbounded buffer of memory just growing and growing until your next scrape arrives. In the metrics world, we scrape because there's an aggregate that's cumulative that we can always scrape at any given time; I don't see how to do that with traces.
H
To me... sure, of course it should not be unbounded: you can say that you are storing the last, I don't know, 1000 traces, and from the backend side you can get those, and you can also paginate it; you can get the last X amount that you know that you don't have, and so on. So I believe it could be implemented. The interesting part is: would that be...
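The bounded store described here, keep only the last N finished traces and let the backend pull them, is essentially a ring buffer. A minimal sketch, with a hypothetical scrape method rather than anything Spring has published:

```python
# Sketch: a bounded buffer holding the last N finished traces that a
# backend could pull from. Older traces are evicted; memory never grows
# unbounded. TraceBuffer and its scrape() endpoint are hypothetical.
from collections import deque

class TraceBuffer:
    def __init__(self, capacity=1000):
        self._buf = deque(maxlen=capacity)  # eviction is automatic

    def on_trace_finished(self, trace):
        self._buf.append(trace)

    def scrape(self, limit=None):
        # What a pull endpoint would return: the most recent traces,
        # optionally limited for pagination.
        items = list(self._buf)
        return items[-limit:] if limit else items

buf = TraceBuffer(capacity=3)
for t in ("t1", "t2", "t3", "t4", "t5"):
    buf.on_trace_finished(t)
```

Note the trade-off this makes concrete: anything that falls out of the window before the next scrape is silently lost, which is exactly the dropped-data concern raised later in the discussion.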
H
Would that be useful for the users, for OpenTelemetry? Because for Spring we had a few requests for this; that's why we're implementing it.
H
The reason is an application stack in Kubernetes: as far as I know, what people are building is a standard way to ship metrics and tracing information, and they want to use Prometheus, basically that format, for this, and make their clients basically able to use any kind of backend.
H
A whole scrape-based design. As far as I know, that is not possible there today. So, as far as I know, what they are building is a scraper for the tracing data, and basically they will convert it if needed and forward it to the right backends, and the same with Prometheus.
C
Yeah, I kind of suspect this might just be what Josh is saying: it might be people coming from a metrics background who are used to the Prometheus workflow and then just being like, "well, we'll just use that for everything." But the issue you're going to run into with tracing and logs is that the best workflow for those actually is streaming the data out.
C
We currently provide a batch approach, which is kind of a middle ground, but that has two triggers. One is: when a certain amount of time passes, flush, which would be the equivalent of how a scraper would work. But the other is: once your buffer is filled up past a certain point, flush at that point, and that's actually really critical. If you lose that, it would still work, but the amount of dropped traces you're going to see is going to go up, and that's going to affect things significantly for systems that are trying to create aggregates and histograms out of this information.
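The two batch triggers described here, flush on a timer and flush when the buffer passes a size threshold, can be sketched as follows. This is a simplified stand-in, not the SDK's actual batch span processor:

```python
# Sketch of a batch exporter with two flush triggers: elapsed time and
# buffer occupancy. BatchProcessor is a simplified illustration, not
# the real OpenTelemetry SDK processor.
import time

class BatchProcessor:
    def __init__(self, max_batch=512, interval_s=5.0):
        self.max_batch = max_batch
        self.interval_s = interval_s
        self.buffer = []
        self.exported = []
        self._last_flush = time.monotonic()

    def on_end(self, span):
        self.buffer.append(span)
        now = time.monotonic()
        # Trigger 1: buffer filled past the threshold; protects memory
        # and keeps drop rates down under bursty load.
        # Trigger 2: enough time has passed; bounds export latency.
        if len(self.buffer) >= self.max_batch or \
           now - self._last_flush >= self.interval_s:
            self.flush(now)

    def flush(self, now=None):
        self.exported.extend(self.buffer)
        self.buffer.clear()
        self._last_flush = now if now is not None else time.monotonic()

# With a huge interval, only the size trigger can fire here.
p = BatchProcessor(max_batch=3, interval_s=9999)
for i in range(7):
    p.on_end(f"span-{i}")
```

A timer-only (scrape-like) design is this code with the size trigger deleted: a traffic burst then overruns the buffer between flushes, which is where the arbitrary drops come from.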
C
Because if you have enough dropped data, then, well, one, you'll just be missing the traces and logs, which is bad; but two, it's going to start to affect your ability... for example, the latency distributions you're looking at, and error rates, and things like that, are just going to start to become inaccurate if you're experiencing a significant amount of arbitrarily dropped data. So you could build an exporter that did this, but I don't think it's a good idea, for those reasons.
I
So has anyone considered a streaming request, where you dial in, not to scrape, which is to ask for a cumulative summary, but to dial in to say "please start streaming me some trace data"? That might be a much more acceptable approach. In metrics, that would be, you know, a push-based protocol that's been reversed in direction. Push-based is fundamentally different from pull-based, because in a pull-based protocol everyone gets the same cumulative state, but in a push-based protocol...
I
Your deltas are relative to some recent point in time, and so pulling deltas has never worked, and traces are very much a delta format of data. So I don't see how this is going to work unless you're just sampling or summarizing, at which point summarizing becomes hard to aggregate, and sampling, again, is like a delta. You could say "I keep a sample of every minute of data, and then you can dial in and get my most recent minute of samples," but it's not clear.
H
I believe, from the client's perspective, what you mentioned is quite similar to having an endpoint, because they will just have, like, a WebSocket or something where the backend can connect, and then the client can push; it's two-way.
F
The problem with that, if you do a push over a stream, is that you lose the ability to load-balance. If you do that, you will always send data to the same instance, and it is hard to do load balancing.
H
I'm not a hundred percent sure about that, actually. As far as I remember, Hystrix did this: they had server-sent events, they had an event stream for Hystrix data, and they also had a stream aggregator called Turbine which solves exactly this issue.
C
Maybe a central thing here is service discovery. The approach OpenTelemetry, I think, advocates right now, though not, like, publicly, is basically the service mesh approach; this is kind of how people do it, rather than explicitly configuring every service to talk to a specific collector.
C
You say, you know, you're sending this data to a collector, and then your...
C
It would probably be a good thing, in my opinion, because it centralizes information storage. But the thing Prometheus does is that there's this Prometheus control plane that's understanding where the services are. And so that's maybe the part where Bogdan's suggesting being able to, rather than it being a scrape-based model where it's fetching and scraping the data, have it contact these services and then say "send the data to me."
C
That's a model that preserves the Prometheus approach to service discovery, while still preserving the need to be able to push the data out as it comes, and flush buffers without letting them get overrun.
F
So can we summarize this as: as a community, OpenTelemetry being a younger community, we don't see yet a need to have this piece in our community, but if this is the way Spring will do things, we most likely are going to support this in our collector, as a receiver, scraper, or whatever it is called in the collector: a way to scrape this data from Spring, based on whatever protocol Spring defines, because we are friendly with any external third party.
F
So if Spring decides this is important, we will support it, as we support Prometheus or other things. But it would be super cool, at least for me personally, to learn what exactly is the benefit of switching to pull, because I feel like push is way simpler: you have a simple load balancer, of which there are thousands of implementations.
F
You just set up a simple endpoint, you just send data to that load balancer, and you are done. But here you have much more complicated load-balancing logic, like Prometheus does, with hashing of sources, of targets, and who to scrape from, and a bunch of problems that you have compared with the simple load-balancer solution. Go ahead, Josh.
I
I don't even have to identify myself, so it's almost the simplest way you can go, but it requires you to keep this cumulative state to do pull the simple way. So my question is: can we arrange for "I'm just going to open a port; you contact me and I will push you data, and you will attach my resource attributes, so I do not have to detect them"? I believe this would let us solve the load-balancing problem independently of
I
the "who am I" problem, which is what Prometheus makes really easy. I'm just trying to keep them separate.
F
So you have to keep global state there and pin the instance. Let's assume you have a pod somewhere and you connect it to that, and that pod sends too much for you and you cannot keep up with the data sent by that connection: how do you pass that connection to another instance? So you still have a bunch of load-balancing problems there, compared with a proper load balancer that just gets a request and sends it to the next available instance that can handle that request.
I
All right, I see your point. What I'm trying to address is that, right now, we have a load-balancing problem because we have a self-identification problem, and if the infrastructure knew how to attach my resource attributes, the way Prometheus does, then pushing, I feel, would make it much easier to migrate from the Prometheus model into the OpenTelemetry model. Okay.
F
We started to talk about that problem in the collector, so we may have a solution for the identity problem that we have in a push base. That said, I would still want to see the issue in the Spring project, if you have the issue where this was discussed, and, as I said, we will support it: if you add it, 99 percent we will support it.
C
Just a quick shout-out: this afternoon there's an Asia Pacific spec meeting. We bumped the cadence of that meeting up to weekly; before, it was bi-weekly, and I think that made it a little bit harder for people to remember it's on the schedule.
C
So, just to let people know, that's happening every week at 4pm Pacific time, and the thing we're going to be running out of that meeting is instrumentation. There'll eventually be an instrumentation SIG that spins off once it gets enough people working on it, but for the time being, the afternoon spec meeting is going to be where we're doing a lot of the design work and discussion around how instrumentation should work.
C
It would let you just configure statically all the things you want to record, for example on an HTTP request, and then just say start and finish; on finish, in addition to recording the span, it also emits a number of metrics and other things. So, interesting stuff. Folks there are kind of taking the lead on a lot of this work, which is why we're doing it in the afternoon. If you're interested in instrumentation, getting stability to the semantic conventions,
C
and all of that, please come to the meeting that happens today at 4pm. Thanks.