From YouTube: 2023-01-20 meeting
Description
January End User Lean Coffee Chat
A
B
Also, totally not to put you on the spot, Alex, but since I recognize you: we do have Alex here. I believe you're on the Python and Collector SIGs.
C
Hey, yeah, I'm mostly on the Collector SIG now; I've run out of time to be on both. So.
A
B
Also, I don't think you can hear it, but if you hear weird noises, it's because I have a new kitten and she's being kept in our office.
B
If she calms down and gets into my lap, I'll put my video on and show her.
B
All
right,
if
these
are
the
topics
that
we
have
for
today,
we
can
go
ahead
and
get
started
with
voting,
go
ahead
and
vote
for
as
many
topics
as
you
would
like.
It
looks
like
it
might
be.
A
smaller
group
today,
so
we'll
have
a
more
we'll
be
able
to
spend
a
little
bit
more
time
on
each
topic.
B
Oops, I'm going to pop the link to the Agile Coffee board in the Zoom chat, if you would like to add any additional topics or questions, or if you would like to vote.
B
If you're new to the sessions: my name is Rhys, my day job is with New Relic, and I also do a lot of work with the OpenTelemetry End User Working Group, including hosting these sessions.
B
All right, let's see. Okay, let's start with this one: "Does your role encompass being an enabler? How do you deal with helping in languages you're not an expert in?" Usually how I run these is, whoever put the topic or question in, if you can unmute and maybe give us a little bit more detail. Looks like Derek has a question on: how do you define "enabler"?
B
D
To expand on this: basically, the role I have in my company is kind of twofold. The first is, you know, maintaining OpenTelemetry Collector deployments and being responsible for the data pipeline, receiving data and sending it to the various backends. But it's also what I call the enabler, which is being a champion for OpenTelemetry, driving adoption within the company, and helping teams troubleshoot issues that they have in their respective services.
D
You know, things like that. So in my company, for instance, we're probably 70% .NET, 10% Java, 10% Go, 5% Python; we're primarily .NET-focused.
D
Like
we
build
examples,
in.net
we
I
personally
am
more
of
like
a
dotted
developer.
My
team
is
as
well.
You
know
a
great
thing
about
open
Telemetry
in
my
opinion,
is
that,
like
the
concepts
are
extremely
similar
across
languages,
but,
like
you
know,
I,
don't
really
know
how
to
write
Java.
You
know
it's
not
too
far
away
from.net,
but
for
instance,
so
sometimes
it's
it's
like
troubling
for
me
to
provide
specific
feedback
or
like
look
at
a
PR
and
like
really
understand
the
scope
of
everything.
D
Just
you
know
if
anyone's
experiencing
this
and
maybe
how
you're
you're
you're
you're
solving
for
it.
I.
C
As
a
as
a
former
enabler,
I
can
tell
you
that
my
my
strategy
was
really
to
kind
of
first
lead
the
way
by
showing
how
to
enable
observability-
or
you
know,
within
my
within
my
domain
of
expertise,
which
at
the
time
was
go,
and
you
know,
as
as
I
was
demonstrating
the
the
benefits
of
using
open,
Telemetry
or
I
guess
at
the
time.
It
was
open
tracing
that,
if
that
gives
you
an
idea
of
how
long
ago
this
was.
C
But
you
know
the
one
of
the
benefits
as
I
was
demoing.
The
benefits
people
would
come
in.
Ask
me
questions
and
a
lot
of
the
times.
C
These
people
worked
in
different
teams
and
different
languages,
and
you
know
they
would
be
interested
in
in
starting
out,
but
they
wouldn't
be
familiar
with
the
concepts
of
open
tracing,
open,
Telemetry,
and
so
you
know
my
my
role
as
I
thought
was
to
kind
of
pair
with
those
people
who
were
experts
in
whatever
languages
that
they
were
trying
to
be
becoming
enabler
in,
and
you
know
really
work
alongside
with
them
until
they
felt
comfortable
with
the
concepts
of
the
observability
platform.
I
was
using
so
I
think.
That's
that's
kind
of
my
strategy.
C
D
Do you find, when you did that, that those people — you know, like, I wasn't familiar with OpenTracing, but as the spec evolves and as new features get added — do they come back to you? Do they seek information on their own? Do they jump in the OTel Slack channels and ask questions? Or were they routing things through you, in your experience?
C
It's
kind
of
a
mixed
bag,
I
think
I
think
some
people
feel
more
comfortable
with
just
talking
to
the
person
they
know,
and
so
you
know
you
to
a
certain
extent.
I
became
a
bit
of
a
bit
of
a
router
for
questions,
but
you
know:
I
did
see
a
couple
of
people
who
were
really
into
you
know
talking
to
the
community
directly,
and
so
they
they
just
went
into
the
slide
channels
or
reach
out
to
people
directly
in
GitHub
or
whatever.
So
it
it's
a
bit
of
a
mixed
bag.
C
You
know
I
I,
think
at
some
point.
As
you
direct
people
to
specific
issues
in
GitHub
or
specific
spec
changes,
people
tend
to
get
more
comfortable
with
the
idea
that
they
can
go
there
themselves,
but
yeah.
D
Cool
and
then
maybe
one
more
follow-up,
if
you
don't
mind
like
how
long
would
you
say
like
it
took
for
those
individuals
you
worked
with
to
like
I,
don't
get
up
to
speed
whatever?
That
means,
in
your
opinion,
you
know.
Is
that,
like
a
matter
of
weeks,
was
that,
like
a
longer
period,
you
know
how?
D
C
Yeah, again, sorry to give you maybe a mixed answer, but it depended on the individuals. I think some people who were really keen on trying to learn new ways of doing things were really excited about the prospect of trying out something new, and those people got up to speed a lot faster. The people that were maybe more used to doing things a certain way took a little bit longer. And then, if I'm honest, I think some people — I was trying to get distributed tracing adopted — just never really understood what the benefits were.
C
D
Yeah, cool. No, thank you, I appreciate that, because I feel like we struggle with exactly that sometimes: they don't really understand the purpose, or they don't understand how it maybe has value outside of their individual team, like when we start connecting services or something. So I'm just trying to come up with any insight I can into improving the culture and that sort of thing. So I appreciate the response.
C
Yeah
I
think
you
know,
you
know,
I
think
the
it's
really
hard
to
get
to
take
someone
from
their
day-to-day
job,
which
you
know
most
people
are
already
kind
of
strained
on
doing
whatever
it
is
that
they're
doing
for
the
business
and
asking
them
to
do
more
by
you
know
heading
off
into
this
completely
what
a
kind
of
appears
unnecessary
aspect
of
their
jobs
right
to
implement
open
Telemetry,
and
it's
not
really
until
people
have
those
aha
moments
of
you
know
understanding
how
much
better
the
lives
will
be
in
the
future
to
that
they
can
really
wrap
their
head
around
the
benefits.
C
I
think
one
of
the
things
that
I've
really
helped
try
to
help
people
understand.
Is
you
know
how
do
you
get
started
as
quickly
as
possible
and
I?
Think
that's
one
of
the
one
of
the
main
areas
of
open
Telemetry
that
I'm
really
really
excited
about
is
all
of
the
auto
instrumentation
work,
because
I
think
that
allows
people
to
get
some
amount
of
benefit
without
a
tremendous
amount
of
investment
up
front
and
a
lot
of
the
time.
C
I
find
even
you
know,
even
if
it
only
gets
you
60
or
70
of
the
way
there.
It
tends
to
give
people
enough
of
visibility
into
what
their
systems
are
doing
through
Auto
instrumentation
that
they
can
then
like
get
excited
about
the
prospect
of
investing
more
time.
In
doing
this
work.
E
Dan
I,
don't
really
have
an
answer.
All
I
can
say
is
I
sympathize
with
you
as
someone
who,
like
codes,
even
probably
a
lot
less
than
you
know
what
you
described
in
your
role.
It
feels
weird
to
kind
of
want
to
lead
the
way
as
an
enabler
for
these
things
in
your
work,
without
necessarily
being
the
person
who's
able
to
kind
of
just
all
right.
Let
me
let
me
do
a
couple
examples
for
you.
E
F
In my previous organization, when we were trying to bring OpenTelemetry into the development teams, we faced some interesting challenges where the teams recognized the importance of OpenTelemetry, but they were constantly fighting fires, so they didn't have time to instrument their code. Which was kind of ironic, because instrumenting their code would probably help them fight those fires more effectively. And my team's job was to be that enabler, so we defined the best practices and taught teams how to instrument their code.
F
They
kept
trying
to
use
my
team
as
like.
Oh
well,
you
guys
know
this
stuff.
So
why
don't
you
instrument
our
code
for
us
which
we
had
to
push
back
really
hard
on
because,
like
we
don't
know
your
code,
so
you
have
to
instrument
your
code
because
you
know
it
best
right.
You
know
what
the
things
are,
that
you
need
to
look
for.
D
G
Yeah, what was just mentioned is something that I'm experiencing as well, and it's wider than just one company; it goes throughout the community. So sometimes I try to help a few open source projects add instrumentation to their code bases, because I wanted to have good references of what other people can do, so I could then point people to that and say, you know, this is a very well instrumented application.
G
They
didn't
actually
they
weren't
ready
for
it
right,
and
this
is
something
that
I'm
experiencing
within
the
company
within
you
know.
Other
companies
as
well
is
that
we
we
can
try
to
create
a
network
and
instrument
things
for
them,
but
if
they're
not
ready
to
have
their
clothes
printed,
it's
just
not
gonna
work.
D
Yep
I
I
could,
by
the
way
you
sound
a
little
distant,
I
I
was
able
to
hear
you,
but
just
have
I
I
I.
Think
that's
true,
like
I
feel
like
first
somehow
we
need
to
like
sell
the
like.
If
teams
maybe
recognize
the
benefit
earlier
like
it,
helps
in
all
these
avenues
like
it
helps
with
them
understanding
the
purpose
and
therefore
like
maybe
wanting
to
contribute
more
how
to
solve
the
problem.
D
So
maybe
I
don't
know
I'm
just
thinking
out
loud
here
and
maybe
like
pushing
to
show
more
like
here
are
some
examples
where
doing
this
solves
some
problems
like
if
you're
experiencing
similar
problems
and
perhaps
like
this,
would
be
a
really
good
idea
for
you.
Maybe,
like
you
know,
I,
don't
know
honing
in
on
that
kind
of
an
attack
might
might
be
beneficial.
G
That's
good
yeah,
so
I
I
just
came
with
a
performing
conversation
with
another
person
that
that
person
was
mentioning
something
exactly
like
that.
G
You
know,
and
one
thing
that
that
they're
doing
and
I
found
very
interesting
is
they
they
have
like
an
observability
group,
and
that
group
is
the
reference
group
in
terms
of
observability
and
they
they
can
watch
what
other
people
are,
what
the
whole
company
is
doing
in
terms
of
observability
and
what
what
he
mentioned
was
there
was
one
team
using
the
specific
programming
language
and
they
found
that
one
specific
metric
helped
them
recover
very
fast
from
an
outage.
G
G
Because that metric helped that team recover very fast from an outage, I see the role of the enabler here, the observability enabler, as also being to spread information: to get information out of one silo, break that silo, and bring it to the whole company, or to the whole community, for that matter.
B
As far as something the community could do with this: would maybe some documentation around real-world cases, like the one Juraci mentioned, help?
D
D
I don't want to say that what we're doing isn't working; I think it is working. It's just that progress is slower than I would maybe prefer. I think having a collection of good-news stories, or whatever you want to call it, like: "hey, I work at company X and we were able to solve this problem and reduce mean time to resolution," blah blah blah. Yeah.
D
D
You know, in my role in the observability group for my company, I go and read blogs and stuff, but I don't think everybody at my company would do that. Maybe they are reading blogs specific to, like, the .NET runtime or something, I don't know. So yeah, I think it would help; I just don't know what the best format would be.
D
F
I'm just wondering: would it be beneficial to have some sort of periodic forum, every month or every quarter, where folks from organizations that are looking to adopt OpenTelemetry practices, or OpenTelemetry in general, have a chance to interact with folks in the community, for a pointed Q&A, just to sort of ease their concerns?
F
Or,
or
would
you
all
feel
like?
This
is
too
much
of
an
overlap
of
what
this
group
already
does
I'm
just
wondering,
because
I
I
do
feel
like
just
putting
putting
folks
who
are
kind
of
like
not
really
sure
about
this
in
touch
with
with
other
folks
who
have
gone
through
this,
or
you
know,
have
some
expertise
in
in
the
area.
I
can
like
put
them
at
ease.
D
So,
oh
sorry,
I
don't
know
my
first
thought
is
like
it
does.
Have
some
overlap
with
this
group,
I
mean
I,
think
I
don't
know.
Maybe
I
would
need
to
think
about
that
a
bit
more
well.
One
thing
like
at
Rhys
mentioned,
like
she's,
recording
this
meeting
now,
which
I
think
is
good,
like
people
can
go
back
and
like
get
insight
into
the
past
which,
before
I
think
was
like
a
there's.
D
Some
good
topics
here
that
are
brought
up
that
that,
like
aren't
necessarily
easy
to
get
answers,
because
there
aren't
really
answers
like
they're,
they're,
more
gray.
If
you
will
I
don't
know,
I
would
have
to
think
about
that.
Yeah.
B
Yeah
I
would
like
to
Noodle
on
this
more
as
well,
and
that
said,
I
just
realized
well
I
realized
a
few
minutes
ago
that
I
haven't
been
time
boxing.
This
so
I
apologize,
but
it
sounds
like
we
might
be
good
on
this
topic.
Dan
was
there
anything
else
you
wanted
to
add
before
we
moved
to
the
next
one?
No.
E
Can
I
hop
in
real
quick,
just
share
one
last
thought,
of
course
this
is
kind
of
just
a
more
General
approach,
we're
trying
out
at
our
organization,
but
we're
kind
of
trying
to
drive
change
across
various
laterals
through
the
idea
of
like
a
service,
catalog
and
scorecard
for
services.
Where
various
like
subject
matter.
Experts
can
Define
like
a
set
of
standards
in
a
particular
domain
that
should
apply
for
services
and
kind
of
provide
a
rubric
for
those
things.
So
what
does
like
you
know
sufficient
or
like.
A
E
A
service,
a
grading
in
a
particular
set
of
criteria,
right
c,
a
A
plus
right
and
you
don't
have
to
be
responsible
for
implementing
what
is
like
an
a
look
like
in
your
service,
but
you're
allowed
to
you
know
you
can
define
those
standards
and
then
the
actual
service
owners
that
are
responsible
for
that
service
can
consult
that
rubric,
that
scorecard
kind
of
grade
their
own
Services
according
and
then
kind
of
refer
to
whatever
standardized
documentation
you
provide
as
a
subject
matter,
expert
for
figuring
out.
Okay,
how
do
I
take
my
service?
E
That
is
maybe
a
a
great
C
and
observability
to
like
a
a
grade,
A
right
and
so
in
that
kind
of
a
situation
you're
setting
forth
the
standards
you're
providing
a
clear
rubric
that
allows
service
owners
to
kind
of
grade
themselves
and
you're,
providing
more
information
if
they're
looking
to
kind
of
level
up
their
service.
So
this
is
kind
of
something
that
is
very
new
at
our
organization.
We're
going
to
try
to
drive
change
in
a
bunch
of
different
places.
E
Things
like
SRA
practices,
how
you
tune
your
services
and
kubernetes
code
quality
test
coverage,
observability
stuff.
So
we're
just
gonna
try
that
out
and
see
if
that
makes
it
kind
of
easier
to
gamify
making
strides
and
just
improving
services,
and
things
like
that.
So
I
don't
know
if
that's
helpful
at
all,
but
it
kind
of
separates
implementation
from
like
leading
the
way
and
improving
something.
B
D
Yeah, these are actually all mine for today. Nothing new, yeah. So, I don't know, I noticed, well, I noticed because our back end doesn't accept data that's in the future; I think it's 10 minutes in the future.
D
D
If you are experiencing this, do you notify services? Do you implement something in the Collector to understand that it's happening?
G
So
I
can
probably
talk
a
little
bit
about
how
jiggers
deals
with
that
and
the
way
that
eager
does
or
used
to
do
is
it
tries
to
detect
well
so
I
guess.
The
first
realization
is
that
clock's
queue
is
going
to
happen
and
you
just
have
to
account
for
it.
There
is
no
way
for
clocks
with
synchronous
or
synchronized,
especially
on
a
microservices
architecture,
and
one
way
of
dealing
with
that
is
and
with
younger
chance
is.
G
It
first
assumes
that
you
don't
have
asynchronous
processing,
so
one
Trace
is
very
synchronous,
so
the
the
parent
span
or
the
very
first
span,
and
at
most
when
the
the
the
the
is
the
last
span
finishes.
You
know
so
and
if
that's
not
the
case,
then
Jaeger
by
default
will
try
to
adjust
the
parent
spans
of
that
span
to
be
as
big
or
as
as
long
as
the
longest
span
that
it
has.
G
It
did
generate
a
lot
of
confusion
among
users,
especially
for
users
who
do
have
a
synchronous
processing
so
much
that
we
ended
up
doing
a
flag
to
disable
this
behavior
on
the
eager
side
on
the
eager,
collector
side
and
I.
Think
we
at
some
point
thought
about
not
having
that
that
behavior
by
default,
so
only
users
who
would
know
what
they're
doing
would
then
enable
the
clock.
G
I can't remember what the feature name is, but they wouldn't have this clock-fixing feature enabled. On the OpenTelemetry Collector side, we're not doing anything that I know of; perhaps Alex can share if he knows whether we are doing something like that there. But, I guess:
G
The
short
answer
is
the
owner
of
the
of
the
the
system
that
generates
the
Telemetry
data
is
in
a
way
better
position
to
understand
the
clock.
Drift
then
any
any
general
purpose
tool
like
The
Collector.
So
the
collector
cannot
in
a
it's,
not
in
a
very
good
position
to
detect
and
and
fix
this
issue.
For
you.
D
To
me
at
least-
and
maybe
this
is
not
true
but
like
there's
sort
of
like
two
use
cases
here,
there's
like
like
a
low
or
mild
kind
of
clock-
skew
where
it's
like
I,
don't
know
a
couple
seconds
or
something
and
yeah
sure.
Maybe
it
makes
the
the
tracing
Vis
like
look
a
little
awkward
or
something,
and
then
there's
like
like
the
really
bad.
D
You
know
ones
that
are
like
in
this
case,
like
I
I
know
they
were
like
10
minutes
off.
So
I
was
thinking
of
like
creating
a
processor
that
detects
some.
You
know
incoming
data
point
that
is
like
some,
some
like
distance
I,
don't
know.
The
right
word
is
some.
You
know
if
it's
more
than
five
minutes
or
some
threshold,
like
just
I,
don't
know
creating
create
a
data
point
that
like
captures,
you
know
the
service
name
or
something,
and
and
like
you
know,
just
just
at
least
like
identifies
it
like
hey.
D
This
is
a
this
is
something
that's
like
way
out
of
sync
with
what
we
consider
real
time.
I
don't
know
if
like
something
like
that
would
be
useful
right
now,
our
back
end,
there's
no
good
way
to
like
correlate
like
the
air
that
our
back
end
throws
with
like
the
data
that's
coming
through.
So
I,
don't
even
know
like
what
data
is
missing
unless
a
service
order
to
reach
out
to
me
it
could
be
specific
to
my
back
end,
maybe
my
use
case
I,
don't
know
so
I
was
just
like.
D
C
I think it's definitely an interesting use case. You could have a processor that just detects things that are out in the future. I would guess that the hard part is, you still wouldn't catch all of your clock drifts; I guess you would catch the very obvious ones that you kind of know about, that you've seen in the past, but...
D
Yeah
so
I
I
guess
then
yeah.
That's
like
I
guess
because
in
my
case
I'm
like
dropping
data,
because
the
back
end's
rejecting
it.
It
would
be
good
to
know
that.
But
if
there
was
like
say
it
was
a
10
minute
threshold
and
it
was
I,
don't
know
three
minutes
delayed
and
I
didn't
want
to
detect
that
or
I
couldn't
detect,
that
that
would
still
cause
issues
with
the
interpretation
of
the
data.
But
it
would
be
like
a
secondary
concern,
like
maybe
I
should
just
Shore
up
like
the
data
loss.
D
G
Are you talking specifically about metrics, or are you talking about any telemetry data type?
D
G
Yeah, so for metrics, there are some metric stores — every single Prometheus-based storage, for example — that will block new metrics that are from the past; not from the future, I suppose, but from the past.
G
G
So by looking at the spans of a trace, I guess it would not be too hard to make a processor that detects that. Now, for logs, I don't know; I wouldn't even know if we would want to have something like that for logs, because you generate such a huge amount of logs on one specific server, and supposedly all of the logs on that server have the same skewed timestamps, or things are logged at the same time. So perhaps, yeah, I don't know.
C
Yeah
I
guess
my
my
take.
There
would
be
just
do
the
simplest
thing
that
would
work
which
in
this
case,
if
the
log
time
stamps
are
still
in
the
future,
you
could
you
could
flag
it.
You
know
you
could
create
a
metric
that
would
at
least
capture
that
information.
But
again
this
would
only
capture
clock
drift.
That's
in
the
future!
Nothing
nothing
clear
would
happen
if
it
was
like
drifting
in
the
past.
B
D
Yeah, so this is also mine, and I don't really know how to phrase this in a simple way. Maybe I'll tell you a really weird use case that I have. So, I have data flowing; let's say I have a collector with an OTLP receiver, I have some processors configured, and then I have a back end, let's call it backend A. And then I have another pipeline, also with an OTLP receiver.
D
Well,
let's
just
say
we're
using
metrics
here
some
processors
and
back
end
B,
so
I
want
to
like
have
so.
The
same
set
of
data
is
flowing
into
both
of
these
pipelines
and
then
for
like
pipeline.
One
I
want
say
just
to
have
histograms
go
to
backend
a
and
for
pipeline.
Two
I
wanted
to
have
non-histograms
go
to
backend
B.
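A hedged sketch of what that split could look like as Collector configuration, assuming the contrib filter processor with OTTL metric conditions (the exporter names and endpoints are placeholders, not from the discussion). Note the filter processor drops data matching its conditions, so "keep histograms" is expressed as dropping everything that is not a histogram:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  # Pipeline one: drop everything that is NOT a histogram.
  filter/keep-histograms:
    metrics:
      metric:
        - 'type != METRIC_DATA_TYPE_HISTOGRAM'
  # Pipeline two: drop histograms.
  filter/drop-histograms:
    metrics:
      metric:
        - 'type == METRIC_DATA_TYPE_HISTOGRAM'

exporters:
  otlp/backend-a:
    endpoint: backend-a.example:4317
  otlp/backend-b:
    endpoint: backend-b.example:4317

service:
  pipelines:
    metrics/histograms:
      receivers: [otlp]
      processors: [filter/keep-histograms]
      exporters: [otlp/backend-a]
    metrics/non-histograms:
      receivers: [otlp]
      processors: [filter/drop-histograms]
      exporters: [otlp/backend-b]
```

Because both pipelines reference the same `otlp` receiver ID, the Collector instantiates it once and fans the data out to both pipelines, which relates directly to the efficiency question raised next.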
D
Is
configuring
two
Pipelines
like
that?
The
right
thing
to
do
I
was
trying
to
like
it
works
like
I.
Have
this
configured
it
works?
I
was
thinking
about
like?
Is
this
the
most
CPU
or
memory
efficient
in
terms
of
like
I'm,
not
I'm,
not
I?
Guess,
like
the
fan
and
fan
out
of
how,
like
the
receivers
and
and
stuff
work
in
the
in
the
in
the
exporters
work
I
was
trying
to
like
you,
know,
understand
if
I'm
doing
it
in
like
the
optimal
way.
Does
that
question
make
sense
at
all?
D
Is
that
a
use
case?
Anyone
else
has
I
hope
that
made
sense.
G
G
The Collector now has four types of components, right: extensions, and then pipeline-specific components like receivers, processors, and exporters. And there's going to be a fifth one, which is connectors. Connectors can act as receivers and exporters. So you have one pipeline that receives OTLP, for instance, then there is some processing, and you end up with an exporter; and the exporter for that pipeline is going to be a connector. The connector then connects with another pipeline, acting as a receiver, right?
G
So
what
you
can
do
is
you
can
you
can
translate
signals,
so
you
can
translate
its
pens
to
metrics,
for
instance,
but
in
your
case
here
it
would
work
in
a
way
that
you
have
like
one
receiver
and
then
two
exporters
or
two
connectors
as
part
of
the
same
pipeline,
one
that
is
going
to
connect
this
data
or
this
this
Pipeline
with
one
that
is
histogram
specific
and
one
that
is
going
to
connect
with
a
non-histogram
specific
with
the
other
one.
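As an illustration of the spans-to-metrics translation mentioned here, a sketch using the contrib `spanmetrics` connector (backend names and endpoints are placeholders). The connector is listed as an exporter in the traces pipeline and as a receiver in the metrics pipeline:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

connectors:
  # Acts as a trace exporter in one pipeline and a metrics receiver in another.
  spanmetrics:

exporters:
  otlp/traces-backend:
    endpoint: traces.example:4317
  otlp/metrics-backend:
    endpoint: metrics.example:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/traces-backend, spanmetrics]
    metrics:
      receivers: [spanmetrics]
      exporters: [otlp/metrics-backend]
```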
G
And then each one of those connectors would be able to filter what is interesting to be passed through that connector and block everything else. So you end up with three pipelines: one that receives the raw data, one that deals with histogram data, and one that deals with the other data.
G
So
this
is
what
we
have
planned
for
the
future,
so
the
basic
building
blocks
for
that
are
are
merged
already.
So
from
what
I
remember
so
Alex
can
correct
me
if
I'm
wrong,
but
the
basic
building
blocks
are
there
and
it's
it's
we're
all
looking
forward
to
seeing
connectors
being
ready
to
be
used,
because
we
have
so
many
use
cases
in
mind
to
implement
now
what
what
we
have
today,
for
that
specific
case
is
the
routing
processor.
G
So
we
have
a
routing
processor
and
it
doesn't
really
work
the
way
that
you
that
you
that
you
want
here,
but
it
it
works
in
a
very
similar
way.
So
what
we
do
is
we
route
the
data
points
based
on
their
characteristics
and
we
rush
them
to
a
specific
exporters,
but
we
do
that
by
making
another
network
connection.
So
it's
very
inefficient,
but
it
works.
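A minimal sketch of the contrib routing processor, assuming routing on a resource attribute (the attribute name, values, and exporter names are made up for illustration). As noted in the discussion, it routes on attribute values rather than on metric type, which is why it doesn't quite fit the histogram split described above:

```yaml
processors:
  routing:
    from_attribute: tenant          # resource attribute to route on
    default_exporters: [otlp/default]
    table:
      - value: acme
        exporters: [otlp/acme]
      - value: globex
        exporters: [otlp/globex]
```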
D
With
the
router
I'll
obviously
look
into
this,
thank
you,
I
didn't
know
about
any
of
these
would,
and
maybe
this
is
specific
to
use
case,
but
what
would
be
the
advantage
of
like
the
router
solution
versus
like
having
a
filter
processor
that,
like
you
know,
having
the
two
pipelines
and
having
the
filter
processor
just
like
get
rid
of
a
you
know,
part
of
like
the
part
A
of
the
set
or
Part
B
of
the
set.
Would
there
be
a
advantage
to
either
one
of
those.
D
That's
what
I'm
doing
right
now
and
it's
working
there
were
some
bugs
in
the
filter
processor
that
have
been
resolved
so
now
it
seems
to
be
working
perfectly
as
far
as
I
know
it
just
I
I,
just
felt
like
I
was
because
I
had
like
two
receivers
receiving
the
same
data
like
I
was
doubling
up
on
the
memory
that
you
know
my
collector
was
using.
I
haven't
done
any
profiling,
but
maybe
that's
something.
I
should
do,
but
I'll
definitely
look
at
the
at
the
routing
processor.
G
Actually,
take
a
look
at
the
connectors
because
I
I've
made
a
very
similar
question
to
the
to
the
author
of
The
connectors
on
the
pr
that
they
introduced
and
I
think
I.
Think
indeed
the
the
answer
is
that
the
connectors
using
connectors?
You
then
only
have
one
receiver
and
only
only
on
the
costume,
only
one
copy
of
the
of
the
data
point
in
memory.
It
only
keeps
one
copy
of
the
data
point
to
memory.
Let
me
try
to
find
a
PR
and
you
can
get
more
information
from
there.
Okay,.
B
B
D
I asked this a couple of these sessions ago; I'm trying to find it. I had opened a GitHub issue. It was very large in scope, so I don't think I have any responses on it. It was about understanding more about... I'll just pop this in the Zoom chat.
D
D
D
Like, insight into when I should be, I guess, horizontally scaling my pods versus when I should be modifying the configuration of an individual pod, or an individual collector, if that makes sense. What criteria do I use to determine one versus the other?
D
It
so
like
I,
have
it
like
an
HPA
configured
so
like,
as
my
traffic
increases,
you
know
presumably
I
spin
up
new
pods,
but
why
are
there
settings
like
the
number
of
consumers
like
should
I
be?
Is
that
is
that
purely
for
non?
You
know
deployments
that
can't
automatically
scale.
G
I'm
sorry,
I
I
missed
the
the
very
beginning
of
the
question,
because
I
was
looking
for
for
the
issue.
I
found
the
issue
and
I
linked
here
and
I,
linked
directly
to
the
comment
that
I've
made.
That
is
close
to
your
question.
G
You
can
read,
then
a
dance
and
search
that,
but
coming
coming
back
to
this
question
here,
I'm
and
forgive
me
if
I'm,
if
I'm
missing
soft
some
context
from
the
very
beginning
of
the
question,
but
the
way
that
I
that
I
would
I
would
say
it
is
the
way
that
I
would
tackle
is
if
you
only
have
stateless
connect
or
or
components
in
your
collector,
you
can
then
just
use
a
specific
metric
and
scale
up
or
scale.
Adding
more
more
replicas
would
be
the
solution.
G
The
other
answer
to
that
is,
you
know
your
workload,
and
you
would
probably
have
to
think
about
that,
not
only
as
in
the
scaling
to
attend
the
demands
like
like
the
the
load
that
I
have,
but
also
scaling
to
improve
my
high
availability.
So
if
a
node
goes
down
what
am
I
going
to
what
what
are
the
effects
across
my
observability
pipeline?
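A minimal sketch of the kind of HPA mentioned earlier in the question, assuming the collector runs as a Deployment named `otel-collector` and scales on CPU (names and thresholds here are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: otel-collector
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: otel-collector
  minReplicas: 3        # baseline capacity, sized for expected spikes
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```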
G
So
how
would
I
want
to
isolate
failures
on
my
observability
pipeline?
So
is
it
better
to
have
one
per
namespace?
Is
that
and
is
that
good
enough
or
or
should
I
have
a
a
one
per
tenant,
or
perhaps
a
a
even
within
tenants
in
my
cluster?
Perhaps
I
should
have
different
layers
of
collectors,
because
you
know
failing
one
specific
branch
of
your
collection
of
collectors
isn't
not
going
to
be
so
critical
as
something
that
is
going
to
fail.
Only
you
know.
G
If
you
have
everything
going
through
one
collector,
then
it's
really
going
to
be
a
an
epic
failure
at
one
point,
I
guess:
there's
no
easy
answer
to
that.
Yeah.
D
D
No one-size-fits-all, I guess. I was just trying to think it through. So obviously I have some baseline throughput that I want to serve — I get a certain amount of data points per second or whatever as a base — and then I get spikes, right? So I have to have a minimum deployment that can handle those spikes, yeah.
D
Then
there
are
like
some
connections
that
are
I,
think
like
like
have
like
sticky
sessions,
so
I
need
to
be
careful
about
like
if
one
of
those
things
like
bursts,
real
high,
then
like
a
given,
you
know
a
given
pod
or
given
collector.
If
you
will
will
we'll
like
I,
don't
know
have
to
be
able
to
to
handle
that
right.
G
So
in
most
of
the
cases
you're
going
to
use
grpc
for
the
connection
between
collectors
or
between
your
workload
and
a
collector
and
the
good
thing
about
sharpc
is
when
a
connection
fails.
It
will
attempt
to
connect
to
another
to
another
back
end
automatically
right.
So
if
you
have
your
deployment
done
correctly
and
correctly
here
means
on
kubernetes,
you
wouldn't
be
having
a
a
headless
service
and
your
clients
would
be
then
connecting
to
the
Headless
service
so
that
the
the
client
has
a
list
of
known
backends
up
front.
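A sketch of the headless Service being described, assuming collector pods labeled `app: otel-collector` and the standard OTLP gRPC port:

```yaml
# A headless Service (clusterIP: None) returns the individual pod IPs
# from DNS instead of a single virtual IP, so a gRPC client can
# discover all collector pods and fail over between them.
apiVersion: v1
kind: Service
metadata:
  name: otel-collector-headless
spec:
  clusterIP: None
  selector:
    app: otel-collector
  ports:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
```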
G
So whenever one of them fails, it just fails over to the next one, and that's one way of dealing with the sticky session problem. Most of the connections are long-lived when we talk about the observability pipeline; they are gRPC connections, and gRPC connections are long-lived by design. Which also means that if you have, say, only three collectors at the very beginning of the life cycle of your cluster, and then you keep adding more collectors but you don't increase the number of clients, you're not going to see any effect until the clients reconnect due to some failure, right? So you're not going to see any advantage at the moment you scale up.
G
You're only going to see advantages the moment things start to fail, and then you start seeing the other collectors receive some traffic, or when you have new clients: new clients are going to be load-balanced to the new nodes, so then you start seeing that. So, I guess, one other thing you may consider is sharding your pipeline based on the type of processing that it's doing, all right?
G
So
one
pattern
that
we've
seen
before
and
that
I've
recorded
at
some
point
is
having
one
pipeline
per
data
point.
So
you
have
one
one
Matrix
pipeline,
you
have
one
logs
Pipeline
and
you
have
one
traces
pipeline
because
at
types
of
workloads
or
you
know,
the
workload
for
metrics
is
way
different
than
the
workload
for
for
a
traces.
G
The
way
that
they
work
is
really
different.
So
you
might
want
to
split
by
that,
and
you
also
might
want
to
split
by
the
type
of
processing
that
you
do
on
on
the
Telemetry
data
that
has
been
generated
by
your
workload.
So
if
you
have
more
pii
information
being
generated
by
nodes
or
by
pods
on
one
specific
namespace,
then
it
makes
sense
to
have
one
collector
on
that
namespace
with
a
specific
processor.
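The per-signal split described above can be sketched as Collector configuration like this (exporter names are placeholders; a stricter split would run each pipeline in its own collector deployment):

```yaml
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/metrics-backend]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/logs-backend]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/traces-backend]
```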
D
G
I'm sorry — consumers, as in... yeah.
D
G
Okay, so, sorry, I see now: it's about the sending queue.
G
This
is
a
blocking
queue
and
the
way
that
it
works
is
whenever
things
are
being
sent
from
from
one
place
to
another,
from
from
an
exporter
and
and
to
the
back
end,
it's
placed
on
a
queue,
and
then
things
are
are
picked
from
the
queue
like
from
like
workers,
and
this
is
you
know,
number
of
consumers
is
basically
the
number
of
workers
that
are
picking
things
up
from
the
queue
now
I'm,
not
the
author
of
this
component.
G
Here
of
this
Helper,
but
I
we
had
a
similar
component
on
Jager
and
I
can
tell
you
by
experience
that
we
don't
actually
know
what
is
the
optimal
value
for
that.
You
have
to
find
that
out
by
yourself
a
and
it's
complicated,
because
in
in
Java,
for
instance,
the
number
of
workers
would
be
typically
closely
related
to
the
number
of
processors
in
a
specific,
like
CPU
processors,
in
a
machine
ACP
using
a
machine
now
with
Goal.
It's
not
like
that,
so
one
one
worker
thread
is
not
a
thread.
G
It's
a
good
routine,
which
is
not
related
to
a
lowest
threat,
a
Linux
thread,
so
there's
no
relation
between
or
no
very
explicit
relation
between
a
number
of
consumers
with
the
number
of
of
processors
on
a
specific
laptop
or
a
machine
server
metal.
G
So
you
have
to
play
with
that
number
I
I
think
I
kind
of
played
with
this
information
on
on
an
article
recently,
and
the
idea
is
that
the
more,
if
you
have
backhands
that
are
are
taking
very
long
to
answer
to
you
having
more
consumers
here,
means
that
you
have
more
HTTP
connections
with
the
backend
that
you're
sending
data
to,
and
it
might
be
a
problem
with
the
back
end
that
you're
you're
I
mean
if
the
backend
is
having
trouble
with
a
specific
amount
of
connections
and
you're,
adding
more
connections
you're,
adding
more
load
to
a
server
that
is
already
overloaded.
G
In
that
case,
you
want
to
decrease
the
number
of
consumers,
but
what
it
means
is.
It
is
processing
less
data
simultaneously
concurrently.
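The setting being discussed lives in the exporter's `sending_queue` block; a hedged sketch, with placeholder endpoint and sizes to be tuned per workload as described:

```yaml
exporters:
  otlp:
    endpoint: backend.example:4317
    sending_queue:
      enabled: true
      num_consumers: 10   # workers draining the queue concurrently
      queue_size: 5000    # batches held before new data is rejected
```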
D
Awesome,
that's
like
super
helpful,
I
I
think
the
main
takeaway
for
me
is
like
I
should
probably
be
doing
some
more
specific
testing
around.
Like
my
expected,
you
know
like
how
much
data
I'm
receiving
and
expecting
to
send
to
my
back
ends,
and
just
you
know
trying
to
optimize
for
my
specific
use
case
rather
than
like
having
a
generic,
this
type
of
situation.
You
know
you
should
set
it
to
X,
Y
or
Z.
Absolutely.
G
B
All right, oh yeah. Thank you all. We are a few minutes over, so I want to be respectful of everyone's time. Dan, thank you so much for all your questions. Alex, Juraci, it was great to have you on. And if anyone has any questions about these sessions, or anything else they are curious about: myself, Adriana, as well as Rin are all on the End User Working Group, so feel free to reach out to any of us on CNCF Slack. And, oh, I have to drop real quick, but Alex...