From YouTube: 2022-10-10 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
A
By the way, what time do you keep joining from? I...
B
Are you located in Austria? Okay.
C
Yeah, Julie, I know it's funny you wear that shirt, because I was talking with a colleague last week about how we should really be implementing OpenFeature for our feature flag service, and that's maybe a goal for the future. Let's focus on version 1.0 right now, but I really think that's something we should think about.
C
You know, whatever that means. I think, like, someone just becomes a backend, but adopting the OpenFeature specification, yeah, you know.
A
All things considered, I think we didn't use OpenFeature because the flag service was implemented early on, and early on we didn't have anything yet. But I think there were some discussions with Tristan at the time he was implementing it. Yeah, okay.
B
Yeah, definitely, we...
D
A second, so, let's start; these are the only open items I'm really aware of. I think we have a couple of Python PRs closing this week: I think Miko put one together, and then Adriana put one together around the auto-instrumentation. And then I know we have to do the default Grafana issues as well, so I want to spend some time on that today. Does anybody else have anything top of mind, or any other items we potentially need to close? Our documentation can always be improved.
D
Do we want every service, though, or just every language? I think we kind of didn't fully decide on that, because there's not really a way today; we're calling it out by language, and then we had a discussion in a PR about how we...
A
...wanted it. Yeah, I remember seeing that somewhere as well. I don't remember when it was, but...
D
Yeah, I think some of the pushback was like: what if we have a Python service that calls, like, C++? And I was like, well, I guess that's good to plan for, but we're pretty far away from that. So what does our experience look like today in terms of giving customers, or users, a good signal?
D
Yeah, yeah, I agree. I mean, the people who were pushing back on it aren't on the call today, so Austin and Riley can't share their perspective. Maybe we can open up the PR again and go into it a little more in depth on GitHub itself. Ugh, GitHub's being extremely slow today, yeah.
D
So, let's see: the Python PR, I think, is pretty much good, and then I went into maybe...
C
Is there an OTEP or anything like that for custom metric naming? Like, is there a schema or standard around that, something already well known and established, that we should follow?
B
The research I did was basically reading the docs for the different languages on how to create manual metrics, and there wasn't much. I think Python was the only one really documenting it, and in some examples I saw they were using underscores instead of dots in the names, but not much, so yeah, not much there. You can see the underscores in the metrics, and this seemed to be one of the naming practices, but...
E
Just from my perspective, if you build metrics, even custom ones: if you do, let's say, I don't know, a number of products, and we have two services dealing with the product, it's better to have the app as a dimension, a label, on the metrics. Then we don't store extra stuff in your time-series database.
C
I think there's a specification here, but for metrics we should definitely come up with a standard on what we're doing. We come up with a schema and follow it across everything.
D
Yeah, I'm curious about all these runtime metrics too, like if we have the .NET one, or the cart service, or, you know, the checkout service dashboards. I'm not sure how repeatable some of this information is based on what each runtime gives you; or, I guess, there are slightly different naming conventions for each one.
D
It looks like they're just going based off the language, right? I guess maybe they'll have an attribute underneath, like the actual metric; I would have to look into it. But the naming, at least, seems to just go by language. So we have, like, the .NET runtime, and then we have the runtime, kind of, etc. I wish Grafana were able to scroll better.
C
I'm not looking into anything language-specific, I mean applications: like, I'm counting product requests, or product IDs, or product counts, right? That's something specific to my app that has nothing to do with the language or anything. You could add a language label to it if we feel like it, I think that's fine, but the metric name itself, because it's a custom metric, I want to namespace. All my custom metrics under something: app underscore, custom underscore... I like app underscore. It makes a lot of sense.
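The convention being discussed here, namespacing custom metrics under an `app_` prefix while carrying the owning service as a label rather than baking it into the metric name, could be sketched roughly like this. This is an illustration only; the helper and class names are hypothetical, not anything from the demo repository or the OpenTelemetry SDK.

```python
# Hypothetical sketch of the naming convention discussed above:
# custom metrics live under an "app_" prefix with underscores
# (Prometheus-style), and the owning service is a label (dimension).

def custom_metric_name(name: str) -> str:
    """Namespace a custom metric under app_, replacing dots with underscores."""
    return "app_" + name.replace(".", "_").replace("-", "_")

class Counter:
    """Minimal counter keyed by label sets, standing in for a real metrics SDK."""
    def __init__(self, name: str):
        self.name = custom_metric_name(name)
        self.values = {}

    def add(self, amount: int = 1, **labels):
        key = tuple(sorted(labels.items()))
        self.values[key] = self.values.get(key, 0) + amount

# Two services increment the SAME metric, distinguished by a "service"
# label, so the backend stores one metric family instead of two.
requests = Counter("product.requests")
requests.add(service="cartservice")
requests.add(service="checkoutservice")
requests.add(service="cartservice")

print(requests.name)  # app_product_requests
```

The point of the label approach, as E noted above, is that the time-series database then holds one `app_product_requests` family instead of one metric name per service.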
D
Well, I think we're okay giving it another week, but it would have to be an essential item. So I figure we'll cut a version, I'm open to discussion of course, version 0.6 this Friday, and then we'll just have a week of bug fixes and things like that. If we need to slide one feature into the bug-fix week, I think I'm personally okay with it, but open to a discussion on it.
C
This would need a change to the feature flag service as well, just to add the feature flag as part of its startup, and we would then have to have logic to check for the existence of that feature flag. I think what we said is we're just going to fill up a cache and keep on filling it up. Yeah, yeah.
C
We're kind of also going to be dependent on the Helm chart, and I think this should be done for 1.0 anyway, but it's to specify resource limits. I don't know if Tyler's on today, over at Honeycomb, so...
C
A lot of moving parts to make this work, and for Docker we have to specify a resource limit as well, with a restart policy for Docker Compose. So, I'm just gonna raise my hand: it feels risky. Yeah, a lot of moving parts, but, well.
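The Docker Compose changes being discussed, a memory limit plus a restart policy per service, would look roughly like the fragment below. The service name and the 300 MB figure are just examples pulled from later in this conversation, not the demo's actual values.

```yaml
# Illustrative Compose fragment only; service name and limit are examples.
# mem_limit caps the container's memory; the restart policy brings the
# service back up if the limit is hit and the container is killed.
services:
  cartservice:
    mem_limit: 300mb
    restart: unless-stopped
```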
D
I guess the question is: are we okay just cutting a hero scenario and having good basic instrumentation support, but not necessarily the guided experience we're looking for? It's a question, but just based on what I was looking at in Grafana, I think it's going to be tough to build generic dashboards that flip between all the data we have out of the box. The big thing is we do need some sort of default dashboard, even if we don't have a default scenario it's being pointed to.
D
Yeah, I think we have HTTP and gRPC, and then of course the span metrics and things like that, yeah. So we could do something basic, but yeah, unclear what it would be, or no.
C
...like to, yeah, I have a feeling we might break more things. I'm all for us creating a whole separate branch, like a tag branch, and starting to work on it right away, and we merge PRs into that other branch until we have it all fleshed out, and then we bring that into main. But I'm just a little fearful, this close to the end, of us making all these changes.
A
We could try to work on this branch, and if we get it, we merge; if not, yeah, we just continue, and I think...
D
The resource limits, I still think we should have those on the Docker and the Helm chart, even if we're not doing anything with the metric scenario or something like that. I feel like it's just kind of good practice. But do you think that's reasonable, I guess, before...
C
Before I leave: I think the Helm resource limits might already be in place. Just let me check real fast; I thought there was a PR, yeah.
E
They've been defined, for sure. It's just that if you have limited resources, I mean, if you want to deploy this environment in your cluster, the app itself requests 100 millicores per service. Basically it eats the namespace: after the 10 to 12 services plus the Collector, it takes quite a lot of the resources. So that's why I thought maybe we could figure out how to shrink it down, just to make it work for a couple of users and avoid requesting lots of CPU and memory from the end user's perspective.
C
I think our defaults should have this all pretty short and tight on memory. I posted a table, I think in our Slack channel, which showed there's actually a wide range across them.
C
I think it makes sense that, you know, for reasons like "this thing's kind of a pig," we reduce how much resources it requests and limits itself to on startup, so that you could run it on a modest cluster. I think at first we say, hey, you know: do we take a stab at 300 megs for everything, or do we try to tailor the limits for each service and keep it a little bit tighter?
C
My goodness, and Jaeger keeps on going up; I think that's a runaway. We need to really figure that one out, but 300 megs sounds perfectly reasonable to shoot for, except for, I think, Postgres and Jaeger. Maybe give those a little bit more to start, and then we tailor it down as we go.
C
I can probably get that set up. All right, I've got it running somewhere already in EKS. I'm gonna go merge and make it the latest build, and we'll just run it, look at the data, and I'll post the data on Friday, and we'll figure it out from there. Yeah, so in the meantime I'm also gonna figure out real quickly if there's a way to hold down Jaeger, because Jaeger just kept on climbing; when I last looked at it, it had gotten to six gigs, and I was like, wow.
D
I don't think we have any sampling in place, either, so definitely something to consider at some point. I know we talked about maybe adding sampling in the Collector or doing it directly in the SDKs, but yeah, one way to...
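The Collector-side option mentioned here could be sketched as below. The `probabilistic_sampler` processor is a real collector-contrib component, but the 25% rate and the pipeline wiring are arbitrary examples, not anything the group decided on.

```yaml
# Sketch only: head sampling in the Collector via the contrib
# probabilistic_sampler processor. The percentage is an example.
processors:
  probabilistic_sampler:
    sampling_percentage: 25
  batch: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [otlp]
```

Sampling in the SDKs instead would move this decision into each service, which is the trade-off being weighed in the conversation.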
D
Yeah, so going back to Grafana, though: I'm not really a Grafana expert, so I'll probably need a little bit of help here while I get more familiar with it. We have a ton of information kind of out of the box, we have no dashboards really built, and we also need to create some sort of default experience here. So I'm not sure if anyone is particularly passionate about or interested in Grafana, but we could definitely use some help. And then I'll be curious...
C
We should definitely provide an OpenTelemetry Collector dashboard. Yeah, I think that is, raising my hand here, speaking with prospects and customers, actually a thing people want. So I think that would provide value across the board for a wide audience, and perhaps even invite others to help make the perfect version of it.
C
Think retries, more from a vendor perspective, because you get rate-limited, you know; when your backend starts getting in bad shape, you want to make sure that your exporter is healthy, or your export queues are healthy. It's definitely something I've heard about. People would also want to know, just, you know: how many spans per second am I running through this thing, or traces per second, metrics per second, whatever it is.
A
Yeah, there is a Collector dashboard available already. I'm just, finally, give me a second, see.
A
Yeah, that's like, and I can help there, or do that. On the spans-per-second, metrics-per-second measures: that would depend on whether we are collecting those in the Collector. Are we getting spans per second in...
D
In the Collector, once we're sending spans, we actually get a fair amount of information. I don't have an exhaustive list off the top of my head, but yeah, I think sent spans, accepted spans, and then potentially one more, refused spans, as well. I guess that's potentially, okay, yeah, one...
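The counters D is recalling correspond to the Collector's own self-metrics, which it can expose on a Prometheus endpoint. The queries below show how the spans-per-second figures asked about above could be derived from them; the metric names are the Collector's standard self-metrics, but the exact labels available depend on the Collector version and configuration.

```promql
# Spans per second flowing through the Collector, from its self-metrics:
rate(otelcol_receiver_accepted_spans[1m])
rate(otelcol_receiver_refused_spans[1m])
rate(otelcol_exporter_sent_spans[1m])
rate(otelcol_exporter_send_failed_spans[1m])
```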
C
Awesome. Anything around the batch processor, retries, and the queue, I think, would also be important to understand.
D
Let's see what the default Collector dashboard gives us; it's probably going to be a pretty good starting point with just some minimal changes on top of it. Is there anything else we want to add out of the box? I mean, I feel like we should potentially have some protocol-related stuff, but the service coverage is so sparse that it wouldn't even be generic at that point; it would just be, like, an HTTP dashboard for the cart service or something like that.
E
Yeah, would it be possible to make, like, a generic service screen? Then, with Grafana variables and filters, you can just select the service you want and the metrics will be updated.
E
Of course there are specific services producing their own metrics, so we'd have to come up with a dashboard for custom metrics, I don't know, but I think otherwise, yeah, the user can then just browse to the right service or pod he wants to look at.
A
What's the Google term, the four golden signals? So a latency, traffic, errors, saturation type of dashboard, and then you just choose your service from a dropdown.
D
Yeah, that would be great. So, speaking of the metrics you'd probably want to spotlight there: latency we'd probably just get from the latency metrics; I'm not sure if we want to bring in any protocol-specific information; retries and failures and queue size I think we have. Okay, yeah, that would be...
D
This probably has the widest coverage of what we're getting, because I think it's based on the Collector.
C
We have one for traces; we should definitely create the same thing for metric names as we go forward, and I think we'll probably end up prefixing everything with app underscore. That's what we decided here.
A
CJ, you're actually right: even though I agree with Henrik on his approach with labels, the spec does say you should use some hierarchy. I put a link in the chat there as well; actually, that link's a bit further down, but higher up in that document it does talk about having a hierarchy of naming for a system, OS, applications, etc.
D
We'll improve our metric documentation, make sure we're capturing any out-of-the-box metrics, and also kind of route some of the default metrics we're producing back to the various services. We'll get the Collector dashboard example from Angus and see how that fits our scenarios. Does anyone want to take a look at the generic service dashboard, and talk to CJ about that too? I think he has some interest, but if anyone on the call does, feel free to call out now.
A
I don't have much experience with Grafana and Prometheus, so...
D
So I think we have the Helm chart resource stuff figured out. Is there anything specific on the Docker Compose file that needs to be done? I think, Pierre, for the Helm chart you're just gonna run it on the EKS cluster and kind of give us some baseline numbers there.
C
Yeah, I'm just going to run it on the EKS cluster that we run. I don't think the load is anything too significant, either; it's like 10 users, and I think there's a 10-second wait between them, so it's maybe one request per second, which isn't bad. But I'll monitor it real closely and I'll produce charts and graphs. I know I have some fancy Honeycomb metric-collection things running on that cluster, so we'll get it all figured out.
C
Well, right now we have it per service, and we have three outstanding services, kind of, I guess: the load generator, yeah. The only odd part of that would be baggage. I think for now it makes sense to just keep it per service, because we're almost done with that, and then if we go per language it'll just be a merging of those files. I do think we should go per language, but that would be step 1.1.
D
Yeah, yeah, we can cover that in the PR. Okay, so I guess the answer is we do want to cover every service with some sort of documentation. So, Oscar, we have a request: if you wouldn't mind documenting the front end like the other services. We kind of have just a default doc layout, and we'll probably get it updated with some cleanup, probably this week or next week, but if you could just produce a similar document for the front-end instrumentation, that would be fantastic.
D
Yeah, just, yeah, it'll be pretty much everything covered in all the other docs too, so it should be a pretty easy one-to-one, just making it relevant to the JavaScript or TypeScript.
C
Yeah, I think the front end's got a unique part to it, though, where there are two tracers: one for the front end and one for the actual Node.js part itself, yeah.
A
Yeah, I think that's also nice, and I would also like to see the Tracetest one here, because in the blog post from Oscar they have a chart there. So I think it would be cool here, yeah, just my two cents. And of course the one from Dynatrace that I'm missing; I haven't pushed any code for a couple of weeks already.
D
Cool, I think that's everything I had, at least that I wanted to cover. This is now, I guess, an open forum: does anybody have comments, questions, concerns? Yeah.
F
I have one comment. When instrumenting the front end, so, the browser side of the front end, the infrastructure had to be opened up to enable browser-side requests to the Collector, right? For that part, in the Docker Compose setup, I just basically allowed the connection to the OpenTelemetry Collector.
B
I like the little prompt to use kubectl proxying when you run the Helm chart. I think it would be nice to document the process of just changing those services to LoadBalancers, because it's pretty straightforward and it might be useful to people. Like, one of the first things I do when I spin it up is change Jaeger into a LoadBalancer.
C
I created a couple of additional ones myself, but yeah, it would probably make good sense for us to document that, probably in our "deploying to Kubernetes" docs: hey, look, you can do all these kubectl port-forwards, but, you know, either we... we have the Helm chart; can you set the service type in the Helm chart?
C
I think there are two parts to that. To expose something in Kubernetes externally, you need the infrastructure to provide it: either you need something like a MetalLB load-balancer controller, or the ELB controllers on EKS, or ALB controllers for Ingress routes. So you need some kind of infra to make it work.
C
So I think we have to caveat it: if you have the infra to do this, then add these options to the Helm chart and walk away. Yeah.
C
For EKS you have to go NodePort if you want to use an Ingress. Fun fact, yeah, yeah, so there you go.
C
So, we should document how to use this in EKS, and I'm going to double-check here that the Helm chart itself has the facilities for us to change a service type. If not, then I'm gonna make sure we have an issue for that. Yeah.
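The two access paths being compared, port-forwarding versus switching a service's type, would look roughly like the commands below. These are illustrative only: the release name, service name, and especially the `--set` values key are assumptions that would need to be checked against the actual chart's values.

```
# Temporary access without any extra infra (the kubectl default path):
kubectl port-forward svc/jaeger-query 16686:16686

# Hypothetical: if the chart exposes a service-type value, something like
helm upgrade my-demo open-telemetry/opentelemetry-demo \
  --set components.jaeger.service.type=LoadBalancer
```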
F
So there's something, I don't know if they're considering this, but also the OpenTelemetry Collector, in its configuration, needs to allow the incoming requests from the browser, as well as CORS. So we need to know ahead of time from which endpoint, or from which URL, the Collector is going to be requested.
C
So there is a way today where you could override the Collector's config, and we could specify it ourselves, but we should do it by default. And I think, Oscar, we're probably going to need another environment variable for you, right? What would you call it, like "next OpenTelemetry collector" or something like that?
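The Collector-side change F is describing, allowing browser requests via CORS on the OTLP/HTTP receiver, is sketched below. The receiver's `cors` block is part of the Collector's standard OTLP receiver configuration, but the origin shown is an example; the real value would come from the environment variable being discussed.

```yaml
# Sketch: enabling CORS on the Collector's OTLP/HTTP receiver so the
# browser-side front end can post telemetry to it. Origin is an example.
receivers:
  otlp:
    protocols:
      http:
        cors:
          allowed_origins:
            - "http://localhost:8080"
```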
C
Okay, we need to get this done before we ship; this is kind of key to making it work.
C
I will work with Tyler to shore up the Helm chart here, but it sounds like we're gonna have three or four tiny mini-issues to make it work.
C
We need to pass a new environment variable by default to the OpenTelemetry Collector, get CORS enabled for its configuration, and make sure we can specify the service type across the board; and then on our side, we'll need to add docs to the EKS, or the Kubernetes, deployment for this.
D
But with that, I think we're pretty much good for today. I think we did hit a couple of different milestones recently, partly thanks to Pierre and, I guess, all the different blog posts and things like that: we're at 200 Slack channel members, 30,000 Docker pulls, and I think 175 stars and over 100 forks. So a lot of recent kind of round numbers; I guess they don't mean anything, but they sound kind of nice. So it's cool to see.