From YouTube: 2022-10-17 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
C: Yeah, I just introduced myself quickly. I work at Google, where, you know, I'm trying to put together some samples and demos for working with OpenTelemetry.
B: That can kind of help GCP users navigate that infrastructure, so I'm kind of just trying to get caught up on what you guys have for the sample app here. So I'm just going to be working on that, yeah.
A: Yeah, like Carter mentioned, it's, I don't know, "docker compose up" and it works. We've got a few more little things we've got to button up, but functionality-wise I think we're well code-frozen. I think everything we need at this point is just docs and some deployment details, mostly around resource limits and some configuration stuff. We're still locking in Helm.
C: Yeah, we're pretty comprehensive on instrumentation right now, so every GA SDK is represented. But of course in metrics I think we only have, like, Go, Python, .NET, and maybe Java as well. As more of those languages mature we'll get that representation, but we do have the full OpenTelemetry language representation besides Swift today. So we would like to have a hosted version, and maybe have some sort of, you know, mobile app or something like that.
C: But that's definitely, you know, kind of a restriction we put on ourselves as non-Swift developers, to put that off until a later date. And then I'm trying to think whether there are some other notable features, maybe out of the box. We also have the Helm chart and are working on a default Grafana experience.
A: I also noticed that not all our docs mention new spans when they should, I think. Okay, so we have two things we need to do. One, we need to finish currency service. Let me see... I think we chatted, or no, it was Lolita who was chatting with...
A: I don't know if somebody wants to do that and create issues, or just do it and create a PR fixing them. My sense is: take a look at the matrix that we have for all the features in our docs.
C: Maybe if we just ensure two to three people approve that PR and say, "okay, this doc looks up to date to me," I think that might be a good way to do it, where we get more of a group involvement in making sure everyone thinks the material reflects what it needs to. I can understand how that would be cumbersome too, but I guess just sending one person off to do this... I don't know, I feel like we need a few more eyes on it.
A: I was gonna say, we're doing this because we did one-person docs before, yeah, so let's get some eyes on it. So you're going back to these PRs you opened up?
C: Yeah, yeah, there's three, I think, just on the most recent docs, and then maybe the main README. I was just putting, like, an "unreviewed for V1" flag on them, just so I could open a PR, and then I figured...
C: If, you know, two or three people approved it, then that doc's probably good to go. I did open an issue related just to the service docs, because I don't know if we wanted to do this method more broadly. I figure there are probably 30 to 35 docs we really have to look at, maybe a bit less.
D: Do we want to just pick a day? Can we pick, like, a drop-dead date this week for the doc stuff, and then have one person, either me or you, go through everything and just do a final pass for consistency and clarity?
C: Yeah, that might make sense. I guess we want our drop-dead to just be Thursday night, and then Friday and the weekend.
D: That's fine. I'll put something on my calendar for Friday afternoon, and I will go through everything.
C: Yeah, and I'll make some comments on your blog, too. I read through it briefly but didn't get a chance to do a full review, so I'll make sure to get some of those comments in there.
D: Okay, so it's just, we need to do multi-target builds. Let me open an issue real quick for that.
D: I see 278.
B: I think what we didn't do at the time... we didn't know what to put, like whether we would add "OpenTelemetry something," or whether we would put a snippet talking about the license and everything, yeah.
B: Should I break it down into different ones? You can do just one.
D: Okay, I already got it. Wait, which one? Yeah, I already signed it.
A: Okay, I think we're pretty light on disk now that we've got... well, Jaeger's all in-memory, and Postgres does not seem to grow very big. It's just...
A
Feature
flakes
yeah,
I
think
Prometheus
would
be
the
only
one.
We
should
be
concerned
about
right,
yeah.
D: I don't understand that one. I don't have OpenShift. Does anyone have OpenShift?
B
Zoom
into
anything
about
openshift
I
worked
on
openshift,
but
I
looked
at
this
and
nothing
really
stood
out
to
me
as
very
openshift
specific
just
from
the
errors,
but
I
actually
don't
know
a
lot
about
redis.
B: Thank you. There used to be a free tier for OpenShift Online; I don't know if they still have that. Yeah, yeah.
D
Multiple
yeah
I'm
going
to
look
at
that.
Do
we
want
to
move
on
to
the
next.
A
Today,
which
is
a
pleasant
to
see,
I
have
not
looked
at
your
PR,
but.
D: It is... it is not good code, but it is code. It works, and it accomplishes the objective of creating a memory leak through the cache.
A
Okay,
that's
the
future
flag,
so
this
is
in
the
recommendation
service.
We
did
this.
A: And you're just, you know, stuffing it up; that's really perfect. Okay, so this is something we should talk about together with resource limits, because this won't happen unless we enforce them.
A
Don't
know
yeah
I
here,
let
me
open
up
I've
put
together
a
expensive.
A
Much
better
limits
and
I'll
tell
you
what
I
had
for
for
recommendation
here
in
a
second
I'm
gonna
load
up,
the
pr
for
or
I
didn't
submit
a
PR.
Yet
I
was
just
kind
of
playing
with
it.
I.
D
I'm
pretty
sure,
unless
you
set
the
limit
to
something
like
comically
low,
like
100,
megabytes
or
less,
then
you're
always
going
to
get
the
same
kind
of
graph
off
of
this,
because
it's
always
because
it's
exponential,
it's
always
going
to
like
do
do
do
do
do
do.
Do
you
know
if
you
go?
If
you
look
at
the
pr
thread,
I've
got
a
graph
from
like
a
12
hours
of
running.
It.
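The leak being described, a feature-flagged cache that grows until the container's memory limit kills the process, can be sketched in a few lines of Python. This is an illustration of the pattern, not the actual recommendation service code; the names `CACHE`, `CACHE_LEAK_ENABLED`, and `get_product_list` are made up for the sketch.

```python
import random

# Hypothetical stand-ins for the demo's product catalog and feature flag.
PRODUCT_IDS = [f"PRODUCT-{i}" for i in range(10)]
CACHE_LEAK_ENABLED = True  # the feature flag under discussion

CACHE: list[str] = []  # never evicted, so it only grows


def get_product_list(request_id: str) -> list[str]:
    """Return five recommendations, leaking memory while the flag is on."""
    if CACHE_LEAK_ENABLED:
        # Append the full catalog on every request, and sometimes double
        # the cache -- the exponential growth that produces the
        # stair-step memory graph until the container is OOM-killed.
        CACHE.extend(PRODUCT_IDS)
        if random.random() < 0.5:
            CACHE.extend(CACHE)
    # Pick five products, excluding the one the request was about.
    candidates = [p for p in PRODUCT_IDS if p != request_id]
    return random.sample(candidates, k=5)


recs = get_product_list("PRODUCT-0")
print(len(recs), len(CACHE) > 0)  # → 5 True
```

With a fixed memory limit, repeated calls make `CACHE` blow past the limit in roughly the same number of steps every run, which is why the graph looks identical regardless of where the limit is set.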
D: But because of the exponential growth factor, it always jumps up to, you know, something right at the limit; you get a little stair-step, and then it hits an allocation that breaks the limit. And then, because the container gets killed and restarts, there's a time period between the last value reported and the reset. So I think, as long as it's not like 50 megabytes, I mean...
D: ...for this, like, every seven or eight minutes, right? I think that's reasonable. I'm just trying to think: how would I do a demo of this? What is a reasonable amount of time to have this running to actually see the problem occur?
A: It's still running, look at that. What does docker stats say?
D: That's this one, right? And you can see the RSS, the actual, you know... yeah, this way you can see where it's hitting 500. Actually, what I think is happening is that it dies on the next allocation: it'll get up to about 500, and then the next time it tries to grow again, when it does that allocation, that's when it gets OOM-killed. But at that point it never reports, because the process got killed before the next export; I think the interval is like one minute.
D: But if you expand the next code block...
D: It picks five out, except for the ones that came in, except for the initial product, right? Because the way recommendation service works is, it gets a request from something that's like, "hey, here's the product someone's looking at, what are the recommendations?" Then it asks the product catalog service for all the products and filters them out. So in this case we're just constantly appending. You know, like I said, this isn't good...
D: Because then it's a more interesting data-discovery problem: looking at cache misses, or looking at the cache-hit span attribute, and noticing that the app products count is also going kind of wild.
A
I
think
yeah.
We
probably
don't
need
this
one,
okay,
if
they're
going
to
be
because
you're
setting
them
to
the
same
thing,
and
this
will
just
go
wild,
and
so,
whenever
you
flip
this
flag
on
you'll
see
your
products
count
goes
through
the
roof
and
that's
your
indication
that
there's
something
wrong
with
my
product:
cash
right.
A: What if we had a "using cache" attribute that's true/false when the flag's on or off?
D: Oh yeah, okay.
A: But sure, yeah, I'm sure we can figure that out.
A: Okay, I will definitely take this for a spin tonight. Okay, and awesome: I think we were thinking we were not gonna be able to get this through, and you came through with, like, the hero scenario for this. You're the hero for the hero scenario, so thank you.
D: That's how Austin was... So we want app-dot... do we want app.recommendation...? I forget our namespacing.
A: app dot recommendation dot enabled? I...
A: It was... see, Joe, who was... I'm gonna pick this up.
D: This specific branch is... that kube-otel stack is kube-prometheus-stack without the Prometheus.
D
That's
how
you
scrape
from
atheist
targets
without
having
to
do
40
billion
lines
of
scrape
config.
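For contrast, the usual way to avoid per-target scrape entries is a single Kubernetes service-discovery job. The fragment below is the standard annotation-based pattern from the Prometheus docs, shown here only as an illustration; it is not taken from the branch being discussed.

```yaml
# One job discovers every pod annotated prometheus.io/scrape: "true",
# instead of one scrape_config stanza per service.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```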
D
Been
working
on
it
for
a
while
technically
I
am
spoiling
you
on
kubecon
announcements,
so
damn
yeah,
we've.
D: But this would let us, right... I think what we could do is... there might actually be... I don't think there are Grafana dashboards in this, but, at least on the Kubernetes side, we could use the Grafana dashboards that come with kube-prometheus-stack to create those. And then all we'd need to focus on, I think, is the Grafana dashboards for the application itself. So, yeah, you can get those.
D: I posted something in that thread; I don't know if they're...
D: My intuition says that it should, but I don't know a ton about monitoring Docker. I know there's a collector-contrib receiver for this, there...
D: ...to see, yeah, because you would want to see the Docker restarts, and you would want to see the... I mean, we actually do have... from the work that Adriana and whoever else did on the Python metrics stuff, there is a Python memory consumption metric from the Python runtime. So...
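For a rough sanity check of the same number without the SDK, the process's peak resident set size can be read from the standard library. This is a sketch for eyeballing memory growth; it is not the metric the OpenTelemetry Python instrumentation emits, and the function name is made up here.

```python
import resource  # Unix-only standard-library module
import sys


def process_peak_rss_bytes() -> int:
    """Peak resident set size of this process, normalized to bytes.

    getrusage reports ru_maxrss in kilobytes on Linux but in bytes
    on macOS, so adjust per platform.
    """
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return peak if sys.platform == "darwin" else peak * 1024


print(process_peak_rss_bytes())  # prints a positive byte count
```

Printing this periodically from the leaking service would show the same stair-step growth discussed above, even when the exporter misses the final sample before the OOM kill.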
A: Anyway, so I think that's another issue, because we have two different deployment models, yeah. We'll have to figure out how to get that, because on the Kubernetes side you typically run it as a DaemonSet, not as a Deployment, yeah. All right, so let me make an issue; we need to hunt that down. I think for now, let's just leverage the Python metrics that we get, yeah.
D: I think it's just... I don't... here's the thing: I know how to create a chart in Grafana. I just don't know how to do it well; like, I don't know what's important or what's not, I guess. I've poked that thread. If someone comes out of that thread and says, "oh yeah, this will be done," then great. If not, then, because...
D: Like, here's the worst-case scenario: we built a Lightstep dashboard for the services, so I can just crib off of that and figure out what those queries translate to in PromQL.
D
If
nothing
happens
on
this
by,
like
Wednesday
I,
will
take
our
demo
dashboard
and
try
to
convert
it
into
and
convert
to
grafana,
so
we'll
have
something
at
least.
A
We
only
have
yeah,
we
only
have
a
couple
minutes
here.
I
will
share
and
slack
some
notes
on
resource
limits.
A
We
have
a
couple
Services
still
League
memory
quote:
Service
email
does
some
weird
things:
I
cannot
wrap
my
finger
on
it.
It
seems
almost
random
when
it
happens.
I'm
not
quite
sure
why
and
lo
Jenner
still
leaks
a
tiny
bit
of
memory,
but
not
nearly
what
it
was
leaking
before
I'm,
not
sure
what
it
is.
It
could
just
be
Locus
the
way
Locust
does
his
things
and
it's
probably
just
not
releasing
everything
until
it
has
to,
but
we're
in
really
good
shape.
A: I'll share the results of the long run in Slack. My recommendation is we do a minimum of 20 megs for every service. I say that because Rust and Go are remarkably good at memory management; makes for great cocktail discussions, if y'all wanted to see that. Rust is slightly better than Go, by the way. So 20 megs minimum, yeah. Beyond that, for anything above that, I'm targeting about 80% usage. Recommendation service will do very different things, for that recommendation...
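In Docker Compose terms, limits along those lines would look like the fragment below. The service names and numbers are illustrative, not the values being finalized in the PR.

```yaml
services:
  quoteservice:              # illustrative service name
    deploy:
      resources:
        limits:
          memory: 20M        # proposed floor for the lean Rust/Go services
  recommendationservice:
    deploy:
      resources:
        limits:
          memory: 500M       # sized so the feature-flag leak trips the OOM kill
```

Compose enforces `deploy.resources.limits.memory` as the container's hard cap, which is what makes the leak demo actually get killed rather than grow unbounded.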
A: So I'll share what I got. I'm going to submit a PR right now about this, for Docker Compose at least; I've gotta merge my branch first. And I know Josh and, my goodness, Tyler are not here, but we need to get... there are a lot of gaps in the Helm chart, right?
A: Yeah, it needs to... I'm not gonna, Austin... I'm gonna say we don't need an Ingress, because Ingress gets into really hairy, nerdy stuff.
D: I think as long as there's an example of how to do it, then it's just like, "hey, you should check with your Ingress provider," because, like, k3s, right? k3s can do ingress through Traefik.
A
Yes,
I
agreed.
We
also
going
around
looking
at
this
more
in
order
to
get
if
you
want
to
access
the
front
end
and
have
that
still
send
data
to
a
collector.
We
need
two
environment
variables
specified
one
on
the
front
end
to
know
what
its
collector
URL
is
and
one
for
the
collector
no
cores,
although
I
think
star
star,
might
work
and
I'm
all
for
we
just
core
star
star
and
the
collector.
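A sketch of what that collector-side setting looks like: the OTLP receiver's HTTP protocol accepts a CORS section with allowed origins. The wildcard shown is the permissive option being discussed, not a recommendation, so this should be validated against the collector version in use.

```yaml
receivers:
  otlp:
    protocols:
      http:
        cors:
          allowed_origins:
            - "*"   # permissive; tighten to the frontend's origin in real deployments
```

The matching frontend-side piece would just be an environment variable pointing the browser SDK at the collector's OTLP/HTTP endpoint.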
D: ...three-eight or whatever, yeah. So it's entirely possible that that has gotten less shitty, but someone should validate this, yeah.
D: But we could also have it... it's one of those things where, in my mind, we should have configs for these things, right? So we should be able to say, "hey, are you using an Ingress? Then here's the Ingress collector config." I mean, I feel like we need to make it simple enough for people to swap out that config file, because that's going to be the primary thing. All right, anyway, we've gone...
A: OpenTelemetry day is something done by someone else, yeah. It's on Monday, and the other is on Tuesday; I will be at both of those.
D: I will put something in the channel for us to coordinate, like, a demo SIG meetup. I don't believe Carter is going to be there, sadly, but okay.