From YouTube: 2022-11-07 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
B
A: Oh, I'm sorry, I should show y'all: for Halloween I was Hulk Hogan. I don't know if you know who that is, but I dyed the mustache blonde. I really committed, and then I had a really important call with a very large bank this past Monday, or last Monday I should say, and I couldn't show up with a dyed blonde mustache looking like Hulk Hogan, so I had to shave.
C
A: I'll find the pictures, I will. Whatever, it was good.
A: You know, actually, my profile picture now is not far from it.
C
D
C
A
E
C: You are far too modest; you're minimizing your new role too much.
C: ...is our new OpenTelemetry product manager. Awesome, nice. I believe that's actually the first dedicated OTel PM hire we've made, so it's very exciting.
E
C
E: It should be a good time. But we can hop into the demo agenda. I think Pierre already did a good job of typing out the items we had. It seems like build times; we have a message queue PR that's pretty comprehensive, so I don't think...
E: ...we've, as a SIG, had a chance to discuss it yet. Then there are a couple of post-v1.0-release items you'll probably want to take a look at, issues and things like that. And then, if we have time, I'd love to have a more comprehensive discussion on what we're looking to do and what big rocks we're looking for next, but that's really only if we have time. We have some time, so I'm not sure. Pierre?
E: Do you want to lead the build times discussion? Where do we want to start? I know we have some issues tracking the information, but I'm not sure exactly how we want to cover this.
C: Yeah. What we can do here: I've done a bunch of research on this, which I've documented in PRs and issues, but just to summarize, this is mostly an unavoidable problem, and the basic reason why is resource constraints. The root cause of the problem is gRPC, specifically gRPC builds in services that require the full gRPC toolchain to build, such as C++. PHP also had this problem.
C: I did find a couple of opportunities to remove gRPC from the images, and that sped those up; I think we did that in two of them. Basically, anywhere gRPC was only used for telemetry and not needed for actual service operation, I pulled it out, and that does shave time. But fundamentally, for the services that need gRPC for cross-service communication, you can't get rid of it. Oh well.
C: We can't get rid of it without significant refactoring, I should say. You have to build the toolchain, you have to pull in all this stuff, and it takes a while.
C: This isn't the end of the world if you can parallelize builds, and that's what the big PR sitting there for multi-arch builds does. If we took that and said, okay, let's just build x86 only and get rid of the other architecture, then we would probably cut the build time in half or better. I'm just spitballing, but it would take much less time, because we would be building each image on its own runner.
C: However, that doesn't solve the problem of arm64 and Apple silicon. If we want to support Apple silicon, then we have to use Docker's multi-architecture builds: Docker lets you build multiple architectures, merge those manifests, and push them as one image under the same tag, and then whatever your runtime is will pull the right one. If you're running on Apple silicon, it pulls the arm image; if you're running on x86, it pulls that. The problem is...
D
C: Also, in addition to that, the runner images we get from GitHub Actions are not huge. They're the default ones, so they have something like eight gigs of RAM and maybe two vCPUs, or one; if you Google "GitHub Actions runner" it will show you the specs. So we are basically locked into a set of constraints that we can't necessarily resolve. The actual problem is gRPC, I will say.
C
Even
if
we
got
rid
of
grpc
everywhere,
we
would
still
run
into
the
emulation
overhead
and
slow
builds
in
Matrix
builds
right
because
again,
you're
emulating
stuff
it
takes
longer.
Could.
A
C: You have to merge the manifests locally, as near as I can tell. Also, as far as I can tell, without using custom runners there's no way around it. There is a way to use a custom build service, but that has other challenges I can talk about. It seems like you have to build both architectures in the same build context.
C: If you build them separately and then try to merge them later, it doesn't work, or at least I can't find any information that says it would. Beyond that, the other way you could get around this is by having a runner per architecture: you could have two runners, or one per architecture, and then assign jobs to the right runner based on some GitHub Actions attribute. But that means you have to run your own runners.
C
Also,
you
would
have
to
have
something
that
would
spawn
like
in
the
the
pr
that
I
have
open,
that
will
actually
every
single
one
of
those
Services
gets
its
own.
Runner
gets
its
own
unique
Runner,
so
you
would
have
to
have
some
way
to
like
spawn
new
runners
for
10
or
12
or,
however,
many
services
right.
C
Well,
I,
guess
that's
really
it
right,
like
the
the
the
two
they're
like
three
kind
of
options,
and
then
one
is
bug
someone
at
the
project
to
turn
on
larger
Runner
support
like
that
exists.
There
are
ways
that
there
is
a
way
I
don't
have
access
to
this
button
in
GitHub,
but
there
should
be
a
way
at
the
org
level
to
opt
in
to
larger
GitHub
actions
Runners,
and
if
we
could
do
that
and
we
doubled
the
Ram
or
double
the
CPU
or
probably
or
both,
preferably
then
it
would
I
I.
C
Imagine
that
would
actually
help
quite
a
bit
because
then
we
could
increase
the
parallel
parallelism
of
these
and
it
would
speed
up
the
emulation
greatly.
So
that's
one
option,
one
and
I
think
that's
probably
the
best
option.
Option
two
is
for
the
services
where
grpc
is
a
heavy
dependency
and
it's
causing
these
problems.
We
removed,
we
refactor
them
to
remove
grpc
and
we
just
do
like
http
I,
don't
hate
that
idea,
but
that
might
but
that's
a
lot
of
work
for
something
that
again
larger
Runners
should
fix.
A
A
C
C
C
A
C: Okay, option three is that we publish intermediates. However much I like that idea, publishing intermediates doesn't actually solve the problem, because it means that anytime there's a change to the intermediates, basically every month, we still have to go through a four-plus-hour build.
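For context, the publish-intermediates option would split each Dockerfile into a slow, rarely rebuilt base stage and a fast service stage layered on top. A minimal sketch, with illustrative names only:

```shell
# The slow part (gRPC toolchain, OTel libraries); rebuilt only when
# those dependencies change, i.e. the four-plus-hour build:
docker buildx build --platform linux/amd64,linux/arm64 \
  -f Dockerfile.base \
  -t example.registry.io/demo/svc-base:2022-11 \
  --push .

# Day-to-day builds would start FROM that prebuilt base, so only the
# application layer is rebuilt and they stay fast:
docker build -t example.registry.io/demo/svc:dev .
```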
C
A
C
And
it's
also
anytime,
you
change
anytime.
There's
a
I
mean
the
things
that
would
trigger
that
is
like
one.
You
would
need
a
monthly.
You
need
to
do
it
monthly.
Whenever
you're,
you
know,
or
whatever
Cadence
of
you
know,
builds,
you
would
need
to
pull
in
security
updates,
whatever
you
need
to
do
it.
Whenever
the
hotel
dependency
changed
like
I,
don't
think
it's
I
mean
you're,
not
wrong.
We
could
do
that,
but
all
you're
doing
is
like
I.
Would
much
prefer
I,
don't
feel
like
it
should
take.
C
B: Just to give everyone an idea regarding the intermediate step, I posted in the chat, if you could...
B
C
C
A: I think we have to see how often we would have to rebuild the intermediates; I think that's the unknown here, and we should make sure we understand it more clearly. If you're asking me, I'm seeing two things we should be chasing. One: let's get more hardware, let's throw hardware at the problem. Clearly we're under-resourced for what we're trying to do, so let's get that cleaned up right away. Even if we do the intermediates, and even if we don't, we still need the hardware for running the intermediate builds, right?
A
So
let's
get
the
hardware
and
then
we
should
investigate
a
little
bit
more
on
on
the
downsides
of
the
intermediates.
What's
the
cons
here,
I'm
hearing
is
we'll
have
to
publish
them.
We
have
to
keep
track
of
that.
It
would
be
a
manual
process.
So
what
does
that
world?
Look
like
I,
think
the
cons
or
the
pros
for
sure
our
local
builds
are
gonna,
go
way
faster.
We're
gonna
make
a
lot
of
people
happy.
Yeah,
I.
C
F
B
A: It makes the future state, man...
C: I don't understand why... I mean, another thing I looked at was whether you can get pre-built binaries for gRPC in C++ land, and you cannot.
C
A: Chasing up the hardware: are you going to ask Morgan for it?
C
G
B
C: Yeah. I also wanted to update people on this, but I will keep trying to run down the resourcing thing. If someone else would like to take refactoring things out to split up the intermediates... One thing I'll also point out: we would have to publish intermediates for each architecture we want, because...
F
C: ...from my research, at least, you can't do something like x86 for the gRPC and OTel libraries and then flip to arm for the actual service.
A
E: Okay, cool. That was a good amount of time on build times, so let's talk about the message queue service. With the message queue service, they're really introducing two new services as well. I don't think those are actively doing much right now, and it seemed like the new services were Go and Python based. I think one is fraud detection and one is something else related to that, an accounting service. But it seems pretty comprehensive, I...
E: ...think the first question is: do we think there's a better fit or area for us to implement a queue, rather than adding two new services? And then, of course, if not, do we like the current implementation? I'm not sure how many people have gotten a chance to take a look at this yet. I'll put the link in the chat window too.
A
So
it's
actually
four
components
right:
it's
a
zookeeper
Kafka
and
the
two
services
in
total.
What
we're
adding
I
thought
Kafka
now
had
a
zookeeperless
mode.
If
that
thing
exists,
it's
probably
an
idea.
We
should
look
at
that
straight
off
right
away.
Instead
of
adding
two
more
components,
does
this
calf
go
I,
don't
know,
but
I
thought
there
was
a
zookeeperless
Kafka
that
existed
or
was
recently
released
and
the
other
one
is
I,
don't
know
about
adding
two
new
Services.
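The Zookeeper-less mode being referred to is Kafka's KRaft mode, where the broker runs its own Raft-based controller instead of relying on Zookeeper. A single-node sketch using the Bitnami image; the environment variable names follow that image's conventions and may differ between images and versions:

```shell
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_ENABLE_KRAFT=yes \
  -e KAFKA_CFG_PROCESS_ROLES=broker,controller \
  -e KAFKA_CFG_NODE_ID=1 \
  -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@localhost:9093 \
  -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \
  -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
  -e ALLOW_PLAINTEXT_LISTENER=yes \
  bitnami/kafka:3.3
# No separate Zookeeper container: the broker acts as its own
# KRaft controller.
```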
F
C
C: But we'd still build it, and it would still deploy Kafka and all that; it just wouldn't do anything, I guess. I'm broadly supportive of this. I think we need to understand a little bit about the resource hit.
C
C
D
G
G: ...to flip them, yeah. So we could choose which setup would be the most common. I would say the fewest services would go in the most basic Docker Compose file, and then you just add -f flags to layer things on. You could even leave out Grafana and Jaeger; they could go on their own, if you wanted to put all of the telemetry backends in their own Docker Compose file.
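The layering described here is Docker Compose's merge behavior with multiple -f flags; the file names below are hypothetical, not the demo's actual layout:

```shell
# Smallest footprint: core demo services only
docker compose -f docker-compose.yml up -d

# Layer on the telemetry backends (Grafana, Jaeger, etc.)
docker compose -f docker-compose.yml \
  -f docker-compose.backends.yml up -d

# Layer on the optional Kafka pieces as well; later files
# override and extend the earlier ones
docker compose -f docker-compose.yml \
  -f docker-compose.backends.yml \
  -f docker-compose.kafka.yml up -d
```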
A
A
C
C
A: Yeah, I think we will also need a way to start up the feature flag service with specific states for the feature flags.
F
C
A
C
B: Now, one thing that we didn't consider: would it be possible to have that without adding the two new services? Can't we use services that we already have, make one a producer and one a consumer, and then just send messages through Kafka? Then we would only add Zookeeper and Kafka, and if we have something like the Zookeeper-less Kafka, then we would just add one component.
B
C
A: So I think what we want to do is, one, optimize this PR in some ways, at least on the Kafka side, and see what the resource constraints look like; and then, based on that, we will probably decide whether or not to carve this out, and go from there.
F: Yeah, I'm looking at this one-line change in the documentation stating that we would need to increase the four gigs of RAM in Docker to five gigs. Maybe when you run it now, we could also check how much memory is really needed, because this one-gig increase doesn't seem like too much to me. But...
B
My
my
company's
laptop
has
32
gigs,
that's
not
a
problem.
You
know
my
personal
one
yeah,
it's
something
that
I
need
to
test,
but
I
think
it
it's
worth
taking.
They
can
take
it
to
a
run
just
to
see
if
it
crashes
or
not
if
it
runs
or
not,
and
if
I
have
any.
If
we
have
any
issues
and
then
we
can
come
back
to
it.
A
A: I'm going to raise my hand and say Java is a pig. Can we minimize the number of Java services? Our heaviest service in terms of memory consumption is the ad service, and it does nothing. Can we not use Java? Maybe Go, or maybe we get Gary to write us another Rust service, because what's really nice on memory is Rust.
A
C
E
C: Do we also want to talk about, or I just want to make sure, that we're all on the same page on Helm charts as a new item? Are we all good with that going forward? We have someone who's going to become an approver over there.
A
E: ...information about this. It seems like a couple of SDKs, Erlang and I'm not sure which the other two are, aren't producing a standard language attribute. Was that what Juliana did?
B
A
B
B: And for the shipping service, I was talking with Tomi (CPP, that's his tag), and he pushed something to the Rust OTel, but it is merged and not released, so I cannot use it. So I have an open PR that adds some resource attributes, but not the SDK attribute.
A
I
I
think
it's
fine
to.
We
don't
need
it
I
think
the
SDK
should
do
it
if
anything,
it's
a
great
Testament
to
figure
out
which
sdks
are
within
spec
and
which
ones
are
not,
and
and
if
anything
we
could
use
this
to
force
the
sdks
to
get
the
spec
I
would
prefer.
We
do
that
and
leave
them
missing
until
the
sdks
clean
themselves
up,
unless
anybody
else
says
no,
we're
gonna
add
them
for
now,
we'll
pull
them
out
later.
C: I would also rather wait. I'm just looking at your PR, Geno, so just to clarify: as it stands, there's nothing in here that is duplicative of the work that's been done upstream? Or there is?
B
C
C
B
B: Yeah, exactly. Well, it is already added, so if you check in the chat of the PR...
C
B
C
C
B
A: 285-year spans are great; I swear I didn't even know they could exist. But there is a bug that was introduced in Google Chrome that broke the timings of browser spans. If you leave your tab open for more than a couple of minutes, or if you walk away and your tab loses focus and regains focus, some weird stuff happens. It's a cat-and-mouse game; there's a PR to fix it, and it was merged but not released.
A
Yet
we
need
to
pick
up
that
change
as
soon
as
possible
when
JS
releases
or
we
should
at
least
test
it
to
make
sure
but
yeah
right
now.
If
you
do
a
browser
generated
span
like
it,
I
have
the
open,
Telemetry
demo
running
right
now
in
the
open
world
you
could
actually
edit
with
the
browser,
and
you
could
go,
look
at
it
and
go
look
at
Jaeger
and
you'll.
See
your
your
first
span
for
the
browser
is,
is
a
timing
that
does
not
make
sense.
E
A
A: Yes, okay. It's the duration, specifically, of the span that's generated inside of the web browser itself.
E
E: I think, on the issues list, it was really just these top three or four, and then we had a couple of different suggestions come in, like adding Tempo as a second tracing backend, for example, but that might be more of a future big-rocks item. I think this one, the feature flag service, or the memory issue potentially... I think Austin already followed up on that. This is an M1 issue, probably. Well, the multi-arch builds would fix this. Yes, okay.
C
E
C
E
C
C
C
You
go
I'm,
actually
gonna
open
an
issue
for
this.
This
is
a
repo
thing,
but
we
should
assign
code
owners
for
paths.
E: Oh, we should do what the Java owners did for the Java app and see if we can get the SIG approvers to also be approvers on the language services. That would be pretty cool. I'm not sure if you saw that.
D
E
A: Go ahead. The Java SIG team, Trask, added code owners for the ad service within our demo.
C: Yeah, exactly, like how the website approvers for a SIG are approvers for their docs path. I'll just wait... I'm just going to open an issue along the lines of "enhanced code owners".
E
Yeah,
exactly
just
so,
the
the
sigs
have
pay
attention
to
what's
going
on
in
their
language
kind
of
show,
off
area,
okay,
cool
I,
don't
think
we
need
to
go
over
any
other
issues
and
we
can
spend
some
more
time
yeah.
We
already
have
another
Mac
and
one
Mac
problem,
but
we
get
to
spend
our
next
big
meeting
kind
of
talking
about
more
kind
of
future
looking
items
if
anyone
doesn't
have
anything
else.
C
No
I
think
we
would
be
good
to
I
would
be
cool
if
we
could
like
do
an
async
sort
of
I'm
gonna
open
a
discussion
in
the
discussions
thread
or
discussions
tab
for
like
what
do
we
want,
like
for
people,
just
put
their
thoughts
about
like
what
we
should.
You
know
where
this
should
go.
What
should
be
the
next
things,
and
then
that
way,
we
can
just
sort
of
async
record
our
thoughts.
I'll
link
that,
in
the
slack
and
in
the
meeting
notes.
E
Makes
sense
to
me
that
would
yeah
that'd
be
good
to
just
have
kind
of
an
ongoing
discussion
on
what
we're
looking
for
I.
Definitely
think
posting
should
be
one
of
those
items
that
maybe
having
some
sort
of
live
version
and
then,
of
course,
like
Swift
and
all
those
other
fun
stuff
too,
but
I
think
we're
good
to
call
it
here.
Thanks
so
much
for
joining
everyone.