From YouTube: 2022-11-14 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
B
Hey guys, just give me a second while we get set up.
B
Yep. Any specific items you want to cover? I know we have Austin's PR.
B
Yeah, we'll have to hear that. Okay, so after this PR we have some low-hanging fruit; I think that should cover most things. Austin, do you want to go ahead and start with your PR?
D
Yeah, so this has become sort of a mega PR now.
D
To catch people up: I'm pretty sure where we left it was... I got larger runners last week, I want to say, and tried those with the cross-platform builds, and the upside is that it did work.
D
It was faster, but it was a linear increase in speed. We went from four hours to two hours, so effectively, by 2x-ing the runners we got a 2x performance increase, which unfortunately probably wasn't good enough to justify the additional cost, because you're charged per minute. There was a problem with that build, and it took... oh, I don't know, it was like 50 bucks, I think, at the per-minute rate. But you figure two hours, and then you're getting...
D
So I've done two things. One, I have an open service desk ticket, because I think we still want multi-arch builds, but I think the way we're going to have to get them is by building on native architectures, right? So if we could get on-demand runners through some other way... like I know Equinix has donated servers to CNCF.
D
So maybe we can get a pool of those and build out our own build infrastructure. If we can have basically a pool of Arm and a pool of x86 runners and then target each independently, then we can build the manifests independently and push them to the registry. There was a thread in the Slack about this.
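The split described here — separate per-architecture runner pools, with a manifest stitched together afterwards — could look roughly like this as a workflow sketch. The self-hosted runner labels and the IMAGE_NAME value are placeholders, not the repo's actual setup:

```yaml
# Illustrative sketch only: assumes self-hosted runner pools labeled
# "x86_64" and "arm64" exist, and that registry auth is handled elsewhere.
env:
  IMAGE_NAME: ghcr.io/example/demo   # placeholder image path

jobs:
  build-amd64:
    runs-on: [self-hosted, x86_64]
    steps:
      - uses: actions/checkout@v3
      - run: |
          docker build -t $IMAGE_NAME:amd64 .
          docker push $IMAGE_NAME:amd64
  build-arm64:
    runs-on: [self-hosted, arm64]
    steps:
      - uses: actions/checkout@v3
      - run: |
          docker build -t $IMAGE_NAME:arm64 .
          docker push $IMAGE_NAME:arm64
  manifest:
    needs: [build-amd64, build-arm64]
    runs-on: ubuntu-latest
    steps:
      - run: |
          # Stitch the two native builds into one multi-arch manifest.
          docker manifest create $IMAGE_NAME:latest \
            $IMAGE_NAME:amd64 $IMAGE_NAME:arm64
          docker manifest push $IMAGE_NAME:latest
```

Because each job builds natively, neither side pays the QEMU emulation penalty; only the final manifest step ties them together.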
D
I switched this PR back to just x86 and have done a few things. One, it is actually much faster now, because instead of having to build everything independently, we're able to build.
D
I also managed to get caching working between checkouts, although it doesn't seem like it works reliably. I have seen it work, though, which is weird. There's a GitHub Actions cache that Docker can use.
D
There are "unexpected input" errors around the build args, so I need to keep working that out a little bit, but there should be a way. The problem with the inline caching is that it works fine for single-stage builds, but for multi-stage builds it doesn't work without some build argument being passed to Docker, and that's what I've been trying to do.
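For context, the build argument being described is most likely BUILDKIT_INLINE_CACHE, and the GitHub Actions cache backend for BuildKit covers the multi-stage case. A hedged sketch, assuming the docker/build-push-action is in use (the context path is invented for illustration):

```yaml
# Sketch: "type=gha" stores layers in the GitHub Actions cache, and
# mode=max also caches the intermediate stages of multi-stage builds.
# BUILDKIT_INLINE_CACHE=1 is the build arg needed if relying on inline
# (in-registry) cache metadata instead of the gha backend.
- uses: docker/setup-buildx-action@v2
- uses: docker/build-push-action@v3
  with:
    context: ./src/currencyservice   # path assumed for illustration
    push: false
    cache-from: type=gha
    cache-to: type=gha,mode=max
    build-args: |
      BUILDKIT_INLINE_CACHE=1
```

With mode=max, a PR that only touches application code should hit cache for the heavy dependency layers (gRPC, the OTel SDKs) and rebuild only the final stages.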
D
But if we can get that working, then each layer of both multi- and single-stage builds will get cached to the GitHub Actions cache, and it would then use that cache rather than having to rebuild each layer independently. That would solve the slow builds from recompiling gRPC and OTel and stuff like that in Rust, C#, whatever. There does also appear to be some interaction with Rust, where it...
D
So I do think it cuts things down pretty significantly. I'm not quite sure what else could be done on the C++ side to make it faster; maybe Bazel would be faster than CMake, but I'm unfortunately not enough of a C++ guy — it would take me a bit to figure out how to convert the CMake stuff into Bazel.
D
I know Bazel is supposed to be faster, but some of this is just the default runners: they're two vCPU, seven gigs of RAM, and that's all they're ever going to be until we get bigger runners. But I do think that what we have now is probably the best we're going to get for now with this PR.
D
If I can get this inline caching working, then this would actually take intermediate builds down to probably seconds, or at least a handful of minutes, for every PR, because it would only have to build changed code — basically the last couple of steps in a build.
C
Yeah, maybe a comment there, because I was also running it this morning. I don't know if you saw the comments, but for me it was also in the seven-to-eight-minute category to build the C++ service, and somehow the mold linker didn't help in the end. I think it took, I don't know, 160 seconds to get it cloned, yeah.
D
I mean, we can take it out; linking probably isn't the biggest part, I did notice.
D
gRPC seems to do a lot of linking, OTel not so much — that's where I saw the biggest improvements. But yeah, it's like a five-minute install, so.
D
The shipping service — there was just something with relative links and the build context, and I actually don't even know if those were necessary.
D
Yeah, because the only change to the shipping service was that the build context was not getting set properly, so I had to change some of it. One other option that might help — and we could probably do this in a separate PR, but I think someone mentioned it, or someone had a PR for this, or it's already there — is compiling the protos independently, if that's possible.
D
If we compile the protos and then check in the compiled protos, we don't have to pay the overhead of building the protos every time, so that would probably make things faster. But then we would need more complicated build logic to see when those were changed and recompile them.
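The "detect when the protos changed" part of this can be approximated with a workflow path filter plus a staleness check. A rough sketch — the pb/ and gen/ paths, and the protoc invocation, are assumptions about the layout, and toolchain installation (protoc and its language plugins) is omitted:

```yaml
# Illustrative sketch: only run when a .proto file changes, then fail
# the check if the committed generated code is out of date.
on:
  pull_request:
    paths:
      - 'pb/**/*.proto'
jobs:
  check-protos:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: |
          # Regenerate and compare against what is checked in.
          protoc --go_out=./gen pb/*.proto
          git diff --exit-code ./gen
```

This keeps the fast path (checked-in generated code) honest: PRs that touch the protos must also commit the regenerated output.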
D
Well, I can take out the mold stuff, and I can try to get this build ARG...
D
I can try to get this inline caching working. I do think that if we do that, it's a significant enough improvement in build times that it's worth doing the new GitHub Actions work, especially because it sets us up for... I think the actual path forward has to be a dedicated Arm runner. Because if we can get a dedicated Arm runner, then we can say: okay, build x86 on this set of runners and build Arm on that set of runners.
D
Then that fixes these problems with multi-arch, at least.
D
It's not... I've opened an issue to do that.
D
...access to that stuff. The downside really is going to be that it's just stuff we're going to be responsible for managing, because we'll have to set it up, you know.
D
The other open question, I think, is: if it's still going to take 20 minutes or whatever, we probably don't want to run it on each PR. But if we get caching to work, do we want to run builds on each PR?
B
Doing each PR would be tough at 20 minutes; I'd say no, but...
B
Oh, then that's a different question. I'd open it up to the group; I'm not sure on that one.
D
I mean, there might be a concurrency limit, but it seems like it's maybe project-based. Even then, I guess the two things in my mind are: if we did build on PR, then that gives us the ability to run the images after they're built and do actual integration testing or something — it would actually unblock that.
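On the concurrency-limit worry: GitHub Actions also lets a workflow cap itself, so per-PR builds don't pile up when someone pushes repeatedly. A minimal sketch (the group name is illustrative):

```yaml
# Sketch: one in-flight build per ref; a newer push cancels the
# superseded run instead of queueing behind it.
concurrency:
  group: build-images-${{ github.ref }}
  cancel-in-progress: true
```

That doesn't lift the org-level runner concurrency limit, but it keeps a busy PR from consuming several runners at once.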
E
Yeah, that actually happened a couple of times already: we got PRs where the service didn't build.
E
There is a thing in GitHub where, whenever a new member sends a PR, someone needs to approve the GitHub Actions run, yeah.
D
Yeah, I know there's... let me look at the thing real quick. There is "on: pull_request, branches: main".
D
There is conditional logic in here, so you might be able to do something. We might need to factor it out a little bit, but there should probably be some way to run this check on approval.
B
Yeah, makes sense to me. Well, let's see if we get caching working, and then we can talk about maybe implementing some sort of gated build for PRs, because it does seem like it might be risky to do it for every commit or something like that — we'd just have all these builds being triggered.
E
As we are now, I don't mind pulling the PR and building it locally every time I need to test it.
E
Although, if we get a PR with a lot of services changing, that would be cumbersome. But I think we always tend to go with one service per PR, or minimal changes in the PR, and...
D
I mean, I guess, yeah. In my mind the first question should be: what is the amount of time in minutes that we think we need to hit, and then kind of work backwards from there? Because if it's, again, five minutes, then okay — you can probably see a world where we go with sort of the intermediate build stages, or publish intermediates.
C
And now we are already in the 15-minute range, and I think that's quite good, in a sense. But we have a risk that if we had many more services, then, I don't know, someone gets out-of-memory errors, or...
D
It would give us more headroom everywhere, I guess, is basically my point, so we could add more services, you know.
E
I think another valid point is that whenever we use HTTP and just use Kubernetes Services, the load balancing between those services works better, because gRPC has — I forget the name — long-lived connections. So once a service is connected, it will use the same endpoint all the time, whereas with HTTP you get a new connection on every request, which is nicer for showcasing the load balancing of the Kubernetes Service itself.
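The behavior described here comes from gRPC multiplexing all calls over one long-lived HTTP/2 connection, so kube-proxy's per-connection balancing pins a client to a single pod. A common mitigation (not something the demo necessarily does — the service name and port below are invented) is a headless Service plus client-side balancing:

```yaml
# Sketch: a headless Service for a gRPC backend. With clusterIP: None,
# DNS returns the individual pod IPs, so a gRPC client configured for
# client-side load balancing (e.g. round_robin) can spread connections
# across pods instead of sticking to one.
apiVersion: v1
kind: Service
metadata:
  name: currencyservice   # name assumed for illustration
spec:
  clusterIP: None          # headless: no virtual IP, no kube-proxy pick
  selector:
    app: currencyservice
  ports:
    - name: grpc
      port: 7000
      targetPort: 7000
```

Plain HTTP services avoid the whole issue, which is the point being made in favor of switching some services off gRPC.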
E
Yeah, that's actually a good point. I mean, we can replace just the Currency Service's gRPC and keep the others; then we remove the biggest one and still have gRPC in the demo, which is nice. I'm against removing all the gRPC calls from the diagram.
F
I want to speak up in support of the shipping service specifically going more towards HTTP, because right now we have kind of a lot of corner cases around how you have to handle context guards, how you handle asynchronous Rust threads, and how you keep the span alive over different procedure calls. That would be easier if we were using just HTTP. The way the runtime is built right now, you have to do an async context that's separate but still keeps that span.
F
It works better when we have to call the PHP service than it does when we have to call the other gRPC service, for example. I guess I'm just speaking up and saying it will probably reduce build time to not have to do gRPC, because Tonic is by far the biggest dependency outside of actually compiling the gRPC server. It would be worth investigating: if we just took it out, how long is the build? Is it dramatically better?
D
I mean, the other thing to keep in mind is: if we can get wall-clock builds down low enough, then we can turn multi-arch back on through the free runners, because what kills multi-arch on the free runners is really just the same stuff — out-of-memory errors from trying to parallelize the emulation. That would still probably put us at, like... well, actually, no, we would just do it on release.
D
Between caching and restricting multi-arch to release builds, I think we could probably get release builds into the 20-to-30-minute range emulating on the free runners.
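Restricting the slow emulated multi-arch build to releases could be sketched like this, assuming tag-triggered releases and the Docker official actions (registry login is omitted):

```yaml
# Illustrative sketch: PRs stay x86-only; only version tags pay for the
# QEMU-emulated arm64 build on the free hosted runners.
on:
  push:
    tags: ['v*']
jobs:
  release-images:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-qemu-action@v2      # enables arm64 emulation
      - uses: docker/setup-buildx-action@v2
      - uses: docker/build-push-action@v3
        with:
          platforms: linux/amd64,linux/arm64
          push: true
```

Combined with layer caching, only the release pipeline eats the emulation cost, which matches the 20-to-30-minute estimate above.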
B
Well, you're not coming up, Gary, but I just added you to it, so you can speak to this one yourself.
E
No, I started after you. Sorry, just one thing: we also need to make sure that the caller is changing.
E
Yeah, I think if it wasn't you, it was Pierre. Okay — maybe you.
B
Okay, cool. Those would be two good initial items: the shipping service and the currency service will be replaced, since those are top of mind for performance. Then we can reassess after removing those two, see what the build-time improvements look like, and maybe schedule a more comprehensive gRPC refactor for v2 and start adding new features off of that, or before that. Awesome. Well, Miko, I think we have a little bit of time for your items — something about the Postgres?
C
Yeah, it was a small topic: we are using different Postgres images for Docker and for Kubernetes, and it looks to me like it's just a small mistake — unless someone knows a reason to use the different images. In Docker we use the CircleCI version of Postgres, which is really large — uncompressed it's more than two gigs — and seems completely unnecessary. So I can try to align it with the Kubernetes Helm chart image.
B
I'm supportive of that, so go for it, yeah. Let's see — I'm not sure if we wanted to take a look at our issues; I think we're about out of time today, but a few bugs are top of mind. There are some traces...
B
...some cart view issues, and then the frontend container. So hopefully before we release 1.1 we can get some of these smaller items fixed and then move on.
D
On an M1 Max I might... well, anything that gets pulled should have the AMD tag. It's just weird to me, because...
C
And you can see the warning from Docker that if you run a different architecture on the Mac M1 — be careful, there might be issues.
B
Yeah, we're going to look at it more later, but if anyone has any ideas, or anyone with a Mac is willing to investigate, please help us out.
B
But, awesome. Yeah, I think we have around five minutes left, so does anyone have any specific items, or shall we give this time back?
E
Quick one: we have an open pull request on changing logs for the Currency Service, I think. So basically, before, we had this single line here that the person is mentioning.
B
Stuff we want produced as a log should come out as a log — not necessarily being stored, or being presented on the console, in my mind. But that's just my opinion.
B
Yeah, I'm going to create an issue tied to it; if anyone's specifically interested, feel free to assign yourself.
B
Okay, well, yeah — so if you're interested in helping us out with the logging story (and I just missed all that, so it's cool), feel free to assign yourself. If you see any bug issues, feel free to take those on too. And yeah, I think we should be good for today. Thanks for joining, everyone.
D
Thank you — thanks, everyone. Bye.