From YouTube: 2023-01-09 meeting
Description
OpenTelemetry Meeting 1's Personal Meeting Room
D: Yeah, and it's... I gave that app several dollars of flying, probably about twenty dollars in total, between myself and my partner and a few others, so.
E: Awesome, all right. I finished it two weeks ago, so yeah, I understand it.
G: Okay, yeah, I think we have most people here, so we can go ahead. The first item: I don't think the calendar switched to bi-weekly, so let me get that updated. Since it's bi-weekly, we won't be meeting next week; that's just a quick FYI. Then, I think we have about 10 agenda items, and I'm not sure what we want to spend our time on first, or what we think will take more time to discuss, because I would love to quickly talk about resource detectors and their documentation.
G: I know Giuliano has a great topic on image size versus build time. But what are you all looking to discuss?
D: I'm of the mindset that we need to keep Docker running for the developer experience, though we may just have to forcibly limit some of the functionality we get out of the Docker experience; if you want the full-fledged thing, you need to deploy it to Kubernetes, like a local k8s thing, particularly if we want to add support for the Kubernetes operator.
D: Clearly that's a Kubernetes-only thing, and I think what we need to do is say we're going to keep Docker there, but it may not have the full feature set. How we define that, I'm not sure.
E: If we decide to use the operator with languages where we already have the manual approach, I think it's fine. We just need to state that somewhere, because most of the comments in the discussion were from people against dropping Docker. I definitely think we should keep it, because it's pretty easy to spin up and play around with locally.
F: Maybe one thing we could discuss here: if we, let's say, provide instructions for how easily you can run Kubernetes locally on top of Docker, or something really easy like that, maybe this question actually gets easier and we can move forward. That's the next step.
E: Yeah, we have a lot of tools: kind, k0s, k3s, minikube. They are all easy to set up locally, but the problem is that once we do that, we force people to know how to navigate Kubernetes, and that's not something everyone knows.
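
[Editor's note: for reference, a minimal sketch of the local "easy button" being discussed, assuming the demo's published Helm chart; the release name is arbitrary and the frontend service name may differ by chart version.]

```sh
# Spin up a local cluster with kind, then install the demo via Helm.
kind create cluster --name otel-demo
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install my-otel-demo open-telemetry/opentelemetry-demo
# Expose the demo frontend locally.
kubectl port-forward svc/my-otel-demo-frontend 8080:8080
```
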
D: Yeah, I think moving to Envoy simplifies UI exposure, for sure.
G: I think that's fine. There's also, I don't know if it's on the list right now, but there's a request to be able to disable OpenTelemetry instrumentation as well. So maybe we could do something through an environment variable, where the Kubernetes version has it disabled and has an operator in place, or something like that.
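
[Editor's note: one way this could look, assuming services honor the SDK spec's OTEL_SDK_DISABLED variable; the override file and service name are illustrative.]

```yaml
# docker-compose.override.yml: run a service with OpenTelemetry turned off.
services:
  frontend:
    environment:
      # Spec-defined kill switch; a no-op for SDKs that don't implement it.
      - OTEL_SDK_DISABLED=true
```
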
C: Sorry, I just missed that. Yes, I'm sure it is.
D: I think what we decided here, and we'll probably put it in writing, is that we won't strive for feature parity, but we'll make sure we can at least do the easy-button "docker compose up". You may not have every feature the demo offers, but you'll have the majority of it. It'll be damn near good enough.
D: Until our leaders at Kubernetes step in. Maybe Kelsey's focus this year will be on how to make easy buttons for the whole system, so Kubernetes is as easy as Docker. That should be the easy button of where Kubernetes goes next.
H: Investing a little bit in upfront tooling for making an easy local Kubernetes version will save us a lot of headache long term, even in terms of fulfilling the basic idea here.
H: We'll say "we'll strive for parity," but it won't happen, right? The simplest version of this would be resource detection via the collector operator versus no operator. If we have two different deployment strategies, then eventually we're going to have a bunch of switches and logic at deployment time, and it's going to leak into the application runtime itself in order to support those multiple strategies, even if that optionality is literally just off or on.
H: Do we actually want to ship with this? Because now, if I have a situation where I'm shipping a Grafana dashboard, and in some cases I have Kubernetes resource attributes and in some cases I don't, all those dashboards break depending on where I've deployed it, which means I'd have two copies of the dashboard.
H: So, in the interest of taking this async, let's create an issue to figure out what the requirements are for the Kubernetes easy button, and then we'll go from there.
G: Yeah, there's an active discussion with the RFC there, and people can give feedback. But Austin, can you make the calendar bi-weekly for our meeting?
G: It's only every two weeks, thanks. Yeah, I'm not sure if we want to discuss this issue.
E: Yeah, I think this one we can skip, because we already have a ticket and we depend on the PHP community to update or allow us to configure that. So we can move on.
H: The next one should be on the 23rd, for people's awareness. Does everyone see that?
G: One thing I want to discuss quickly is Severance adding all these resource detectors to the various services based on language. I think it'd be great to have that captured and documented somewhere. I'm not sure where the exact spot would be, but we should probably call out that we have all this resource detection going on. That's just what I personally think, so I'm curious what you all think about it.
D: Do they apply to all of them? Yeah, they apply to metrics too. They're resource attributes, so they apply to everything that comes out of the service.
H: I mean, yeah, they're attached to the trace, they're attached to the pipeline, so they would not automatically apply to...
H: ...the provider, but technically it's attached to each one independently, yeah.
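
[Editor's note: a sketch of the pipeline-level attachment being described, using the collector-contrib resourcedetection processor; the detector and exporter choices are illustrative.]

```yaml
# Detected resource attributes are added per pipeline, not per provider,
# so each pipeline must include the processor explicitly.
receivers:
  otlp:
    protocols:
      grpc:
processors:
  batch:
  resourcedetection:
    detectors: [env, system]
    timeout: 2s
exporters:
  logging:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resourcedetection, batch]
      exporters: [logging]
    metrics:
      receivers: [otlp]
      processors: [resourcedetection, batch]
      exporters: [logging]
```
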
G: Okay, so this is going to be an exciting topic. Giuliano made some great image size reductions based on dependencies and versions that he updated, but that increased the build time by a similar amount: about a 10x reduction in space and about a 10x increase in build time. So I think, as a group...
G: ...the point of discussion is: do we prefer image size or do we prefer build time? This is probably also related to some of our performance issues. But storage is cheap.
H: Storage is cheaper than time, yeah. And considering that the arm64 thing keeps coming up, keep in mind... I mean, I guess it's one of those things where...
H: ...with emulation, it really doesn't matter that much, because the overall build is going to be constrained by whatever the longest single thing is. The job's not going to be done until Rust and C++ finish, so you can hide time under that. And from a local perspective, going from seconds to 151 seconds is like a couple of minutes.
G: I think that's probably a good discussion. One idea I had here, not so much on the build times: I know we have continuous profiling expertise in the community, so I wonder if we could ask one of the performance vendors to help us get a better understanding of what's going on here, and maybe also set some performance requirements.
H: We have a pretty good... well, there's a pretty good idea of where the build time comes from.
G: ...an option. I think we had an assessment of that done for us, in which case the answer was no, it did not make sense. C++ has been built but not assessed; Rust was assessed.
I: Yeah, it's worth noting for Rust, but that was highly related to how the OpenTelemetry collector works. So if the OpenTelemetry collector in C++ depends on gRPC, then it's unlikely that we'll get the time back, because that's what happened in Rust: we were installing the same libraries anyway. So it'd be worth having somebody look into it, for sure.
H: Actually, a good piece of feedback to take back to the SIGs, for Rust and C++ especially, would be to create a way to build without gRPC.
H: I seem to recall, when I looked at this, that there's no flag for it, but it still builds it regardless.
H
Something
wild
you
can
do
with
basil
basil.
How
do
you
pronounce
it
but
yeah
I
think.
E: So, just to be clear, we...
H: I mean, I think it's a significant savings in terms of image size, and I do think that's good, especially for people that have constrained, low-spec machines, or 256-gig laptops or whatever. So I actually would be in favor of reducing the image size, especially if we're not actually increasing the end-to-end wall-clock time, right? Because if we already have long builds and this just slots under the long builds, then cool, whatever; we didn't actually reduce it...
H
We
didn't
make
anyone's
experience
worse.
E: Yeah, what I always think is that in our case we constantly build the images, because we are always testing PRs, testing changes, and doing all that stuff. But users who are trying the demo just pull it, and if the image shrinks from one gig to 200 MB, then the pull time is reduced.
G: Yeah, I think Austin agrees. I think targeted build or image size reduction: if there are any other services where we can decrease the image size without a drastic increase in build time, I think we're okay with that. In this case it's, what, a 10x size reduction, a 5x size reduction, and a 3x build time increase? That's probably okay for me. I don't know if we want to formally vote in the SIG, but this one seems acceptable; if there were a much bigger spike, I would say no.
D: While you all are discussing that: there is an option, recently merged on December 1st by lalit in the C++ repo, to remove gRPC from the build. Awesome. So if somebody wants to give that one a whirl... man, if we could take that 20-minute build and bring it down, I would be ecstatic.
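
[Editor's note: a sketch of what building opentelemetry-cpp without gRPC might look like; the flag names follow opentelemetry-cpp's CMake options, but verify them against the merged change.]

```sh
# Configure the C++ SDK with the gRPC OTLP exporter off and the HTTP one on,
# so the heavyweight gRPC compile is skipped but telemetry still flows.
cmake -B build \
  -DWITH_OTLP_GRPC=OFF \
  -DWITH_OTLP_HTTP=ON \
  -DBUILD_TESTING=OFF
cmake --build build -j
```
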
G: We'll follow up; I know we have limited time.
G: I think, at least, we're okay with your PR. I don't think you need to close it; I think we can go ahead and get it merged. But in general, we'll just have to do a per-service image size and build time delta and see what's worth it and what's not.
G: Yeah, that'd be great. Before we go on to the next couple of topics: Cedric and James, I know you each either have PRs open or have topics you potentially want to discuss. So, thanks for joining us; do you have anything going on right now?
A: There are pull requests that I added around the ad service for adding metrics, but largely I'm here just to start getting involved.
G: Let's see, does this need any active SIG attention, James Park or Pierre? I haven't actively looked at it, but now we have some hanging PRs.
B: Yeah, exactly. I mean, for two reasons: one is to get involved; I'm casually contributing to the project here. My main concern is actually arm64 support, in particular for the feature flag service.
B
This
one
keeps
dying
on
me
on
my
M1
MacBook
and
it
seems
that
I
need
to
build
it
natively.
So
what
I
did
is
I
proposed
that
we
built
images
natively
enough
for
on
64
for
all
the
services,
but
actually
that's
not
so
that's
not
solving
my
problem
and
problem
solving
would
be
just
building
the
feature
Flex
service
natively
on
hype,
64
right.
H: Has that feature flag crash happened since, with the most recent releases?
D: Sorry, 1.1 I think is when we rolled it out, but 1.2 should definitely...
B: Yeah, it definitely happened to me at the end of December, so it should be in 1.2.1. Yeah, still happening.
H: Because the Erlang thing should have fixed it. I mean, like I said, I would be overjoyed to build both. The problem, and you can find the full history in a few issues, is that to build arm64 in GitHub Actions, you have to emulate arm64 on the GitHub Actions runners. That's fine, except when it comes to building the Rust and C++ images, because their gRPC build basically takes forever under emulation, and that's why we don't do it.
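
[Editor's note: the emulated multi-arch setup being described, sketched with the standard Docker actions; the service path and image tag are illustrative.]

```yaml
# Workflow steps: QEMU emulates arm64 on the amd64 runner, which is why
# heavy builds (Rust, C++/gRPC) run many times slower than native.
- uses: docker/setup-qemu-action@v2
- uses: docker/setup-buildx-action@v2
- uses: docker/build-push-action@v3
  with:
    context: ./src/featureflagservice
    platforms: linux/amd64,linux/arm64
    push: true
    tags: ghcr.io/example/featureflagservice:latest
```
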
B
So
I
I
was
working
with
another
window
before
joining
Groupon
and
what
we
did
there
was
we
had
a
multi-layered
or
a
multi-stage
build
that
took
care
of
compiling
the
the
grpc
bindings,
the
different
build
stage,
so
we
could
be.
We
were
basically
able
to
Cache
the
the
GMP
build.
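
[Editor's note: a minimal sketch of that multi-stage approach; the base image, gRPC version, and paths are illustrative, not the demo's actual Dockerfile.]

```dockerfile
# Stage 1: compile gRPC once; Docker caches this layer until its inputs change.
FROM debian:bullseye AS grpc-build
RUN apt-get update && apt-get install -y build-essential cmake git
RUN git clone --recurse-submodules --depth 1 --shallow-submodules \
      -b v1.50.0 https://github.com/grpc/grpc /src/grpc
RUN cmake -S /src/grpc -B /src/grpc/build -DgRPC_INSTALL=ON -DgRPC_BUILD_TESTS=OFF \
 && cmake --build /src/grpc/build -j \
 && cmake --install /src/grpc/build

# Stage 2: only this stage rebuilds when service code changes.
FROM debian:bullseye AS service-build
RUN apt-get update && apt-get install -y build-essential cmake
COPY --from=grpc-build /usr/local /usr/local
COPY . /src/service
RUN cmake -S /src/service -B /build -DCMAKE_PREFIX_PATH=/usr/local \
 && cmake --build /build -j
```
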
B: Would that be an option for this group, or do you think we should build everything fresh?
H: We could do that; there's GitHub Actions caching, which will do this. The problems I ran into with that are, first, you still pay the penalty for the first build any time the cache goes cold. Second, I don't know if we pin everything, but any change to the underlying dependencies causes it to be rebuilt. And considering that our release cadence is relatively slow...
H
It's
not
like
we're
building
every
day,
then
you
kind
of
lose
the
benefits
of
the
caching,
because
there's
a
lot
of
stuff
that
we
don't
pin
if
we
went
through
and
like
we
pinned
a
bunch
of
dependencies
and
we
made
sure
that
we
weren't
changing
like
the
underlying
image
layers
and
that
would
make
sense.
The
other
thing
we
could
do
is
publish
intermediate
grpc
images
where
we've
done
the
jrpc
build,
but
we're
still
paying
the
penalty
to
build
those
like
that's
why
our
discussions
are
ranging
towards.
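
[Editor's note: the GitHub Actions caching variant mentioned above, sketched with buildx's gha cache backend; it only pays off while the cache is warm and dependencies are pinned. The service path is illustrative.]

```yaml
- uses: docker/setup-buildx-action@v2
- uses: docker/build-push-action@v3
  with:
    context: ./src/currencyservice
    cache-from: type=gha         # reuse layers from previous runs
    cache-to: type=gha,mode=max  # store all intermediate layers
```
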
H: Now, that said, if you want to go try doing a multi-stage build, cool. The only thing I could figure out that would be actually faster would be to have a native arm64 runner, which GitHub Actions doesn't provide, and I've had a request out to the CNCF to use their...
H
They
have
a
thing
with
equinix
metal
to
do
like
custom
on-demand.
You
have
to
do
like
on-demand
whatever
or
you
need
which
we
could
take
advantage
of,
but
I
haven't
heard
back
from
them.
I
should
probably
go
poke
them
some
more
yeah.
H: Basically, the short version is: if we could remove gRPC, then... what's our gut feeling, like 10 to 15 minutes? 15, maybe 20 minutes is reasonable for building everything on free runners.
H
If
it's
twice
as
bad
but
we're
building
arm
and
non-arm
simultaneous
you
know
in
the
same
process,
then
I
think
a
20
minute.
An
additional
20
minute
penalty
is
fine
yeah.
But
yes,.
H: If you want to take a run at this, again, it's very possibly something I missed; I am by no means an expert here, but I did spend a lot of time on it.
G
Yeah
I
think
so,
if
you
want,
if
you
want
to
help
us
on
that
that
jerk
we
love
it,
but
we've
been
a
bit
stumped
on
trying
to
solve
it
before
so
definitely
an
area.
We
could
use
a
bit
more
helping
yep.
I: Yeah, and something else to be aware of if you're going to look into this: Rust is not as bad as C++, but the Rust one...
I: The issue that we've run into at this point is that starting the process of downloading the dependencies and doing the build takes about as long as actually doing the dependency download and build, because it just takes so long for the cargo registry index to populate. It downloads a lot of metadata about where all of these packages are before it starts doing any of the downloading and building, and really the only way to cache...
B: Would it be acceptable to have an out-of-band build for caching those? We could have a recurring build that would just build the cache layer for these images, yeah.
H: That would be great, yeah. And by the same token, if we can't split gRPC out, then having a recurring cron-job type thing where, every week or whatever cadence, it builds those intermediates, I think that would be fine. In a perfect world, I would say we get to the point where we can actually do fast builds, not publishes, but at least a fast build for every image on every PR.
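
[Editor's note: a sketch of that recurring cron-style job; the workflow triggers are standard GitHub Actions syntax, while the stage name and image tag are illustrative and assume the multi-stage layout sketched earlier.]

```yaml
name: rebuild-grpc-intermediate
on:
  schedule:
    - cron: "0 4 * * 1"   # weekly, Mondays 04:00 UTC
  workflow_dispatch: {}    # allow manual runs too
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v3
        with:
          context: .
          target: grpc-build   # the cacheable stage from the multi-stage sketch
          push: true
          tags: ghcr.io/example/grpc-intermediate:latest
```
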
H: Because right now we don't really have a great way to make sure at PR time that people haven't broken things, that things are actually working, other than people checking out the PR and building it locally. So, in a perfect world, we're at five-to-ten-minute builds, and then when people do a PR, images are being built either as a final pre-merge check or on every push.
D: If we had the intermediates, and we built those intermediates every week or whatever, we could totally do it on pushes.
H: Yeah, I mean, if we wanted to really be fancy, we could set it up so that, if everything was self-contained, it only builds what changed.
D: I want to fix gRPC first. Let's rip it out; let's get an issue up on the Rust SIG asking them to split it. It looks like it might be split already for C++. If we could do that, I think all these discussions become almost moot, because nobody cares anymore when the thing builds in 10 minutes.
G: Makes sense. Well, I think we're at time, so we'll have to cover everything else in Slack or asynchronously. But welcome back, everyone, and happy 2023!