From YouTube: 2022-02-16 meeting
F
Okay, let's start. Is Pablo here, I think?
D
Sure. So I opened this issue to propose basically sticking to the OpenTelemetry Go approach for supporting different Go versions when using the OpenTelemetry Collector as a library. Roughly, what they do is this: whatever versions are supported by the Go team, that is, whatever versions will still have bug fixes backported and security issues fixed, are the ones that are supported by the library, that are tested on CI, and that can be used with the library code.
C
Yeah, so in OTel Go we were mainly looking to mirror the upstream Go support policy. We know that users are not always going to use the latest version of Go immediately once it's released, and it may take some time, but we also don't want to be trailing with three or four or five minor versions of support, especially when those are not supported upstream.
C
So we've decided that we would set our floor for support, our minimum supported version, as N minus one, where N is the current minor version, and we would increase that when a new minor version has its first patch release, or, I think, 30 days after the minor release if there's no patch release in that time.
C
The point is that there are actually two separate steps: we would start supporting the newest one when it has a patch release, or after one month, and then we would drop support for the old one after three months.
C
At some point there may be only one version supported... no, there would be three versions supported for some period of time. Like, currently Go 1.16 is our minimum version and 1.17 is the current version. When 1.18 gets released, after either one month or a 1.18.1 patch release, we would start testing against 1.18 while still having support for 1.16, and then three months after 1.18 is released we would drop support for 1.16. Okay.
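As a rough sketch of the timeline just described, assuming illustrative names and plain Go (this is not the project's actual CI logic):

```go
package main

import "fmt"

// supported sketches the policy above for current minor N: the floor is
// N-1; N+1 is added once it has a patch release or one month passes; and
// N-1 is dropped three months after N+1 is released, so up to three
// minors overlap for a while.
func supported(n int, nextAdopted, oldDropped bool) []string {
	var vs []string
	if !oldDropped {
		vs = append(vs, fmt.Sprintf("1.%d", n-1))
	}
	vs = append(vs, fmt.Sprintf("1.%d", n))
	if nextAdopted {
		vs = append(vs, fmt.Sprintf("1.%d", n+1))
	}
	return vs
}

func main() {
	fmt.Println(supported(17, false, false)) // before 1.18: [1.16 1.17]
	fmt.Println(supported(17, true, false))  // after 1.18.1 (or one month): [1.16 1.17 1.18]
	fmt.Println(supported(17, true, true))   // three months after 1.18: [1.17 1.18]
}
```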
F
Okay,
but
anyway,
I
think
that
the
general
idea
that
we
want
to
do
the
same
thing
as
the
go
sdk
does.
I
think
it's
it's
a
good
idea.
Let's
clarify
the
things
that
were
unclear
a
bit
in
the
way
they
are
written
in,
go
repository
and
then
and
then,
let's
make
sure
that
the
other
maintainers
approvers
are
on
board,
and
I
think
personally
I
I
I
find
finding
a
good
good
idea.
C
Recalling now, I think my preference had been to do them in lockstep, but this was the compromise that we settled on. There hasn't been a new minor release since we adopted this policy, so we haven't actually exercised it just yet. Okay.
F
Thanks, Pablo, thanks for looking into this. All right, what's the next one: transform and redaction? Yes.
E
Hello, yeah. So after last week's SIG I went in to compare the two and brought up some of the differences. Let me know if I should screen share, but feel free if you want to.
E
Okay, sorry, you should see my screen. So, in terms of what they do: the redaction processor set out to remove span attributes with keys that are not in the allowed schema, and you can do that; it also set out to replace attribute values that match certain regexes, and in principle you can do that too. So I do have a question about whether the processing language supports regexes or if it's just wildcards; that's somewhat unclear.
E
That's an expressiveness question. And then the last two questions I had were about metrics. In the use case that the redaction processor is trying to address, the schema is in flux, and the purpose of the redaction is to enforce the schema across possibly multiple teams that might not be in sync and might be introducing their own span attribute keys. So the question is: can we have metrics and alerting when we remove keys using the transform processor?
C
So, to answer the first question: I think one of the goals of the transform processor is to create a language for operating on attributes and signals, with functions that can be user-defined. So even if the replace-wildcards function doesn't support regex, a replace-regex function could be created, and it would probably make sense to have it as part of the core of that processor rather than requiring a user to define it. As far as whether it can operate on a set of attributes instead of a single attribute, that I'm not clear on. I think the language would support that, but I would have to defer to Anuraag, who's obviously not here given the time zones.
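For illustration only, a minimal sketch of the kind of user-defined regex function being discussed; the name, signature, and attribute representation here are assumptions, not the transform processor's actual API:

```go
package main

import (
	"fmt"
	"regexp"
)

// replaceRegex rewrites the value of one attribute key by applying a regex
// substitution, the capability being asked about above. A core function
// like this would spare users from having to define it themselves.
func replaceRegex(attrs map[string]string, key, pattern, replacement string) error {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return err
	}
	if v, ok := attrs[key]; ok {
		attrs[key] = re.ReplaceAllString(v, replacement)
	}
	return nil
}

func main() {
	attrs := map[string]string{"http.url": "https://example.com/user/12345"}
	// Redact numeric user IDs: something a plain wildcard match cannot express.
	if err := replaceRegex(attrs, "http.url", `/user/\d+`, "/user/{id}"); err != nil {
		panic(err)
	}
	fmt.Println(attrs["http.url"]) // https://example.com/user/{id}
}
```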
F
I think this is a very useful exercise. Even if we end up not doing this using the transform processor, I would expect the expressiveness of the language to at least allow this to be possible. But yeah, since Anuraag is not here, and he's the best person to answer these questions, I think we may need to have this discussion happen offline on the GitHub issue. But the specific question I have here is: can you talk about the metrics part? I didn't quite get it. What do you want to happen here?
E
So, I mean, the use case I'm most familiar with, the one that I personally need, is this: we have a team that owns the tracing infrastructure and would own the span schema enforcement, using a keep_keys language statement. But we're not the ones building the services that are being traced; there are a lot of different teams building services, all using the shared tracing infrastructure and having the schema enforced against them. So when a service maintained by a different team introduces a new span attribute, it ends up being removed, because we defined the schema and that method removes the span attribute.
F
So the transform processor in this case has kind of a custom metric which says how many keys were dropped by this particular function? So you're essentially looking at a potentially significant volume of different specialized metrics which describe what has happened during the processing.
A
Yeah, that's the decision criterion here. This is a useful use case, and what is necessary for the transform processor to replace a redaction processor is that we have metrics for the transform processor. So if the decision is that we don't want metrics for the transform processor, then we need a separate redaction processor. So the question is: are we pushing more toward unification and covering the use cases, or are we pushing more toward separation because we don't like the idea of metrics on the transform processor? Yeah.
F
Yeah, I think that's a good question. I think we need to decide, more generally, whether we want to have this concept of the transform processor exposing metrics about the more granular details of its operation, right? Not just how many data points it received and processed, but, specifically with regard to a particular function, how many attributes it dropped or did something else. Right? I don't know.
H
Besides the fact that now I understand the need for the metrics part, which I think can be covered... I mean, I'm happy even to have a separate one, because I don't want people to pay the cost of having these metrics if they don't need those metrics. So...
C
I wonder whether it could be handled by either additional functions that record metrics, you know, a remove-attribute-with-metrics, or a wrapping function that can record metrics, you know, record_metrics(remove_attribute). I'm not sure what the exact syntax in the language would look like for that, but it seems like there might be a way to have a single processor and give the flexibility, at the point of use, to decide whether those metrics should be tracked or not. And you're saying...
F
Make it composable, instead of making it part of the keep... what was the name of the function? But keep_keys maybe reports the number of attributes it dropped, and then there is another function which emits a metric based on the value returned by keep_keys, and it's up to you whether you want to use that or not. Make it composable, right? Yeah.
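To make that composable shape concrete, a minimal hypothetical sketch in plain Go; keepKeys, recordMetric, and the counter type are stand-ins invented for illustration, not actual collector or transform processor APIs:

```go
package main

import "fmt"

// counter stands in for a real self-observability metric instrument.
type counter struct {
	name  string
	value int
}

func (c *counter) Add(n int) { c.value += n }

// keepKeys deletes every attribute whose key is not allowed and returns how
// many it dropped, so that metric emission can be layered on top of it.
func keepKeys(attrs map[string]string, allowed map[string]bool) int {
	dropped := 0
	for k := range attrs {
		if !allowed[k] {
			delete(attrs, k)
			dropped++
		}
	}
	return dropped
}

// recordMetric is the opt-in wrapper: users who want the metric compose it
// around keepKeys; users who do not simply call keepKeys directly.
func recordMetric(c *counter, dropped int) {
	c.Add(dropped)
}

func main() {
	attrs := map[string]string{"http.method": "GET", "team.debug": "x"}
	allowed := map[string]bool{"http.method": true}
	c := &counter{name: "transform.attributes_dropped"}

	recordMetric(c, keepKeys(attrs, allowed)) // record_metrics(keep_keys(...))
	fmt.Println(attrs, c.value)               // map[http.method:GET] 1
}
```

The design point is that only callers who wrap the function pay for the metric; everyone else gets the plain keep behavior.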
C
Yeah, I would just say, without trying to get into solutioning here, that we should try to make our guiding principle to unify as much as possible on the transform processor, and only look at having a separate processor when we find we absolutely cannot have a single processor that does it.
F
But I'd like... I think it's worth exploring more the approach of having this in the transform processor, and only if we don't see a nice, reasonably achievable, easily achievable way to do it, resorting to having this in the redaction processor, if possible. I mean, if we see that this is going to be a multi-month effort to support the composable metrics, etcetera, in the transform processor, then maybe it's too long, right? Then...
G
Yes, I'm offering my help. I'm just here to bother you about this, because Google is still interested in moving to our new exporters. So if there's a task that can be broken off that I can help with, I'm happy to do so.
H
By the way, to give you a heads-up: I think we are good friends with Google, so with the new policy of deprecation and stuff, I'm feeling confident now to start depending on that, as long as you update pretty frequently to the latest version.
G
Okay, so we can go ahead and move to our new exporter as long as I'm responsive?

H
Yeah, I think it's a reasonable thing. I mean, as long as you upgrade pretty frequently.
H
Separately on that, I will talk to Dmitry to start assigning things to you. Okay, thank you.
F
Well, I think right now with pdata we're still deciding what exactly is the approach we're taking, and until we know that, it's probably going to be difficult to start writing actual code and making the actual changes. I think we didn't come to any conclusion in the issue that Dmitry created.
F
So, to summarize: Dmitry presented, I think, three different approaches for how we could split pdata, and while discussing the options, I guess Bogdan asked the question of whether we still want to do this, considering that there is no easy and nice way to do it, and I think it's a good question to ask. And then Pablo, I think, convinced me that not doing it is not the right way, that is, not doing it by actually deviating from the recommended Go approach.
D
Do you have a single package and do your major versions whenever there's a breaking change? Is that it?
F
Have a single package for the stable ones, for traces and metrics today, right? And then logs are unstable today; we move them to another, separate package, and then when we need to bring the logs, for example, to the stable one, we can increase the major version number if that requires any breaking changes.
C
Something like that, or a separate module. I think if we move them to separate modules, bringing an unstable signal to stable shouldn't require a major version increase, because it should be additive.
F
Yeah, okay, sorry, I misrepresented what I actually wanted to say. We put everything in one module, including the logs, which are unstable. We understand they are unstable, but from the perspective of pdata they are declared stable, right? So when we make changes to the logs data model, or the protobufs, which are breaking changes, then the pdata module, the one and only pdata module, gets a new major version.
F
That's what I wanted to say. That's a different approach: a single module which includes unstable data types, unstable signals as well, but when you actually need to make changes to those unstable signals, you increase the major version number according to semver. So we kind of ignore the fact that they are unstable today, but we follow the semver semantics expected by Go.
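A hypothetical illustration of the cost of that single-module approach under Go's semantic import versioning; the module paths below are invented for the example:

```go
package main

import "fmt"

func main() {
	// Under semantic import versioning, a breaking change anywhere in the
	// module, even in an "unstable" logs API, bumps the whole module:
	//
	//   v1: module go.opentelemetry.io/collector/pdata     (logs included)
	//   v2: module go.opentelemetry.io/collector/pdata/v2  (after a breaking
	//       logs change; every consumer must update its import paths)
	//
	fmt.Println("any breaking logs change: pdata -> pdata/v2")
}
```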
H
Haven't read it. So we have a set of packages, with an internal package that is shared between the objects in pdata. If we split pdata into different packages, it's fine as long as we keep them in the same Go module, because then they can access the internal package.
H
A set of internals; I'll explain afterwards what is in the internal package. But in there we have an internal package which, by default, is not exposed publicly, and we can share that internal package between different packages as long as they are in the same module. Now, if we split them into different modules, we cannot share the internals, so we would have to expose the internal package. Okay, now the problem:
H
What we have in the internal package is some generated classes using gogoproto, and we don't want to expose those, because we know it's deprecated, we know we want to make changes, we know we want to improve them, so we don't want to guarantee anything on those. That's why we have pdata, where we guarantee backwards compatibility, but not on that internal stuff.
H
That's true, then that would solve the problem, but I don't believe it does, because that internal package has to be in its own module, because otherwise go mod does not know how to import the dependencies and stuff. So most likely you put the internal stuff into an independent module, but if you put it in internal while it's a different module, that doesn't mean it's internal: it's internal just by convention, but it is exposed publicly.
C
So you put the global package in internal, and it's able to do so.
C
Yeah, the internal package shares a common root path with it. Yeah, the limitation on use of internal packages is based on the import path hierarchy, not the module hierarchy. But you, you...
H
In what package do you have the global? Sorry, not what package, what module, in go.opentelemetry.io/...?
C
Correct, the SDK module. Yes, module, sorry, yeah. I would say that this feels kind of wrong: it shouldn't be that you can use internal packages across a module boundary, but because of the import path hierarchy they do have to be somewhat related. And all of that predates modules. And you verified the...
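A runnable sketch of the rule being described; the paths are illustrative, and the check is simplified (the Go toolchain matches the literal path element "internal"), but it shows that visibility follows the import path tree, not go.mod boundaries:

```go
package main

import (
	"fmt"
	"strings"
)

// canImportInternal reports whether importer may import target when target
// is an internal package: the importer must live inside the tree rooted at
// the directory containing the "internal" element. Simplified: this matches
// the last "/internal" substring rather than a proper path element.
func canImportInternal(importer, target string) bool {
	i := strings.LastIndex(target, "/internal")
	if i < 0 {
		return true // not an internal package, anyone may import it
	}
	root := target[:i]
	return importer == root || strings.HasPrefix(importer, root+"/")
}

func main() {
	const internalPkg = "go.opentelemetry.io/collector/pdata/internal"

	// Allowed: same pdata/ subtree, even if it were a separate Go module.
	fmt.Println(canImportInternal("go.opentelemetry.io/collector/pdata/plog", internalPkg)) // true

	// Forbidden: outside the subtree that contains "internal".
	fmt.Println(canImportInternal("github.com/example/app", internalPkg)) // false
}
```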
F
Yeah, is this some sort of undefined behavior that we'll be exploiting, subject to change somehow? Should we be careful with this?
H
Yeah, I checked that as well. All the internals being on godoc doesn't mean anything.
H
So are we okay, if this breaks somehow, to... if we have three modules, to unify them under one module, as long as we don't change the import path? I know it's a breaking change in go.mod, but that's something that any tool fixes by itself: like, if I depend on you and you didn't upgrade, I'm still going to be able to upgrade and do things.
C
Assuming we had, like, a pdata/logs and a pdata/traces, and traces were under the pdata module and logs were under a pdata/logs module: if we later just remove the pdata/logs go.mod and, yes, move that up, yeah, I think that's the recommended way to accomplish that. This is the plan that we have for the OTel Go metrics when they become stable, so I think that should be the way to go.
C
Yes, what we would do then is make one final release at pdata/logs that has the same API as the one that is promoted to stable, and then at that point any further changes will be additive and non-breaking, and getting a new version would require a go.mod change anyway.
F
Can you post that in the issue as well? Because Dmitry is not here.
H
Why? No, no, we will have to make breaking changes to the resource, because it will have to be moved into a different package. Correct, I mean in the proto, or in the pdata, yeah. But personally, if you ask me, I would map to this structure, even though that structure is not ideal, maybe.