From YouTube: 2021-08-31 meeting
A: Thank you. Let's give another minute for anyone else to join, and if you're only on the call, please remember to write your name in the attendees list.
A: Okay, I think we can start. If you are on the call, please make sure to write your name in the attendees list — oh yeah, I already said that. No updates, so I'll just see if there are any topics in the agenda. I don't see anything, so I just added some release milestones here. If anyone has questions or anything, we can go through that now; otherwise we'll just go through what I have put here. All right, yeah — I can give an update.
A: I think we did release alpha 2 last Friday. It contains pretty much all the instruments, and we also released the Prometheus exporter. It was not the first version — we already had one about one and a half years back; this is the new version, which supports the current metrics API, although it has a major bug which basically makes it useless. That's why we'll be releasing alpha 3 this Friday, which will contain even more changes to the exporter interface.
A: I have the draft PR out already. At the end of this PR I would expect all the exporters to look very similar to the tracing and logging signals. It's not fully baked yet, so I'm still working through it. OTLP is mostly done, but I'll sync with Alan to make sure I haven't broken anything, and Prometheus is also fixed.
A: I was just testing it a few minutes earlier, so it should be ready to be part of the release this Friday. Right now the PR is fairly big, because it's mostly my work in progress, so I'll break it down and send smaller PRs if that makes reviewing easier.
A: But basically, for the key things here, I have updated the milestone. It's mostly a performance thing. With this change, the collection — the export — would be zero-allocation, very similar to the tracing part, and I also removed the explicit locks in two of the aggregations. For histogram I did try, but I couldn't figure out the right way.
A: So histogram would still have a lock, but for all the other aggregations there are no locks anymore in the aggregation part. We still use the dictionary, so that still has locks; that is something we can revisit later.
A: Okay, and I also implemented some caps, just to protect the SDK from indefinitely growing memory, because I think the Java folks reported these issues after they started shipping metrics last week. So we are just trying to be safer. There is no configuration for customers to change it — I defined some hard limits, like 1000 metric streams at the max, and within each metric up to 2000 metric points, something like that. We'll figure out what's a good number, because the spec is not likely to specify anything about this; this is just to protect the SDK. Yeah, and just going through the milestone.
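The hard-limit idea described here — capping the number of metric streams, and the number of points per metric, so memory cannot grow without bound — can be sketched roughly as follows. This is a Python sketch of the concept only, not the .NET SDK's actual code; the class and method names are illustrative, and the limit values come from the discussion above.

```python
MAX_METRIC_STREAMS = 1000   # cap on distinct metric streams
MAX_METRIC_POINTS = 2000    # cap on points (time series) per metric

class MetricStream:
    def __init__(self, name):
        self.name = name
        self.points = {}  # tag-set -> aggregated value

    def record(self, tags, value):
        key = tuple(sorted(tags.items()))
        if key not in self.points and len(self.points) >= MAX_METRIC_POINTS:
            return False  # drop: per-metric point cap reached
        self.points[key] = self.points.get(key, 0) + value
        return True

class MeterState:
    def __init__(self):
        self.streams = {}

    def get_stream(self, name):
        if name not in self.streams:
            if len(self.streams) >= MAX_METRIC_STREAMS:
                return None  # drop: stream cap reached, no config to raise it
            self.streams[name] = MetricStream(name)
        return self.streams[name]
```

Measurements beyond either cap are silently dropped rather than allocated, which matches the "protect the SDK, not configurable" intent described above.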
A: So that's what I just described for the release coming in a couple of days, this Friday, and we'll be doing another alpha release next week, because given that this is a fairly big change on the exporter side, I expect there will be some feedback from exporter authors. One of them is myself, writing Prometheus; Alan is writing OTLP; and Utkarsh is doing a Microsoft-internal exporter as well. So we'll get feedback from three and address any of it.
A
Also,
there
is
a
spec
now
written
for
metric
grader,
it's
not
merged
as
of
this
morning,
but
I
expect
it
to
be
merged
like
by
end
of
this
week,
so
by
next
release.
If
that
stick
is
marked,
we'll
try
to
align
ourselves
with
this
thing
we
are
like
conceptually
similar,
except
that
we
don't
use
the
same
name.
A
We
use
the
term
like
push
and
pull
metric
processor
and
it
should
be
like
modified
to
match,
whatever
the
spec
says
and
also
quite
likely
will
support
multiple
exporters
again,
since
the
spec
doesn't
say
anything
about
how
to
support
multiple
things
we
by
default,
do
it
as
independent
pipeline.
So
if
you
have
like
two
exporters,
they
will
all
be
working
on
independent
aggregator
stores
and
everything
would
be
independent
and
like
later,
if
you
think
we
need
to
like
improve
the
performance.
For
that
scenario,
we
can
try
to
be
like
really
smart
about
it.
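The "independent pipelines" default described here — each exporter getting its own aggregator store, with nothing shared — can be sketched like this. Again a Python sketch of the idea only; all names are illustrative, not the SDK's API.

```python
class Pipeline:
    """One exporter with its own, fully independent aggregator store."""
    def __init__(self, exporter):
        self.exporter = exporter
        self.store = {}  # per-pipeline state, never shared

    def record(self, name, value):
        self.store[name] = self.store.get(name, 0) + value

    def export(self):
        self.exporter(dict(self.store))  # hand a snapshot to this exporter

class MeterProvider:
    def __init__(self, exporters):
        # two exporters => two pipelines; each measurement fans out to both
        self.pipelines = [Pipeline(e) for e in exporters]

    def record(self, name, value):
        for p in self.pipelines:
            p.record(name, value)
```

The cost is that every measurement is aggregated once per pipeline, which is the duplication a shared-pipeline optimization would remove — the trade-off dismissed above as rarely needed in production.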
A
Having
the
same
pipeline
shared
by
multiple
but
quite
likely,
we
won't
need
that
because,
based
on
what
I
hear
from
the
metrics
back,
it's
it's
quite
likely
that
in
production,
people
would
only
run
like
one
exporter
and
also
this
is
matching
what
we
have
for
tracing
so
even
for
tracing
today,
if
you
had
like
two
exporters,
it
kind
of
runs
to
separate
pipeline
like
each
with
its
own
circular
buffer
to
keep
things
so,
okay,
not
like
any
worse
than
that.
A
So
after
that
I
will
like
with
this
thing
like,
I
would
say
that
we
are
like
mostly
stabilized.
We
don't
expect
like
any
big
major
changes.
A
Everything
after
this
would
be
like
additive,
so
I'm
planning
to
rename
it
as
beta
after
the
fourth
alpha
and
that's
where
we
are
introducing
views
which
would
be
like
an
additive
change.
I
don't
expect
any
change
for
export
interface
or
instrumentation.
So
it's
just
an
additive
change
like
again
more
features,
so
there
is
a
thing
called
xmlr
in
the
spec
which
just
got
merged
two
days
back
yeah.
So
that's.
A
The
only
thing
which
I
can
have
some
visibility
into
and
probably
like
october,
is
good
for,
like
other
performance,
optimizations
and
things
like
that.
So
any
questions
on
like
milestone.
I
have
like
asked
for
everyone
who
is
working
on
the
matrix,
but
any
questions
so
far
on
this.
A
All
right
yeah,
so
there
is
an
ask
from
the
spec
to
mark
the
sdk
as
experimental
and
the
api
spec
as
stable.
However,
the
concern
is
most
languages.
There
is
no
sdk
implementation
yet
so
there
isn't
much
feedback
so,
as
per
the
spec
and
maintenance
meeting,
the
specs
would
only
be
marked
stable
after
at
least
three
languages
are
like
ready
for,
like
beta
ready
with
like
beta
and
they
implemented.
So
one
of
them
is
java
and
dot
net
is
second
and
hopefully
like
by
the
no
go
would
be
there.
A
So
I
trying
to
see
like
if
any
of
you
work
with
customers.
If
you
have
any
feedback,
please,
like
let
us
know
like
at
the
earliest
in
case
of.net,
it's
the
api
is
already
stable.
It's
already
like
a
like
code
freeze.
We
have
the
last
fix
spending
like
about
a
week
and
a
half
back,
so
there
wouldn't
be
any
more
code,
changes
or
apa
changes
in
metrics,
so
in
the
apa.
A
So
if
you
have
any
feedbacks,
please
let
us
know
because
we
promised
we
would
be
doing
the
bitter
release
on
middle
or
end
of
september,
we'll
get
dates
from
java,
because
this
is
basically
going
to
affect
when
wood
these
specs
would
get
marked
as
stable
or
experimental
or
feature
freeze
whatever
that
term
they
are
using.
A: Okay, that's pretty much the update I have. So, Alan, you wanted to discuss something about the OTLP bugs — can you briefly describe them? I mean, I think I'll let you go first, and maybe after that I can describe what I intend to do with the refactoring part.
B: Yeah, sure. So yeah, I noticed a few bugs in my implementation of the OTLP exporter, namely around transforming the histogram data, because when I wrote it, it was before we had histogram support. Anyway, it should be a pretty simple bug fix — I just was not handling the buckets correctly, or transforming the buckets correctly.
B: Okay. Separate from the bug fix in the OTLP exporter — this might be a minor thing, but I did notice that in the histogram aggregator, our histogram buckets have a low boundary and a high boundary, and the way the .NET SDK has modeled it, the high boundary is exclusive rather than inclusive, and that's different from how the OTLP data model does it. I was wondering if that was intentional or not.
A
Utkarsh
writes
the
same
question
like
a
couple
of
days
back,
I
thought
I'd
ask
victor
whether
it
was
intentional,
maybe
like
he
wrote
it
when
the
stick
was
not
stable,
so
I'll
confirm
with
victor
whether
it
was
intentional.
If,
yes,
there
is
any
reason
otherwise
I'll
consider
it
as
a
typo
or
like
gesture,
but
we'll
fix
it.
Okay,
we
got
the.
I
mean
the
inclusive
and
exclusive
like
reversed,
that's
what
okay,
okay
yeah!
So
it's
the
same
as
what
lukas
mentioned
like
two
days
back
so
yeah.
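The reason this matters: the OTLP data model counts a value into bucket i when bounds[i-1] < value <= bounds[i], i.e. the upper boundary is inclusive. An SDK that treats the upper boundary as exclusive puts values that land exactly on a boundary into a different bucket. A small sketch of the difference (illustrative Python, not the exporter code):

```python
import bisect

def bucket_index_otlp(bounds, value):
    # OTLP convention: bucket i holds bounds[i-1] < value <= bounds[i]
    return bisect.bisect_left(bounds, value)

def bucket_index_exclusive(bounds, value):
    # upper-bound-exclusive convention: bounds[i-1] <= value < bounds[i]
    return bisect.bisect_right(bounds, value)

bounds = [0, 5, 10]
# a value sitting exactly on a boundary lands in different buckets
assert bucket_index_otlp(bounds, 5) == 1       # counted in (0, 5]
assert bucket_index_exclusive(bounds, 5) == 2  # counted in [5, 10)
```

Only boundary-exact values disagree between the two conventions, which is why the bug is easy to miss until an exporter compares counts against the OTLP model.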
A: All right, yeah. I haven't touched the histogram yet in the refactoring, because I could not figure it out — I mean, I thought I had a lock-free way of updating the histogram, but that didn't work out the way I expected, so it still has a lock. So I didn't include it here, but I will see if there is any way we can avoid the lock. All the other aggregations should be mostly good.
A: It's maintained only for some backward-compatibility reason with — I don't know, maybe OpenMetrics or OpenCensus. In the SDK spec, the spec for aggregations got merged about a week back, and it explicitly lists the kinds of aggregations. There is no Summary there, so that would mean we won't have Summary, at least in v1 — maybe in the future there would be. I think the only thing which is missing is — no, are we missing anything?
A
I
think
histogram
can
cover
like
everything,
so
I
don't
know
whether
like
there
would
be
any
need
for
summary
somewhere
in
the
spec
or,
like
the
hotel
page
says,
the
summary
is
only
maintained
for
like
back
head
compatibility,
it's
only
in
doubt
it'll
be
here
it's
only
in
the
otlp
model,
but
nowhere
else
it's
mentioned.
B: Yeah, I think where my colleague was coming from was that, right now, as it stands, New Relic is an example of a vendor that has a decent visualization for an OpenTelemetry Summary metric.
B: As it stands right now, histogram support is on the horizon, but it's pretty new, and so I wouldn't expect instrumentation to actually be generating summary metrics. But the ability to configure it, maybe through a view once that capability is there, might be desirable. I'm not saying that super strongly right now, because...
A: Yeah — I mean, if it's an important thing, we should definitely raise it in the spec meeting. Also because, at least in the initial version, there is no way a vendor can write a new aggregator: we will not be exposing any mechanism for someone to replace the built-in aggregators with a new one. So if it's not supported by the SDK, then you don't have any way to create it.
A
So
if,
if
it
is
important
that
we
should
resit
in
the
thursday
or
tuesday
matrix
tech
meeting
and
see
whether
it
is
a
oversight
or
it
was
intentionally
left
out
in
the
aggregator
model,
because
that
aggregator
was
like
merged
like
a
what
a
week
and
a
half
bag,
so
there
was
no
like
mention
at
all
about
it,
so
I
would
assume
I
mean
I
assumed
that
it
was
already
like
gone.
It's
just
there
for
like
historical
reasons,
that's
what
I
see,
but
if
it
is
otherwise
yeah
that
we
can
definitely
add
it
back.
A: If the output produced by the built-in aggregators is not sufficient for some backend, then it should be raised as a spec issue, because there is no way you can work around it.
A
We
had
a
similar
conversation
for
the
microsoft
specific
exporter,
which
was
expecting
like
min
and
max
as
well,
and
there
is
no
none
of
these
aggregators
are
producing
the
nmx
right
right
right
but
like,
and
there
is
no
way
we
can
solve
that
in
exporter,
because
we
don't
allow
any
extensibility
point
so,
but
for
us
it
was
considered
a
like
optional
thing,
so
we
decided
not
to
perceive
it.
But
if
there
are
any
similar
things,
it
would
be
useful.
A
All
right,
thank
you,
yeah,
okay,
since
there
are
no
other
questions.
I
can
quickly
summarize
what
I
am
trying
to
do
here.
So
folks,
who
are
already
familiar
with
our
tracing
and
logging
pipeline
knows
how
do
we?
How
does
our
export
pipeline
work?
So
we
have
a
thing
called.
We
have
a
fixed
sized
buffer
and
I
think
we
call
it
circular
buffer.
It's
a
circular
buffer.
It's
a
fixer
size
thing,
so
we
create
one
at
the
startup
and
all
the
activity
or
law
would
be
just
added
into
that
fixed
buffer.
A: So even if you produce activities at a very fast pace — faster than the exporter can keep up — memory will not grow beyond a limit. The cap is enforced at the circular buffer, so we just drop the excess. And for the actual exporter, we did not expose the circular buffer; we did not give the circular buffer to the exporter. Instead, we wrote our own abstraction on top of the buffer.
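The fixed-size circular buffer described here — allocated once at startup, with the cap enforced by dropping when full — can be sketched as follows. This is a simplified, single-threaded Python sketch of the concept; the real .NET buffer is lock-free and thread-safe, which this does not attempt to show.

```python
class CircularBuffer:
    """Fixed-size buffer allocated once; items are dropped when it is full."""
    def __init__(self, capacity):
        self.slots = [None] * capacity  # pre-allocated at startup
        self.capacity = capacity
        self.head = 0                   # next slot to read
        self.count = 0
        self.dropped = 0

    def add(self, item):
        if self.count == self.capacity:
            self.dropped += 1  # cap enforced here: drop instead of growing
            return False
        self.slots[(self.head + self.count) % self.capacity] = item
        self.count += 1
        return True

    def take(self):
        if self.count == 0:
            return None
        item = self.slots[self.head]
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return item
```

Because the slot array never grows, the steady-state export path performs no allocations — the property the speaker wants the metrics pipeline to share.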
A: So, in short, the export cycle — the exporting part — does not have any allocations; it's just iterating through a pre-allocated array, and the actual iterator itself is a struct. However, for metrics it was not working that way. Basically, if you look at the current code, what we are doing is creating a new list — a list holding the aggregators — for each export cycle, and we copy things from the in-memory state into that list, and we give that list to the exporter.
A: So obviously, every time you export, there will be a lot of lists being created. That was the number one issue I was trying to solve, but while I was at it, I tried to fix some other things as well. The solution is very similar to what we have for the other signals.
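The before/after contrast being described — building a fresh list of aggregator snapshots every export cycle versus reusing one pre-allocated buffer — can be illustrated like this. A Python sketch of the allocation pattern only; class names are illustrative.

```python
class OldStyleCollector:
    def __init__(self, state):
        self.state = state  # list of live aggregator dicts

    def collect(self):
        # allocates a brand-new list (and copies) on every export cycle
        return [dict(point) for point in self.state]

class NewStyleCollector:
    def __init__(self, state, capacity):
        self.state = state
        self.buffer = [None] * capacity  # allocated once, reused each cycle

    def collect(self):
        # writes into the same pre-allocated buffer every cycle;
        # the exporter reads buffer[0:n]
        n = min(len(self.state), len(self.buffer))
        for i in range(n):
            self.buffer[i] = self.state[i]
        return n
```

In a garbage-collected runtime the old style turns every export interval into fresh garbage; the new style's per-cycle allocation count is zero once the buffer exists.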
A
So
we
have
a
pre-allocated
array-
it's
not
circular
or
anything,
because
there
is
no
ability
to
remove
a
data
point
once
it's
created,
so
we
have
like
hardcoded
in.
We
have
an
array
of
size
n
for
each
metric
string.
A
So
what
that
really
means
is,
if
you
have
a
metric
when
that
metric
is
created
for
the
very
first
time
we
have
a
callback
in
dotnet,
and
at
that
time
we
allocate
like
an
array
of
size
n,
which
stores
all
the
creators
and
all
the
subsequent
operations
would
be
just
operating
on
that
fixed
array.
So,
after
the
initial
creation
of
the
metric
stream,
there
is
no
extra
location
and
for
exporting
I
added
a
new
overload
to
the
existing
batch
so
that
the
batch
knows
how
to
navigate
through
the
list
of
metrics.
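The "allocate once, on first sight" pattern just described — a creation-time callback performs the only allocation, and every later update writes into the fixed array — might look roughly like this. Illustrative Python only; the callback name is an assumption, not the .NET API.

```python
POINTS_PER_METRIC = 4  # "N"; the discussion mentions a much larger real cap

class MetricStreamStorage:
    def __init__(self):
        self.points = None  # nothing allocated until the metric is first seen

    def on_instrument_published(self):
        # creation-time callback: the only allocation this stream ever makes
        if self.points is None:
            self.points = [0] * POINTS_PER_METRIC

    def update(self, index, value):
        # hot path: plain writes into the fixed array, no allocation
        self.points[index] += value
```

Everything after `on_instrument_published` touches pre-existing slots, which is what makes the steady-state record/export path allocation-free.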
A
So
it's
mostly
the
same
as
activity.
So
if
you
once
you
open
the
pr
you'll
see
that
I
just
modified
existing
patch.
However,
like
for
metric,
there
is
a
new
thing
which
is
under
each
metric.
A
We
have
like
up
to
end
data
point
so,
and
this
is
something
which
I'm
still
like
working
on
so
as
of
today,
the
individual
metric
points
are
a
struct
I
and
there
is
no
way
I
can
use
the
batch
to
navigate
this
metric
points,
because
patch
only
takes
a
t
of
a
class,
so
you
cannot
take
structs.
A
So
what
I
did
for
now
is
I
created
a
new
thing
thing.
I
called
it
patch
metric
which
allows
the
user
to
walk
through
the
metric
points
in
an
analog
freeway.
So
that's
the
current
change
and
I
don't
I'm
not
100
convinced
that
it
should
be
a
struct.
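The idea behind that separate walker — iterating over the metric-point array in place, by index, rather than yielding boxed copies through the class-constrained generic Batch — can only be gestured at in Python, which has no value types. The sketch below shows just the index-bounded, in-place traversal; all names are illustrative.

```python
class MetricPointBatch:
    """Walks metric points in place by index, without copying them out."""
    def __init__(self, points, count):
        self._points = points  # the metric's pre-allocated point array
        self._count = count    # how many slots are actually in use

    def __iter__(self):
        # expose each live point directly; no intermediate list is built,
        # and unused tail slots are never touched
        for i in range(self._count):
            yield self._points[i]

points = [{"tags": ("a",), "sum": 3}, {"tags": ("b",), "sum": 7}, None]
batch = MetricPointBatch(points, 2)
```

In the C# version the payoff is avoiding boxing each struct point into an object; here the analogous property is simply that no per-iteration container is allocated.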
A: We have a pre-allocated array of, say, a thousand metrics, so if you keep creating new instruments, you will cap at 1000, and within each metric we have an array of metric points. That one is a struct — this is what I'm not a hundred percent sure about. So that's the overall structure. We used to have four separate aggregators, but right now all the logic is folded into this class itself, so when you get an update...
A
We
just
do
the
update
right
there
without
going
any
place
and
like
by
doing
this,
we
did
some
optimization
it's
because
previously
in
the
whole
path,
we
were
kind
of
looking
up.
What
at
the
letter
to
pick
up
from
a
dictionary
that
that
part
is
still
there.
However,
like
once
we
were
introducing
views,
we
were
kind
of
looking
at
the
view
config
on
default
path,
to
figure
out
what
to
do
with
the
incoming
measurement,
so
kind
of
move
all
those
logic
into
the
metric
construction.
A
So
we
decide
like
this
is
a
place
where
metric
is
being
created
or
an
instrument
is
created.
So
we
decide
what
type
of
instrument
it
is
and
what
type
of
aggregation
to
apply.
So
these
decisions
are
made
at
construction
time,
so
in
the
hot
path
where
the
metric
coding
is
happening.
All
we
do
is,
let
me
show
it
so
we
just
figure
out
like
which,
like
time,
series
to
update,
and
once
you
find
the
time
series,
it's
just
a
like
update
core
on
it,
so
this
is
still
little
bit
expensive.
A
In
fact,
this
is
the
like
90
percent
of
cost
in
the
hot
path,
because
this
is
basically
looking
at
the
dictionary
to
find
which
time
series
to
attach
this
to
and
to
do
that,
it
has
to
do
something.
Memory,
sorting
and
also
this
I
expect-
will
be
optimized
later.
I'm
not
focusing
on
it
right
now,
because
this
is
purely
in
general
implementation
detail.
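The hot path just described — canonicalize the incoming tags, look up the time series, call update on it — reduces to roughly the sketch below. Illustrative Python only; the dictionary lookup with its sorted key is the "90 percent of the cost" step, and the aggregation choice is assumed to have been fixed at construction time, so no view logic appears here.

```python
class Instrument:
    def __init__(self):
        # aggregation type was decided at construction time; the hot
        # path below never consults view configuration
        self.time_series = {}

    def record(self, value, tags):
        # dominant cost: canonicalize tags and look up the time series
        key = tuple(sorted(tags.items()))
        series = self.time_series.get(key)
        if series is None:
            series = {"sum": 0}
            self.time_series[key] = series
        # once found, the update itself is cheap
        series["sum"] += value
```

Because the key is sorted, the same tag set recorded in any order lands on the same time series.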
A
Neither
the
export
orders
or
the
user
would
know
it,
so
we
should
be
able
to
optimize
it
later.
So
once
views
are
introduced,
there
will
be
no
change
here,
because
views
are
taken
care
of
the
build
time
and
when
you
do
the
new.
As
for
the
finding
for
a
given
tag,
you
find
the
time
series
and
you
just
call
the
update.
There
is
no
need
of
checking
whether
a
view
exist
or
which
we
used
to
match,
because
all
those
would
be
instrument
creation
time.
So
it's
mostly
like
soldering.
A
So
when
you
create
an
instrument,
you
solder
it
to
the
right
views
already,
so
that
when
you
actually
get
a
measurement,
you
you're
already
soldered.
So
it's
not
exactly
like
that
term,
but
very
close
to
what
that
is.
A
So
those
are
the
major
changes
here
and,
of
course,
since
I
touched
the
exporter,
I
was
forced
to
all
the
exporters.
Otherwise
I
couldn't
imagine
so
depending
on
like
how
big
the
final
pr
is.
I
might
break
it
down
into
like
really
small
ones
and
try
to
make
each
exporters
one
by
one.
A
Maybe
like
I
see
like
once
I
mark
it,
does
not
draft
if,
if
it's
still
hard
to
review
I'll
break
it
down
otherwise
I'll
go
to
this,
so
that
there
are
open
questions.
Like
I
said
about
the
usage
of
struct
versus,
I
mean
the
structures
we
are
basically
coping
in,
the
copy
cost
might
be
more
than
what
we
say
from
saving
their
location,
so
those
are
few
things
which
I'm
still
working
on,
but
I'll
mark
the
prs
ready
when
I
at
least
have
for
my
own
opinion:
okay
yeah.
A
There
are
like
few
other
things
which
is
like
these
would
probably
be
like
separate
pairs.
We
can
replace
try
to
replace
the
dictionary
with
log
with
a
concurrent
dictionary,
because,
based
on
one
of
my
experiments,
which
I
showed
earlier
concurrent
dictionaries
seem
to
be
faster
for
reads,
because
once
the
matrix,
like
once
the
app,
starts
up
and
warms
up
most
of
the
time
we
are
just
going
to
be
like
looking
up,
we
are
just
going
to
read.
The
rights
would
be
like
very
less
frequent,
so
it
might
be
benefiting
from
concurrent
dictionary
there.
A
But
again
it's
going
to
be
a
separate
thing,
and
this
is
something
which
I
already
mentioned,
like
the
sorting
of
keys
and
values
is
a
major
cost,
because
we
all
we
get
from
dotnet
runtime
is
a
rigorous
span
which
is
unordered,
so
we
have
to
pay
the
cost
of
sorting
it.
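One way to blunt that per-measurement sorting cost — purely a speculative sketch, not what the .NET prototype actually did, and with illustrative names throughout — is to remember the sort order the first time a particular as-received key ordering is seen, and reuse it on later hits, since instrumentation tends to pass the same tag keys in the same order every call:

```python
class TagSorter:
    def __init__(self):
        # maps an as-received key sequence to its precomputed sort order
        self._orders = {}

    def canonical(self, keys, values):
        seen = tuple(keys)
        order = self._orders.get(seen)
        if order is None:
            # pay the sorting cost only the first time this ordering appears
            order = sorted(range(len(keys)), key=lambda i: keys[i])
            self._orders[seen] = order
        return tuple((keys[i], values[i]) for i in order)
```

After warm-up, canonicalizing a repeated call site is a single dictionary hit plus an index permutation, with no comparison sort on the hot path.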
A
I
saw
that
like
when
the
dot
net
forks
nova
from
dot
net
team
and
they
did
the
prototype,
they
did
some
magic
to
reduce
the
impact
here.
It's
I
mean
I
have
to
check
what
was
the
thing
which
they
did,
because
I
have
like
one
like
very
neat
solution
which
is
like
whenever
we
see
a
new
unordered
thing.
A
So
we
worry
about
all
these
things
once
we
have
rest
of
the
things
ready,
okay,
yeah,
like
any
like
questions
or
like,
doesn't
even
want
to
go
deeper
into
like
any
of
the
things
or
like
you
can.
Let
me
I
can
mark
the
prs
ready
for
review,
probably
by
tonight
or
late
worst
days
like
tomorrow
morning,
and
then
you
can
take
a
look
or
if
you
want
to
go
over
any
specifics,
I
can
do
it
as
well.
So
things
I
mean
people
who
are
like
writing
exporters.
A
A: ...you can mostly copy what it is doing. You can see the exporter is now exporting Metric, which means the actual exporter will get a Batch of Metric. This is very similar to activity: in the activity case it was a Batch of Activity; now it's Metric, and within each metric there can be any number of data points. So that's what we deal with here — within each metric...
A
We
have
these
time
series
concepts
so
yeah.
That's
the
summary
of
this
pr
and
yeah.
This
fixes
all
the
issues
with
has
helped
like
fix
the
problem
with
this
issue
and
it's
much
more
efficient
than
the
previous
one,
so
yeah
you
can
take
a
look
and
if
you
have
questions
like
I
mean
don't
yet
look
at
it.
If
you're
curious,
you
can
look,
but
I'm
still
in
the
draft
state.
So
let
me
finish
it
and
we
can
discuss
any
like
comments
in
the
pr
search.
A
All
right
are
there
any
questions,
otherwise
I'll
move
to
the
last
topic,
so
we
have
a
new
member
to
introduce.
So
if
there
are
no
questions,
we
can
move
on
to
that.
A: Yeah, so she'll be working along with Utkarsh and myself, and with other folks in our team. Michael Maxwell is kind of a new joiner to Microsoft; expect at least 50 percent of their time to be in the OpenTelemetry .NET repo. So if you have any small issues which are good for learning, please let us know so that we can assign them to the new joiners.
A
I
think
both
of
them
already
submitted
their
first
years
and,
like
already
like
michael
just
joined
the
what
open
telemetry
over
last
week.
So
for
you
team,
like
we
need
an
approver
from
non-microsoft,
so
either
thing
like
michael
or
allen
to
get
that
going.
A
I
think
there
is
another
maintainer
like
survey,
but
he's
usually
not
active
these
days,
so
he
was
the
one
of
the
original
like
maintainers,
but
as
of
now
like
it's
michael
and
alan,
who
are
the
maintainers
non-microsoft
and
yeah,
we
have
other
folks
like
michael
who
makes
a
lot
of
contributions,
but
not
officially
like
maintainers,
oh
by
the
way.
It's
also
interesting
in
case
folks
did
not
notice.
Michael
is
also
now
part
of
microsoft.
As
of
last
week,
converts.
A
Okay,
yeah
any
questions
I
think
like
we
did
discuss
something
about
sequel,
mine
breaking
us
with
michael,
like
two
weeks
back.
I
haven't
had
that
time
to
follow
up
so
we'll
do
that,
like
offline
with
maker
line,
postback
all
right
yeah
any
questions
to
be
discussed,
so
I
will
be
definitely
like
thinking
like
I
learned
to
get
some
help
with
otlp
part
and
as
soon
as
I
get
some
more
time,
I'll
mark
the
prs
ready
for
review
and
then
I'll
focus
to
get
the
otp
part
correct.