From YouTube: 2022-05-19 meeting
Description: OpenTelemetry Meeting 1's Personal Meeting Room
A: All right, cool, I guess we can get started. As usual, please add your names to the attendees list if you could. Nice. Looking at the agenda topics today, it looks like the metrics SDK RC1 was released, I believe two days ago, so awesome job everyone; thanks for all the reviews and patience, especially Diego for leading this. I think Diego's joining today too, just a bit late. It's pretty awesome. Aaron, thanks for linking the blog post yesterday. We're all pretty excited for this. Next steps: try to get as many people to use this as possible, so our RC can be verified and we can make an actual, stable release. Great job, everyone. All right, cool, it doesn't look like there are any other topics right now, so I can go right ahead into the issues.
E: So I added this issue to the doc. It's basically a question about what we do when, let's say... this issue specifically is about the Prometheus exporter, which stores all of that in a double-ended queue. So what if the scrape isn't happening for some reason? It all remains in memory. The author's point is that that's not going to be ideal, that's not desirable; what do we do about such cases? I also created another issue to bring up this discussion.

E: Let's say somebody makes some unintended mistake in their SDK usage, like something when adding the metrics. What do we do, at least to have some sort of checks in our SDK and the exporter pipeline, to prevent it from blowing up the whole application? I just wanted to start some discussion so that we can start working towards doing something about it.
B: Yeah, this is definitely a good discussion to start; I agree it could be pretty bad. With regards to Prometheus, though, the exporter specifically doesn't have this problem anymore, because I think this issue is based on the previous implementation, where it basically buffered the metrics that got pushed into it until it got scraped. Now the Prometheus exporter will only read the metrics while it's being scraped, so it does it dynamically and won't buffer those metrics.

B: So I think with regards to Prometheus this should be solved, but yeah, the problem still remains: if the cardinality is really high, it's going to allocate a ton of objects.
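(For context, here is a minimal sketch of the pull-based setup being described, where metrics are read from the SDK only at scrape time instead of being queued. It assumes the RC-era `PrometheusMetricReader` API and `prometheus_client`; names and details may differ from the final release.)

```python
# Sketch: pull-based Prometheus setup; metrics are collected at scrape time
# rather than buffered in a queue. Assumes the RC-era API.
from prometheus_client import start_http_server

from opentelemetry import metrics
from opentelemetry.exporter.prometheus import PrometheusMetricReader
from opentelemetry.sdk.metrics import MeterProvider

# Expose the /metrics endpoint for Prometheus to scrape.
start_http_server(port=8000)

# The reader collects from the SDK only when a scrape happens.
reader = PrometheusMetricReader()
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("example")
counter = meter.create_counter("requests", description="Number of requests")
counter.add(1, {"http.method": "GET"})
```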
E: Yeah, so I mean we can start having some sort of limit, some upper bound; we limit it to some number. I don't know what other SDKs are doing. Maybe we should check out what .NET and, I think, Java (our other reference) are doing, and see what we can do.

B: Yeah, for sure. I think it would also be nice to suggest a view to reduce the cardinality, specifically the... what is it, the one that drops the label?

B: Sorry, the attributes to drop, the attribute keys to drop, if there's a high-cardinality label specifically. I think also in the semantic conventions they list which labels would be higher cardinality, and I believe there's going to be an API feature which would let the instrumentation author say, hey, this label should be off by default but it's still part of the instrumentation, to hopefully reduce the cardinality as well.
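(As a rough illustration of the view suggestion, something like the following keeps only low-cardinality attribute keys on an instrument. It assumes the RC-era `View` API with `attribute_keys`; the instrument and attribute names are only illustrative.)

```python
# Sketch: use a View to keep only low-cardinality attribute keys on a
# histogram, dropping everything else (e.g. a per-request unique id).
# Assumes the RC-era View API; names are illustrative.
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.view import View

view = View(
    instrument_name="http.server.duration",
    # Only these attribute keys are kept; other (high-cardinality) keys are dropped.
    attribute_keys={"http.method", "http.status_code"},
)

# A real setup would also pass metric_readers here.
provider = MeterProvider(views=[view])
```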
B: I know that's kind of orthogonal to the main issue, but it would be good if, for instance, there's an HTTP duration metric in there and the instrumentation doesn't support the http.path attribute, so instead it's making a new metric for every single request with a unique ID or something like that, we should recommend deleting that label or something like that, if possible.

B: No, I don't think so. I think actually we can get rid of the double-ended queue altogether; I think we just haven't updated it, if you want to.

E: That's kind of right. Let's see: it collects the measurements from the SDK only when the scrape is invoked, right? Yeah.
A: Nice. As for this issue, is this only for pull-based exporters, or for every metric exporter?

B: I think there is the force_flush method on the exporters, so they are allowed to buffer if they want, or do retries, so I don't think it's necessarily an exporter issue. There's no way we can protect against that for all exporter implementations. But were you going to say something, Srikanth?
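(For reference, a small sketch of how data held by a buffering exporter can be pushed out explicitly, assuming the SDK's `force_flush`/`shutdown` methods on the provider; this is illustrative only, not a resolution of the issue being discussed.)

```python
# Sketch: exporters may buffer or retry internally, so callers can force a
# flush (e.g. before process exit) rather than relying on the export interval.
# Assumes the RC-era MeterProvider API.
from opentelemetry.sdk.metrics import MeterProvider

provider = MeterProvider()

# ... application records measurements ...

# Ask all readers/exporters to flush any buffered data, then shut down.
provider.force_flush(timeout_millis=10_000)
provider.shutdown()
```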
E: No, no, I was saying that it's not necessarily an exporter issue; this is like an umbrella issue for everything related to these unknown issues.

B: Yep, and if I remember right, I think Java will delete really old streams, and I think there are some guidelines — I guess a guidelines doc in the spec — with recommendations for doing that and how you can.

E: Yeah, that would be great. We can also take a look at what .NET and the other SDKs are doing, and then we can try to come up with some solution. I looked at the metrics recommendation guidelines; there wasn't any specific guideline, because they wanted the SDK authors to decide what's best for their implementation, look at other SDKs, and do something.

A: So, yeah, I don't know if this is an exporter-specific issue. How do we handle this in tracing? Don't we just use a queue in the span processor? Are we able to just do something similar?
B: So the problem is, we have to keep the aggregations around in an addressable fashion; they're basically stored in dictionaries keyed by the attribute keys right now. So it's more an issue of which ones you drop: you need to know which ones are the oldest, and handle it in a graceful fashion.

B: So there's no queue in the metrics data. It's...

A: Right, so I guess I'm confused: when would we want to get rid of aggregation information, instead of storing it forever?

B: Yeah, most likely something like an LRU.

A: Yeah, like we'd have some sort of limit, right? That's why we're trying to have this, because right now we're just storing aggregations indefinitely and we never get rid of them.
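(A rough sketch of the LRU idea being discussed, i.e. bounding the number of attribute-set streams kept per instrument and evicting the least-recently-updated one. Purely illustrative; this is not how the SDK stores aggregations today.)

```python
# Sketch: bound the number of aggregation streams kept per instrument by
# evicting the least-recently-updated attribute set once a limit is hit.
from collections import OrderedDict


class BoundedAggregationStore:
    def __init__(self, max_streams: int = 2000):
        self._max_streams = max_streams
        self._streams: "OrderedDict[frozenset, float]" = OrderedDict()

    def record(self, attributes: dict, value: float) -> None:
        key = frozenset(attributes.items())
        # Move the updated stream to the "most recently used" end.
        current = self._streams.pop(key, 0.0)
        self._streams[key] = current + value
        # Evict the least recently used stream when over the limit.
        if len(self._streams) > self._max_streams:
            self._streams.popitem(last=False)
```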
A: I see, so we'd have some sort of limit, but what's the... I guess we can talk about the design afterwards, but okay, I understand the premise for this. Cool. So as for this, do one of you want to comment on it, just explaining our current implementation of the Prometheus exporter?

E: Yeah, I added this. I didn't add a comment, but you and Alex had some thoughts about this, so I just wanted to bring it up. There was also some mention about this in the community, as the other linked issue shows; I think in the coming months more people will be interested in knowing about this.
B: I see. Yeah, I haven't seen this new comment here, but didn't Alex... I thought, Alex, on the call you mentioned you tried something out, or you had some ideas for a minimal SDK backed by C++ or something?

F: Yeah, I haven't tried anything with it, but I know there was a suggestion, probably a year ago now, from Ted Young, around ensuring that the APIs were compatible between implementations by doing something like wrapping the C++ implementation in Python.

B: Yeah, I played around with that. It was mostly for the collector exporter, so you could basically convert stuff from Python into OTLP and then, in the same process, send it out through a collector exporter, but it didn't have anything to do with the instrumentation.

B: I think the only thing to call out in my comment was: if people are asking for this for Python, they'll probably ask for it for Ruby and Node as well. So it might be reasonable to let the C++ SIG figure out a way forward, since they're sort of the lowest common denominator for a lot of interpreted languages, if they want to have foreign function calls to the underlying C++ SDK.
A: I haven't read the latest comment yet, but it looks like they're from the C++ community, judging by their answer. I think they might be waiting on our response, so Alex, do you mind just writing that we haven't really taken any steps towards this yet? It would probably put the onus on their side if they want to move this forward.

A: All right, nice. Are there any other PRs, issues or topics that people want to discuss?
B: Yeah, do you want to discuss that thing? There was an issue with the auto instrumentation, but I wasn't able to reproduce it. I don't know if you have more context.

A: Yeah, let me find that thing.

A: All right, so yeah, interesting that you aren't able to reproduce this. I personally didn't run this, but Jeremy — he's working at Microsoft now — ran this on the latest releases, and he's able to see the client spans, but none of the server spans are being generated on the Flask side.
A: Is there some sort of output... is it not printing to the console? Do you know anything about that for Flask apps? We are running on Windows as well.

A: Right, I don't know; I actually didn't really take a look at the change.

A: Oh, this is the PR itself. Oh yeah, I guess you could. What was this change specifically for? Do you remember?

A: Okay, oh yeah, I remember this. So for Windows, it doesn't really affect things, but I think it complains whenever there's an 'r' preceding the file path. That has never caused any performance issues; it just throws a warning. But yeah, we don't see anything printed to the console for Flask. It's good that you're able to actually get it working, though; the weird thing is it works prior to this PR change.
A: So I don't have all the context. I'll invite Jeremy next time, but there's not much to discuss right now; I'm just going to keep trying to give you context, and there's not much else. It just didn't work before this, so we'll have to see.

E: We had some issue with that release, so I was asking if that was the reason they're not seeing the data. So is it not even printing to the console, or was it an issue with the OTLP exporter?
A: Right, it has nothing to do with OTLP; we're simply running the auto instrumentation example, that's it. Okay, yeah.

A: I don't believe it's this change that you're referring to right here.

A: Yeah, so it's okay, I will investigate with him. Thanks for your help, Aaron and Srikanth. It's good to know that it's working for you guys; at least we know we don't have auto instrumentation broken.
B: Yeah, if it actually broke, the only thing I can think of, since it's specifically on Windows, is maybe it's something with how the Flask development server runs; maybe it starts a subprocess for some reason.

A: Right. I guess this might be a long shot, but has auto instrumentation always worked for Windows? I don't know if it just so happens we've always been testing on Linux. But I think I've done this before and it did work, so I'm fairly confident that it's just something stupid.
B: It is, yeah. Now that we have our three signals done, I imagine we'll probably be focusing a lot on instrumentation for the next while.

A: Yeah, I think, about Jeremy: he'll be focused primarily on instrumentation; he's just ramping up right now.

A: Oh, also, a question: does anyone know the progress of logging right now, in terms of the spec? Last time I checked, the data model was stable.
E: There is nothing happening on the SDK front, no change to the SDK specification. There was mostly discussion about whether or not to introduce the... I mean, we have the API for tracing and metrics, but the logging SDK is basically language independent: we are hooking handlers into the existing logging libraries and then exporting them. So there were a couple of discussions around introducing an API.

E: I'm not sure what has happened on that front. And then the other discussion the SIG is having is about using the same log data model for events and for the client instrumentations — browser, mobile — using the same data model. On the SDK side, there weren't any major changes.
A: Okay, so are they being blocked on that API discussion? If that's the case, then is anyone trying to move the SDK spec forward?

E: I'm not sure about that. I should ask them next time.

B: I was wondering what kind of outreach we can do to get people to try this RC before we mark the thing as stable. At a minimum we could put stuff in our README asking people to try it out, but it would probably be good to do something more than that.
E: Yeah, I think one of the easiest things we can do is to add more examples. There isn't really anything on that side, the documentation is minimal, and there were some questions on the Slack channel about how to do certain things with the metrics SDK.
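(For a sense of the kind of example being asked for, here is a minimal metrics getting-started sketch, assuming the RC-era SDK with a console exporter; the stable API may differ slightly.)

```python
# Sketch: the smallest useful metrics example, printing to the console.
# Assumes the RC-era SDK; names may shift before the stable release.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Export collected metrics to stdout every 5 seconds.
reader = PeriodicExportingMetricReader(
    ConsoleMetricExporter(), export_interval_millis=5000
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("getting-started")
counter = meter.create_counter("work.items", description="Items processed")
histogram = meter.create_histogram("work.duration", unit="ms")

counter.add(1, {"worker": "a"})
histogram.record(12.3, {"worker": "a"})
```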
B: I think that's a really good point; the docs aren't great. In addition to the examples, do we want to open up issues to do that? Are folks able to work on that and help out?

A: Yeah, there are open PRs on opentelemetry.io. I've kind of been neglecting them, but now that we've got metrics kind of more stable, quote-unquote, I think we could put more effort into making our docs good. There's also one on auto instrumentation, I believe, that's open.

E: Yes, they opened the PR on opentelemetry.io that added some docs. I got some time to review it and left some comments. I haven't taken a look at it again, but I think they haven't addressed some of the comments.
A: Cool, cool. I was wondering: it is on opentelemetry.io now, but will we be the ones maintaining this? I don't remember the supportability contract for this when we decided to move everything upstream.

E: I mean, we are the owners: the python-approvers and python-maintainers groups are the owners, I believe, for that folder, so it's our responsibility to maintain them.

A: Right, right. So I guess now that metrics is in a better place, it would make more sense for us to do the docs ourselves, instead of having to go back and forth with people who aren't as familiar with our APIs.
A: Cool, sounds good. Aaron, would you be able to create an issue about what you were referring to?

B: I can do that. I will make an issue — maybe a few issues — for examples, and then some for more general documentation.

A: Oh, I believe Srikanth left some comments that haven't been addressed yet.

A: Okay, okay, so that'd be good. So it's not blocking what we're doing, or what we want to get done.
B: Like, the getting started doesn't mention metrics now; it doesn't look like it has any metrics content in getting started or in the manual instrumentation section.

A: Right, so I'm wondering: do we want to directly contribute to opentelemetry.io?

E: I mean, I don't remember. I'd have to go back and see what the procedure was last time. I remember we added it to our own repo and then they pulled it in, but I don't know if that's still the case.

A: Yeah, I think we got rid of that workflow, right? We had a website folder or something like that, and then they would manually copy it to opentelemetry.io. But I believe we just make PRs to it now.
E: Yeah, I think we can create issues and then, if people want to volunteer, that's fine. If nobody volunteers, I will pick up getting the getting started updated.

A: Cool. Yeah, so Aaron, to address your question, I think we just create issues in our repo; I don't know the workflow beyond that.

B: Okay, cool. So I'll make issues for all these things, and then I'll try to get you all to volunteer to do them, I guess. Sounds good, yeah.
B: Will do, okay. And then the only other thing was the instrumentation work. There are currently two HTTP semantic conventions for metrics; they shouldn't be too difficult to implement, or to add to the existing stuff we have. I know somebody was working on it; I saw it in Slack.

A: Hey, hey, I'm sorry, you were saying?
D: Yeah, so I am working on it. I was going through the previous metric instrumentation that we had and the new one, trying to understand the metrics. So yeah, as we discussed, having better docs for metrics will help.

D: I know I'm trying to do the instrumentation, so I'll add the support from my side. I did go through the spec directly, so I was able to understand things and relate them to the implementation that we have in our API. So it's clear right now, and if I face some difficulty I'll ask in the channel.
B: Yeah, that's awesome. I think the server ones... I don't think we had any server ones previously, and those are different from the request one, so we should work on that. I wonder about one potential thing I'm guessing is going to happen: I remember for tracing we had this issue where you get parent and child spans for, for instance, HTTP libraries that were composed.

B: That would be pretty bad for metrics, since all the metrics are on by default. So I wonder if it makes sense to just do metric instrumentation once, at the lowest level, for these, or if we want to have the same mechanism to check whether something is already recording a specific semantic convention.

B: Like a certain sequence of spans, but I don't remember, to be honest.
A: Yeah, for example — I guess we still have time left, so we can talk — for example, if they use requests and urllib, they wouldn't duplicate the span, because of the underlying instrumentation, if they're instrumented with both. But in the case you're talking about, certain instrumentations that generate multiple spans — there are certain consumer/producer spans that are produced. Are those the examples you're referring to as well, Aaron?

B: Yeah, that's what I thought. But right, maybe we're just going to have to work on this and then work through bugs, rather than try to predict how people are going to use it.
B: I guess the point I was getting at is: maybe we should open some issues for instrumenting, say, ASGI and WSGI with metrics, for server-side metrics, just to start. At the least, I feel like for our stable release we're going to have to have some sort of instrumentation story. So, got it, all right: I guess I'll just be making a lot of issues for everybody then, and then I'll follow up on Slack.
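(To make the server-side metrics idea concrete, a rough sketch of a WSGI middleware recording an http.server.duration histogram per the HTTP metrics semantic conventions. Illustrative only; the real opentelemetry-instrumentation-wsgi package handles many more attributes and edge cases.)

```python
# Sketch: a WSGI middleware recording http.server.duration. Illustrative only;
# the attribute set is simplified and streaming responses are not timed precisely.
import time

from opentelemetry import metrics


class MetricsWSGIMiddleware:
    def __init__(self, app):
        self._app = app
        meter = metrics.get_meter("wsgi-metrics-sketch")
        self._duration = meter.create_histogram(
            "http.server.duration", unit="ms", description="Server request duration"
        )

    def __call__(self, environ, start_response):
        start = time.monotonic()
        status_holder = {}

        def _start_response(status, headers, exc_info=None):
            # Remember the numeric status code for the metric attributes.
            status_holder["code"] = int(status.split(" ", 1)[0])
            return start_response(status, headers, exc_info)

        try:
            return self._app(environ, _start_response)
        finally:
            elapsed_ms = (time.monotonic() - start) * 1000
            self._duration.record(
                elapsed_ms,
                {
                    "http.method": environ.get("REQUEST_METHOD", ""),
                    "http.status_code": status_holder.get("code", 0),
                },
            )
```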
A: So you love making issues, man; you're causing problems, man. Yeah, there you go. Yeah, that'd be great. We could procure a list of popular instrumentations that most people would like metrics for. I think ASGI and WSGI are a great place to start.

B: I don't know, my knowledge could be out of date, but I think, yeah, if you look at the HTTP one specifically, there's http.server.duration and http.server.active_requests, and then there's http.client.duration.
A: That's what you meant, okay, got it. Cool, sounds good. All right, nice, this will be a good effort. Cool, anybody else have any other topics to talk about? We don't have anything else in the agenda. Oh, never mind, yeah, that's just you, Trask. Were you here to talk?

G: So, Srikanth, would you like me to give kind of an overview? Yes, yes.

E: Just give us the overview, and then we will take a look at it. I mean, it would be helpful for the others to have an idea of this whole thing. Sure.
G: So yeah, I had spent a couple of months working through improving our release workflows in the Java repos, and was hoping that it could be useful for other SIGs. So I put together a kind of repository template that has an automated release workflow, as well as some other common GitHub workflows that we've found useful, and so, in this PR here...

G: ...what I'm proposing is only the release automation workflows. And if you go back, Leighton, and go over to the repository template on the top link — yeah, so this here describes the release automation and kind of the why. If you stay at the top: the important thing, I think, is the why of this particular release automation workflow, right? There are infinitely many different ways...

G: ...you could automate your release workflows, and there are a lot of good best practices out there already, but in particular in OpenTelemetry...
G: ...with the CLA checks, we have some additional restrictions. Certainly one option is to request exceptions to those CLA checks, but what I wanted to see, and implemented in the Java repos, is how we could do it while still respecting the CLA checks, and I liked how it came out.

G: I liked the explicitness of it. For example, all of the automation generates PRs that then require a human reviewer, with visibility to the community, before merging, as opposed to pushing directly to branches or directly to main.

G: So I would say that's one of the primary drivers of this particular release automation process. The other is releasing from release branches, which not everyone is doing; we were doing it in one of the OpenTelemetry Java repos, but not the other two, previously. But it looked like you all were already releasing from release branches, so I don't think that's really a change for you.
G: So if you want to scroll up, Leighton, and go to the main code — just the main, yeah — and then go to the releasing doc. So this is the release process, and for starting a new major/minor release, right, first update the changelog. It looks like you all are incrementally updating the changelog as PRs roll in, which is awesome.

G: I wish that we did that; I have to, during the release process, go through all the old PRs and construct that changelog.

G: It is; that's why I have this script to draft the changelog entries, but you all don't need that. I had kind of customized this in that PR for your particular repo. Then you run this prepare-release-branch workflow. So you run this, it detects the version in your...
G
Main
branch
and
creates
a
new
creates
the
release
branch
based
on
that
name
and
then
so
that
release
branch
is
created.
Just
exactly
as
what
you
know.
You
have
in
maine,
currently
no
changes
and
then
it
creates
two
pull
requests,
one
which
bumps
the
version
in
the
release,
branch
and
one
which
bumps
the
version
in
the
main
branch,
and
so
this
is
under
the
assumption
that
you're,
using
like
a
dash
dev
or
we
in
java
use
dash
snapshot
on
our
main
always
and
then
in
the
release
branch.
G
If
you
don't
want
to
do
that,
we
can
customize
this
to
yeah
yeah.
G: And one nice thing, since the PRs are submitted by the bot account, is that you can use the bot account. We have an OpenTelemetry Java bot GitHub user, so I logged into that user and signed the EasyCLA with that user as myself, and then, since the PR has come from that bot, the person who triggered this release workflow can approve it and merge it, because it's a different user.

G: Now, I understand from Leighton that in your repo you require two approvers, so that's not quite as convenient. In the Java repos we only require one approver, so it does make it nice: triggering the release automation yourself, then reviewing and merging it, makes that process go quickly.
A: I think that was like a legacy thing; we've always done that, because I remember quite a while ago the bar for reviewing PRs was that there had to be two approvers and they both had to be from different companies, back when there was a lot of activity.

G: Yeah, we do that sort of informally on larger, more significant PRs for sure, but for small things we find it convenient to just have a single reviewer as the minimum bar.
G: So then, after you prepare... we can skip the next section, because that's preparing a new patch release. So then, if you're making a new release, you'd go on, and after merging those version-bumping PRs you would run the release workflow, and so this will...

G: ...yeah, will actually perform your release. It will generate the GitHub release, and it will copy over the section of the changelog: it detects the version you're releasing, strips out the part of the changelog between that version and the previous version, automatically includes that in the release notes, and publishes the GitHub release.
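(As a rough sketch of the changelog step being described, something like the following could pull the section for the version being released out of CHANGELOG.md. Purely illustrative; the repository template implements this in its own release workflow scripts, which may use different heading conventions.)

```python
# Sketch: extract the changelog section for one version, i.e. everything
# between its "## Version x.y.z" heading and the next "## " heading.
import re


def changelog_section(changelog_text: str, version: str) -> str:
    lines = changelog_text.splitlines()
    version_heading = re.compile(rf"^##\s+.*{re.escape(version)}")
    any_heading = re.compile(r"^##\s+")

    section = []
    in_section = False
    for line in lines:
        if not in_section:
            if version_heading.match(line):
                in_section = True
        elif any_heading.match(line):
            break
        else:
            section.append(line)
    return "\n".join(section).strip()


# Example: print(changelog_section(open("CHANGELOG.md").read(), "1.12.0rc1"))
```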
G: So it's a really nice automated workflow; it's just then done and released.

G: You have a post-release kind of workflow that does the PyPI publishing, so that would still get triggered after the GitHub release is published. And then there's a final step here on making the release, where — the bullet still under "making the release"...

G: ...the second bullet, "review and merge the pull request"... no, under the section called "making the release", after the release, yeah. So when the actual release happens, that's when we know the release date. I don't love this; there's some...
G: ...we could potentially do something different here, for sure, but this basically submits a PR to update the changelog with the actual date the release was published. And then, after the release, there's one more step of merging the changelog back to main, and that's again another workflow that generates a PR automatically, and you can review and merge that back to main.

G: So that's the high level. We've got two or three minutes here, so I would love to leave time for any questions, and I'm for sure happy to follow up with any questions on the PR as well.
E: Yeah, thanks; this was a lot of effort. Some of these steps one of the maintainers currently does manually as of today, so I think this will help improve that part. Yeah, we'll take a look at it again; thanks for all the work.

G: Cool, and thanks for being my guinea pig as the first non-Java repo to try this out. I kind of want to see how it goes with your group and your repo, how much commonality there really is, and whether it's more broadly applicable than just the Java repos; if so, then I'll share it more widely with the other OpenTelemetry SIGs.

A: I think it's very useful, especially for maintainers.

A: All right, if not, thanks everyone for coming, and I'll see you guys next week.