From YouTube: 2021-10-21 meeting
B: After metrics, I think, is the plan, but I think, like, a beta release is out currently or something like that.

B: Two minutes past. Okay, I mean, I know Anthony is not gonna make it today, he's now on vacation, but he's hoping to get a few more people to join. If not, it's not the end of the world; we can wait a few more minutes.

B: Yeah, it only gets slower. Yeah, I know, just coming up into the holidays. It's a tough one, yeah.

B: Okay, well, I guess we're three minutes past; that's usually about as long as the wait. So I guess attendance is a little low today, but I'd still love to get some feedback on this. I have an agenda of things I kind of wanted to talk about, yeah. If you guys want to talk about something else, please be sure to add to it.
B: Oh okay, there it is. Turns out Zoom now shows a preview of Google Docs. Okay, this is not what I was expecting. Okay, cool! I'm telling you, one of these days I'll learn how to use computers. So to start, I created some new project boards. A few weeks back I had kind of mentioned we should probably do this for some of the, like, bigger projects. I don't know how big some of these projects are, but I just wanted to show them.

B: So, as you can tell, I broke this out into more parsable and parallelizable tasks that are tracked here using the new project board beta thing. I'm not completely sold on it yet, but it is what it is. So yeah, I don't understand exactly the benefit, but maybe I'll figure it out eventually. So yeah, there's this issue here. I think this is something we could probably try to prioritize in a minor release, and similarly for logging and troubleshooting.
B
I
think
this
is
a
good
idea
to
try
to
prioritize
this.
You
know
I
was
having
a
conversation
recently
and
it's
one
thing
to
have
a
1-0
release,
but
it's
another
thing
to
use
something
in
production
if
it
doesn't
have
an
ability
to
troubleshoot
it.
That's
probably
a
blocker
in
my
opinion,
but
you
know
maybe
some
people
that's
not,
and
so
I
want
to
make
sure
that
we
try
to
prioritize
this
to
support
those
people
that
aren't
and
are
actually
already
using
this.
B
So
I
think
we
have
this
not
really
well
specked
out,
yet
I
think
there
could
be
some
more
issues
that
this
could
be
broken
up
into.
Specifically,
I
think
this
is
kind
of
like
a
really
big
overarching
issue
that
is
essentially
like
figure
out
how
we're
gonna
log
things
and
based
on
that,
I
kind
of
want
to
talk
about
this
issue
afterwards,
but
yeah.
This
is
kind
of
like
the
first
agenda
item
I
didn't
have
anybody
had
some
feedback
on
the
project
boards
themselves.
B: ...how we want to, like, standardize logging. And, see, Mandy had pointed out, like, this project, logr ("logger"? I don't know how to say it), but I think there was also a one-page doc that was put forward on a solution here, and this did a little bit of an audit, and it proposed essentially creating our own logger interface and building that, as opposed to picking up one of these projects. From there, that was built into a particular PR.

B: That's still open; it says draft on it, but I don't know, it looks really stale. Nothing's really moved on it in a few months, so I don't really know the status of this, and there are a lot of, I think, design questions that are being asked, which is not something I try to have in a PR. Usually I like to have designs kind of fleshed out before we do that. But that being said, I think it may be.
B
Like
this
is
I'm
kind
of
taking
a
step
back,
and
I
was
looking
at
this
project,
this
logger
project,
which
I'm
guessing
david,
probably
knows
because
kubl
or
k
log
uses
this
as
well,
so
it
might
be
something
that
is
well
based
on
that
reaction.
B
Probably
not,
I
don't
know
like
it's
kind
of
an
interesting
project,
so
I
was
looking
into
it
and
for
those
that
aren't
familiar
with
it,
it
is
designed
similar
to
open
telemetry,
where
there
is
a,
maybe
there's
better
things
that
are
said
around
here,
there's
an
api
and
then
there's
an
actual
implementation.
Actually,
it's
not
too
big
of
an
api.
We
can
probably
just
take
a
look
at
this,
and
essentially
what
that
means
is
that
open,
telemetry
itself
could
use
the
api.
B
This
logger
api,
where
this
logger
api
you
know,
allows
you
to
do
info
logging
allows
you
to
do
error,
locking
it
chooses
these
two
levels.
It
does
have
a
verbosity
level
similar
to
g-log
as
well,
which
is
kind
of
interesting,
and
I
think
this
is
it's
pretty
compatible
with
what
the
other
verbosity
levels
we've
talked
about
in
other
parts
of
like
draft
pr.
B
As
well
as
some
other
areas
of
logging
and
essentially
just
allows
you
to
plug
this,
all
in
and
the
implementation
itself
is
done
through
this
log
sync
interface
and
they
export
this,
and
so
what
essentially
you
can
do
is
plug
everything
in
for
the
api
for
our
call
center
and
then
allow
the
user
to
really
customize
like
how
they
want
to
you
know:
sync,
those
logs
it
comes
with
built-in
implementations,
like
the
project
itself,
has
something
that
will
wrap.
B: So if somebody's already using these logging libraries, they can just easily, you know, wrap their logging library in this, and it should be compatible with our instrumentation or whatever. Again, I'm making sure this is clear: this is not the telemetry signal of logs. This is the logging for the SDK, or any operational thing. But yeah, on first glance, looking into this, it looks like a pretty good design, and it helps us not lock in users to a single package and a single dependency on a single logger. But at the same time, something I kind of stated here is it doesn't make us try to redesign yet another logging interface, and rather allows this project to focus on delivering telemetry, which is a goal I would really like to keep. I definitely... I think, you know, some of the other points I made were: it has a structure kind of like that design we were just talking about.
B
They
reached
a
stable
release.
Recently
this
year,
it's
actively
being
developed
and
again
like
it
seems
widely
used.
Obviously,
logging
libraries
are
a
dime
a
dozen,
but
there's
you
know
four
thousand
input
ports
of
this,
some
of
which
are
blog
sync,
some
of
which
are
just
using
the
api.
So
it's
kind
of
hard
to
distinguish
those.
I
do
think
that,
like
the
project
itself
does
say
that
you
know
these
are
popular
third-party
implementations.
B
This
is
where
k-log
kind
of
came
up
that
you
know
allow
for
many
different
projects,
and
essentially
a
lot
of
different
projects
can
be
wrapped
to
implement
that
log
sync
interface.
So
I
kind
of
wanted
to
raise
that
because
I
do
want
to
try
to
prioritize
getting
some
vlogging
and
troubleshooting
progressing.
This
is
blocking
it.
We
need
some
sort
of
interface
to
actually
get
logging
done,
and
I
think
this
is
a
good
approach
to
try
to
address
this.
C: So, having used this before, I'm perfectly okay with it. But we'd either have to take this whole hog, like, you know, adopt this library and allow people to configure it, which we'll probably have to do anyway, or we would end up re-implementing a lot of the same things that they've done here. So, especially for both internal logging and for the logging signal, you're probably going to have to do something similar, so honestly doing this and then having OpenTelemetry as a sink is probably also a good idea.
B
Yeah,
I
think,
if
that's
a
a
good
point,
because
eventually
we
do
want
live
like
the
open
source.
Telemetry
signals
to
be
a
sync
where
you
can
send
your
logs.
That's
that's
a
good
point
which
again
speaks
to
the
importance
of
that
partition
of
the
api
that
we
use
for
logging
versus
the
actual
sync,
and
so
as
long
as
we
can
wrap
the
open
telemetry
into
whatever
log
r
is.
I
think
that
that
would
work.
A
Yeah,
I
don't
you
know
yeah.
Would
we
then
be
writing
our
own
otlp
exporter
for
this
library.
B
C
If
we
go
down
that
tech,
that
that
pact
right
and
being,
if
somebody's
already
done
this
and
done
it
better
than
us
like
why,
try
and
reinvent
the
wheel,
it'd
be
my
kind
of
suggestion,
like
we
have
our
api
that
fronts
their
api,
that
does
the
hard
work
of
logging,
because
it's
not
trivial
to
manage
the
logging
streams
that
eventually
exports
into
our
open
telemetry
sync,
I
don't
see
how
that
if
that's
a
problem
in
particular
aside
from
somebody,
might
not
want
to
have
log
r
as
one
of
their
core
dependencies
right.
C
But
but
that's
sort
of
on
the
other
side
of
this-
that's
not
the
like.
This
is,
I
think
the
thrust
of
this
was
to
have
some
way
of
internally
log
logging
issues
that
happen
within
the
library
that
are
exposed
in
a
user
accessible
way,
but
not
necessarily
on
by
default
right
yeah,
one
of
the
biggest
problems
with
all
of
this
being.
C
If
we
choose
a
particular
logging
library
like
if
we
choose
zap
or
or
logaris
or
xerolog,
then
all
of
our
consumers
by
default
have
to
to
import
that
and
if
they're,
using
something
else,
something
that
we
didn't
choose.
C
Then
we
have
divergent
message
formats,
strings
whatnot,
whereas
an
interface
abstraction
like
this
allows
us
to
say
we're
gonna,
send
it
to
this
interface
and
if
there's
nothing
there,
it
just
drops
it.
But
if
you
put
something
there,
you'll
actually
get
messages
in
the
format
that
you've
already
prescribed
for
your
project.
B
It's
ideally
debugging
is
is
important.
You
know
this
is
really
to
try
to
address
this
like
what
happens
when
users
come
to
us
and
they
say,
like
I'm,
not
getting
my
telemetry
right
like
currently,
it's
like
well
try
putting
the
standard
out
exporter
in
to
see
if
you
get
anything
which
is
a
different
path
and
it
is
not
comprehensive
showing
you
like
where
in
the
path
it's
going
to
actually
break
down,
so
we
do
want
to
have
some
form
of
debugging.
B
I
think
built
into
this.
One
of
the
things
that's
kind
of
pointed
out,
like
debugging,
is
in
some
form
info
logging.
You
know
it
depends
on
your
philosophy
level,
how
you
want
to
like.
Do
it
whether
you're
dave
cheney
or
not.
So
like
there's
a
lot
of
things
there
and
I
think
that
the
goal
is
to
do
have
both
info
and
debugging
and
info
can
be
kind
of
a
comprehen
info
and
error
logging.
And
then
you
know
the
info
would
be
more
comprehensive
if
that
makes
sense.
D: Yep. Just a thought would be: we could even take a subset of the logger API, if we didn't need all of it and wanted to keep more options open for the future.

B: Yeah, that's actually an interesting point. So instead... what you're saying is, like, we could use this under the hood, but if we did a subset of this, then we can essentially wrap our interface in this interface.

B: Yeah, well, I mean, I was thinking of just using this package. But the only way this would be compatible, like, to use the Logger type, is if you have a, you know, full implementation here. But, you know, there is the option of, like, if we just wanted to copy this. I just didn't... yeah, it just seemed like a... oh.
C: Go ahead. So the way this would work is: we would be using this interface to instrument our code in the SDK, with the expectation that a user of this library would pass in something that accepts that as the sink.

C: So we would be sinking our debugging or info information into that sink. This is the kind of interface that separates the two uses. And that way, like, if they are using logrus, there is a translator from logr to logrus; whatever they're using, there's an interface adapter in between.
B: Yeah. So I probably should have put together a little bit more of a proof of concept to show here, but the idea would be similar to what we're doing for the error handler or the tracer. So essentially I would envision (this is kind of my vision; I don't know if it's compatible with what you guys are talking about) some sort of way to say, you know, "set the logging interface", or set the LogSink, right? And that would accept, instead of something internal here...

B: It would accept this LogSink type from logr, right? And then from there, internal to the project, we would instantiate... sorry, we would instantiate this Logger type, and then this would just be essentially shared internals of the OpenTelemetry project, and we would use it and all the methods that are attached to it.

B: This is what Aaron's saying. So they would pass us... they would set, like, the project logger, and that would be some sort of interface representation of, you know, glog, the standard logger, all these things; essentially it's a wrapper for whatever they want to log with. And then we would be able to just generate log messages in the form that they needed. That's kind of the idea there. I can put together a little bit more of a proposal.
B: It is interesting to ask the question as to, you know: in the long term, is the OpenTelemetry project going to have a logging telemetry signal? I don't think there's a logging API; I don't think that was a part of the specification.

B: I think it's just a logging sink, essentially, or an equivalent to what we're talking about here as the log sink. And so I maybe need to look a little bit further into that, because I would rather not have the project depend on an external dependency and then have us build our own; where, to log for the OpenTelemetry SDK, you use an external dependency, but then, if you wanted to, you could also send to the telemetry signals. Like, I'd like those things to be aligned if we're going to have overlapping concepts there.
C: Yeah, the only thing I would suggest is: try and think differently between the internal logging, like what we log, versus our logging path. Because if something is wrong with our logging path, but we're using our logging path to also log, we don't get any of those logs. So, like, have a shorter path for our internal logs. That's a...
B
Yeah,
that's
a
really
good
point.
We
just
always
have
the
previous
version
log.
The
new
version
is
what
you
do
there
yeah
yeah.
B
Yeah
and
then
we
log
the
errors
on
the
traces
so
cool.
I
think
that's
some
good
feedback
I'd
like
to
look
into
that
a
little
further,
they
could
probably
get
a
poc
out,
I
think,
is
what
I'm
hearing
from
this
so
that'll
ideally
be
something
I'm
working
on
next
week.
But
that
being
said,
the
primary
thing
I
want
to
look
at
next
week
is
this
new
test
export
thing
which
I'm
sure,
if
you're
at
all
involved
in
the
project,
you've
seen
this
a
million
times.
B
So
this
has
been
a
long-standing
bug
since
earlier
this
year
it
was
reported
probably
before
that-
and
it's
only
gotten
worse,
because
we
split
out
the
connector
code
across.
You
know
multiple
different
packages
for
grpc,
but
essentially
what
this
is
is
for
those
that
aren't
aware.
There's
you
know
a
test
in
our
otlp
exporter
that
is
trying
to.
B
You
know
ensure
that
we
reconnect
to
the
collector
if
the
collector
dies
and
comes
back
up,
and
the
thing
is,
is
if
you've
taken
a
look
at
our
otlp
exporter,
there's
a
lot
of
complex
code
to
handle
the
retry
situation,
and
I
was
able
to
validate
that
the
grpc
client
connection
or
con
type
does
this
for
us,
like
it
automatically
handles,
as
it
says
in
the
docs
handles
errors
on
established
connections
by
re-resolving,
the
name
and
reconnecting,
and
I
was
able
to
validate
this
through
manual
testing
just
by
spinning
something
up
and
using
just
this
client
connection.
B
You
know
plugged
in
really
quick
to
our
exporter
and
you
know,
starting
a
docker
container
with
a
collector
making
it
run,
seeing
their
spans
turn
off
the
collector,
seeing
a
bunch
of
errors
come
out
of
our
exporter,
going
like
hey,
this
isn't
working
and
then
turning
back
on
the
collector
and
it
would
reconnect
and
spam
to
start
flying
again.
There's
like
some
retry
timing
that
goes
on
in
there.
That's
still
a
little
wonky,
but
I
mean
I
I
turned
it
off
for
you
know
20
minutes
and
it
still
was
able
to
reconnect.
B
After
that
time
period
it
took
a
little
while
but
like
it
eventually
does
it.
So
I'm
not
100
sure
why
we
included
a
lot
of
this
retry
logic.
Probably
I'm
guessing
is
because
this
didn't
this
behavior
didn't
always
exist
or
we
may
not
have
used
this
library
always.
I
know
we
inherited
this
code
from
open
census.
So
I
don't
know
these
are
guests.
I
don't
actually
know,
but
I
do
know
that
this
would
simplify
a
lot
of
things.
B
It
would
also
mean
that
a
lot
of
our
options
are
pretty
obsolete
because
you
could
just
pass
a
client
connection.
You
know
from
the
user
that's
already
configured
in
the
way
that
the
user
wants.
But
you
know
the
thing
is:
is
perfection's
just
never
going
to
be
the
case.
B
There's
always
going
to
be
some
issues,
so
I
I
wanted
to
look
into
replacing
our
reconnect
logic
with
just
using
the
client
connection
and
in
the
process
get
rid
of
this
test
because
it
doesn't
need
to
exist
so
yeah
that
I
think,
is
more
of
my
top
priority
because
it
is
blocking
a
lot
of
stuff
and
is
frustrating
the
maintainers
of
this
project.
B
So
yeah,
I
think
that
was
kind
of
what
I
was
thinking,
I'm
seeing
some
thumbs
up.
I
don't
know
if
anybody
else
has
more
context.
I
love
david.
If
you
know
more
about
this,
I'm
guessing
than
I
do.
D: You might need to reach out to Josh. He's talked a bunch with the gRPC team and knows what all the problems are with it. I haven't actually used it all that much.

A: All I have to add is that at New Relic, with the Go agent, we started having this exact same error with our tail-based sampling (Infinite Tracing), and it runs in GitHub Actions just like this, and it is the reconnect-to-gRPC test that is failing. So everything you're saying makes total sense.
B: I think it's this one, where it's kind of pointed out that exporters disconnect on marshalling errors. So there's an interesting thing that I found in the code itself, which is another really good reason to just pull out this reconnect logic: it has bugs in it. The gRPC logic itself will try to reconnect, and if there's essentially an export failure — so something's actually wrong with, you know, the format of the data that we transmitted — you're going to get an error here. And that doesn't necessarily mean the thing has disconnected, but we then say "we've disconnected" and completely reset the connection, which is really bad.

B: I mean, that's a lot of overhead to try to reset that connection. So there's just poor logic that we baked in here, and I think it would all go away if we just get rid of our handling of this reconnect. But I think that's a good point. I'm gonna connect with Josh and see, you know, if he has more context than I have at this point.
A: It could be a bug in gRPC, right? Someone brought that up. I mean, if we're both hitting it at the same time... What's the bug? Sorry, I don't know — just something that keeps us from reconnecting when we're tearing down and re-setting up the connection.

B: Yeah, I think, if there is... I think, you know, when you don't have blocking on that connection, it will come back up and it doesn't validate the connection. If you look at the ClientConn code, it just returns the connection. But I don't know if I'd particularly call that a bug, rather than that we shouldn't have been disconnecting in the first place. Because if you don't, and you let the client connection handle all that retry logic, it handles that situation.
B
It
either
will
error
and
will
tell
you
like
from
the
start
like.
This
is
a
bad
connection.
Please
fix
it
or
it'll
say
I've
been
able
to
connect.
So
I
will
continue
to
reconnect
indefinitely
until
you
close
this
connection
and
so
like.
I
think
what
we're
trying
to
do
is
like
we're
trying
to
retry
and
it's
already
trying
to
retry
and
our
retrying
is
not
really
understanding
the
internals.
I
think,
of
the
system
based
on
what
I've
seen
in
debugging
this.
B
But
I
I
think
that
I'll
try
to
have
a
proof
of
concept
for
next
week
and
maybe
show
that
to
give
a
little
more
clarity
again
code
is
a
lot
easier
to
understand
than
me,
rambling
cool.
So
that's
this
issue
pause
again.
If
anybody
else
has
something
else
you
want
to
add,
and
then
I
know
you're
interested
in
this,
so
I
I
can
I'm
guessing
pass
this
to
you
for
review.
Once
I
get
a
poc
as
well.
E: Yes, I've addressed it. I was just doing a random search through the internet. There is a thing on Stack Overflow (don't necessarily trust it) that talks about this: it does reconnect the client connection, but not necessarily the stream, yeah.

B: That is a good point. It does not, and you have to handle those reconnects yourself. Good news, though, is we don't actually send data via a stream; it's all synchronous requests. Yeah, I did see that as well; I was like, "oh, this isn't gonna work", but yeah.
B: No, yeah, okay, you're ahead of me then. Yeah, no worries. Okay, cool. Last thing I wanted to talk about, and then we can jump to Aaron's point: I had said last week that I was trying to get a release out. We have a lot of really good bug fixes in place for some bugs already existing in the CI system, and I think those should get released. That being said, it was noted by Tigran that we're pretty far behind on our semconv packages.

B: 1.7 is out for the specification and we are still on 1.4. So I've opened up three PRs to add the missing minor versions — notably not 1.6.0, because it contained a bug, apparently, so I just don't plan on releasing that. But otherwise this could use your eyes. I guess it's a question I'll pose to everyone on the call: do you think we should wait for the release to get these merged, or is it okay if we go before that? I'd probably do one or the other.
B
Yeah,
I
don't
know
if
anybody's
really
antsy
to
get
a
release
out,
there's
just
a
bunch
of
I
think
tooling
fixes
and
some
small
bugs.
So
I
don't
I'm
not
really
like
motivated
to
get
a
release
that
takes
a
lot
of
work
until
these
are
merged.
But,
okay,
I
think
I'm
just
gonna
delay.
As
I
wrote
here.
C
I
I
have
a
kind
of
follow-on
question
for
that
there
seems
to
be
a
lot
of
versioning
happening
on
that.
A
lot
of
churn.
Would
you
consider
moving
that
into
its
own
package?
B
The
semkov
package
itself.
C
Well,
we
wouldn't
release
any
any
future
versions.
Those
can
be
released
independently
in
their
own
repository
versus
having
them
released
here
and
then
just
have
the
current
versions,
pointing
to
whatever
the
current
version
is
in
the
in
the
code
or
in
the
in
the
simcom
repository
that
way
you
don't
have
to
manage.
You
don't
have
to
juggle
a
release
of
otel,
go
with
the
simcom
version
because
they
seem
to
be
versioning
separate
from
each
other.
B
Yeah
they
do.
B
I
mean
I'm
considering
yeah,
I'm
not
opposed,
I
just
don't
it's.
I
don't
see
that
we're
going
to
win
anything.
I
guess
like
it
seems
like
more
work
and
it's
just
going
to
move
the
dependency
out
into
a
different
repository,
but,
like
I
still
got
to
update
things
and
I
still
got
to
release
them.
B
C
The
the
reason
why
I
suggest
it
is
if
people
are
antsy
for
the
1.7
semcom
like
they
need
something
and
the
cycle
time
for
us
is
long
for
the
hotel
go.
C
B
B
The
the
the
tc
of
the
open
telemetry
itself
are
not
a
big
fan
of
that
approach.
They
yeah
it's
it's
something
that
I
think
is
not
common
in
many
other
languages.
So
it's
not
something
I
think
they're
in
favor
of
and
they're
the
kind
of
the
gates
I'm
building
to
repo.
So
we
can.
We
can
try
to
talk
with
them
about
it,
but
it
is
a
little
bit
tougher
to
get
a
new
repository
added
than
it
is
to
just
add
a
package
to
this
existing
one,
understandable
yeah.
B
I
I
don't
want
to
shoot
you
down,
because
I
think
that,
like
that's,
that's
fair,
like
we
definitely
are
doing
that
in
other
situations
like
the
build
tools
and
we're
doing
that
with
the
the
thing
you're
talking
about
next,
the
hotel
proto
go
stuff,
but
it
was
a
challenge
to
getting
those
so
yeah
yeah.
C
So
as
a
segue
into
that
hotel
proto
go
is,
I
think,
two
versions
out
of
date.
At
this
point
like
it's
on,
0.9
and
0.11
has
been
released.
C
There's
two
big
questions
that
I
have
is
one:
do
we
need
the
the
point
release
between
there?
So
do
we
need
ten
if
we
are
going
straight
over
to
eleven
and
two,
does
anybody
have
any
time
to
do
automation?
I
won't
have
any
this
week.
I
probably
will
have
some
next
week,
but.
C: ...I probably will have a little more time to work on internal projects like this, so...

B: Cool. I don't think there's a huge need right now; I think a lot of the latest versions are just for the metrics revs, which I think we're not quite ready to plumb in with the exporter. But yeah, I think that's a good question, whether we want to go to v0.10 or just directly to 0.11. I would definitely check in with the collector; if the collector's using this repository, which I think they might be, they may have dependencies they want met on that as well.

B: So yeah, that's what I would recommend. But I would definitely recommend automation, which we've kind of talked about as the pie in the sky for this repository in the past.

B: But yeah, that's a good question. I think two weeks, if that's the time frame for you to take a look at this, Aaron — it's not a huge rush — or even a month, honestly. But yeah.
B
That
being
said,
if
I
do
find
time
I'll
put
it
on
my
agenda,
man
back
to
the
original
thing
rich-
and
I
were
talking
about-
I
probably
won't
find
time
but
yeah
well
cool
awesome.
So
that's
the
end
of
the
agenda
that
we
have
written
down.
Is
there
anything
else
anybody
anybody
wanted
to
talk
about.
B
Cool
any
user
stories
that
have
any
use
case
of
hotel,
I
know
last
week
was
kubecon,
maybe
even
like
cool
talks.
You
saw
if
you
attended,
or
if
you
didn't
heard
of
through
the
grapevine.
B
Yeah,
I'm
pretty
behind
on
that
one
well
cool!
I
think
anthony
might
have
some
some
cool
things
to
say
when
he
comes
back
next
week,
because
yeah
he's
been
at
kubecon
this
last
week
and
he's
on
vacation.
So
I
imagine
he's
just
behind
the
computer
just
coming
up
with
new
cool
ideas,
because
that's
how
you
spend
vacations
right,
no
in
all
seriousness,
I
hope
he's
out
enjoying
his
time
off.
Well
cool.
B
I
think
we
can
probably
end
it
then
give
you
guys
some
time
back
and
yeah
thanks
everyone
for
joining,
we'll
see
you
all
next
week
or
virtually
right
thanks.
Everybody.