From YouTube: 2021-08-12 meeting
A
Yeah, so we don't have a lot to discuss today. We still have that distro shipping item pending, but since Diego is not here, we probably shouldn't go too deep into it, given that he has very strong opinions about it. Probably best to wait for him to be back and then discuss it in detail. Maybe we should even schedule a dedicated call for that sometime next week. So, other than that: new approvers and maintainers.
A
So I tried to document some things in this discussion. Mainly, I think the two major classes are developers or service owners, who integrate observability right into their services and then deploy and own them, and then ops, who add it at deployment time with the instrumentation.
A
Luckily, we support that with opentelemetry-instrument, so we kind of support both use cases, but our documentation is not really geared towards one role or the other. So if everyone can go over this and share their thoughts, maybe we can streamline the docs, and possibly this might also help us get a better understanding of how to split the configuration between distros and instrumentation down the line. So yeah, if everyone could take a look and share their thoughts after the call.
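As a rough illustration of those two use cases (not something shown in the meeting): manual instrumentation means the service owner creates spans in code via the OpenTelemetry Python API, while the opentelemetry-instrument command wraps an app with no code changes. A minimal sketch, with illustrative names:

```python
from opentelemetry import trace

# Manual instrumentation: a service owner creates spans directly in code.
tracer = trace.get_tracer(__name__)

def handle_request():
    # "handle-request" and the attribute below are illustrative names,
    # not anything from the meeting.
    with tracer.start_as_current_span("handle-request") as span:
        span.set_attribute("http.route", "/example")
        ...  # service logic goes here

# The zero-code path mentioned above wraps the app at deploy time instead:
#   opentelemetry-instrument python app.py
```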
A
CI has been having some problems for the last couple of weeks, so it could make the release next week annoying. Also, I'm going to go over the spec early next week and try to figure out if there are any new tracing-related features or environment variables that we don't support. Hopefully we can add support for them next week, and then the next release will have all those things in it.
A
Yeah, so then: any PRs or issues? I added one, nothing substantial, it just needs some additional reviews. If anyone could take a look, your thoughts would be great. Other than this, I went over a bunch of PRs, and most of them have CI failing from like five or six days ago.
A
Any other PRs or issues we need to talk about?
C
I guess one question I have about CI is: do we have an alternate plan if things don't get better with GitHub Actions? Should we start considering alternatives? Or, I don't know.
B
I haven't taken a look at this workaround that Diego apparently created to fix CI. Does anyone know anything about that?
D
It's such a weird issue, because only the instrumentation tests, specifically the ones which run opentelemetry-instrument, are breaking. And they don't even break, they just time out, like they have an infinite while loop or something, and they get cancelled. But if you run it locally, it's fine.
C
They're queued, because basically there's like 10 jobs. If you look under the Actions tab, you can see there's like two jobs that are currently running, and both of those have contrib build jobs that are stuck at the step where it says "run tox". It's been sitting there for 20 minutes and there's no output from any of the logs. If you scroll down, yeah, to the ones that are active, like the "remove contextvars" one.
C
So if you exhaust all your parallel runners across the entire organization, then basically everything else is queued up, right? And I guess what Nathaniel was pointing out was that it looks like there's an issue with our contrib builds that are just kind of hanging. So yeah, I don't know anything beyond this point, but usually when "run tox" is running, it'll at least output that it's running, and currently it's not even doing that. So, I mean, it could be that.
A
Yeah, possibly. Maybe, because it was tagged "latest", they recently released an image that has an issue. But even his PR, even Diego's PR, which was supposedly able to fix this, is blocked, or was blocked. I updated it; let's see if it finishes CI, and then we can merge it.
D
Someone in the chat shared a way that you can look at it. I know that there is a problem across the org, and I think even on our AWS downstream ones we were seeing that; I think we got a bunch of tickets filed because the actions were failing. But I do think this seems like something specific to us. It's always these tests that fail, the contrib instrumentation ones; they just time out and get cancelled.
D
Maybe it's something that we do, you know. Because the instrumentation one does that thing where it installs the script and puts it somewhere; maybe that isn't working on GitHub anymore, or something.
E
Oh, I was going to ask real quick if anybody tried something: there's a cache step that caches all the virtualenvs, and I was wondering if it could be a problem with that. You could bump the version in it to clear all those caches, and it will be slow until they repopulate, but that might be it too. I don't know if anybody tried that.
D
Yeah, that's a good idea. Yeah, let's try that.
B
Aaron, what's the thing that controls the cache? Is that tox? Is that GitHub?
E
Yeah. We could also add to this hashFiles; I recently realized you could do globs in here, so you could do like all the setup.py and setup.cfg files. Right now, if you update dependencies in those, you'd also need to bump it, otherwise it's going to be using the old cache. So I could try to make a PR and see if it fixes it.
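To make that concrete: hashFiles() in a workflow hashes the matched files into the cache key, so the key changes whenever those files change, and a manual version prefix forces a full invalidation. A rough Python sketch of that key computation, assuming a hypothetical CACHE_VERSION variable (the repo's actual workflow may differ):

```python
import glob
import hashlib

# Hypothetical version prefix; bumping it invalidates every old cache,
# mirroring the "bump the version to clear the caches" idea from the call.
CACHE_VERSION = "v2"

def cache_key(patterns=("**/setup.py", "**/setup.cfg")) -> str:
    """Roughly what a hashFiles() glob does: hash every matched file,
    so the key changes whenever the declared dependencies change."""
    digest = hashlib.sha256()
    for pattern in patterns:
        for path in sorted(glob.glob(pattern, recursive=True)):
            with open(path, "rb") as f:
                digest.update(f.read())
    return f"{CACHE_VERSION}-{digest.hexdigest()}"

print(cache_key())  # e.g. "v2-9f86d0..."
```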
D
But if you didn't update it and it had the wrong cache, it should fail, right? It shouldn't hang like this.
D
Yeah, that makes sense. It also almost seems like this was the old problem, because now on the PRs it's not just instrumentation; now it seems like everything is queued. So maybe it's just a whole new problem now. But yeah, I think trying that in a new PR is a good idea; that'd be great.
B
I think I'm... I think I'm blacklisted.
E
Yeah, I think we should be able to fix it. And then, worst case, I think you can run the tests in Docker too, if we absolutely need to customize it and it's some issue with the Ubuntu image.
C
Yeah, I did; I cancelled all of the running actions.
C
The preliminary outlook on that PR: "run tox" is not doing anything, so it's hanging there. So that's exciting.