From YouTube: 2021-11-10 meeting
A
Okay, I guess that's unfortunate, because I wanted to discuss something he raised: the flakiness of the tests. I guess he posted something there about that. Maybe we can discuss it next time, because I wanted him to be here since it's his initiative.
A
Yeah, I saw the notes, so I guess there was no decision made on that. I've been thinking about what we can do about it and what the options are: you can disable the test, and removing the component is, I think, kind of the nuclear option, right? You probably don't want to do that immediately, maybe eventually. So I was thinking maybe we do something like this.
A
Once we see a test that is unstable, we ask the author, the code owner, to fix it. If the code owner is unresponsive, let's say we give the code owners a few weeks to fix it, and if they don't respond, then what we can probably do is disable that particular test.
A
We probably print a warning in the logs when the component is used, and we give the users plenty of time, maybe a six- or twelve-month period, during which the component is actually marked as unmaintained. If by then it continues to remain unmaintained, meaning the author doesn't do anything about the test and doesn't make any other changes, so it's abandoned anyway...
A
...then we remove it, right? But we give plenty of time, I think at least six months or something like that. And then if the author, for whatever reason, wasn't able to fix it quickly but comes back and wants to fix it, that's fine.
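(A minimal sketch of the log warning being proposed here, assuming a zap logger as the collector uses; the helper name and the hard-coded flag are illustrative, not actual collector code.)

```go
package example

import "go.uber.org/zap"

// warnIfUnmaintained is an illustrative helper, not part of the collector:
// it emits a warning at startup so users of an unmaintained component get
// plenty of notice before the component is eventually removed.
func warnIfUnmaintained(logger *zap.Logger, componentName string, unmaintained bool) {
	if unmaintained {
		logger.Warn("component is unmaintained and may be removed in a future release",
			zap.String("component", componentName))
	}
}
```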
A
We remove that label, so it's no longer unmaintained. Or if somebody comes and wants to change the component in some other way, we tell them: you can't until you fix that bad test. Just do that, right, make sure it doesn't remain that way. So I think something like that. And another thing: well, some tests are actually junk sometimes, right?
A
Maybe it's really a bad test. So if it's clear that it's the test that is bad, and not the component somehow failing the test and actually exposing a bug, then we just delete the test, right? That's also an option. If it's clear that the test is actually a poorly written one, we get rid of it, and then there's no need for all of this choreography around marking it unstable and doing all those things. Sometimes you really just have a bad test and you just get rid of it. Juraci, good.
A
Maybe I'll repeat this one more time for Juraci. What I was suggesting regarding flaky tests is that, if the author is unresponsive, we mark the component as unmaintained and we disable that particular test. If the component remains unmaintained for a long period of time, like six months or more, we then remove the component. So we give users plenty of time to stop using that component, and we give a warning in the log that it is not maintained.

If the author comes back and fixes the test, we remove that label, so the component is no longer unmaintained. And sometimes the test can be really bad: if we clearly see that the test is poorly written, we just delete the test. That's okay as well, right? We just need to be careful not to remove tests which are actually uncovering component bugs, maybe unstable components, stuff like that. So I guess that's the summary.
B
All right, okay, that sounds good to me. I think over the past week we had a few flaky tests, and the one bothering me the most is the one I linked here, for a couple of reasons. First, it is failing quite often, at least for the PRs I'm reviewing, and second, it is for a component that is, you know...
B
One thing we talked about last week was: we're going to bring flaky tests to this discussion and hope that people fix them over time. But I like your suggestion; I think it goes in line with what we discussed, and it defines the drastic measures we hadn't discussed back then, because we were hopeful that things would...
B
...that what we had discussed would be enough for things to restabilize. And if they don't, then yeah, we need drastic measures.
A
Okay, sounds good. Let me post all of this as an issue, or maybe as a PR against the contributing document, and we can decide what the timelines look like exactly: how long we wait for the author, and how long we wait until the unmaintained component should be removed. Maybe we can discuss that on the PR; it doesn't have to happen right now. Yeah, okay, I'll do that. And for the particular flaky test that you have here, do you want to have a look at it right now, or are you just...
B
So I tried to review this one; this is a PR. I tried to take a look, and there was also a review by Sean Marciniak, with a discussion about a specific part of the code. I don't know the Prometheus receiver at all, so I'm not comfortable reviewing that one. I would ask someone who knows about metrics and the Prometheus receiver to review it instead of me.
F
I could be added there as well. I think I've been kind of informally responsible for this for a little bit; might as well add me to the code owners.
F
I've already reviewed this. I reviewed his PR before it made it into the upstream repo, and I think what's happening here is that, the way the tests are set up, the scraper sometimes gets an extra cycle in before it gets shut down, so it will have two scrapes instead of one. This test was looking at all of the data that was scraped and saying: oh, I've got twice the number of metrics I expected to see here.
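(A hedged sketch of the kind of fix this implies, not the actual test: tolerate an extra scrape cycle before shutdown by asserting on complete batches rather than an exact total. All names and values here are illustrative.)

```go
package scrapertest

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// metricsPerScrape is an illustrative constant, not taken from the real test.
const metricsPerScrape = 10

// scrapeAndCollect stands in for the real test harness; it simulates the race
// described in the meeting: sometimes one extra scrape cycle sneaks in before
// shutdown, doubling the collected data.
func scrapeAndCollect(extraCycle bool) []float64 {
	n := metricsPerScrape
	if extraCycle {
		n *= 2
	}
	return make([]float64, n)
}

func TestScraperMetrics(t *testing.T) {
	for _, extraCycle := range []bool{false, true} {
		got := scrapeAndCollect(extraCycle)

		// Flaky form: require.Len(t, got, metricsPerScrape)
		// Tolerant form: at least one full scrape, and only whole batches.
		require.NotEmpty(t, got)
		require.Zero(t, len(got)%metricsPerScrape,
			"expected complete batches of %d metrics, got %d", metricsPerScrape, len(got))
	}
}
```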
A
Let's move on to the next one: Travis, discuss a PR. Is Travis here? Nicole, yes?
D
Yes, hi. I wanted to discuss pull request 5835. My reviewer saw a place where I was using a mutex to protect a counter, and he suggested that I just use the atomic package to increment that counter instead of a mutex. So I made that change, and it was a really small change.
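(For context, a minimal sketch of the kind of change described, not the actual code from PR 5835: replacing a mutex-protected counter with Go's sync/atomic package.)

```go
package counter

import (
	"sync"
	"sync/atomic"
)

// Before: a mutex guards every increment of the counter.
type mutexCounter struct {
	mu sync.Mutex
	n  int64
}

func (c *mutexCounter) Inc() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

// After: the counter is incremented atomically, no lock needed.
type atomicCounter struct {
	n int64
}

func (c *atomicCounter) Inc() {
	atomic.AddInt64(&c.n, 1)
}

func (c *atomicCounter) Load() int64 {
	return atomic.LoadInt64(&c.n)
}
```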
A
Yeah, that's a load test; that can happen sometimes because the GitHub Actions are running on some sort of overloaded machine. That does happen. Typically re-running fixes it, but sometimes we also need to increase the limits for the test, because usually it hits the memory limit: the collector just grows in size, and that's kind of expected.
A
So you see the failure, right? You see it in the GitHub Actions. It would be great if you could post the actual failure and what the problem was, and then, depending on what the problem is, maybe we need to increase the limits.
B
Yeah, I think someone might have restarted the test already, and there's nothing failing right now. The things that are failing are related to the CircleCI build-and-publish job, which is not required, and I think it is failing for all the PRs actually. So someone might have restarted the load test already for this one.
B
But I've seen a PR that also increased the limits yesterday or the day before, so I think we are good. And on to the next...
A
We usually have some buffer between the typical observed values and the limits that we set for the load tests, because load tests are not predictable on this sort of executor. But if they start failing more frequently, we should just increase the limits a bit more, which I typically do from time to time. If any other maintainers or approvers are around, please do that as well, right?
A
Yeah, that's a good point. You typically need to look at the historical runs to see whether it's something that is trending as a result of the collector getting larger and larger, or whether it is the result of a change, right? One time we actually upgraded the protobuf library, and there was a known performance issue in that version of the protobuf library that was caught by the load test. So yes, evaluate that; don't just always bump the limits without looking at what the problem was.
A
So just look at the few previous runs on the main branch to see whether they were close to the limit. If they were close to the limit, it likely means that the PR is not at fault here; it's just that over time we came closer to the limit, which is expected, because the collector's memory usage just grows, right? It becomes larger and larger.
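(A rough illustration of that check; the 10% margin is an arbitrary example, not project policy: if recent main-branch runs were already near the limit, the limit probably just needs a bump rather than the PR being at fault.)

```go
package loadcheck

// nearLimit reports whether recent main-branch measurements were already
// close to the configured limit, suggesting a gradual trend toward the limit
// rather than a regression introduced by the PR under review.
func nearLimit(recentMainRuns []float64, limit float64) bool {
	const margin = 0.10 // arbitrary example margin
	for _, v := range recentMainRuns {
		if v >= limit*(1-margin) {
			return true
		}
	}
	return false
}
```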
B
So I guess there are two things in here to think about for the future. I think the first one is running on more stable machines, like bare metal. I think we did have a discussion before about metal usage, and just a reminder: as part of the CNCF, we have the possibility of using Packet machines, or whatever they're called right now, and so we can...
B
We could use them as self-hosted executors for GitHub, or self-hosted runners, whatever they're called, for the performance-related tests. And the second thing could be to not use the whole contrib collector for those load tests, but instead to use the builder to build specific distributions, or specific things for specific tests. So if a test is exercising only, I don't know, the Prometheus receiver and the OTLP exporter, then generate a distribution with only those parts, so that the test doesn't get affected by changes in other components.
B
Yeah, I think it also kind of ties to a suggestion that was made a few weeks ago, that we don't actually have to run all of the tests all of the time for all the PRs. We can have those special performance-related tests running as a nightly build, right? Sometimes it's okay to catch issues once a day instead of on a per-PR basis.
A
Or, if we could, I guess, figure out what has changed: if a component is changed, we run only the tests for the components that changed, right? The components are fairly decoupled from each other; they don't really affect each other much unless there's some sort of shared code that has changed, which is a small minority of the code base in the contrib repository.
F
There was someone from AWS who was working on something similar to that recently. I need to find out what happened to their PR. They had a first stab at it that wasn't quite right, and I talked to them about some changes; I'll see if we can get that through. But what that would do is basically look at what files have changed, what modules those are in, then identify other modules that depend on them, and kind of build up from there.
A
Yeah, the first approximation could be that if anything changed in the root module, you run everything, and if anything changed in just a single component's module, we just test that single component, right? That should be fairly easy to do, I guess. I don't think we have a lot of interdependencies between the components.
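(A minimal sketch of that first approximation, assuming the contrib layout where each component directory carries its own go.mod; the function name and the "all" sentinel are illustrative.)

```go
package testselect

import (
	"os"
	"path/filepath"
)

// modulesToTest maps changed file paths (relative to the repo root) to the
// set of component modules whose tests should run. Any change that resolves
// to the root module returns {"all"}, signalling a full test run.
func modulesToTest(repoRoot string, changedFiles []string) map[string]bool {
	modules := map[string]bool{}
	for _, f := range changedFiles {
		dir := filepath.Dir(f)
		for {
			// Stop at the nearest directory containing a go.mod.
			if _, err := os.Stat(filepath.Join(repoRoot, dir, "go.mod")); err == nil {
				break
			}
			if dir == "." || dir == "/" {
				break
			}
			dir = filepath.Dir(dir)
		}
		if dir == "." || dir == "/" {
			// The change belongs to the root module: run everything.
			return map[string]bool{"all": true}
		}
		modules[dir] = true
	}
	return modules
}
```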
G
Yeah, this is just administrative, but I removed the CircleCI dependency. Now we're going to be publishing from the GitHub Actions, but we don't have the write credentials, so I opened an issue: whoever the grown-up is who has access to the credentials, if they could help us out here, that would be great.
A
This is for pushing to the Docker Hub, right? Correct, yeah. I have those credentials and I need to put them in the GitHub Action. Where do I put them? Where do you need...
G
Okay, yeah, and if you want we can talk offline, but we can also share the secrets in the 1Password account that we have, and that way maybe prevent this in the future. Do we have...
G
For OpenTelemetry, but I've separated each SIG into its own vault, but only...