From YouTube: 2022-07-27-Node.js Technical Steering Committee meeting
A: I guess it's also worth mentioning we're still working on a location for a Collaborator Summit at the same time as NodeConf EU, looking for a location in Dublin the couple of days before. So if you're planning on heading out there, just keep that in mind, or even if you just want to come for the Collaborator Summit.
B: There's the election for the board seat that you're vacating. Yep, I believe... I know Matteo's running, I think Joe Sepi is running, and a few other people, Emily and so on. But those are the two that leap to mind that are Node people, and if I'm forgetting someone else, I apologize.
A: Okay, yeah, I think that ought to be closing not too long from now. In terms of the working session, it was a retrospective on the event; a published version of that should be coming out soon as well, just so people are sort of in the loop thinking about it.
A: You know, there is some thought of doing maybe smaller events, or co-locating with other events, as opposed to having one big event, so I think that'll be discussed at the program committee. That's open for, you know, people to join us; if you have strong feelings on that front, that's a good place to get involved.
A: Let's move on then to nodejs/node. So the first one that was tagged for the agenda was "ruthlessly mark tests that fail frequently as flaky"; that's 43955.
A: It was opened by Matteo and he added it to the TSC agenda. There is some discussion in there, and I think the key suggestion, the reason it would be marked for the TSC, is: can we be more aggressive? Should we be more aggressive about marking things flaky in the CI?
B: I mean, yeah, I don't know. I would be a little concerned if we're going to mark something that is always failing as flaky. That's a bit of a red flag for me personally. But otherwise, I don't know. I mean, if it fails 30% of the time and nobody realistically is going to get to the bottom of it anytime soon, sure, mark it.
A: Yeah, I guess... I don't know if there was a suggestion of, like, auto-marking or something; that's something that would concern me. But, you know, certainly somebody looking at it and saying, hey, this is failing a good amount of the time, there's no reason to believe it's anything more than flaky, and then marking it flaky seems totally reasonable. Yeah.
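For context: in nodejs/node, marking a test flaky is done by adding an entry to the status files under test/ rather than by changing the test itself. A minimal sketch, with hypothetical test names:

```
# test/parallel/parallel.status (sketch; test names are hypothetical)
[true] # This section applies to all platforms
test-some-feature: PASS,FLAKY

[$system==win32] # Only flaky on Windows
test-other-feature: PASS,FLAKY
```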
B: There's no reason to believe that an automated flaky-marking process is going to be any less flaky than our tests. And, you know, as Ben and possibly others pointed out, a lot of failures are infrastructure-related, and marking them as flaky will not help. So, you know.
D: ...It's flaky. I know that Antoine has been working on my suggestion of having better tooling around identifying flaky tests and the degree to which they are flaky. I believe he's adding options to the test runner to see if a test is really flaky or if it's failing deterministically, and also... sorry, the other one I forget right now, but there are two options, essentially, that allow you to get better insight into how much the tests that are marked as flaky are actually failing.
C: Though I think we're kind of missing the point here. I think the point Matteo is trying to make is that it's very difficult to land PRs right now, because the CI is so flaky, and his way of resolving that is asking someone to just mark everything that is basically failing as flaky.
D: Yeah, I've been a fan of that for weeks. The situation has been quite bad recently, especially due to that keepalive PR landing with failing tests, and there has been pushback when we tried to revert it. So the situation has been quite bad, and we should definitely... I myself, when I see a test failing, and I know it's been failing for weeks, don't necessarily open a PR to mark it flaky, just because it's simpler to resume the CI, since I know it's a known flake.
F: Can you guys back up for a second? Once it's marked as flaky, that essentially disables the test, right? You can then land... CI can pass without that test needing to pass. Right. So I feel like what most of us are doing is: okay, the build failed; click resume, click resume again, click resume again: oh great, I've got green, or green and yellow, let's land it.
F: That at least... that's at least better than if some additional tests were marked flaky. Because me being able to land it meant that some test that is flaky but isn't marked as flaky, like, it failed four times, then it passed the fifth time when I clicked retry or resume; it did pass, you know what I mean? So there was at least one pass, whereas once it's marked as flaky, it might never pass and we'd merge it anyway.
C: But... and it's a problem.
E: Okay, around 100 times, yeah. So the idea with this change is we can mark a lot of tests as flaky without compromising our safety, hopefully without needing to resume full CI runs as well. It stops at the first successful attempt, so it would mean the CI will still fail a flaky test if it fails 100 times in a row.
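What E describes amounts to retrying tests that are marked flaky and stopping at the first success. A minimal sketch of that logic in Python, where run_test and MAX_ATTEMPTS are hypothetical stand-ins rather than the actual harness code:

```python
from typing import Callable

MAX_ATTEMPTS = 100  # hypothetical retry budget from the discussion

def run_flaky_test(run_test: Callable[[], bool]) -> bool:
    """Return True if the test passes at least once within the budget."""
    for _ in range(MAX_ATTEMPTS):
        if run_test():
            return True  # stop at the first successful attempt
    return False  # CI still fails the test after 100 failures in a row
```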
D: If it ignores any test that fails only, like, 50% of the time, regardless of any pre-existing markers... then, sure.
D: Well, that's what happened in the keepalive PR, and I'm sorry I keep referring to that particular PR, but it's been a major issue for days. Essentially there were five PR... sorry, five CI runs on that PR; the first four of them all failed the same test, and the fifth one didn't, by chance. So the PR was merged, and at that point it started failing CI for every other PR, for every other build, on all branches. So it introduced the flake, because someone re-ran until it passed, and that's what would happen.
F: So having this retry mechanism be limited to a predefined list prevents someone from sneaking in a flaky test, because, unless they add their new test to the flaky list right from the get-go... is what you're saying. Okay.
D: We could, technically. I thought about that too, but we already run all new tests essentially a hundred times, since we run all tests on, I don't know how many, different configurations and platforms. So there's the additional work of identifying actually new or modified tests, right? And if there's a single test that tests a single feature and someone modifies that feature, we wouldn't have the same guarantee, well...
F: But I feel like it's much more likely to be flaky on the, like, less popular platforms. Like, I feel like I never get flaky tests on, like, the Linux CI run; it's always, like, whenever SmartOS or, you know, the operating systems that are not very widely available. You know what I mean? So I think there would be some value in being like: okay, the new test has to pass 10 times in a row on SmartOS, or, you know, on each one of the environments.
D: That's fair. So if someone modifies a particular test, regardless of whether it's added or just modified, we could definitely run that, like all existing tests, hundreds or thousands of times; but we could probably add tooling so that modified test files are run more than the others.
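One way to stress a new or modified test along the lines discussed here is the harness's repeat option; something like the following (a sketch: the test name is hypothetical, and the exact flags should be checked against tools/test.py --help):

```
# Hypothetical stress run of a single test file many times
python tools/test.py --repeat=1000 test/parallel/test-some-feature.js
```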
A: I had an actual question here: if that change is landed, which is going to, I guess, continue to run them if they're marked flaky... I guess that helps us survive the ones which are...
C: Yeah, I mean, this automatic re-running, I'm a little skeptical, but I can neither say yes nor no. But the immediate problem now is that it's difficult to land PRs, so we have to find a solution, like somebody has to fix this situation sooner rather than later.
E: Anecdotally, it's been much easier this week, maybe since Monday or something, with the CI even turning green on the first try. So, I don't know, maybe somebody landed a fix.
F: But the situation right now, though, is that, and correct me, please correct me if I'm wrong about this: I thought a flaky test didn't have to pass for CI to pass; it would just give you the yellow warning, but that flaky test never had to pass. So if your thing lands and says, oh, I'm going to rerun the flaky one ten times, but it must pass at least one of those ten...
A: But I think that improvement's good, and if we did run into that, we should disable the tests. And this improvement makes it, I think, lower impact to mark things flaky, is kind of what I was now understanding, right? Because it's...
G: It's only considering things that are already flaky. I mean, the issue at the moment is that tests are flaky which aren't marked flaky. Jeffrey is right: if a test is marked flaky, then, you know, the tests that fail which are already marked flaky will show up yellow. So part of the reason that CI has been failing a lot is because there are tests which are not marked flaky which are failing frequently in the CI.
E: Yeah, but hopefully with the change we would, you know, mark more tests as flaky, because it doesn't cost as much.
D: Yeah, that was the idea behind it. I mean, just the word "flaky": if we can't mark something that's flaky as flaky, then we have a problem in our definition and we need to use different terms. Like, if a test has been flaking for four weeks, we should be able to mark it as such, or we have to use different terminology. So I think Rich was right from the very beginning.
C: I think that's kind of the situation. I haven't landed so many PRs lately, so I can't say, but that's the sense I get from the issue.
D: I feel like it's just been a couple of bad weeks, due to a few changes that landed and increased the number of flaky results tremendously, which should maybe affect our process in the future. For example, we should have reverted such a change much... we should have reverted it and not tried to fix it for days, which wasted so much of our time.
D: That, and contributors... I know more than one collaborator who has been a very good contributor and whose contributions have definitely suffered from not being able to run CI and not being able to merge PRs. So maybe it should affect how we do things, but maybe the fix that just went in is going to be enough to change that.
A: So it sounds like there's two things we could say. Hey, let's declare bankruptcy, like suggested, and close them all. But I think the suggestion is: it was bad, it is now better, so we don't necessarily need to do that; but people are free to mark things flaky if there are still ones that are problems.
A: Okay, if not, the next one is "src,lib: print source map error source on demand". This is number 43875, and looking at the issue, the question seems to be: are we willing to land this in Node 18 with a do-not-land label for v16?
A: The reason the question is there is that I think it's breaking the format of stack traces, or... and if anybody else has been more involved, please jump in, but it's breaking sort of the format of the stack traces. But from some of the discussion, we often, you know, don't consider that to be part of breaking changes, for a number of different reasons, and there's a fair amount of support for doing that.
B: For the benefit of people listening to the recording or watching the stream: Antoine says plus one, too, to what I believe Reuben was just suggesting.
E: Yeah, correct: usually we don't count error messages as part of the stable API. I don't think it's... sometimes it should change.
A: Okay, the next one is nodejs/i18n, "I would like to become a member", number 6112. Rich, I know you were looking at this one. Yep?
B: I took the tsc-agenda label off of it. We should, or I should, or something... I'm handling it. But we need to get our access to Crowdin back. That should stay, as it's a place where people can, you know... it provides valuable functionality: it lets people provide translations for, and read translations of, our API docs, which currently we don't have another place to host.
B: Eventually it'll be nice to get integration so that it can be integrated into our actual hosted API docs on the website, but for now the workflow is totally broken, because nobody seems to have the administrative privileges. I pinged Ben, I hope I'm not mispronouncing his last name, Michel (obensource), to see if... because they seem to have elevated privileges. I also asked Richard if the Crowdin bot credentials might help us get administrative access to the Crowdin instance, but I...
G: I don't believe Build has any credentials for anything to do with Crowdin. So... okay, I don't think that's owned by... there's nothing that I know of that Build knows about. So wherever that is, it's not in anything owned by Build that I know of, yeah.
B: So what I might do is try to contact Crowdin, the company, on our behalf, you know, and see... maybe escalate to the foundation to see if they can get involved, to help us out, to see if we can get the administrative password reset to something that they can give to me or, you know, some other highly trusted person. But the point is, I'm handling it there, and I...
A: I had one that's only semi-related, but, you know, you mentioned the integration into our API docs, and I know we had the discussion with the foundation staff last week in terms of them offering to provide some support on our website.
A: That takes us to the end of the issues we had tagged for the agenda. Rather than reopening the issue itself, I know there were some updates on the strategic initiatives there.
A: In terms of the ShadowRealm strategic initiative, legendecas has updated that, you know, with Joyee's helpful reviews, he's finishing up the design doc and has started working on distinguishing per-isolate and per-environment properties; see issues 43802 and 43781.
A: I'll say, on the Next 10 strategic initiative: we have a session tomorrow on TypeScript, focused on potential... sort of not integrating TypeScript, but making it easier to use TypeScript with Node. So if people are interested in that, check out the issue in the Next 10 repo. There have been some good discussions in different issues and stuff, and so this is a hope to get people together and agree on maybe a higher-level way forward, or higher-level requirements, versus individual specific implementations.
A: And I don't think we have anybody else. I will say, on single executable applications: I think we'll see an issue opened in the admin repo to start a team that's going to be able to focus on single executable applications, and potentially a repo to go along with that, to sort of investigate some of the key issues and have a place to have those discussions and stuff like that. So that's something that is likely to happen.
A: So I think that takes us to the end of the agenda. Is there anything else that we should talk about before we close out for today?