From YouTube: Node.js Release Working Group Meeting 2019-11-07
A: Exactly. So other than, you know, things that are broken, or issues that have been raised, or major bugs that are reported, we do not really change the RC after the RC has been cut. So what this generally turns into, from a workflow perspective, is kind of like two weeks of backporting, then the cut, then two weeks of fixing anything that's broken, and then it going out.
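To make that cadence concrete, here is a minimal sketch in TypeScript that derives the backport window and target release date from a chosen RC cut date; the 14-day offsets are illustrative assumptions rather than stated policy:

```typescript
// Illustrative sketch of the cadence described above: roughly two weeks of
// backporting, then the RC cut, then roughly two weeks of fixing before the
// release goes out. The 14-day offsets are assumptions, not stated policy.
const DAY_MS = 24 * 60 * 60 * 1000;

function releaseTimeline(rcCutDate: Date) {
  return {
    backportWindowStart: new Date(rcCutDate.getTime() - 14 * DAY_MS),
    rcCut: rcCutDate,
    targetReleaseDate: new Date(rcCutDate.getTime() + 14 * DAY_MS),
  };
}

// Example: an RC cut on 2019-11-19 implies backporting starts around
// 2019-11-05 and the release goes out around 2019-12-03.
console.log(releaseTimeline(new Date("2019-11-19T00:00:00Z")));
```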
B: One of the issues we just had recently is with 10.17, some semver-minors, because we've kind of swapped to not landing everything in 10, because the backlog of commits is so high. It actually did take me opening the PR to get people to start pinging, saying "hey, this should go in; hey, this should go in." So we kind of didn't follow that for 10; we added most of the commits after the original RC had been opened.
C: A question: do we land PRs on those backport release lines when they are younger than two weeks, or do we only backport things that have landed before that? The question applies because, as far as I remember, we have the two-weeks rule that each commit should have been on the current release line for at least two weeks. I'm sorry, yes.
A: We've generally done it such that it's two weeks before you land it, not two weeks before the RC date. But I think that, you know, in general, these things are very much meant to be at the discretion of the releaser, and the backporting team is meant to have a bit of insight into this. If there's something that's important to land, we can always look at things on a case-by-case basis and be pragmatic about it. Okay, so for...
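The two-weeks rule being discussed, that a commit should have spent roughly two weeks on the current release line before it is landed on a backport staging branch, could be sanity-checked mechanically. A minimal sketch, assuming git is available and using the commit date as a rough proxy; apart from the 14-day figure taken from the discussion, everything here is an illustrative assumption:

```typescript
import { execFileSync } from "node:child_process";

// Sketch: warn if a commit is less than 14 days old, as a rough proxy for
// "has it been on the current release line for two weeks". Using the commit
// date this way is an illustrative assumption, not project policy.
function commitAgeInDays(sha: string): number {
  const committedAt = execFileSync("git", ["show", "-s", "--format=%ct", sha], {
    encoding: "utf8",
  }).trim();
  return (Date.now() / 1000 - Number(committedAt)) / 86400;
}

const sha = process.argv[2];
if (commitAgeInDays(sha) < 14) {
  console.warn(`${sha} has been on the release line for less than two weeks`);
}
```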
A: It makes sense, yeah. We had usually done the first Tuesday, but I think what's more important is less about picking a day that we've used in the past and more about being fairly consistent about where in the week and where in the month we're landing it. That's good to have. Do we want to add maybe one more column?
D: I think we have a CI job that already does that comparison and tells you the percentage of drop or win. But the main issue here would be to decide which benchmarks we run, because if we run all of them, it takes a really long time. I don't know how long you need to run all the benchmarks, but it's very long.
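For reference, the kind of comparison being described can also be driven by hand with the compare.js tool in the node repository's benchmark directory, which runs each benchmark against two binaries and reports relative changes. A minimal sketch; the binary paths, the http category, and the reduced run count are assumptions, and the exact flags should be checked against the repository's benchmarking guide:

```typescript
import { spawnSync } from "node:child_process";

// Sketch: run the node repository's benchmark comparison between two builds
// and print the raw results that are normally summarized into percentage
// changes. The binary paths, the "http" category, and --runs=5 are
// illustrative assumptions; check the repository's benchmarking guide for
// the exact flags of benchmark/compare.js.
const result = spawnSync(
  "./node-new",
  [
    "benchmark/compare.js",
    "--old", "./node-old",   // for example, the previous release of the line
    "--new", "./node-new",   // for example, the release candidate build
    "--runs", "5",           // fewer runs than the default to keep it fast
    "http",                  // restrict to one category instead of all of them
  ],
  { stdio: "inherit" }
);
process.exit(result.status ?? 1);
```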
C: Yeah, at the moment our micro-benchmarks are partially taking very long. I already tried to reduce a couple of those so that they would finish in a relatively fine time, but it really depends, and sometimes we just land new micro-benchmarks that take hours or days, even though they should not. Maybe we could even introduce a recommendation, and update our benchmarks in case an individual benchmark takes longer than, I don't know...
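A recommendation like that could be checked with a small timing pass over the benchmark files. A minimal sketch, assuming the repository layout of benchmark/&lt;category&gt;/*.js; the buffers category and the ten-minute budget are illustrative assumptions, not a proposed limit:

```typescript
import { readdirSync } from "node:fs";
import { join } from "node:path";
import { spawnSync } from "node:child_process";

// Sketch: run each benchmark file in one category and flag any that exceed a
// time budget. The "buffers" category and the ten-minute budget are
// illustrative assumptions.
const category = "buffers";
const budgetMs = 10 * 60 * 1000;
const dir = join("benchmark", category);

for (const file of readdirSync(dir).filter((f) => f.endsWith(".js"))) {
  const started = Date.now();
  spawnSync(process.execPath, [join(dir, file)], { stdio: "ignore" });
  const elapsedMs = Date.now() - started;
  if (elapsedMs > budgetMs) {
    console.warn(
      `${file} took ${(elapsedMs / 60000).toFixed(1)} min; consider reducing its parameters`
    );
  }
}
```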
A: So one thing that I think is worth throwing out there, to be clear to start: I think this is a great idea and we should figure out how to implement it. But in the case of the primordials, as far as I know, the micro-benchmark suite did not even show any differences. The performance degradation that happened was under extreme CPU load and was identified out of band of our benchmarks. So I mention that only to say that, while this would be good to implement, it's not guaranteed to actually catch or check these things. I think one thing that we definitely could do, which might be helpful, would be a nightly job with the benchmarks running against the staging branches, potentially to flag things early. But yeah, I think a combination of that and figuring out how to make this 'fast', and I say that in quotes, because we also do canary in the goldmine (CITGM), which is also, you know, a slow job.
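In outline, the nightly idea could look something like the following: fetch and build a staging branch, then compare a small benchmark subset against a previous build. A minimal sketch; the v12.x-staging branch, the previous binary's path, the misc category, and the run count are illustrative assumptions, and in practice this would be a scheduled Jenkins job rather than a standalone script:

```typescript
import { spawnSync } from "node:child_process";

// Sketch of the shape of a nightly benchmark job against a staging branch:
// fetch and build the branch, then compare it against a previous build on a
// small benchmark subset. Branch name, previous binary path, "misc" category,
// and --runs=3 are illustrative assumptions.
function sh(cmd: string, args: string[]): void {
  const r = spawnSync(cmd, args, { stdio: "inherit" });
  if (r.status !== 0) throw new Error(`${cmd} ${args.join(" ")} failed`);
}

sh("git", ["fetch", "origin", "v12.x-staging"]);
sh("git", ["checkout", "origin/v12.x-staging"]);
sh("./configure", []);
sh("make", ["-j4"]);
sh("./node", [
  "benchmark/compare.js",
  "--old", "./node-previous-nightly",
  "--new", "./node",
  "--runs", "3",
  "misc",
]);
```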
C: So ideally we would even be notified earlier about these problems, maybe not at release time but via an issue that would be opened. And, of course, the benchmark suite has to have a benchmark that actually triggers that specific part, so it won't ever be guaranteed to detect some of these things. But it's at least one part that we can do to improve things like that and to potentially detect them earlier, yeah.
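Opening an issue automatically when the job flags a regression, rather than only surfacing it to the releaser, is straightforward with the GitHub issues API. A minimal sketch, assuming a GITHUB_TOKEN in the environment and a Node.js version with global fetch; the target repository, title, and body text are illustrative assumptions:

```typescript
// Sketch: open a tracking issue when a nightly benchmark run flags a possible
// regression. Assumes a GITHUB_TOKEN in the environment and global fetch; the
// target repository and message text are illustrative.
async function openRegressionIssue(summary: string): Promise<void> {
  const res = await fetch("https://api.github.com/repos/nodejs/node/issues", {
    method: "POST",
    headers: {
      authorization: `token ${process.env.GITHUB_TOKEN}`,
      accept: "application/vnd.github+json",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      title: "Possible performance regression found by nightly benchmarks",
      body: summary,
    }),
  });
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
}

openRegressionIssue("benchmark X regressed by Y% between two nightly builds")
  .catch(console.error);
```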
B: Yep. I guess what we could do amongst us between now and the next meeting is try playing with that job, once we've found it in Jenkins, and see if there's a way we can kind of get the delta between two releases and roughly how long it takes. Because what would be nice is if we could even just say, as part of the node test release candidate job, that it kicked off some configuration comparing two releases, like the RC release and the latest of that release line, yeah.
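Pairing an RC build with the latest release of its line mostly comes down to resolving what "latest of that line" is. A minimal sketch using the public nodejs.org release index; the v12 line and pairing it with an RC build from https://nodejs.org/download/rc/ are illustrative assumptions:

```typescript
// Sketch: resolve "the latest release of that line" so a comparison job can be
// pointed at it alongside an RC build. The index URL is the public nodejs.org
// release listing; the v12 line is an illustrative assumption.
interface ReleaseEntry {
  version: string;
  date: string;
}

async function latestOfLine(line: string): Promise<string> {
  const res = await fetch("https://nodejs.org/dist/index.json");
  const releases = (await res.json()) as ReleaseEntry[];
  const match = releases.find((r) => r.version.startsWith(`v${line}.`));
  if (!match) throw new Error(`no release found for the v${line}.x line`);
  return match.version; // the index is ordered newest first
}

latestOfLine("12").then((v) => console.log(`compare the RC against ${v}`));
```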