From YouTube: Node.js Build WG meeting - January 29 2019
B: Thanks, Michael. Hello, everyone. This is the January 29th, 2019 Build Working Group meeting. Looking at it, it's a reasonably short agenda. Present today we have myself, Michael Dawson, Rod Vagg, Rich Trott and Refael. I guess, before tackling the agenda, did anyone have anything they wanted to add to the agenda, or anything they want to bring up before we get started?
B: Nope, I'll take that silence to be good. So the first issue to talk about, then, is issue 1623 on nodejs/build, "add more AIX machines". This has obviously been a bit of an ongoing issue. I know that quite a few people have expressed some upset, let's say, about the kind of backlog that it can cause. We do have an additional AIX machine that's been provisioned through OSU, that is 7.1, but I am having quite a bit of a battle with it.
B: Unfortunately I'm actually still trying to get it to, sort of, run. It seems that almost every instruction that was written in the bootstrap document just, for some reason, doesn't seem to work anymore. All the dependencies have gone, so I've had to be kind of pulling tooling over from the other test CI machines and so forth, so it's just made my process quite a bit longer. It's worth pointing out that for AIX 7.1 I have actually submitted a work-in-progress PR that basically will build us...
B: ...a proper AIX 7.1 build machine, and I've run that internally at IBM and we've had some pretty good results. We've been able to build and run tests, so in principle I've been pretty happy, and obviously that ties in quite nicely with whether we can then go in and add AIX 7.1 at some point. But obviously it doesn't quite help us with the, er, sort of imminent 6.1 backlog. I'm just reading what Rich has put here: probably an argument for more frequently deleting and rebuilding/restoring machines in Jenkins.
B: So hopefully, in the future, we will have a much better system that we can Ansible-ize and regularly rebuild and make sure that we're happy, but these AIX 6.1 machines that we've currently got are all really kind of tailor-made to what we have. I don't know if Michael wants to add something to this. Oh, yeah.
B: It's been a bit difficult, because I've obviously been trying to prioritize getting this third machine, but I had no idea just how much work it was going to be. I'm starting to wonder whether it's worth me flipping my priorities and actually getting that big machine sort of worked on and out. You know, I did set some goals of getting that out in February, maybe sort of towards the end, and I think that that's still achievable...
B: ...if I do try and sort of flip my priorities a little bit. So I don't know how everyone else feels on here. We've obviously looked at it as well; I think, actually, on our supported list, from Node 10 and above we're on AIX 7.1. Obviously, people there... we do build on AIX 6, and as a proposal we've kind of been internally talking about moving over to AIX 7.1 TL4 for Node 12 and above. So I...
B: ...think there's clearly a good point in getting that out; it's just whether we think the worth lies in getting that third 6.1 machine in, or whether we should say, actually, let's make a point and get that big new machine out, try and take some of the workload off CI and see what we can do from there.
D: Regular enough problem, yeah. Yeah, I mean, you know, it's not like it was, though. I mean, I don't know what happened; maybe we're just running CI less often, or maybe it's improved substantially, but it's still there. And Windows and AIX tend to be the laggards, but especially AIX, yeah.
A: Back to, you know, how close... I'd hate to stop getting the third one done, you know, a few days before we succeed, or even a week, because I suspect it's still going to be like a month or more before we get the larger machine, right? Yes, yeah. So it's kind of like... if Rich had said, "well, we don't see it as a problem at all anymore", I might have said, "okay, let's just switch over to the longer term", yeah.
B: Right, but if we're going down that route, then I would argue: can we not just start looking at AIX 7.1, which defaults to GCC 6? Well, better, especially as master claims to support AIX 7.1 and above, yeah.
B: The thing that's currently blocking me is OpenSSL and OpenSSH, and every time I try and install it, it basically just goes into some crazy hang. I've tried to talk to Grisha about it, obviously, because I figured that he might be able to help me, but for some reason the public install instructions just don't make sense; they just don't seem to be working, presumably because they are at slightly different levels.
B: I think the message I'm trying to get across here, Rich, is that I've been doing a lot of Node builds recently, internally, on not particularly powerful VMs, and we're doing Node builds in like 12 to 15 minutes. So I really believe that we can be doing our AIX builds faster, and I believe that the main reason is just that the disk I/O on the machines isn't really particularly up to scratch, and it's just on those particular...
C: It just seems to be happening when we get in the US... we start to see backlogs because, like Michael noted in the last meeting, the build time, or the pending time, for a Windows job is longer, but we just have more workers, so we have wider bandwidth, and the AIX jobs become, like, backlogged if we have more than two jobs running in parallel. And it's all, you know, within reason; I haven't seen it become, like, you know, ten or more jobs deep, but maybe we could, you know, devise...
A: The first thing that comes to mind, actually... if you have the particular times, I might take a look at some of our other jobs to see if we've got... like, you know, I know there's some testing for, what is it, N-API and libuv, and thinking about it, the N-API one is one that comes to mind, but there's a few jobs like that. We should check the schedules for those.
B: I'm not sure whether Rod made any progress on that. I think that was certainly the last I spoke to Rod about, but it was kind of a case of going through and seeing whether we could split permissions down a little bit further and kind of work out a system of allocating sort of specific permissions to each user based on their needed role, rather than just, sort of, test and then release and infra.
E: I was just commenting on the issue that Myles opened on the admin repo about GCP, using GCP to prototype the new website, and I was saying that I actually think that's a really good opportunity to begin that process, this particular process we're talking about now, of pulling apart, because the problem is we've built a monolith. There's too many things on the one server, and there's too much sensitive stuff there, yep. So we can use this new website build, like, if we can build that independent of all these other sensitive things.
E: We were fine, we had failover that worked okay; there were a couple of problems, but it just highlights how much is done on that one server. So, you know, things like nightlies had to stop, so there was no nightly for that 24-hour period. Well, it was more than 24 hours, for everything, because they're triggered off that server.
C: Just, you know, orthogonal to that: I've been talking with Myles a little bit, and what Google is offering essentially is a budget in GCP, and we can, besides, use their build service, and there's Firebase hosting we could use. It has blob storage for our artifacts; we can use it for all sorts of things, and, as cloud platforms go, it has IAM. We can define individual users with granularity to our liking, for everything that doesn't have to be...
C: As long as, you know... the only issue I had with their plan is that they're talking about, you know, the right now, and I wanted to make sure they have a plan for tomorrow, like: who are they planning to hand this off to? Who's going to maintain it? Who's going to, you know, document a process? But as long as they, you know, break ground and then prove that it's easy to do...
E: I think whatever is happening over there can be... we can look at it sort of like a beachhead into the new architecture. As I said over there as well, I am concerned about vendor lock-in, you know, and I think Google needs to prove their investment in Node infrastructure, because they haven't invested at all before. And is this just being driven by Myles, and if Myles moves on, are we left with nothing?
A: I think, in the scope of the .dev website, that is a good place to experiment, right? Because that's the kind of thing where, I think, once .dev is launched and everything, you know, we could either bring that content back to the main website pretty easily if it doesn't work out, or... I've got to expect that, you know, hosting a static site has got to be pretty straightforward, right? So moving it to any other cloud or any other place shouldn't be hard. So it's a great place to hopefully get some experience.
E: An interesting extension to this is the blob storage thing. So we're currently storing all of the downloadables on a server, and we have backups of it, and we serve it through that server, you know, which is fine; like, it's simple, it's, you know, uncomplicated. But we still have this problem of logging, and we're still bypassing Cloudflare for downloads so that we can log through nginx. That's...
E: You know, and you'd better believe it caused... with our DigitalOcean downtime, because the downloads are so heavy from that server. And moving to... we do have access to the enterprise Cloudflare log feature, but, you know, I've written scripts to download and fetch those logs and I'm storing them, but I just don't trust it, because they only keep logs for 48 hours, and the log download service, in my experience, has been flaky.
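The constraint behind those fetch scripts can be sketched as follows. This is an illustrative helper, not the actual scripts mentioned in the meeting: with only 48 hours of retention, a fetcher has to sweep the whole window in chunks before entries expire. The one-hour chunk size and the window-object shape are assumptions.

```javascript
// Illustrative sketch of the 48-hour log-retention constraint: split the
// retention window into one-hour chunks that a fetch script could pull
// before the logs expire. Chunk size and object shape are assumptions,
// not the real scripts' behaviour.
function logWindows(nowMs, retentionHours = 48, chunkHours = 1) {
  const hour = 3600 * 1000;
  const windows = [];
  for (let start = nowMs - retentionHours * hour; start < nowMs; start += chunkHours * hour) {
    windows.push({
      start: new Date(start).toISOString(),
      end: new Date(Math.min(start + chunkHours * hour, nowMs)).toISOString(),
    });
  }
  return windows;
}

// 48 one-hour windows covering the last two days:
const w = logWindows(Date.UTC(2019, 0, 29, 12, 0, 0));
console.log(w.length);   // 48
console.log(w[0].start); // 2019-01-27T12:00:00.000Z
```

A scheduled job running, say, hourly could fetch only the windows it hasn't stored yet, so a day or so of fetcher downtime still stays inside the retention limit.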
E: So if Google is offering us something better for that, then that could be a positive thing.
C: As a producer of alerts, I think the only incident we had in the last couple of months, three months, was that we had the benchmark machine down, and it was hard for me to raise awareness of that. But otherwise I don't recall an incident where access was a significant concern, so I'm okay with this.
B: Okay, okay. So, Rod, are you happy to continue to take your action to itemize at some point? I mean, is it safe, from what folks are saying... I think it's probably something that can reasonably be kept on the backlog for now. I think there are still some cases where it would behoove some additional people to have a little bit more elevated access, but I think, in principle, as Refael says, there haven't been any extreme cases that I can think of recently.
A: I did want to understand a little bit more, like, in terms of the context, because it says for nodejs.org, which is actually quite different than, like, for the .dev, yeah.
C: My impression is that, like, maybe they haven't thought about it completely, but that's, like, the plan, I hope. They reaffirmed my notion in that thread. So they want to run something in parallel, and when they're happy with it and we're happy with it, then we can switch over the DNS, at least for the...
C: Things like that... I think they'll be willing to consider that, and I think, like, in that sense, like you said, more machines, more resources is great, but I would like to see, like, some retiring of something else, so we don't explode with, like, our learning curve and our knowledge base.
C: Yeah, so I'm assuming everybody's for it, as long as, like you said, we can get a long-term commitment that isn't dependent on one person, yeah.
A: Actually, that's not so; no, that doesn't matter in this case. So the goal is that those ones aren't just going to generate the numbers; they're actually going to do, like, a sanity test: like, if the coverage falls below 90%, they'll fail. Otherwise, the goal of those ones is purely to make sure we're not breaking coverage.
A: Because then the nightly, like, the nightly one will still be the one that runs and generates the results. It's just that we want to make it fail if something other than the tests failing fails. But before we do that, I'd like us to have it so that if somebody adds a test that's going to break coverage, we know that, so that it doesn't come in and then later on somebody has to fix the coverage thing.
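The gate described here, "fail if coverage falls below 90%", could be sketched roughly like this. This is a minimal illustration, not the actual Jenkins job's script; the istanbul-style summary shape (`total.lines.pct`) and the threshold parameter are assumptions.

```javascript
// Minimal sketch of a coverage gate like the one discussed above: the job
// fails when total line coverage drops below a threshold (90% here). The
// istanbul-style summary layout (total.lines.pct) is an assumption, not
// necessarily what the real nightly coverage job consumes.
function coverageGate(summary, threshold = 90) {
  const pct = summary.total.lines.pct;
  return pct >= threshold; // false -> caller exits non-zero, Jenkins marks the build failed
}

// A run at 92.4% passes the gate; one at 88.1% fails it.
console.log(coverageGate({ total: { lines: { pct: 92.4 } } })); // true
console.log(coverageGate({ total: { lines: { pct: 88.1 } } })); // false
```

Wired into a PR job, a failing gate would surface the coverage regression at review time rather than leaving it for someone to fix after the nightly run.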
E: So the reason we do the tags thing is so that it shows, and within the job, if you're just adding an isolated job, you could just use the main tag, which is something... [inaudible; someone leaves the room]... we use that so that we can do it all within the same job. We run a whole lot of things in that same job, and if you...
A: So, like, for x86 I have no concern; for Power, I know George has a proposal for what we're going to do for new releases, yeah. For some of the older releases, you know, we still build on, I think it's Ubuntu 14.04. Yes, right. So I think those need to at least stick around until, you know, those releases go out of LTS. Yes.
E: So what do we test? Well, we only test Bionic for Node 12, so I think we need to have this whole... let's map it out some time; we need to have an issue, or a whole meeting, dedicated to what our support schedule is for Node 12. What are we supporting? What are we testing? Let's be really clear this time, because if we miss the boat, then we're locked in.
C: And I've been hearing that bootstrapping an LLVM toolchain is easier than bootstrapping a GCC toolchain. So, and I think I made that point before: if we have to bootstrap from nothing, maybe we should consider LLVM. And I opened an issue and did a breakdown of all the platforms and the toolchains available for us for them, and as far as I remember we have everything covered, so we just need to start trying.
B: Anyway, do you want to stay on that, or are we happy? I really do say... I think it sounds like we just need to be prioritizing and working out whether we need to be aligning those resources for Node 12 or 13. I think, obviously, as you say, refack, there's value in pulling in the V8 patches, but I think we just have to do everything we can, sort of feasibly, to pull in those updates.