From YouTube: Node.js Build WG meeting - Sep 25 2018
B
So, just to very quickly remind everyone: if you are on the call… has anyone else joined the call? No, but if anyone has, please do join, and I'll move on to the agenda. So, in terms of announcements: there weren't any announcements on the agenda. Did anyone have any specific announcements that we wanted to just point out?
B
Like that, as a note; that's good to hear. Okay, so are we just gonna start from the start of the agenda there, Michael? Might as well. Yep. So: how we addressed requests to join. We just briefly brought it up; we just said that, on the two issues (I think it was Adam, and was it Daniel?), I think we said, you know, they're both quite frantically busy at the moment, so we might obviously delay that slightly and try to fit them in when we can.
A
Broken? The existing one was broken. We added a 10… yes, 11, and so I think we want to switch over and use Rod's as sort of the backup, or even, like, maybe have them both going. But we want to add this one in; it's just about what testing we want to do, and the timing of when we do that switchover. Yeah.
D
I just wanted to say that, throughout the history of all this stuff, I've never had trouble with producing Mac binaries that are compatible. Because I think, even when we've upgraded Xcode and OS X, I think Xcode is just really good at targeting: when we start, we basically say the target, and, at least I think, it's just really good at being compatible. So my level of comfort is pretty high: if it seems to work, it probably does. That's all.
A
I think there's only… Rod only has one machine, right? Yeah, that's right, yeah. So I don't think we can count on that, but I think we are still trying to get some other Mac diversity, in terms of some other machines, and that is something… yeah. Maybe that would be 14, although I just had the thought: if we can get 13, is there a way to just say "upgrade me to 14"?
B
I guess the argument, though, is the other way: users are nearly always on the bleeding edge on a Mac. The entire ecosystem of macOS is that you basically go to the latest version as soon as it comes out. You know, I'm not sure how many Node developers are actually deploying to production on Mac, but they would certainly be interested in having Node tested on the latest Mac version. So I think we should do our best to keep up with that.
C
…sort of, like, you know, what's called the baseline, the one that we build with, and the latest one. I think that would be the most interesting case; like, adding 12 if we have 13 might be less interesting. I'm just considering because, for instance, I've been looking at our Linux repertoire, and it's huge, yeah, and I still haven't run statistics, like: have we found interesting cases in all of these, or are these just flakes? Yes.
B
Certainly 10.12… unless there is a real, you know… someone really feels that we should be testing on 10.12, I'm not sure that we would need it. I would be surprised if people were still running 10.12 on their Macs; we're going back sort of nearly three years now. But I guess the point is: if we've got the resources, do we go on the thing of "let's go for complete coverage", or do we go for, you know, "let's have the least machines to maintain"? You know. So.
D
No, I don't… we've had very few issues that have been specific to macOS versions. Again, it's this compatibility thing; they tend to be pretty good. Early on, we had some issues with old Macs. Like, you know, it seems to be, rather than, you know, between point releases, it's like: does this work on old Mac versus new Mac, maybe. So I really don't… you know, as long as we're testing the latest and something older, like, that's sort of good enough for me. But yeah, yeah.
B
Okay, shall I create an issue, then? We can obviously see what the collaborators think, and then kind of work out if everyone's in general consensus of saying: yeah, actually, it makes more sense to be testing on the level we build on and the latest, or at least the latest two, or whatever. Yeah, I think we obviously need to allow…
B
…I can think of was when I was trying to put together the Mac release machines, and I was obviously having to battle with getting Michael in the same time zone as myself. But yeah, I mean, I guess, as we say, the idea is that we've just got to always try and have someone with the elevated access available; I think, sort of, generally, between Rod and Michael, I…
D
I was on there at that time, so I was fiddling before you even started fiddling, and exchanging memory sizes, and yeah. I will say about that, though: it is normal for Jenkins to use as much memory as it wants, so watching memory is not that helpful from the outside. The CPU usage is more interesting, but it's a beast anyway. So, like, what…
B
I can see the value of having access to the IBM platforms; in relation, it takes a bit of the load off Michael. Also, I think the MacStadium setup is kind of, yep, something that me and Michael sort of seem to have owned, and of course we should… we should probably diversify that, and educate someone else to use that as well, and write that up. Okay.
D
Right, this gets back to the original thing of: we're gonna atomize these permissions. So letting certain individuals have access to some release machines should be okay, rather than just saying: if you're a… you know, you have to be in the releasers group and [can] access all of the release machines. Yeah, yeah, we should be able to do that. So, look, let's come up with a model that allows us to do that, and be consistent about it. Or…
B
Let's go through, so, very briefly: "state of Ansible": I don't think there have been any updates; I think it's the same. The next thing was, I think… CentOS 5: I think we basically agreed we don't care about it. And I think the www directory one is doing something… I thought [someone] was in charge of that, I think.
B
Yeah, should we remove this from the agenda, then? I think we've kind of got it under control; it's just obviously finishing other tasks now. Is everyone happy with that? Brilliant, yeah. Okay, let's move swiftly on to the status of Docker in the build CI, which I think Rod's gonna tell us about. I am.
D
No, yes, that's fine! Okay! So, it's actually… this one's actually an eight-CPU machine. But I actually think… I think they're all… I think I've got them all roughly the same size. There's difficulty getting them exactly right, but they all should be roughly the same amount of beef, and, if not, there's ways to tune that… ways to tune their usage. So, this is on DigitalOcean, yeah. I was just going to show that: that's the type of machine we have now. What happens?
D
We have Docker. So this is run whenever you run Ansible against one of those four hosts: it will run this role, and this role will… it doesn't set this machine up like the other machines, because it doesn't install Jenkins on that host machine itself. It runs, effectively… it installs Docker. There's a bit of just standard stuff it does that we do on other machines, but it installs Docker, and then it…
D
Okay, so, can you see that? Yep? That's fine! Okay, this should explain it. So it uses this host vars [file] to set up Docker on there, and what it does is, for each of these lines in the containers group… so, in the containers variable here, each of these is a Docker container, and each of them has a different type. So this "os" actually maps to Dockerfiles… Dockerfile templates. So we've got a whole bunch of types: at the moment, we've got our Alpine templates; we've got plain Ubuntu…
D
We have plain Ubuntu 18.04 as well, but I don't think we're using that. That was going to be a test thing… before Ubuntu 18.04 came out, we were testing on that. And then we have this one, which is the most heavily used; it's called "shared libs". I'll show you what those templates do in…
D
I'll show you exactly where… especially, about… so, if we go into templates in the Docker role, each of these files, you'll see, maps to those OS names. So, Alpine 3.7: whenever we have an Alpine upgrade, we add a new one, remove the old one, and it sets up those things. So if we look at the Ubuntu 16.04 one, for example, it's really… it's fairly plain: it adds the stuff we need for Jenkins. I don't think everything is still… Jenkins is probably using that anyway, but yeah.
D
This is just classic stuff that you will see; this is basically a Dockerfile version of what we do in Ansible for our standard test hosts. And then, you know, as the startup command, it starts Jenkins. Yep, that's it. Now, the interesting thing here is you'll see that volume… there's two volumes in all of these hosts. One is the /home/iojs (the Jenkins user) directory; that is mapped… mounted from the host.
D
So that's a persistent volume within each of these containers, and the other one is the ccache as well. Now, we share the ccache across all containers on our hosts, including our Pis and the rest. So, even where it doesn't necessarily make sense to have overlap, we have this big, huge, shared ccache, and it's got a large size, so it can actually expand pretty big. And the reason for that is that we have a lot of duplicate containers, these shared-libs ones.
D
Even the Ubuntu 16.04 and the Ubuntu 16.04 shared-libs [containers] will share a lot of objects in the ccache, so it speeds things up a lot, and that gets pretty big. So this mounts the same ccache across all of the hosts, and we tell ccache… there's a couple of things, it is a hack, that we have to do to make ccache work with the shared directory. That's about it! So that's a standard Ubuntu 16.04 setup, but then we've got the Alpine ones. FYI…
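A minimal sketch of what starting one of these containers by hand might look like, given the volumes described above; the container name, image tag, paths, and CCACHE_DIR setting are illustrative assumptions, not the actual Ansible output:

    # Hypothetical: run a test container with a per-container persistent home
    # and the single ccache directory that every container on the host shares.
    docker run -d \
      --name jenkins-ubuntu1604-sharedlibs-1 \
      -v /home/iojs/jenkins-ubuntu1604-sharedlibs-1:/home/iojs \
      -v /home/iojs/ccache:/home/iojs/.ccache \
      -e CCACHE_DIR=/home/iojs/.ccache \
      node-ci:ubuntu1604-sharedlibs

Sharing one CCACHE_DIR is what lets near-identical containers (say, plain 16.04 and 16.04 shared-libs) reuse each other's compiled objects.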
D
The next person who has to upgrade Alpine: you do have to jump through a few hoops, so you'll have to set it up and test it to make sure it works. But, you know, things change all the time; there's little things, different packages you have to install, they change names. There's got to be a lot of churn in the Alpine ecosystem. So it takes a little bit of testing to get it right, but mostly you just copy… you copy the last one.
D
You increment the number, test it out, and figure out what packages need to go in there. So, shared libs: this one is used very heavily. It's the same as the Ubuntu 16.04 one, except that it actually installs shared libraries that we test against. So, for each of the shared libraries that we're testing, we have this block. So, OpenSSL 1.0.2: we are building it as a shared library.
D
So we've got this RUN statement that will download, compile, and install OpenSSL into /opt, and then we export an environment variable that says where it's installed. So, for instance, we're on 1.0.2n at the moment; when that gets incremented, you know, that will increment, but this stays consistent, so that the build job (and I'll show that later) knows exactly where it is; it will reference this install directory.
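A rough shell equivalent of what such a RUN statement plausibly does; the URL, version, prefix, and the variable name OPENSSL102_PREFIX are assumptions for illustration (the real templates export their own name for it):

    # Hypothetical sketch: build OpenSSL 1.0.2n as shared libraries under /opt.
    curl -fsSL https://www.openssl.org/source/openssl-1.0.2n.tar.gz | tar xz
    (cd openssl-1.0.2n && ./config shared --prefix=/opt/openssl-1.0.2 && make && make install)
    rm -rf openssl-1.0.2n

    # The version-independent prefix is what build jobs reference, so bumping
    # 1.0.2n to 1.0.2o changes nothing downstream.
    export OPENSSL102_PREFIX=/opt/openssl-1.0.2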
D
So, in shared libs, we have that for OpenSSL 1.0.2 and 1.1.0. We've got 1.1.1… that's still a pre-release version; I do have a branch that has the latest version, which has been released since then, so we just need to merge a PR to get that one up to the latest. Our tests don't fully work for this yet, so that's not enabled for all jobs; that's just something that we can…
D
We can tinker with it. But even when we go and upgrade Node to 1.1.1, we'll still want to test it as a shared library, and then we might need to disable this one in master, because that won't work. In fact, I think this is disabled in master already, because we're not backwards compatible with 1.0.2 any more. So…
D
So, if you look in the root directory, you'll see these build directories that we created at build time. They don't get deleted; they could be, there's nothing very important in them. And in each of them there's a Dockerfile; so it just makes a Dockerfile in each of these directories.
D
I'll show you that Dockerfile. So, it's got secrets, but that's just that templated version, evaluated. So each of these maps to one of these containers; then it runs "docker build" against each of those Dockerfiles, and it gives it a name. The item.name comes out of that secrets file; so, item name: there's the name, and, I guess, the tag it gets is node-ci:<item name>. Then it creates a systemd file; so we use systemd.
D
Now, that's mapped on the host to /home/iojs/<hostname>. So if we go to the iojs directory, we will see in here that we've got a ccache directory, which is presumably very large (20 gigs), and then each of these maps to a standard home directory inside the container. So this here looks like a standard home directory, right? Yeah. And on that container, let's see…
D
Well, you know, a lot of our hosts have, you know, two gigs of memory; that's about it. Yeah, so these fit nicely, and we can max these out, and they work fine. There's enough… I mean, it slows down a little bit, but ccache really helps out as well. So these are nowhere near our slowest jobs; these look to complete in no time. The slowest one we have is… we run debug builds in full; they're just slow because they do, yeah.
D
18[.04] is not used actively now; I think we've actually got 18.04 hosts, yeah, there's a testing one. 16[.04] is useful for if we want to do debug and release… we don't need the 16 in there, actually. There was a request by someone (I can't remember who it was; maybe it was Rafael, when I first set it up) to have 16.04s around, just some spare 16.04s to do random stuff on. So…
D
Okay, let me get to that bit now. Okay, second question: so, in the Jenkins host list, you will see a set for each of the hosts. So, the way I number them here: you will see one, three, five, seven, nine, yeah; so the other DigitalOcean one has two, four, six, eight, ten. And then on the hosts list you'll see them here: digitalocean one, two, three… that's right, that's the DigitalOcean one and two. So, the shared libs, one to ten: two, three, four, five, six, seven, eight, nine…
D
They're all there, and then you'll see the same thing when you go to Joyent: you'll find the Joyent shared-libs ones. Joyent has one… oh, no, no: Joyent doesn't have any of the plain Ubuntu 16.04 ones; they only have the shared libs and the Alpine. So that 16.04 is a special case. And then the other one was SoftLayer, so you'll find the same thing in SoftLayer: there they are; it's very similar, they're the same as the Joyent setup, yep, and there's 18.04 there. But the Alpine is the same thing.
D
There's one of these Alpines on each. Now, we have to be aggressive with deleting old Alpine [containers]; we can't just keep them around, because we've had so many compile problems on Alpine, yeah. It's like the newer versions solve all our problems, and people just don't hang on to old versions; Alpine just doesn't work the same way. So I'll be glad when we get rid of 3.7, because it actually is causing us some compile problems; that's stopping us from… and there's a PR that's not being merged…
D
…because of 3.7. So when that goes, we'll be happy, but then who knows what other issues will be introduced. So we can't just keep these things around, with Alpine, FYI. So, the way it works now is interesting, because we've got this pool of workers, essentially, and we want to be able to draw from them as much as possible.
D
So we want to be able to max them out if we need to. So we've got all this capacity, and we want to be able to do all of that stuff in parallel. So, what we do… we go back up… some "digitalocean shared libs"… "shared libs one". What we do is we use labels to turn them into a pool. So each of these, particularly the shared libs, has all these labels, and they all map to a different type of job.
D
So, if you look inside there… now, this is not the only way you could do this, and I could imagine someone coming up with a different mechanism to do this, but this works. The main overhead here is that every time you add a new set of these, you have to add all the tags to them; so there's a bit of a manual job there, it's just a matter of clicking and copying. And so, in our Jenkins configuration, we have…
D
…a matrix. So we use these labels, and we only… we don't add the machines themselves, we add the labels. So, whenever we run this job, this pool of machines is available. Now, so you see one… one solution. And then, within the build, each of these… so we've got the preliminary stuff, the post-status stuff, and diagnostics.
D
But then, in here… so, in each of these configurations, for the different types of job to run, we have a block, a conditional-step block, to run, and we do this regular-expression match, where we match the node name (the Jenkins node name) with the tag that we care about. So Jenkins will look at that pool of labels, openssl110 [say], will choose one of them that's available, and stick it into its run.
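The shape of that check, roughly; the real thing is a Jenkins multi-configuration job with conditional build steps, so this shell sketch, with an assumed label name, only approximates the logic:

    # Hypothetical: a sub-configuration only runs its steps when the node
    # Jenkins picked carries the label/name fragment it cares about.
    if echo "$NODE_NAME" | grep -q 'openssl110'; then
      # ... run the OpenSSL 1.1.0 shared-lib build and test steps ...
      :
    fi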
D
Then, when it goes through this, it will find the one that matches, and then run this job on it. And then every one of these things (so FIPS has one) will only run on one of the… one of the things, and then it's skipped over for the others. Debug: there's another one; 1.0.2, etc., etc. Shared-lib OpenSSL 1.1.1, which is not used, so this one won't run, because we've removed that label. Without-intl, and without-ssl, and…
D
All the post-status ones are just multiplying; I'm worried that that's actually taxing Jenkins a bit too much, but we could actually reconfigure this to be in sub-jobs. What this gives us is that there's one place where we put the logic for the different… what needs to happen to make this thing work. So, let's take a classic case. So… sadly, this is earlier… there's some generic stuff here which, like, this stuff could actually be removed, except for the flaky tests. But this stuff here: this is the zlib-specific stuff, yeah.
D
And the main bit is that we add the configure/compile flags: we say "use shared zlib", and it's in this environment variable which, you remember, we have in the Dockerfile, yeah. So we're in a Docker container, a shared-zlib container, that knows where this is, and that's it. And we use LD_LIBRARY_PATH; we don't need DYLD, that's for macOS. And then it runs the tests like normal, so this runs CI, but then it also does this extra check.
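A sketch of what that job fragment plausibly looks like. The --shared-zlib options are real node configure flags; SHARED_ZLIB here is an assumed stand-in for whatever variable the Dockerfile actually exports:

    # Hypothetical: point node's configure at the zlib installed under /opt.
    ./configure --shared-zlib \
      --shared-zlib-includes="$SHARED_ZLIB/include" \
      --shared-zlib-libpath="$SHARED_ZLIB/lib"

    # Tests must resolve the shared object at run time; DYLD_LIBRARY_PATH
    # would be the macOS equivalent, but these containers are Linux.
    export LD_LIBRARY_PATH="$SHARED_ZLIB/lib:$LD_LIBRARY_PATH"
    make run-ci   # stand-in for however the job really invokes the tests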
D
So each one of these has an extra check to make sure that it's running the version that we think it is, so that it's actually compiling the way we think it is. So we run the normal tests, but we also say: did it actually compile as a shared library? And each of them has some kind of mechanism where we do this test and check it, but…
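For zlib, such a check can be as small as comparing what the built binary reports against the version the container installed; a sketch, assuming a hypothetical ZLIB_VERSION variable:

    # Hypothetical post-test check: did node really link our shared zlib?
    built=$(./node -p 'process.versions.zlib')
    if [ "$built" != "$ZLIB_VERSION" ]; then
      echo "expected zlib $ZLIB_VERSION but node reports $built" >&2
      exit 1
    fi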
D
Is this debug? Yeah… persistently? Okay. Well, we do have ways to make this work. So, I see somebody else has put some new ones in here: basically, this is manually hacking the flaky-test list in here, because, you know, to make it [pass]… we've done this for OpenSSL as well: you basically mark this particular test as flaky, even though it's not in the master branch. It's a bit naughty, but it gets us through, just for this specific job.
A
But… so, the reason I was asking is, like: that tagging, the extra tagging you have to do, there's nothing fundamental to that, right? Like, if we were willing to make these separate jobs, yeah, then the container machines could just be tagged with a single thing, and we could use them just like any other machine, right? Yes.
D
Yeah… when we want to introduce a new configuration, we're going to find this. There's a couple of other build configurations that we could add in here, but, because they're not well maintained, there'd probably be a lot of failures, and instead of saying, "okay, we're introducing this new build configuration, and, by the way, it's red for everyone"…
D
…we say we'll mark it flaky, just for this job, yep, and then flag it to other people that might be able to fix it. But the flaky mechanism itself is not great, because it's not in everyone's face; I mean, it makes your job always yellow, but there's a lot of flakies across the board for us, so it's not really a big deal, yeah. But this is a pattern we've used to get these configurations into CI without causing headaches for everyone. Yeah.
D
…the shared libs, when we did that, because there were, you know… in the old OpenSSL, we actually had to patch OpenSSL to make our tests work, and when we were built as a shared library, we were building against the non-patched OpenSSL, and our tests washed out. So there were some flakies we had to mark there as well, but we've moved a bit beyond that now, thankfully. Oh…
D
If we just go into the main machine: you will see, in each of these machines in Jenkins, it actually tells you what machine it's on. So, in the description ("docker container running on" such-and-such), it will tell you where this is in the infra, because you can't just SSH into this box [directly]. So we SSH into that machine, you've got this, and then you're just gonna have to pull out your Docker foo: so, docker ps, docker exec…
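Concretely, that amounts to something like this on the Docker host (the container name is illustrative):

    # Which containers are running on this host?
    docker ps

    # Hypothetical name: get a shell inside one of them to poke around.
    docker exec -it jenkins-ubuntu1604-sharedlibs-9 /bin/bash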
D
So you can go into that and just do it. So, if I go into… let's look at number 9, that we were in before: you can treat that… you don't even have to be in the container; you can treat this like you would a normal host. So I'm just gonna guess that nothing is running, and I'm just gonna remove that. And you can do that, right now, and you could do the same thing with slave.jar as well.
D
Oh yeah, if you exec into them, you're in as… you're stuck in as the iojs user, so yeah. And even if you upgrade, that doesn't persist, so it's just… yeah. So that brings us to the challenge of upgrading these things. There are ways to do that, and mostly it's awkward. One thing you can do is… if you can kill one of these things…
D
So if I use systemctl to take out one of these hosts (or even all of them, I could), and then I use Docker… okay, I could do "docker system prune", and, if I'd stopped all the containers when I ran that command, everything in Docker would be just blown away, as good as… there'd be no Docker state left. Which is fine to do, because then you just run Ansible again and it would install them all again for you. Sure, okay; so you could do that on the host.
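A sketch of that reset flow on one host. "docker system prune" is the real command being described; the unit glob and playbook path are assumptions, so check them against the nodejs/build Ansible tree before running anything:

    # Hypothetical unit names: stop the Jenkins containers managed by systemd.
    sudo systemctl stop 'jenkins-*'

    # Remove stopped containers, unused images, networks, and build cache.
    docker system prune --all

    # Re-running Ansible rebuilds the images and recreates the containers.
    ansible-playbook playbooks/jenkins/worker/create.yml --limit 'test-digitalocean-*'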
D
…if you want to. And the nice thing about having this spread over four different hosts is you can just knock one of these hosts out, and everything continues fine in CI. As long as you don't have running jobs on that machine, you could just stop everything on one of these hosts, and Jenkins would just continue on like normal, and people wouldn't notice. I've done this in the past: before upgrading a machine, you just take the whole machine offline from Jenkins.
D
Jenkins doesn't care; you do your upgrades, put it back in, and then move on to the next host. So you can actually do this in a rolling-update type of way. So that's one way to do it. The challenge with doing… well, this works if you want to just upgrade everything: if you want to upgrade all of the machines, the images on there, you do that and then run Ansible again, and it would fetch the latest images and build everything up.
D
The building is slow, because it has to compile all that stuff, but it would set everything up from scratch, and you'd get fresh everything. The challenge is when you are removing and installing something new. And this is thanks to systemd: so, when we get rid of Alpine 3.7, there's all these hoops you have to jump through to get rid of the old systemd configuration for it. So you can get in there, and you can…
D
You can remove that container, but then systemd will persist in trying to start it up, and even if you remove the systemd configuration file, it will still go, because your systemd configuration files are not the primary source of truth: it has its own database. So you have to run these systemd commands, which I think might be documented in…
D
No, it's not in there. Okay, so there's a bunch of hoops you have to run through to clean these out from systemd, so that it forgets about the old machine. Really annoying. The other thing you can do is just leave the old container there and just tell Jenkins to forget about it. It actually doesn't cost much to leave a running container in Docker; you could just leave it running and then clean it up some other time, when you can be bothered. So I…
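Since those commands apparently aren't written down anywhere, here is a plausible cleanup sequence; treat it as an unverified sketch rather than the team's documented procedure, and note the unit name is hypothetical:

    # Hypothetical unit for the retired Alpine 3.7 container.
    sudo systemctl stop jenkins-alpine37.service
    sudo systemctl disable jenkins-alpine37.service
    sudo rm /etc/systemd/system/jenkins-alpine37.service

    # Make systemd drop its own cached state, not just the unit file.
    sudo systemctl daemon-reload
    sudo systemctl reset-failed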
D
That's right, because, remember, when you do a docker pull, you know, it persists that information. Unless you do a docker system prune, or you manually delete those images, it's still going to use the last one that was pulled when you do a docker build; so you're still not gonna get the latest one unless you explicitly pull it, which is really annoying, but that's just the way of Docker. And, thankfully, it's all isolated within a container, so the security issues aren't huge.
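In other words, a stale base image keeps being reused. Forcing the refresh looks like this; the tag names are illustrative, while --pull is a real docker build flag:

    # Either refresh the base image explicitly before rebuilding...
    docker pull ubuntu:16.04
    docker build -t node-ci:ubuntu1604-sharedlibs .

    # ...or have every build re-check the base image for itself.
    docker build --pull -t node-ci:ubuntu1604-sharedlibs .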
D
It's just a little annoyance if you need to upgrade something inside the… sure. One downside of this whole setup is that it makes it difficult for collaborators who want to test their stuff, because you then have to give them additional instructions of: okay, you really get into this host machine, and then you've got to get into the container, and this is how you trigger things to build. And if they don't know Docker, it's a little bit annoying for them. Could…
D
We could do it, but then we'd need… additionally, we'd need to install some kind of init in each of the containers that would start an SSH server as well as Jenkins. That's not… it's just additional complication. Yes, I see what you mean, yeah. And then you have to manage the SSH config for each of them as well. So: yeah, we've given you access to this Docker host, but first we had to add your key to the jump box and the Docker host as well. Yeah, I think…
D
Yep, done that too. The only problem with that is the Dockerfiles are heavily templated. So you've got all this stuff here; there'd have to be lots of substitution to get this Dockerfile usable. We could give them raw ones, but then we'd have to strip out the secret at the end there, which is actually… which is not actually that bad. Maybe that's a good idea.
D
…which it's not. So, these containers are not in the Ubuntu 16 pool, so we'd have to add more Ubuntu 16 VMs, which is costly for us. So my argument was: this stuff should just be integrated into the whole Docker setup. There's an argument that this uses a configuration matrix (it's like sub-jobs and all that stuff), and the way the Docker stuff is set up is very different.
D
Yeah, that's right; we've got tons of them, and, you know, we could add another one of these big hosts. I'm sure we could find somewhere to do that, and there'd be even more capacity if we needed it. It's just… in terms of the cost to us of going to our infrastructure providers and saying, "hey, we want another VM", versus "we want another bunch of VMs to run a bunch of 16[.04]s": we say, well, we just want one big one, and we've got, you know, that big one. I'm…
D
So, for instance, on here we're running Raspbian Stretch, so Debian 9, and we're running that same OS now across all of the different Raspberry Pis. We used to be… we had, originally: the Pi 1s were on Wheezy, and the 2s were on Wheezy, and the 3s were on the next one up, so Debian 8, Jessie.
D
So we would have had the oldest [OS] we could get on the machine. We did this so that, when people were using them in the wild, we were matching those versions, and that was the same for the builds on these things. But then, you know, Raspbian moves on, so there's three different versions of Raspbian (Wheezy, Jessie, Stretch), and there'll be another one, eventually, when the next Debian comes out. So it makes it complicated when we've got this limited pool of Raspberry Pis. So what we've done now is turned these…
D
…actually, you know, they're Debian anyway… it's a version that works on these ARM machines; these are ARMv7 machines. And so these containers just sit idle, doing nothing: they are literally doing "tail -f /dev/null", nothing at all. And then, what happens is, whenever a job is run…
D
If you look in the Jenkins config: it will set everything up inside, you know, the standard build directory, and, you know, it'll get the node test binary for ARM, it'll get everything ready to go, pre-test. Remember, this is compiled elsewhere and shipped out, as it's cross-compiled, and we unpack it so we can run the tests. And then it runs this command.
D
It's in /usr/local/bin: "docker-node-exec". And this is all in Ansible, and all it does… it looks like there's a lot of stuff in it, but it's not really that complicated. All it does is say: [exec into] the Docker container that I give by name, whether it's Wheezy, Jessie, or Stretch, against this directory. And then that wakes up that Docker container, because it does a docker exec; it'll do a docker exec.
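The per-job flow is then roughly as follows; the script path matches what is said above, but its argument style and the container name here are assumptions:

    # Containers idle on `tail -f /dev/null`; the job "wakes" one by exec-ing
    # the cross-compiled test run inside it, against the mounted workspace.
    /usr/local/bin/docker-node-exec stretch "$WORKSPACE"

    # ...which, under the hood, boils down to something like:
    docker exec rpi-stretch /bin/bash -c "cd '$WORKSPACE' && make run-ci"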
A
Or just make them exactly like… like, look like just regular machines? Yes, right. Like, I would just start by making it exact, you know: just like what it is in the other case, where they're just configured, you start them up, they connect to Jenkins, they look like a regular machine. Yeah. And I…
D
I think we could do the same thing now with ARM64, with the Packet.net machines, because we are using some very beefy machines to do fairly simple stuff. So we could actually go back to just having a host machine, and then having a Docker container for CentOS, a Docker container for Debian, a Docker container for Ubuntu, and we could do it all on the same machine. And, like… don't we max out the CPUs on the machines? We don't max them out.
D
This is special for these, because of the resource constraints, and it was the right way to make it happen. But this does work very well, it really does, and it gives us really good test coverage, and all our machines get to run the latest Raspbian, and it's very stable. And yet we get to test old stuff.
D
Again, this is all in Ansible and Jenkins; there's no secret sauce here that's hidden elsewhere. This could be replicated if need be. It's just… it adds layers of complication for the build team and collaborators when they need to do debugging. You know, that's the real big issue here.
D
…the jump box… we can just SSH to a specific port on it, right, and put them in our, you know, Ansible inventory hosts. I mean, if we go for the whole… what was the SSH config machine that we were going to set up? KeyBox, yeah? Perhaps that solves a lot of the problems with giving people access, yeah.
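The idea being floated: publish each container's sshd on its own host port, so a collaborator lands straight inside a container. A sketch, with hypothetical host names and port:

    # Hypothetical: container 9's sshd published on the Docker host's port 2209.
    ssh -J jumpbox.example.org -p 2209 iojs@docker-host-1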
B
I think the other thing is, of course, that if we went down that kind of route, we could essentially then, as Rod said… if you can use KeyBox to manage all the keys, then, yeah, you can [grant access] really quickly. Yeah, I think it makes sense; I think we can certainly do something with that. It's just kind of making sure we've got it documented somewhere. Yeah.
D
Well, this is the thing: you know, this is complexity for the build team, and we've got a lot of complexity for each of our members to think about. There's so many mental models that each of us needs to have in our heads: how is the Mac stuff set up, how's the Windows stuff, the Alpine, the ARM… like, it is really complex, and each one of us has a slice of it.
B
I think the only other thing is, possibly, if we did go that route, if they can directly SSH to each Docker container, then you can actually then start the whole Ansible story on it. So you could technically then just point the standard Ansible playbook at that specific container. Right, that's…
B
The only reason I would slightly, sort of, get nervous about that is you're essentially duplicating dependencies. So you have got to make sure that, when someone changes a dependency in the playbook, they do the same in the Docker [templates], and vice versa, of course. That's the only thing I would say. I mean, yeah, I agree: it does seem a slightly… a slightly horrendous way to be handling the world, Docker, yeah.
D
The way it is right now, it's technically not broken, so we don't have anything to fix. I'm concerned about the whole custom-suite thing that follows, either… we've got these two different approaches to doing build permutations, because the whole shared-libs thing that, like, you can see in there: we're doing build without-intl, without-ssl… that stuff, to me, is just the same kind of permutation stuff as this, which bothers me a bit, but technically it's not broken.
D
The thing that is broken is the shared knowledge amongst this team, and so I don't know if the SSH stuff is gonna fix any of that, but I think that's what we should be focusing on: how do we fix the fact that how this works is in very few minds, yeah? Like, [make it] easier for us as a team; that's the bit that… let's…
C
From the… from experience with collaborators: John wrote down the first half of the story (the story with the shared hosts), wrote it down in a document, and I've seen several people use that document to accomplish tasks without asking me for further guidance. So a guide is definitely a step in the right direction.
D
…about this group. So, if anything… let's take the Raspberry Pi part, for example: this same setup actually runs on the Scaleway ARMv7 machines as well; it's the same setup we have on there now. So it's not even just Raspberry Pis. I am not confident that anyone else would be able to debug this flow if something was wrong with it, yeah.