From YouTube: INFRA Weekly Meeting 2020 02 11
Description
Jenkins Infrastructure Project Meeting - 2020-02-11
Notes - http://bit.ly/2T0oZ9v
A
Hi everybody, so there are a few things we need to discuss today in the Jenkins infra meeting. The first thing I want to announce is that we officially switched the mailing lists from Mailman to Google Groups. Every post sent to the old list is still accepted, but you should receive a message from me saying that the Mailman list is deprecated and that you are now invited to use the Google Groups. We'll keep working this way for the next few weeks and then we will just complete the switch.
We have another option, which is using Fastly as a CDN to distribute packages. I'm discussing this with the Fastly side, and they would be interested in sponsoring the Jenkins project. The idea would be to have a contract for the first year, and then a contract that renews month after month. So we could use Fastly to distribute all packages and also improve the speed of our websites, like jenkins.io and the plugins site, but I still have to find a time to meet with them.
A
So that's the current phase: either the Fastly solution works, and then I stop working on the other option, or Fastly does not work, and then we have to replace MirrorBrain with, for example, mirrorbits, which seems to be working very well and scales way better than the current setup. And if we switch to mirrorbits, we could, for example, enable HTTPS and just drop HTTP by default.
A
There's something else we should discuss. The next topic is regarding the Rackspace sponsoring. As you saw, apparently Rackspace stopped sponsoring OSS projects, and so now we have to pay for one of the machines that we have in the Rackspace account, which is okra. That machine is used for archives.jenkins-ci.org, which is our archive service for our packages.
A
The thing is, that machine is fully managed by Puppet, so it would be trivial to move it to one of our Azure accounts and then configure it there. Anyway, compared to the current state, it's just that we stop paying Rackspace to pay Microsoft instead. But at least it will simplify the billing process, because right now it means that KK has to be reimbursed by the SPI. So it would simplify this management.
B
I've done a couple of bits recently. One was to try to upgrade the Terraform that's used in the azure repository, just so that we can clean it up a bit and fix some of the issues; I don't think it has been upgraded in quite a while. The other is to start looking at Packer images, so that the Azure VM agents can start up quicker and so that they can be upgraded more easily as well.
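The Packer idea mentioned above could look roughly like the sketch below. This is a minimal, hypothetical example, not the project's actual template: the credentials, resource group, and image names are all placeholders, and it assumes Packer's `azure-arm` builder with a JSON template (the format in common use at the time).

```shell
# Minimal, hypothetical Packer template for baking an Azure VM agent image.
# All identifiers below are placeholders, not real project values.
cat > agent-image.json <<'EOF'
{
  "builders": [{
    "type": "azure-arm",
    "subscription_id": "<subscription-id>",
    "client_id": "<client-id>",
    "client_secret": "<client-secret>",
    "tenant_id": "<tenant-id>",
    "managed_image_resource_group_name": "<resource-group>",
    "managed_image_name": "jenkins-agent-image",
    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "18.04-LTS",
    "location": "East US",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y openjdk-8-jdk git docker.io"
    ]
  }]
}
EOF

# Validate the template, then bake the image; agents started from the
# resulting managed image boot with their tools pre-installed.
packer validate agent-image.json
packer build agent-image.json
```

Because the tooling is baked into the image rather than installed at boot, agents come up faster, and upgrading them becomes a matter of rebuilding the image.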
A
Fine, so I think we should split the Terraform code, so that non-critical resources can be updated automatically, and if one is deleted by mistake, it's fine. For example, Tom once deleted the Kubernetes cluster by mistake. Because right now it's not transparent any more, so it's not easy to see what changed.
D
I agree with that point. I don't know, with the service principal that we create for Terraform, how granular the access we delegate would be able to be, because right now Terraform is using an app and service principal that basically gives it root inside of the Azure account. If we were able to create the service principals such that that Terraform pipeline would just be able to access Azure DNS, then I think that would be totally fine; I agree with your point there.
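A service principal scoped the way described above could be created with the Azure CLI. This is a sketch, not the project's actual setup: the subscription, resource group, zone, and principal names are placeholders. `DNS Zone Contributor` is a built-in Azure role limited to DNS zone operations.

```shell
# Create a service principal whose role assignment is scoped to a single
# DNS zone instead of the whole subscription (all names are placeholders).
# A Terraform pipeline using these credentials could manage records in
# that zone but nothing else in the account.
az ad sp create-for-rbac \
  --name "terraform-dns" \
  --role "DNS Zone Contributor" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/dnszones/<zone-name>"
```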
A
So Azure AD is what we use now, which is the way we manage access, and right now we have a really basic way to manage roles. Basically, we just use the basic groups in Azure: either you have access, or you have admin access, whatever. And if we want to have better control of who has access to what, we should probably use a better tier of Azure Active Directory, which will cost us a few bucks.
A
I realize that I totally forgot to mention something regarding the infrastructure and the way we're paying for it. Basically, the CDF agreed to pay the bill for the infrastructure for the next six months, but we need to go below 10k per month. They accepted to temporarily pay the 20k per month, but we definitely need to reduce the cost of our infrastructure. So if we got some money from Amazon, it would already be a lot easier to manage, but yeah, we have to find some ways to reduce it.
A
I think it should be done after, because, I mean, right now our Jenkinsfiles are designed to run using the current Docker agents directly, so I think it will just be a matter of cleaning up the deployed machines, and it will reduce the cost on our other accounts. At the same time, we should also work on the configuration as well, so that other people can just refactor it, or maybe use specific labels for specific repositories, or whatever. But yeah, we should just switch away from the EC2 instances.
A
My main concern is about using Artifactory, because we saw in the past that sometimes we have some latency between providers, but otherwise that's everything. So basically, for years we had the master running in Amazon and the agents running in Azure, and now we have the master in Azure and we are deleting the Amazon side.
A
So basically, the reason we have to talk about the automated release now is because we use the cluster. Initially, we deployed the automated release environment on the Kubernetes cluster that we also use now for all the public applications. So we have to deploy a new Kubernetes cluster in a private environment and use it just for trusted applications. So we have to re-plan a cluster and redeploy the environment on a more secure VPN and a more secure network.
A
That's
anything
that
you
need
to
do,
and
we
still
have
a
few
things
that
we
need
to
work
on
some
for
the
release
parts.
It
seems
it's
working
for
the
packaging
part.
We
still
have
to
work
on
the
way
we
publish
artifacts,
and
this
will
be
mainly
influenced
by
defy
that
if
we
have
fastly,
we
don't
have
to
end
all
heroes
so
yeah.
The
way
we
will
distribute
packages
is
not
really
there
at
the
moment
and
should
be
better.
D
In a similar vein, and Tim, thanks for mentioning the release stuff, because I'd forgotten about that. I've been watching Kohsuke and JFrog go back and forth about Artifactory. I had also seen an alert around jenkins-ci.org certs expiring, and I don't know which one that was. We've got two sets of manually created certs that we've made in the past: there was the wildcard jenkins-ci.org certificate, and then there was the Artifactory certificate.
A
Yes,
for
that
one,
he
or
we
created
anomaly
should
be
already
configured
as
far
as
and
otherwise
the
certificate
is
for
adapt,
observe
this
and
made.
The
reason
to
this
is
because,
when
I
refactor,
the
adapt
container
I
used
the
static
configuration.
So
if
we
want
to
have
dynamic
certificate
like
generate
with,
let's
encrypt,
which
is
something
that
could
be
working
now,
because
so
because
in
the
past
we
were
using
the
HTTP
method
to
generate
certificate.
So
this
yeah
available.
A
Now
we
switch
to
the
DNS
0-1
meters,
so
we
can
have
certificates
even
in
private
environment.
So
we
could
have
a
certificate
for
adaptogens
that
IO,
but
we
need
a
way
to
to
inject
that
configuration
in
the
LDAP
container
so
directly,
it's
possible
to
use
with
a
new
attack
configuration,
but
they
totally
change
to
some
text
and
I.
Don't
know
it
very
well
so
I'm.
So
what
I'm
really
getting
at.
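The DNS-01 flow described above can be exercised with certbot in manual mode. This is a sketch, not the project's actual tooling: it assumes certbot is installed and that you can create TXT records in the zone (the domain is the one from the discussion).

```shell
# Request a certificate via the DNS-01 challenge: certbot prints a TXT
# record value for _acme-challenge.ldap.jenkins.io that must be created
# in DNS, then Let's Encrypt validates it over DNS rather than HTTP,
# which works even when the host lives on a private network.
certbot certonly \
  --manual \
  --preferred-challenges dns \
  -d ldap.jenkins.io
```

In practice this would be automated with a DNS plugin or an ACME client wired into the DNS provider's API, so the TXT record is created and cleaned up without manual steps.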