From YouTube: INFRA Weekly Meeting 2020 02 25
Description
Jenkins Infrastructure Project Meeting - 2020-02-25
Notes - http://bit.ly/2T0oZ9v
A: So hi everybody, I'm starting the recording, and thanks for being here for this new Jenkins infra meeting. A quick follow-up on last week's discussion. First, regarding the infrastructure sponsoring: we are still in the process of getting sponsored by Amazon. So right now, before working on ci.jenkins.io, I'm just waiting for them, for this to be officially enabled, so we can start talking about that.
A: It's the same with Fastly. The discussion that I had with Fastly is that we have to sign a contract, and they should send me one, but right now I still don't know the bandwidth that we could have with Fastly, or what the terms of the contract would be. So that's still ongoing and still in discussion. Regarding the Azure accounts, we managed to be around 6,300 for the month, and the billing is next week.
A: We spent some time working on those and we are almost ready to put that in place. The last thing that needs to be done is to create new credentials. The other changes should come in the coming days. Otherwise, we did not spend more time on the JCasC configuration; the main reason for that is that we are still waiting for Amazon's sponsoring to be enabled, so right now we don't spend a lot of time on it.
Well, we also mentioned last week that we had to renew the JIRA license. This is done now, so we should be able to upgrade it again, something we'll have to work on in the coming weeks. That's all. Do you have any questions regarding this? Nope? So I guess we can start talking about the agenda items. The first item is the update center and GitHub integration.
C: Yeah, you know, the issue that I had flagged may not be a general-purpose issue: I don't know how many other plugins mandate that they must have command-line Git installed on the Windows machine. Certainly the Git plugins have to have it; they can't run their tests without it. But for others, I know that we use JGit throughout the ci.jenkins.io infrastructure, so we've avoided installing command-line Git on a number of machines as a result of that choice to use JGit. Yeah.
E: Yeah, as you guys might have seen in the IRC this morning, I sent out the term sheet for the s/390, and I think Mark put that on the governance meeting agenda for this week, I think on Wednesday, so tomorrow I guess. So that's good. I'm still waiting on the term sheet for the POWER resources, but once we have those, you guys should have full access to pull those into your infrastructure, in terms of the PR.
A
Just
for
the
context,
so
Jim
James
said
that
we
need
to
signed,
and
so
basically
in
the
past,
what
we
did
is
was
always
on
a
personal
basis.
Were
someone
sign
that
document
and
one
of
the
reason
to
move
to
the
CAF
was
to
have
a
legal
entity
above
us
so
personally
responsible
for
that
anymore?
Obviously,
the
step
I
think
is
not
really
yet
to
sign
that
documents.
So
the
thing
was
the
to
discuss
tomorrow
during
the
governance
meeting
should
basically
should
should
I
sign
the
contract
or
should
I
should
we
should.
B: You could send a message to the developers mailing list, because if you expect anything to be voted on at the governance meeting, there should be at least a discussion before. Okay. It might be enough to just discuss it with the Jenkins board, even without involving the public; I'm just not sure what exactly the objective would be there. But...
D: On Azure, they only have ARM... well, they have other ones, but they're part of this IoT Edge product, which I've tried to figure out if you can run general workloads on, and there's no clear documentation on that. So I'm still trying to figure it out; I may contact someone via support and find out if you can run general Docker-type workloads on there. Okay.
B: No, we did not. Okay, so the communication basically says nothing about which operating systems we support and, with regard to Windows, basically says nothing about which versions of Windows we support. So I still have an action item to create or update the communication page accordingly, but yeah, right now, if you can run Java somewhere, you can expect Jenkins to run there, which is obviously not exactly what happens on all these platforms.
D: And also, this is more for Docker agents, not necessarily for running the master on that platform, so I think the support is a little bit different based on that as well. Now, going forward, we did talk at one point in the Platform SIG meeting about not supporting 32-bit Windows with the new installer. The new installer is 64-bit only, so that will come up at some point, for Windows at least.
C: Okay, so you're seeing the right page then. Excellent. So I've spent a few days looking at our monitoring and trying to understand it, and what I realized is: we've already got just about everything we need, conceptually, to do a good job of monitoring. My proposal is: let's take the systems and the concepts we've already got and extend them. Datadog is a world-class monitoring platform, we get it for free, and it works great. Let's just keep using it.
A: Good piece of work. My only question is regarding Datadog: I saw that Datadog made several updates recently, and I think someone from Datadog started working on the Datadog plugin. In the past it only released a few new metrics, so it was not that useful, and so I started investigating Prometheus. So really, the change is that I'm using Prometheus.
A: So if you want to have a look at the kind of data that we have there: I think the main difference is that the Datadog plugin is just exporting specific metrics, whereas Prometheus is using the Metrics plugin or something like that. There are also specific plugins that export a lot more metrics; I can find them afterwards.
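As context for the comparison above: the Prometheus plugin exposes its metrics over HTTP in the Prometheus text exposition format, which a scraper then parses. A minimal sketch of that parsing step, where the metric names are invented for illustration:

```python
def parse_prometheus_text(payload: str) -> dict:
    """Parse a Prometheus text-exposition payload into {sample: value}.

    Skips comment lines (# HELP / # TYPE) and keeps any label block
    as part of the sample key.
    """
    metrics = {}
    for line in payload.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Each sample line is "<name>{labels} <value>" or "<name> <value>";
        # the value is always the text after the last space.
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics


sample = """\
# HELP jenkins_up Whether the Jenkins master is up
# TYPE jenkins_up gauge
jenkins_up 1.0
jenkins_queue_size_value 4
"""
parsed = parse_prometheus_text(sample)
```

This only illustrates the data shape; a real scraper (Prometheus itself) also tracks timestamps and metric types.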
C: Thank you, I hadn't thought about Prometheus; I think it's valid to consider. My thought right now was that I think we'll get faster results, and better results quickly, if we stay with the solutions we've got rather than making a shift of solution. I had actually started my own monitoring system on my local network and was using it, but realized Datadog is way better than anything I might consider recreating. It does amazing things in terms of dashboarding and measurements.
A: The reason why I deployed the Prometheus plugin was because I was missing some specific metrics, which were provided by the Prometheus plugin. But the main difference is that those metrics are exported to Grafana, another stack that you have to maintain yourself, and obviously that's time-consuming if you want to be sure that the service is always up and running. So right now it's more for certain purposes, but yeah, I would be curious to see how the Datadog plugins evolve over time.
C: Great, okay. Yeah, so one place where the Datadog plugin actually could offer something is this second item, the canary jobs. This was a concept that Tyler Croy and olblak had started, using what are listed as infrastructure acceptance tests. There's a set right now of four or five tests that check that very specific things work end to end, and I think that concept is a good one. We should teach those tests to notify Datadog, so that it can then raise an alarm if they start failing. That way, out of the hundreds and hundreds of jobs, we have a relatively few that will raise alarms to us, so that we know something is seriously wrong with the infrastructure. Okay, that's one; I'll work with you, Olivier, to try to get it going. The other is that, in choosing which checks we should make, we've already got thirteen JIRA tickets based on past outages, and I think that's a good beginning. So my thought is that I'm going to create an epic, if we don't already have one, that tracks these monitoring improvements. And then the last, and actually probably the most important, is to be sure that we've got more people looking; that, I think, is probably the single most important thing. The other things we will automate, but human beings can do an awful lot to help us as we get surprises and learn how to monitor better. That was all I had. Questions or comments? Yeah.
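A minimal sketch of a canary runner in the spirit described above: each acceptance test is a callable, and a `report` callback forwards the result to the monitoring system (for example as a Datadog service check). All names here are illustrative, not the actual infra-acceptance-tests code.

```python
def run_canary(name, check, report):
    """Run one end-to-end canary test and forward its status.

    `check` is a zero-argument callable returning truthy on success;
    `report` receives the canary name and "ok" or "critical", and is
    expected to hand the status to the monitoring backend.
    """
    try:
        ok = bool(check())
    except Exception:
        ok = False  # a crashing check counts as a failure
    report(name, "ok" if ok else "critical")
    return ok


# Example: record statuses in a list instead of calling a real backend.
statuses = []
run_canary("update-center-reachable", lambda: True,
           lambda n, s: statuses.append((n, s)))
run_canary("plugin-install", lambda: 1 / 0,
           lambda n, s: statuses.append((n, s)))
```

The point of the shape is that only these few canaries page a human, while the hundreds of ordinary jobs stay quiet.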
A: Just one comment: I don't know if you saw that there is a repository under the jenkins-infra organization where we automate those checks. Basically, we use Terraform to automate them, so yeah, feel free to have a look at it. Usually it's easier to first look at the dashboard, create those checks manually, and then export those checks into Terraform, so we have a way to share them.
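To illustrate what one of those exported checks contains, here is a hedged sketch of a Datadog metric-alert monitor definition built in Python; the monitor name, query, metric, and mention handle are all invented, and in the workflow described above the equivalent definition would live in Terraform rather than in a script.

```python
def build_monitor_payload(name, query, message):
    """Assemble the payload for a Datadog 'metric alert' monitor.

    With the official `datadog` Python client, a payload like this
    could be submitted with api.Monitor.create(**payload) after
    initialize(); here we only build the dictionary.
    """
    return {
        "type": "metric alert",
        "name": name,
        "query": query,
        "message": message,
        "options": {
            "notify_no_data": True,   # also alert if the metric stops reporting
            "no_data_timeframe": 20,  # minutes without data before alerting
        },
    }


payload = build_monitor_payload(
    name="Canary: update center reachable",
    query="avg(last_5m):avg:canary.update_center.ok{*} < 1",
    message="@infra-team the update-center canary is failing",
)
```

Creating the monitor by hand in the dashboard first, then exporting it, matches the manual-then-Terraform flow mentioned above.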
A
And
and
I
also
showed
that
the
begin
so
when
I
said
so,
there
is
one
plug-in
called
matrix
that
exported
you,
you,
you
dad's
house,
and
so
this
is
the
one
that
I
the
primitives
plug-in
is
using.
So
basically
the
primitives
or
the
primitives
begin
is
just
a
way
to
explore
those
that
at
those
metrics
to
to
graph
enough,
for
example,.
A: Can you see my browser? Yes? Perfect. So basically, with the recent issue that we had with the Jenkins mirrors, I spent a little bit more time investigating different solutions that we could put in place to reduce the load. So right now we have one machine, located on the Amazon account, and that machine is hosting the three services that I mentioned, and also the packaging jobs and tasks and so on.
A
So
that
machine
is,
as
a
lot
of
things,
is
doing
a
lot
of
things
and
also
is
quite
outdated
because
mirror
brain
is
not
maintained
since
now,
so
you
have
old
version
of
Python
will
be
an
Sun,
so
I
try
to
find
ways
to
split
the
different
services
running
on
that
machine
on
different
container
services,
but
obviously
the
main
challenge
that
I
have
right
now
is:
they
are
all
kind
of
interconnected.
So
this
the
the
screen
that
you
see
right
now
is
one
of
the
services
that
we
could
use
to
replace
mirror
brain.
A
So
the
idea
here
is
that
we
have
your
bits
contains
all
the
files
that
we
want
to
provide
since
the
mirrors
and
also,
basically,
you
can
just
browse
the
files
on
its
own.
For
example,
as
you
can
see
here,
I
just
synchronize
one
of
the
mirrors,
so
there
are
two
possibilities:
eaters
we
have
mirrors
a
variable.
Was
this
big
file?
So
let's
say,
for
example,
I
can
read
that
and
just
set
it
one
oops
this
one
is
not
working
of
this
new
take
a
different
one.
So.
A
The
main
thing
is:
if
we
plan
to
to
use
more
bits,
you
have
to
modify
the
script
that
we
use
when
we
release
a
new
champions
version
in
order
to
push
the
new
artifacts
directly
on
the
orbits
and
obviously
then
we
have
to
update
a
different
yours,
but
that's
one
of
the
work
that
I
did,
and
you
know
this
also
provides
few
views
of
view.
So
this
one
is
just
provides
all
the
mirrors
that
you
can
use.
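Conceptually, a mirror director like mirrorbits answers each download request with a redirect to one of the synchronized mirrors; the real service additionally weighs geography and checks that each mirror is up to date. A toy sketch of just the redirect step, with invented mirror URLs:

```python
import random

MIRRORS = [
    "https://mirror-a.example.org/jenkins",
    "https://mirror-b.example.org/jenkins",
]

def redirect_url(path, mirrors=MIRRORS):
    """Pick one mirror and build the redirect target for a requested file."""
    base = random.choice(mirrors)
    return base.rstrip("/") + "/" + path.lstrip("/")


url = redirect_url("war/latest/jenkins.war")
```

The value of the director is exactly that the release scripts push to one place while downloads fan out across many mirrors.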
A
So
this
is
one
of
the
certain
lists.
There
is
an
open
PR
on
the
repository,
Jenkins
and
press
search
charts,
so
right
now,
I'm
not
sure
yet
the
way
we
we
push
the
data
on
that
service,
so
either
it's
a
based
where
we
fetch
the
data
from
remotes
mirrors,
or
we
push
it
at
that
directly
on
your
bits
directly,
when
we
release
that
main
thing
that
I
have
to
figure
out
right
now.
A
We
can
just
have
more
than
yours.
The
main
reason
why
I
started
working
on
this
is
because,
right
now
we
have
archives
and
changes
yeah,
that
orange
I'm
running
on
the
Rackspace
accounts,
and
then
we
need
to
move
that
service
somewhere
else.
So
initially
I
thought
it'd
be
just
a
simple
Muir's
and
then
I
realized.
That
archive
has
a
lot
more
data
than
just
Muir's,
so
I
still
I
have
to
I,
probably
have
to
deploy
and
provision
an
azure
disk
and
move
all
the
data
on
that
I
should
be
so.
A: ...those packages directly into the operating system, but I still need more time to test how it's working, and whether it's working. But basically, what I wanted to share is that there is some activity around the way we distribute packages, around MirrorBrain and the jenkins.io mirrors. So if you have any input or thoughts or suggestions, I think it's the right time.
A: No, no, sorry. I mentioned a repo, but I wanted to step back here. Thank you. So, about the repo: we are not changing Artifactory. This is just a Maven repository that I deployed in the release environment so I can just test; it was easier to deploy than Artifactory with the setup that I have. But this one, I mean, we won't change. It's mainly about the mirrors and archives on jenkins.io, and...
A
In
the
update
center,
but
regarding
the
update
center
I'm,
not
there
yet
the
main
reason
why
it's
not
easy
to
to
work
on
those
is
because
we
have
quite
a
lot
of
scripts
that
are
either
uploading
or
downloading
packages.
We
are
seeing
or
SSH
with
crunch
up
and
supplied
installed
before
before
deploying
and,
for
example,
before
switching
from
Europe
brains.
To
me,
worse,
I
just
wanted
to
be
sure
that
I
don't
bridge
or
lose
that
in
the
democratic
process.
A: On the release process part, I had some resource issues, so I just deployed bigger machines. As you can see, it went from 3 hours to release Jenkins down to 1 hour 35, so right now it's working again as expected. So if someone wants to test the output generated by this, I would be really happy. But more importantly, I worked on the packaging part, which is here, and in this case I don't remember exactly...
A: So I can just mount the Azure blob storage directly in the release pipeline, and just move files from here to there and so on. It's also the same Azure blob storage that I'm using in the other services, so I can mount the same files in multiple locations, and it really simplifies the whole thing.
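Since the blob container is mounted as an ordinary directory, the release pipeline can publish artifacts with plain file operations instead of a storage-specific upload API. A small sketch under that assumption, with illustrative paths:

```python
import shutil
from pathlib import Path

def publish(artifact: Path, mounted_blob: Path) -> Path:
    """Copy one release artifact into the mounted blob-storage directory.

    Because the container is mounted as a filesystem, every other
    service mounting the same container sees the file immediately.
    """
    mounted_blob.mkdir(parents=True, exist_ok=True)
    dest = mounted_blob / artifact.name
    shutil.copy2(artifact, dest)  # copy contents and timestamps
    return dest
```

The same-mount-everywhere property is what removes the upload/download scripts between services.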
A: There is one item left on the list: Mark and I will do a small session about configuring SSH access and other changes. So if you are interested in participating, please feel free to say so. The idea is that we need to do some knowledge transfer, to take some time to ask questions and to be able to answer those questions.