From YouTube: 2020 10 20 Jenkins Infra Meeting
Description
Topics included Helm chart deprecation changes, a recent disk space issue on ci.jenkins.io agents, Docker Hub terms of service changes, the Jira upgrade plan and progress, Docker images for Windows, JFrog Artifactory usage monitoring and corrections, LTS release candidate build process improvements, and the Jenkins acceptance test harness runtime environment.
B
Hi everybody, welcome to this new Jenkins infrastructure meeting. We have a bunch of things to discuss today, so the first is a few announcements. The thing you need to be aware of is that we are still in the process of removing the Helm charts that we use from the deprecated github.com/helm/charts repository. I migrated a few of them, like Grafana, Prometheus and so on.
B
We still have to update the NGINX ingress controller, and that will require a little bit more work, because we have quite a lot of applications using it, so we cannot just upgrade the chart: we have to remove it and then reinstall it. We have a couple of ways to do that without downtime, but it will require a little bit more work. Right now I'm waiting for some help on this, but I still have to work with them to deploy something new.
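The remove-and-reinstall approach described above can be sketched as a plan of Helm commands. This is only an illustrative sketch: the release names and namespace below are assumptions, and the commands are echoed rather than executed, since the real migration would also need the existing controller's values carried over before traffic is switched.

```shell
# Sketch of the chart migration: bring up the new ingress-nginx release next
# to the old one, flip traffic over, then remove the old release. Release and
# namespace names here are placeholders; the commands are only echoed as a
# plan so nothing touches a real cluster.
plan_ingress_migration() {
  old_release=$1; new_release=$2; ns=$3
  echo "helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx"
  echo "helm install ${new_release} ingress-nginx/ingress-nginx --namespace ${ns}"
  echo "# re-point the load balancer / DNS at the ${new_release} controller service"
  echo "helm uninstall ${old_release} --namespace ${ns}"
}

plan_ingress_migration nginx-ingress public-nginx ingress
```

Running the new controller side by side and only uninstalling the old release after traffic has moved is what avoids the downtime mentioned above.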
B
So that's the most important one. The other two Helm charts that I did not migrate are oauth2-proxy and Dex. Those were used last year for the polling application that we used for the election. That application is not needed anymore, so I will just remove those Helm charts from our infrastructure.
B
Yeah, then I guess we can continue. The second thing I just want to highlight is that we had a bunch of instabilities on ci.jenkins.io over the past few days. Those happened for different reasons; the main one is that we enabled the wrong disk for the Ubuntu machines (I'm not sure yet who did that, or how). The machines we were using had just eight gigabytes of disk, which is pretty low. We should have been using 100 gigabytes of disk, because that is what we need to build Docker images. This is now fixed, so we should not have any issues with the Ubuntu machines anymore, at least for the coming days. And the other thing is... oh, sorry, yes?
C
B
Yeah, so maybe something is not fixed yet, apparently. Basically, I worked on that this morning, my time, European time: I deleted all the old machines, I reprovisioned the Ubuntu machines for ci.jenkins.io, and I double-checked that they had 100 gigabytes of disk space, which was the case. So yeah, if you have a job that you can share, maybe I can investigate; I was also looking from the Amazon console.
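A cheap guard against this kind of disk misconfiguration is to have the agent check its disk size before accepting work. A minimal sketch, where the 100 GB threshold and the df invocation in the comment are the only assumptions:

```shell
# Refuse to start an agent whose data disk is smaller than expected.
check_disk() {
  # $1 = detected size in GB, $2 = required minimum in GB
  [ "$1" -ge "$2" ]
}

min_gb=100
# On a real agent the size would come from df, e.g.:
#   disk_gb=$(df --output=size -BG / | tail -1 | tr -dc '0-9')
disk_gb=8   # the value the broken VMs were actually getting
if check_disk "$disk_gb" "$min_gb"; then
  echo "disk OK: ${disk_gb}G"
else
  echo "disk too small: ${disk_gb}G < ${min_gb}G" >&2
fi
```

A check like this fails fast at provisioning time instead of letting builds die mid-run when the disk fills up.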
C
A
B
Yeah, so just to continue, the next one is more for Daniel. I investigated what was happening on serp.com, and for some reason the machine was in, you know, a wrong state. It seemed to be a lack of memory, but I couldn't find any out-of-memory issues in the logs. So basically what I did is just increase the size of the machine, and everything seems to be working just fine now; I don't see all the weird errors in the logs anymore.
B
But I also saw a lot of weird error logs related to the Docker daemon, so initially I thought it was some disk space issue or something like that, but yeah.
B
So I sent them an application, but I haven't received any response at the moment, so I'm still waiting until the very last minute. If nothing clear happens there, I'll probably just pay for at least one month and expense that to the Linux Foundation. The good thing is that the Linux Foundation is aware of it, and they are already trying to find a solution for the other projects as well. So there is still ongoing effort on this topic.
A
B
Yeah, and another thing that needs to be worked on: we are still using a bunch of Docker images from my own account, so we have to update those as well. I was reviewing my Docker Hub account this afternoon and, let's say, Helmfile for example is published on my own account. I have a bunch of cleanup to do to push to the right location and use the right Docker images, but I'm not sure yet whether I should push them to Docker Hub or use the GitHub registry.
C
Probably Docker Hub; there have been a number of issues with GitHub's implementation, I think, which...
B
C
...a number of issues. I haven't validated this recently, but there was a huge GitHub issue with interoperability problems between their registry and anything that's not Docker, things like containerd, which Kubernetes is moving to as its default; AKS is defaulting to containerd in the next version.
B
Okay, good to know; I'll have to dig into that topic. The next one is regarding Docker images on Windows.
E
Yeah, that's progressing pretty well. I have the Packer image with the Windows update provisioner correctly working now, so that's pushed up to Docker Hub, the build's been updated to use it, and I'm just creating a few images now and testing that they're all working. It's going to take about another hour or so to get through those, but once they're there we should be able to start trying out the builds again to see whether or not they build Windows Docker images properly.
B
Yeah, just on that topic: we discovered this afternoon that infra.ci did not reload the configuration when it should have. I'm getting a stack trace related to that, so if someone is able to help me look at what's wrong there, yeah, some help is needed here.
B
The next topic is regarding JFrog Artifactory. Daniel, do you have any update here?
D
I mean, what it says in this item: we got an analysis from Baruch from JFrog about traffic, popular artifacts, stuff like that, and when we looked at it we discovered that just a handful of artifacts make up 30% of the total bandwidth used. Those artifacts are referenced in the tool-installer metadata for the Allure plugin; Allure seems to be some sort of build tool or something, and what happens is, if you run a build that uses it...
D
...your agent will download from the Jenkins Artifactory directly, and these agents, if they're ephemeral, will not have an .m2 cache or anything like that, so they will download every time. That means we have hundreds of thousands or even millions of downloads of some of these command-line tools, at 15 megabytes each, and that accounts for 30% of the total bandwidth used.
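The arithmetic behind a figure like that is easy to sanity-check. The download count below is purely hypothetical (the real numbers are in the JFrog analysis, not here), but it shows how a 15 MB tool fetched fresh by every ephemeral agent adds up to tens of terabytes:

```shell
# Rough bandwidth estimate for repeated tool-installer downloads.
# downloads * size_mb gives total megabytes; divide by 1024 for gigabytes
# (integer arithmetic, so the result is truncated).
tool_bandwidth_gb() {
  downloads=$1; size_mb=$2
  echo $(( downloads * size_mb / 1024 ))
}

# e.g. two million downloads of a 15 MB CLI tool (hypothetical count):
tool_bandwidth_gb 2000000 15   # prints 29296, i.e. roughly 29 TB
```

This is why caching on the agent, or pointing the metadata at the tool's proper upstream download location, matters so much for ephemeral build fleets.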
D
A week ago I filed an issue, INFRA-2772, and pinged the maintainer of the plugin, and we had not gotten a response until yesterday. So I basically blocked downloads of this tool from repo.jenkins-ci.org, which means anyone who relies on it will see their builds breaking. Then we got a response from the maintainer yesterday afternoon, offering to file a correction pull request in the next few hours, so I restored the downloads. I looked...
D
...today around noon: no pull request filed, so I blocked downloads again, and I have no intention of restoring them, because if there is a proper download location, we can point to it on the crawler side, which maintains the metadata, rather than ever having to allow downloads from repo.jenkins-ci.org again. That's my current plan here. I still have not heard back from the maintainer; I don't know what's happening there. But this basically looks like it might actually be enough to make JFrog happy, based on what the other usage stats look like.
D
B
Can you give us an update on this?
A
Yes, so this is test week. Anton Baranov of the Linux Foundation reported late last week that he had completed the first migration, or rather the first copy, from our version to his 7.13 version. He was doing the upgrade earlier this week, and he reported that during the upgrade he detected some missing images.
A
They were the images for our... what was it, the logos and a bunch of attachments, right, logos and avatars. That's what it was: logos and avatars. So I provided him a two-gigabyte zip file of the logos and avatars; the logos are tiny, the avatars enormous. Hopefully that's enough for him to get us ready. This is test week; they had agreed to be ready for us this week, and we're waiting for him to tell us. Daniel, I'm assuming you're volunteering to help with the testing?
D
Yes, please. I have a question about the test setup. First of all, are all projects and all data going to be present in the test environment?
A
Yes, the intent was that all projects and all data should be visible. So, for example, a crucial thing I was hoping you would check is: are the security reports correctly visible, and correctly hidden from people who shouldn't be able to see them, those kinds of things? This should be a full and complete migration.
D
Yeah, I would like there to be a basic validation of that being the case before the test environment is made accessible in the first place, because of the sensitive nature. I mean, right, just use an unprivileged account, or go anonymous, and type in any security issue ID and see whether you can see it. That would be very bad. Right, right, good point.
A
So let me take that as: ask Anton to validate that trivial security exposures are not possible.
B
So just to clarify: for the migration they are restoring exactly the same machine that we had a few weeks ago. It's supposed to connect to the LDAP, so we are supposed to keep the same groups and the same users. The only thing is that the data will be outdated, because we keep updating the Jira instance in the meantime.
B
Normally, the test instance should also be using the correct DNS name, issues.jenkins-ci.org: we provided the information so they could generate the Let's Encrypt certificates using the DNS method, so it will not mess with the current instance. We should really be able to test it by just forcing the IP for that specific hostname, or whatever, so yeah.
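Forcing the IP for a specific hostname, as suggested, can be done without editing /etc/hosts by using curl's --resolve option, which keeps the Host header and TLS SNI pointing at the real name. The IP below is a placeholder for the test instance:

```shell
# Build the host:port:ip triple that curl's --resolve option expects.
resolve_entry() {
  printf '%s:%s:%s' "$1" "$2" "$3"
}

# Hit the restored Jira test instance (203.0.113.10 is a placeholder IP)
# while the real DNS record keeps pointing at production:
#   curl --resolve "$(resolve_entry issues.jenkins-ci.org 443 203.0.113.10)" \
#        -sS -o /dev/null -w '%{http_code}\n' https://issues.jenkins-ci.org/
resolve_entry issues.jenkins-ci.org 443 203.0.113.10
```

Because the hostname in the URL stays issues.jenkins-ci.org, the Let's Encrypt certificate on the test machine validates normally even though DNS was never changed.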
A
D
And it should be easy enough to confirm that that's not the case. You know, if we miss it, then we've saved a minute of time and potentially lost a lot of confidential data. Exactly.
A
B
And also something to clarify here: we will not use in production the instance that Anton restored. If we validate that the migration is okay, then we create a new machine and we redo it: we take a new backup and we start again.
B
A
B
Yeah, I guess we can just skip to the next point, which is mirrors. There is nothing really new here, so I'll just remove it from the notes; I haven't worked on that, I think. Oops, sorry.
D
The latest information, that we're not burning money, is still correct, right? The fallback mirror does not cost a lot of money?
B
Yeah, that's still the case, and that topic is fine. On a related note, we still have a broken process right now where the Linux Foundation is not correctly notified when they have to pay the bills. We still have two bills that they have to pay, but it's ongoing work: I made a few changes to the permissions that the Linux Foundation has, so they should be notified in the future, but we still have to actively monitor it to be sure they receive the bills.
B
Last time I checked Azure, yeah, we were really good.
A
There are two things. First, sorry to say, but congratulations to Oliver Gonza: he's having twins. As the father of twins, I want you all to know that that was the most instructive experience I ever had as a parent; twins changed my life completely. So he's having twins, and therefore he won't be our release officer after December 3rd.
A
That means we need to figure out what we'll do in terms of where we execute the acceptance test harness and how we construct release candidates. I think we want to do the release candidates inside the release automation infra we already have, but it will require some development work.
A
B
A
C
A
Yeah, that's my challenge with release.ci. I feel like the acceptance test harness is already high maintenance and low visibility, and my worry is that if its results are behind release.ci, it will be very hard to get people to take action to correct failures that are detected in it. Even when the failures are real, I'm worried that there are enough failures in there that are fake, that are false failures, you know, that are misleading.
B
That's a good point, because we saw in the past that when a job is failing and it's not really visible to a lot of people, then it's more difficult to get it fixed.
B
Yeah, anyway, we still have to think about it.
B
Yep. And the other thing, regarding the release candidates: do we really need anything more than what we already have with the release environment? I mean, if it's just adding a new profile, that would be easy.
C
So currently the RCs publish to snapshots; the only difference is that it doesn't get published to any releases repo, so it needs some changes in the way we get the Jenkins version.
B
Because if we're just pushing the release candidate to a different Maven repository, that's something that could be done at the profile level: we just provide different settings.
B
C
B
Okay, yeah, let's take some more time to think about that. Any last suggestions regarding this topic?
B
Yeah, then I'm really curious about the following point, which is the Azure credit offer. Have you discussed that with Microsoft? I mean, is there any opportunity there?
A
Yeah, so I have a meeting today with Kayla Linville, our customer success manager. She initiated the meeting; she's trying to explore whether there is a way she could submit a proposal to Microsoft, to ask them to fund the project's infrastructure at some level, and I'll have that discussion with her. Olivier, you're welcome to join; I assumed I could talk these business things without bothering you. We want them to give us money, and she needs to act as our advocate in asking them to give us money.
B
Don't think... I mean, yeah, I already had this discussion several times with Microsoft, so I don't think my presence would add much. So, great.
B
The main reason for that is that the Puppet master is running in a hidden place, so it's not always easy to know whether a change has been correctly applied or not. The second thing is that we still have a bunch of improvements I would like to make. One of them is using Let's Encrypt with the DNS method, so we could have real certificates for the Puppet master and a bunch of other services, so yeah.
B
So help is more than welcome on this topic as well, but yeah, more things should be coming here.
B
Though we still have major work that needs to be done for the packages machine, because the Puppet agent is still disabled there and because we did a bunch of manual changes. We have to be sure that once we re-apply Puppet, we do not roll back the changes and fixes on that machine.
B
And while I'm also talking about the Puppet maintenance: I'm planning to use updatecli to also update the Docker images in that Puppet repository.
B
So I did the fix needed to use it, so I should be able to enable a bunch of configuration there.
B
So if you're talking about packages: they are replicated in several locations. One of them is archives.jenkins-ci.org and the other one is the Kubernetes cluster, I'd have to double-check. So we have, I mean, they are backed up in different locations.
B
Okay, thank you. While we're talking about this one specifically, it's something that we have to improve in the Puppet code. Right now, when we upload, we upload to archives and we upload to the OSUOSL mirror, and OSUOSL is the mirror that has an rsync daemon running, which means that when someone wants to add a new mirror, they just have to synchronize with OSUOSL. That also means the mirrors do not contain every artifact that we have on packages...
B
...they only sync the artifacts that were published over the last year, I think. Some mirrors have a rule that automatically deletes files older than a year, others do not, and that's why some files are available on some mirrors and not necessarily on every mirror. Something I would like to do is enable rsync on archives.jenkins-ci.org, or maybe another service, but the idea would be to allow the mirrors to synchronize the full content, and not only the artifacts that were created over the last year.
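The one-year retention described here is essentially a modification-time cutoff. A small sketch of what a mirror with that rule effectively keeps, assuming GNU-style touch -t for the demo and placeholder file names:

```shell
# Files a mirror with the one-year rule would keep: those modified within
# roughly the last 365 days.
recent_files() {
  find "$1" -type f -mtime -365
}

# Demo on a throwaway directory: one fresh file, one "old" file.
dir=$(mktemp -d)
touch "$dir/new.war"
touch -t 201801010000 "$dir/old.war"   # pretend it was published in 2018
recent_files "$dir"                     # lists only new.war
rm -rf "$dir"
```

An rsync daemon exposing the full archive would let new mirrors sync everything and skip this cutoff entirely.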
B
Yep, bye-bye. So I propose we finish the meeting now, unless there is a last topic that you want to bring up... one, two, three. Then thanks for your time, have a great day, or a great evening for the other people, and see you on IRC. Bye.