From YouTube: 2023 06 13 Jenkins Infra Meeting
Description
No description was provided for this meeting.
A: Hello everyone, welcome to the Jenkins weekly infrastructure team meeting, this 13th of June 2023. Around the table we have myself; Damien won't be able to join — he is on the train and, as it appears, the cellular network is not really working, as expected. We also have Mark Waite, Stéphane, and Kevin. Hello folks, let's get started with the announcements.
The weekly 2.410 was released successfully, at least on the technical bits: we have packages, we have releases in Artifactory, we have the Docker image, and I assume the last bits need to be done later, around the changelog and the last items.
A: Artifacts and packages — so I checked the Docker image (thanks for creating the tag, Mark) and the changelog bits are published. So we are okay. So, Stéphane, are you ready to roll for updating our images and deploying that new version to infra.ci? Just one more point I realized last week:
I don't know if you remember — when you created the release on the Docker, on the container image, for the previous release, trusted.ci saw the tag, because it's checking every five minutes, but it did not trigger a build. The reason is that we have this set up by default: don't build tags older than three days. But you created the tag the same day, right? And that's the really sensitive part — no, that's not the right wording.
A: Exactly — from your machine that's okay, from wherever you want. The reason is that when you publish a GitHub release, if the pointed tag does not exist then it will create it, but the GitHub API does not provide annotation, because annotation means adding metadata that can be signed. That's why it's not possible through the release UI. But there is no problem creating the release and, instead of writing the tag, pointing to an existing tag that you created earlier.
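A rough sketch of the distinction being discussed: a lightweight tag is just a pointer to a commit, while an annotated tag is a real tag object carrying signable metadata, and a GitHub release can then be attached to the pre-existing annotated tag. Everything below uses made-up names in a throwaway repository; the `gh` invocation is illustrative and not executed.

```shell
# Minimal demo in a throwaway repository (all names are illustrative).
set -e
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

git tag lightweight-tag                     # lightweight: a bare ref, no metadata
git tag -a -m "Weekly 2.410" annotated-tag  # annotated: a tag object (can be signed)

git cat-file -t lightweight-tag  # prints "commit": the ref points straight at the commit
git cat-file -t annotated-tag    # prints "tag": a real tag object with author/date/message

# A release can then point at the existing annotated tag instead of letting
# the release UI create a lightweight one (hypothetical gh call, not run here):
#   gh release create annotated-tag --title "2.410" --notes "weekly release"
```

This is why creating the annotated tag first, then the release against it, keeps the signable metadata that the release UI alone cannot add.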
A: No problem if you create a tag that is not annotated. The only check, once you have created the tag, for now: let's just verify that within the next five minutes a build triggers on trusted.ci. If it doesn't, like last week, you have to start the build manually by clicking Build on the discovered tag in Jenkins, and that's okay — that's the only way to say to Jenkins: okay, you've seen the tag, you don't want to trigger it automatically, please trigger it manually.
B: Okay, so — if I do that — I don't currently have access to trusted, so it would have to be someone who does have access. But the technique, then, is: launch the build interactively on trusted.ci and it will detect that tag. Even though it's not an annotated tag, it will see that it is a new tag and decide to build based on the fact that it's new — or, I'm not sure how it... okay, all right.
A: I'm mentioning that because right now we just have to ensure that someone kicks off the build. Last week I had to do it when I saw the problem, and I forgot to explain it because it was after the team meeting. This week we had commits merged in the past days — usually we have Dependabot on Sunday, so we merge pull requests Sunday and Monday. So that's okay for the weekly release, but yeah, it's important to keep this in mind: if the build doesn't start automatically, it means
A
The
the
perceived
timestamp
by
Jenkins
is
older
than
three
days,
and
in
that
case
you
can
start
it
manually,
I'm
saying
that,
because
right
now
we
know
if
it
doesn't
appear
here
what
object
and
the
goal
is
still
automating
or
that
wall
part.
So
when
the
radius
process
will
have
to
create
the
tag,
we
will
just
have
to
ensure
that
it
will
first
create
an
annotated
tag
and
then
create
a
GitHub
release
with
a
JH
common
line
that
Associated
to
that
tag.
That's
what
the
CD
process
is
doing
since
one
year.
A: I already forgot about the next LTS — let me copy-paste from last week.
A: I'm opening the Jenkins advisories — no emails since the 16th of May, so nothing, and no major ones even for me, as far as I can tell. Same for you? Okay. So let's get started on the status; it will be to finish one that was closed just after I generated the notes. So let me get started from this one: thanks for opening an issue with a lot of information, which was really useful for us to quickly debug and identify the problem.
A: Since we migrated LDAP, trusted.ci was having some hiccups. The main reason was that one of the security groups we applied on the new trusted machine was pointing to the former LDAP IP, which was outside our system — and now that it's fully managed, we had to fix that. So it's automatically updated when the IP changes, which was the case with the migration to the new cluster.
A: So we added an exception in the short term to unblock this user, so they can still benefit from the artifact caching proxy — it's enabled and used. But when the one or two projects building on ci.jenkins.io agents are trying to find artifacts on the JitPack external repository, it's not using ACP anymore. I'm not sure if the user was able to find it, but Hervé was able to propose them the changes so they can re-enable ACP.
A: There's been a repository where they helped other team members use Renovate for updating software dependencies. I don't remember the repository, but everything was okay, because we saw the Renovate dashboard issue being opened by the bot, and now they have a bunch of dependency updates — so that works, thanks Hervé. Thanks Alex for pointing out that we had an outdated public dashboard, which is on status.jenkins.io — that's why it's a public one!
A: We have a lot of private ones, but on status.jenkins.io there is — I think it's "Services", or it might be "Monitoring" — and this dashboard shows metric endpoints for us, such as the latest Jenkins core package available, and that one was outdated. We had a typo in the configuration file. So thanks for reporting; thanks to that issue, Hervé was able to fix that problem.
A: Next one, after the last two — the previous milestone: we had to migrate the puppet.jenkins.io virtual machine out from OSUOSL to a new Azure virtual machine, because the Puppet Enterprise edition we have does not support the latest Ubuntu 22, so we require an Ubuntu 20 machine. Also, for security matters, it's better to have security and credentials in an area that we manage.
A: Following that big change — I forgot something — we have a webhook notification on the Jenkins repository which hosts the Puppet code. The goal of that webhook is that, when we merge something on the master branch, it sends an event to the Puppet machine telling it to pull the latest changes from the repository. That has been fixed and is now working well. So when you merge pull requests, the Puppet agents now get the latest version of the code within the upcoming minutes, which wasn't the case before.
A: Any question on this one? Trusted.ci out from AWS to Azure: the heavy part was already done two milestones ago; these were the latest cleanup phases. Trusted.ci is now using optimized agents: we don't need to add SSDs to the machines, they use local disks, they are ephemeral, and they use the same spot instances as what we have on ci.jenkins.io, for cost reasons — and performance is better. The build time for the Docker image has increased by 50 percent, so we have better CPUs. The machines have the same size.
A: It's just the CPU generation that changed. So — decreased by 50 percent; yeah, sorry, not increased. And all the network security groups were closed — as we saw earlier, a bit too much, because we weren't able to reach LDAP today — but it shows that trusted.ci has improved in terms of security and accesses. It's far from perfect, but still better than before.
A: There is also cleanup of AWS resources to do. We also introduced a new change, and I wanted to have a round table of advice here. In order to reach, through SSH, trusted.ci, we've added an allow list of public IPs — that's an additional security layer. We did that initially to protect the virtual machine.
A
Is
there
anything
that
could
be
blocking
at
first
sight
for
you.
Do
you
see
that
taking
in
account
that
the
process
will
be?
If
you
need
to
access
it,
you
need
to
open
a
pull
request
on
the
Azure
repository
to
add
your
IP
in
a
list
of
trusted
admins,
and
once
that
request
has
been
merged,
then
you
can
start
accessing
the
virtual
machine.
Otherwise
you
are
locked
out.
A
That
has
been
a
bit
annoying
for
me,
but
it's
just
anions
and
not
slower
or
blocker.
Since
I
was
traveling,
I
had
to
change
regularly.
My
public
IP
but
I'm
traveling,
so
that's
an
additional
layer
of
security.
If
someone
steal
my
machine,
they
cannot
add
trusted.
Unless
there
is
the
full
pull
request,
change
I
was
also
thinking,
maybe
studying
how
to.
Instead
of
that
only
a
low
connection
through
the
private
VPN.
A
That
one
requires
people
rework,
because
we
need
to
create
a
network
connection
between
private
Network
and
trusted
Network,
because
they
are
separately
virtual
Nets,
but
that
will
allow
less
maintenance
and
less
pain,
because
if
you
have
your
VPN,
that's
enough
faster,
the
new
quality
from
my
point
of
view,
but
maybe
that's
more
initial
work,
but
then
less
maintenance.
Each
time
someone
need
to
access
trusted.
A
So
please
I
propose
a
vote
just
to
to
get
to
the
site
of
what
would
be
the
preferred
option,
at
least
for
you
folks.
Can
you
raise
your
hand
if
you,
if
you
say
we
keep
the
current
model
so
allowing
an
exhaustive
list
of
public
IP?
A
Okay,
two
hands?
Can
you
raise
your
hand
for
allowing
from
the
private
VPN.
A
Per
okay,
so
that
means
right
now,
maybe
continuing
as
it
and
see
if
it's
a
burden,
if
it
is,
we
have
a
fallback,
is
that
did
I
understood
correctly
the
result
of
the
vote.
A: Of course, sure. Okay — so for everyone here: if you were used to accessing trusted agents, you need to browse to the runbooks and look at the updated runbook that shows you the brand-new SSH configuration to replace your former one, because everything is done through your SSH — your ~/.ssh/config file. So you need to check the runbook mentioned, and it points to the list of allowed IPs.
A
We
have
there
is
a
a
DNS
DNS
record
for
bones
machine
from
outside,
and
then
you
can
use
internal
private
DNS
from
the
bones
to
the
secondary
missions.
So
the
setup
should
text
in
account
using
this
as
much
as
possible
to
let
us
the
ability
to
to
change
the
IPS
if
needed.
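As a sketch of what such a runbook entry might look like — hostnames and usernames below are invented placeholders, not the real trusted.ci values — the `~/.ssh/config` stanza points at the bastion's public DNS name and jumps through it to the machines behind it, so the internal IPs can change without editing the client config:

```shell
# Write an illustrative SSH config fragment (placeholders throughout;
# the real names live in the jenkins-infra runbooks).
cat > ssh_config.example <<'EOF'
# Public entry point: only IPs on the allow list can reach it.
Host trusted-bastion
    HostName bastion.trusted.example.org   # public DNS record
    User youruser

# Internal machines: resolved via private DNS behind the bastion,
# reached with ProxyJump so no public IP is ever needed for them.
Host trusted-controller trusted-agent*
    User youruser
    ProxyJump trusted-bastion
EOF
grep ProxyJump ssh_config.example
```

With this shape, rotating the bastion's IP only requires updating the DNS record, never the per-admin config files.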
A: Making public the Plugin Health Score project for GSoC — that has been done; thanks to everyone working on that part, and to our new GSoC users. And we rotated the PAT — the personal access token — used by infra.ci to manage the jenkins-infra infrastructure. We do it every 90 days; that's the usual process. It's a whole process to automate, but yeah — for now we just have a calendar date, and everything is still connected.
A
Any
question:
okay:
let's
check
now
the
work
in
progress.
First
tissue,
remote
repository
for
repo
care-ups
Labs.
That
was
an
holy
shoe,
not
prioritized,
but
it
has
been
moved
following
request
from
car
maintenance.
Some
containers
that
really
needed
this
one
so
I
need
help
on
this.
A
So
you
would
only
download
what
you
really
require
on
a
remote
repository
and
avoid
outbound
bandwidth.
Of
course
we
said
no
to
other
repositories
and
that
one
is
the
same,
except
is
for
sensitive
project,
so
I'm
not
really
sure
what
will
be
the
direction.
I
might
have
missed
something
obvious,
but
I
wouldn't
say
no
to
another
pair
of
iron
orbit,
artifactory
and
maven.
B: Yeah, that seems reasonable to me. So what I think I heard you say is that the Artifactory request to mirror the remote repository is being rejected, because they're refusing to deliver the content to something that identifies itself as Artifactory, and because of that we can't create a mirror of that thing. So we have to create our own private copy of it and then maintain that private copy, rather than letting Artifactory do the mirroring maintenance of it.
A: Yes. The initial reason was to ensure the quality of service, so I don't see that as a problem. I will also get back to the person who raised the issue — I raised the priority of the issue two days ago — and we'll just get back to them to show them the problem and see if they have other blockers we might have missed.
A
Let
that
check
with
the
persons
who
raised
the
issue-
my
guess
is
I
haven't
searched
in
details.
We
could
also
contact
the
repository
administrator
and
ask
them
ask
them
if
it's
possible
or
if
it's
a
wanted
policy.
Maybe
it's
an
error
or
maybe
I,
just
use
the
wrong
URL
or
misunderstood
something
maintenance.
C: We would like to avoid applications spawning on it if they are not compatible with arm64. That will probably be dealt with by Kubernetes but, as we cannot be 100 percent sure, we need to add some taints to make sure that incompatible applications will not be spawned on it.
A
That's
okay,
absolutely
I've
added!
We
had
the
issue:
I've
updated
one
of
the
application
that
wasn't
sticking
the
M
chart
to
the
name
of
the
node
pool.
So
what's
what
did
kubernetes
try
to
do
so?
Oh
I
got
that
mission
here,
I,
don't
see
any
anti-affinity
or
blockers,
so
I
can
schedule
on
that.
There
is
no
constraint
for
bidding
me
from
trying
to
start
the
machine.
A
The
thing
is
that
application
has
an
image
that
we
build
and
that
we
only
build
with
the
Intel
Docker
image
for
that
case,
which
is
a
different
case
than
the
other
images
that
Stefan
already
tried
which
are
running
on
iron.
But
in
the
case
of
that
application
it
wasn't
so
it
was
failing
trying
to
deploy
the
new
version.
So
the
good
thing
is
that
there
were
no
service
Interruption,
because
kubernetes
does
not
remove
immediately
the
old
version.
A
So
we
did
a
quick
fix
by
pinning
the
application
to
the
Intel,
not
pool
that
was
a
quick
fix,
but
now,
as
Stefan
said,
if
we
want
to
create
a
secondary
node
pool,
if
you
want
to
update
a
node
pool,
we
want
to
change
the
disk.
You
want
to
add
Danes
these
operations
delete,
do
not
pull
and
create
a
new
one.
A
So
that
means
everything
running
on
a
notebook
will
be
stopped
and
removed,
the
dot
notebook
will
be
deleted
and
the
new
notebook
will
be
created
and
during
that
time,
kubernetes
will
try
to
constantly
reschedule
the
pods
removed
from
netball.
This
will
create
service
Interruption.
That
means
we
are
too
constrained
for
scheduling,
so
we
need
to
relax
the
constraint
so
that,
instead
of
that,
we
will
say,
oh
I
need
a
new
node
pool
with
a
new
set
of
Toleration.
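The taint-and-toleration mechanics described above can be sketched as follows — the node-pool label, taint key, and values here are hypothetical, not the actual cluster's configuration:

```shell
# Tainting the arm64 pool repels every pod that does not explicitly
# tolerate it (illustrative kubectl command, not run here):
#   kubectl taint nodes -l agentpool=arm64pool kubernetes.io/arch=arm64:NoSchedule

# An arm64-ready workload then opts in with a matching toleration,
# plus a nodeSelector so it lands only on that pool:
cat > arm64-patch.example.yaml <<'EOF'
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  tolerations:
    - key: kubernetes.io/arch
      operator: Equal
      value: arm64
      effect: NoSchedule
EOF
grep effect arm64-patch.example.yaml
```

The point of the taint is exactly what was said: an image only built for Intel can never be accidentally scheduled onto the arm64 pool, because it lacks the toleration.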
A: Okay, so backlog — or bonus — the big one: migration of the prod publick8s cluster to the new publick8s cluster. All the services have been successfully migrated to the new cluster, the last one being jenkins.io earlier today.
A
Now
the
next
step
are,
we
started
to
collect
the
cleanup
process.
We
haven't
cleaned
up
everything.
For
instance,
we
still
have
the
former
mirror
bits
name
space
running
on
the
previous
cluster,
because
we
were
waiting
24
hours,
particularly
for
the
mirrors
for
the
DNS
records
to
be
propagated.
Everyone
in
the
world,
because
we
still
so
incoming
requests
yesterday.
A
So
now
we
will
do
step-by-step
process.
Almost
all
services
has
been
removed
from
the
formal
cluster.
To
be
sure,
we
don't
accidentally
have
a
request
and
we
haven't
seen
her
any
error.
Yet
mirabit
is
really
the
last
one.
We
have
stopped
one
hour
ago
stopped
managing
that
old
cluster.
So
now
the
next
step
will
be
removing
the
namespaces
waiting
one
hour
before
deleting
and
releasing
all
the
resources
of
the
former
cluster,
and
then
we
will
have
a
set
of
tiny,
a
myriad
of
minor
tasks
about
clean
up
the
former
reference
to
that
character.
A
So
really
nice
job
Harvey,
the
ldap
migration,
the
mirror
and
Jenkins
IO
were
sensitive
in
terms
of
infrastructure
and
the
amount
of
requests
and
everything
went
really
fine.
That
was
a
great
example
of
team
building.
We
had
issues
due
to
the
tentative
of
the
ldap
last
week
that
has
been
improved
and
the
final
migration
was
done
without
any
request
closed,
so
nice
job
on
that
port.
A: Okay, next one: install and configure the Datadog plugin on ci.jenkins.io. That one accidentally led to five minutes of ci agents being unavailable, so keep that open — we have learned from the process. The main point to keep in mind, for everyone: when you want to remove a plugin from a production controller, start by checking the JCasC configuration.
A
For
everyone,
so
better
roll
backing
the
configuration
first
and
then
removing
the
plugin.
That's
the
proper
order,
that's
a
that's!
That
could
be
a
great
warning
to
your
blog
post
Bruno.
By
the
way,
just
an
admonition
saying,
hey,
don't
forget
to
remove
gcas
configuration
before
deleting
that
plugin.
Otherwise,
your
controller
won't
restart.
A: But from my experience that only covers 60 percent of the cases. For instance, when you have a production LDAP or SSO system, it's really hard to go until the moment where Jenkins loads the production JCasC setup — reproducing that locally is not worth it. But yeah, that could be a good point to propose by default for most of the cases, because the people who run into the kind of parameters I describe are maybe edge cases.
A
So
roll
back
to
setup
and
begin
I
haven't
had
time
to
synchronize.
The
survey
I
would
like
to
do
that
to
to
take
that
issue
and
at
least
set
up
the
basics
for
sending
metrics
and
enabling
and
I
will
then
hand
over
to
irvi
when
it's
be,
when
you'll
be
back
on
what
to
do
with
this
Matrix.
If
he's
okay
with
that,
if
you
want
to
take
care
of
the
wolf
thing
to
to
increase
his
understanding
of
the
network
problem
here,
then
in
that
case
I
will
let
him
do.
B
And
I
have
not
I
have
not
done
my
action
item
Damian
and
I
had
that
action
item
already
from
last
week.
I
have
the
action
item
to
summarize
the
results
of
our
Brown
out
our
reduced
our
intentionally
reduced
functionality
period.
What
Damian
I
did
was
we
disabled
Damien
did
the
work
we
just
disabled,
the
J
git
repository
at
the
top
level,
making
it
private
and
by
making
it
private,
we
then
ran
some
tests
to
see.
Would
it
still
be
visible,
even
though
the
root
level
mirror
was
Private?
B
Would
it
still
be
visible
under
our
public
definition,
and
the
answer
came
back
nope,
it's
not
public
as
soon
as
as
soon
as
you
make
the
root
thing,
private.
All
of
the
pointers
to
that
root
thing
that
are
hidden
inside
the
public
repository
the
public
virtual
repository
are
also
private,
and
therefore
we
were
locked
out.
We
couldn't
download
Jake
it
at
that
point,
so
the
the
desired
G.
This
would
be
an
easy
way
to
do.
B: So I'll write a summary of this and send it by email to JFrog, because we need some help from them in terms of the approach. We've — well, we'll do — we also need to do some data analysis on the most recent data: they've provided a new week's worth of data, and we'll do some analysis in hopes of finding other things where we could reduce bandwidth use. Okay.
A
I've,
an
additional
things
that
I
even
shared
with
you
yet
Mark,
but
given
that
and
the
direction
it's
going
and
given
the
work
that
erva
did
for
migrating
ldap
on
the
new
cluster
I
feel
like
I,
should
prioritize
again
working
on
a
available
ldap
instance
with
replication
on
the
new
cluster,
because
now
we
have
a
fixed
Network
on
that
new
cluster.
So
the
issue
I
had
with
the
replication
should
not
be
present
there.
So
is
there
any
objection
if
I
start?
A
Eventually,
we
can
burn
that
Stefan
and
I
saw
n
charts
that
proposed
to
have
a
replicated
ldap.
So
the
goal
would
be
to
install
the
new
instance
of
ldap.
As
early
rediscovered,
we
have
a
backup,
efficient,
backup,
recover
mechanism
of
our
current
held
up,
which
means
creating
a
brand
new
instance
with
brand
new
IP,
since
we
did
it
recently.
We
are
quite
at
ease
with
that
process.
We
should
then
be
able
to
get
restore
and
see
how
the
replication
work
on
that
test.
A
Instance
that
might
need
trying,
with
a
beta,
dotted
up
inside
your
temporarily
that
I
hope
we
want
with
the
state
and
probably,
but
the
goal
will
be
to
say
what
is
the
behavior
when
we
start
draining
the
underlying
machine
on
one
of
the
twin
stones,
and
even
the
adap
is
still
there.
That
will
also
allow
us
to
scale
horizontally
if
we
start
seeing
a
lot
of
requests
on
desired,
app
machines
right.
A: That part is — let's say — an implementation detail. It will depend on how the replication process works, which we need to assess. Okay, okay — I don't want to spoil everything yet, but that's an implementation detail. The goal for us — that's the problem here that we need to study — is to have an LDAP instance that will first be able to handle the additional load if we enable authentication; that's only the technical part.
A: We don't want users' requests ending in error because authentication is down for one or two minutes. That was acceptable until now, but if we enable LDAP on Artifactory for all requests, then that would be a problem. Hence the need for a highly available LDAP — just to be sure we can support one instance being down during maintenance, or surprise maintenance.
A: So now the target for me will be to run — we have two machines running on AWS, two tiny machines; I think it's already a cluster, so we have usage and stats to bring inside — that's two minor services regarding telemetry. So I propose that we upgrade these two machines to Ubuntu 22 during the upcoming milestone.
A: So we have that request from Gavin. We didn't have the time to do the initial assessment; I added a message earlier on either the pull request or the issue. The goal is just to synchronize with Gavin to do the initial deployment of a Matomo instance on our infrastructure, so I guess we should be able to start working on that during the upcoming milestone. Does that look good to you, Stéphane? I assume we can pair on this one, or I will take it by default. Yeah.
A: The main task — from the assessment I did earlier today, we have all the information we need from Gavin. So it's our job now to get started, and to give feedback if we need to. I have all the required permissions.
A: Cool. I don't require help, but anyone interested in pair-working on this one is welcome to join and give an opinion.
A: Was that 1.27? Okay, exactly: we have to roll onto 1.25 before the end of July — got it, in any case — and yes, 1.26 should be done during the summer. Thank you. 1.26 will be a tiny release; 1.25 is a huge one — it's every three releases that you get huge steps. Thank you. So, next issue: we had a user having issues with changing the Jenkinsfile and permissions on their repository.
A
We
haven't
heard
back
from
them
I'm
a
bit
annoyed
because
I
feel
bad
closing
the
issue,
but
we
don't
have
feedbacks
and
I'm
not
sure
what
could
be
done.
I'm
currently
checking
if
they
merged
people
request.
They
asked
me
help
on
no
I'm,
not
looking
at
the
proper,
so
yeah
I,
guess
yeah.
They
haven't
done
anything.
So
if
it's
is
it
okay
for
everyone,
if
I
keep
that
issue
until
last
Milestone,
but
if
we
don't
hear
from
them
next
Tuesday,
then
we
will
close
the
issue,
because
there
is
nothing
else
we
can
do.
A
The
pull
request
is
now
green
with
the
change
that
applies,
the
recommended
setup
for
plugins.
The
build
is
okay,
so
they
should
be
there
and
for
the
problem
about
the
not
seeing
not
being
seen
as
a
trusted
user.
They
have
to
use
and
follow
the
procedure
for
being
plug-in
maintenance.
They
have
direct
adminary
access
to
the
repository,
so
they
should.
They
have
everything
on
their
mind
on
their
on
their
hands.
So
I
don't
know
what
we
can
do
more
for
help
there
any
question:
Point
feedback.
A
No
okay,
just
a
reporting
about
the
new
CIA
Junction,
say
you
VM
instance
type
so
right
now,
I'm
fighting
with
the
new
inbound
agents
for
Azure
VM
agents
that
a
team
did
I'm
having
it's
really
hard
to
set
up
on
instances
such
as
trusted.
Ci
I
thought
it
could
be
easy
to
test
the
world
scripting
of
that
part,
because
you
need
to
write
shell
script
that
will
take
care
of
creating
the
service
that
connects
back
to
Jenkins.
A
It's
not
that
much,
except
that
it
tend
to
connect
to
a
public
URL
and
with
trusted.
We
have
a
specific
setup
that
makes
it
hard.
So
my
initial
assessment
of
it
will
be
easy
to
test
on
trusted.
Ci
is
not
a
good
assessment,
so
I'm
gonna
start
testing
with
the
new
testing
instance
like
Stefan
used
to
do
on
cigo
directly
I
will
try
to
work
on
this
one.
A
The
reason
I'm
doing
this
is
because
we
need
to
first
move
the
agents
to
the
new
network
and
we
need
inbound
agent
for
that
speaker,
otherwise
see
I
drink
inside.
You
won't
be
allowed
to
SSH
to
that
subnet
and
once
this
agent,
with
a
new,
proper
setup
on
a
proper
network
with
the
proper
storage
account
will
be
set
up,
then
I
should
be
able
to
bootstrap
the
new
machine
and
start
the
test.
A
I
did
a
full
test
of
full
restoring
tests
so
that
take
four
hours
to
backup
the
full
Jenkins
home
and
sync
it
to
the
machine
and
I
have
an
instance
with
no
internet
access
that
starts
and
I
see
everything
green
like
I
expected.
Here
again
you
can
say
you
so
that
should
not
create
any
problem.
The
thing
I
wasn't
able
to
test
was
to
spin
up.
A: agents. We will have to announce that change and check it carefully during 10 to 12 hours — so not when a lot of builds are running, for instance, or not if we need a lot of merges of the code. My plan is not to do that for the upcoming milestone but in two weeks; I will prepare all the work for that this week, though it's constrained by the inbound agent, of course — that's why I don't want to be too optimistic here.
C: So the inbound agent — is someone who's working on it, to help you out right now, changing the code of the way it's dealing with Azure?
A: Oh, technically it works. It's just that the proper setup to have something working is not easy to get, compared to Kubernetes for instance: you have to find the correct Jenkins URL, the correct agent URL and the correct parameters, and you have to write the shell script properly and adapt it to our system, where Jenkins is not admin.
A: Exactly — okay. So this morning, with trusted, I successfully started an inbound agent whose script is "sleep 10000". I was able to SSH to the agent, log in as Jenkins, and run every manual step with success; that created an agent that built jenkins.io. The thing is that the parameters I used for spinning up the agent are ones we cannot automate, because I used a direct TCP connection and I didn't try to use the JNLP URL.
A
So
I
use
a
specific
set
of
flags
for
the
agent
process,
because
I
need
information
from
the
controller.
In
that
case,
that
was
manual,
so
I
was
able
to
retrieve
the
information
directly
on
the
ephemeral
agent,
but
it's
not
easy
to
run
okay,
and
that
is
caused
by
the
specific
network
setup
we
have
for
trusted
CA,
so
I've
already
opened
issues
and
I
will
report
back.
But
now
the
goal
is
to
switch
that
to
crjink
inside
does.
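For reference, the "shell script that connects back to Jenkins" discussed above boils down to fetching `agent.jar` from the controller and launching it with the agent name and secret. Everything below is a placeholder sketch — the URL, agent name, secret, and work directory are invented, and the real trusted.ci setup (service unit, restricted user, network rules) is more involved:

```shell
# Sketch only: defines (but does not run) the bootstrap logic for a
# Jenkins inbound agent. All parameter values are placeholders.
start_inbound_agent() {
    url="$1" name="$2" secret="$3"
    # agent.jar is served by the controller itself
    curl -fsSL -o agent.jar "${url%/}/jnlpJars/agent.jar"
    # Connects back to the controller (direct TCP here); in our case it
    # must run as a non-admin "jenkins" user with its own work dir.
    java -jar agent.jar \
        -url "$url" \
        -name "$name" \
        -secret "$secret" \
        -workDir /home/jenkins/agent
}
# Example invocation (placeholders, requires a reachable controller):
#   start_inbound_agent https://trusted.ci.example.org/ my-vm-agent "$AGENT_SECRET"
```

The hard part mentioned in the meeting is not this script itself but discovering the correct URL, name, and secret automatically on an ephemeral VM without admin rights.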
A: No? Okay! So let's check the new issues, to see if we have triage — ah, "toleration and taint": we already covered it and I forgot to remove the triage label; done. "Proposal for applications in duplicate: migrate to arm64" — oh Stéphane, that's yours, right?
A: No need to — in any case we can have both taints. Adding the taints, let's say, removes the burden of specifying the node pool in the node-selection values. I don't see other triage issues recently opened.