From YouTube: 2022 03 22 Jenkins Infra Meeting
Description
No description was provided for this meeting.
A: I even checked the Docker Hub, I assume.
A: So the idea is that we start from the milestone for today's date, beginning with the closed issues. All of the help desk issues that we worked on and finished, and our pull requests on that repo, have been associated with that milestone. So let's see what we did since last week by checking these closed issues; that should be quite quick. We have one, two, three, four, five, six, seven... seven or eight, including a major one.
A: It has been closed under the assumption that the keys have been rotated. We were able to demonstrate that it wasn't used for anything, and the root cause was fixed. Unless someone needs more details, or the issue is not clear, I won't spend time detailing it; we put a lot of information in the issue. Just for information, before I let you make your point: two sub-tasks have been opened for long-term improvements related to that area.
A: There are a lot of things that can be improved. In summary: be careful when you have GitHub checks, because it's enabled by default and it can exfiltrate sensitive information from the Jenkins log output.
A: Thanks. Are there any questions, points, things I could have forgotten, or things not clear on that one? Okay, next one: Fastly!
A: So now let me click from here. Firstly, thanks to the work of Hervé, it is managed by Terraform, so we have a fully managed public repository. Which means that, hopefully, from the next Fastly option update onward, if someone has to make another settings change, instead of being done manually in the Fastly web UI it should be done through pull requests to that repository.
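For readers unfamiliar with the setup being described: managing a Fastly service as code means the service, its domains, and its backends live in a Terraform file that pull requests can modify. A minimal sketch might look like the following; the resource name, domain, and backend address here are placeholders, not the actual Jenkins infrastructure values.

```hcl
# Hypothetical sketch of a Fastly service managed by Terraform.
# Real service names, domains, and origins will differ.
resource "fastly_service_vcl" "example_service" {
  name = "get.jenkins.io"

  domain {
    name = "get.jenkins.io"
  }

  backend {
    name    = "origin"
    address = "mirrors.example.org" # placeholder origin host
    port    = 443
    use_ssl = true
  }

  force_destroy = false
}
```

With this in place, a settings change becomes a reviewed diff on the repository instead of an untracked click in the Fastly web UI.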
B: I don't think so; I think it's some purge requests. I have to check the release pipeline, but I think they are calling the Fastly API with a token allowing them to purge complete services.
B: Good, to identify the token used by the release. Okay, so we know this one is used for this. Unfortunately, it doesn't seem to be possible to generate tokens as code.
A: So yeah, let's say the limits are managing tokens and invalidating the cache. However, invalidating the cache might not be needed in Terraform, but at least we gain some auditing, plus the ability for everyone to see the settings publicly and to propose improvements, especially in areas where a security header is required.
A: Next item: the Azure resources are now managed again with Terraform. That's partly branding: in fact, we don't manage anything yet. We re-created, we re-initialized the whole project, and we removed all the old Terraform definitions, which hadn't been updated for a year, a year and a half at least. So we will have a task to re-import the existing resources and to create the new ones, but now we are ready to operate.
A: So that would use this one. Please note that these two Terraform-managed projects use exactly the same shared tooling as the existing AWS, Datadog, and DigitalOcean ones: the same pipeline, the same Makefile, and the same versions. So great job, people. Thanks, Hervé, for adding a new GitHub team for Terraform: it lets code reviews automatically ping the correct person and eases the management of the repositories.
A: Thanks also to Stefan, Hervé, and the team for the auto-labelling of issues. Now, based on the kind of service people select on the help desk, we have more labels set automatically, so it's easier for us to get statistics and to use the labels across the bunch of issues we have. Really cool.
A: I realized I forgot one issue that we closed during the weekend; sorry, I'm opening it right now. We were a bit too enthusiastic about delivering the new Maven version. I mean, it's fully automated, it was just one button, except that Maven 3.8.5 is subject to a regression. So thanks, Basil, for letting us know.
A: I always have mixed feelings about that kind of rule, honestly. I mean, it broke some usages, but the problem is not that we deployed quickly; it's better to have the latest version quickly. What is missing is a way for our users to be able to test some changes, like canary deployments or things like that. So the "don't deploy on Friday" rule is, for me, just a temporary hack to avoid our users being frustrated by the fact that something does not work.
A: So I would say the point here is: how do we ensure that there isn't any bug? We could ask Basil, since he opened the issue, if he has any insight, because he is the only person who was able to report that to us, so having his insight and ideas in that area would help. What do you think about asking him? He might have ideas like: oh, we could add a health check.
A: I know, Mark, that you already use a job, which I tend to do as well: I checked that this job was working after the deployment. It checks that the agents can be spawned, and that job is called by the BOM builds, which are big, big builds; so it runs pre-build, starting by running that job.
A: Exactly, hence the check proposal, even with OpenTelemetry enabled, I don't know. That's why I'd ask Basil for advice and ideas on how we could hook that in. The question is: I understand our developers' frustration, but if it happens once or twice a year for one or two people, that's... that rule has to be challenged. But I propose that we apply it for now: just be careful about when we deploy.
A: Okay, if it's okay for you, I will turn that into an actionable item on the public ci documentation page where we list the labels and capabilities available for the builds. I will open the pull request and ask the three of you, for sure, to validate it before merging, so not only one of us, and I will also mention Basil to get his advice on the pull request. Does that sound good to you in terms of process?
A: So thanks for the work on that part, Stefan. Now we have a new Helm chart that deploys that credential in every namespace that requires it, including the Jenkins agents for crg and kimceo, where it was previously managed manually. So now it's automatically managed from the same credential, and in Datadog we benefit from a decrease in the amount of failures.
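The shape of that change, for context, is a chart whose values declare one registry credential and the list of namespaces that should receive it, so a single source of truth replaces hand-copied secrets. This is an illustrative sketch only; the key names and namespaces below are assumptions, not the actual chart's schema.

```yaml
# Hypothetical Helm values: one Docker Hub pull credential templated as an
# imagePullSecret into every namespace that needs it. Names are placeholders.
dockerRegistrySecret:
  name: dockerhub-credential
  username: example-pull-account   # placeholder account
  passwordFrom: encrypted-store    # value injected from secret management
  namespaces:
    - jenkins-agents
    - datadog
    - release
```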
A: Okay. However, we still have issues even with that; in addition to Datadog, the issue still happens with the rate limit. That will be the next topic in work in progress. Before we jump to that: did I forget any tasks that you folks closed during the past week?
A: Okay, so that's already a lot of work. Now, work in progress: that should correspond to the open issues on the current milestone. For each one, we will cover it and see if it's something we continue working on or something we remove. The goal is that the milestone should be closed after that, with no open issues: each one either back to triage or delayed to the next milestone.
A: After checking, we thought that we weren't using authenticated calls to the Docker Hub, but in fact we are, and with them we reach a new kind of API limit. If you don't use an authenticated Docker Engine, then you are rate-limited per public IP; that's the only way for the Docker Hub to track your requests.
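As background on the limit being discussed: Docker Hub reports the pull quota in `ratelimit-limit` and `ratelimit-remaining` response headers on manifest requests, in a `count;w=window_seconds` format (anonymous clients typically see 100 pulls per 21600 s, i.e. six hours, per IP). A small parser for that header format, as a sketch rather than the team's actual tooling, could look like:

```python
def parse_ratelimit_header(value):
    """Parse a Docker Hub rate-limit header value such as '100;w=21600'
    into a (count, window_seconds) pair."""
    count, _, window = value.partition(";w=")
    return int(count), int(window)

# Example values as Docker Hub returns them on a manifest HEAD request:
limit = parse_ratelimit_header("100;w=21600")
remaining = parse_ratelimit_header("76;w=21600")
print(limit)      # (100, 21600)
print(remaining)  # (76, 21600)
```

Watching `ratelimit-remaining` over time is one way to confirm which accounts (or anonymous IPs) are close to exhaustion.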
A
What
are
the
solutions
so
right
now?
There
is
a
work
on
progress
to
improve
the
code
of
the
pipe
and
library
that
used
that
define
a
credential,
and
the
idea
will
be
to
split
on
multiple
accounts
load.
So
there
are
different
working
angles
there
having
at
least
one
pull
account
and
push
account
separately.
A
So
if
you
reach
rate
limit
for
the
pull
account,
you
don't
endanger
the
ability
to
push
new
image
and
also
the
the
angle
of
one
pool
account
for
each
jenkins
controller
that
we
use
at
least,
and
we
can
even
push
forward
with
different
docker
pool
accounts
depending
on
the
cloud
like
the
kind
of
agent.
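The per-controller, per-cloud split described above amounts to a lookup from (controller, cloud) to a credential ID, with a shared fallback. A minimal sketch, assuming hypothetical credential IDs (the real Jenkins infra identifiers are not shown here):

```python
# Hypothetical mapping of (controller, cloud) to a Docker Hub pull credential
# id. The credential ids are placeholders for illustration only.
PULL_ACCOUNTS = {
    ("ci.jenkins.io", "kubernetes"): "dockerhub-pull-ci-k8s",
    ("ci.jenkins.io", "vm"): "dockerhub-pull-ci-vm",
    ("infra.ci.jenkins.io", "kubernetes"): "dockerhub-pull-infra-k8s",
}
DEFAULT_ACCOUNT = "dockerhub-pull-shared"

def pull_account(controller, cloud):
    """Return the credential id for this controller/cloud pair,
    falling back to a shared pull account."""
    return PULL_ACCOUNTS.get((controller, cloud), DEFAULT_ACCOUNT)
```

The benefit of this shape is isolation: a burst of pulls on one controller's Kubernetes agents exhausts only that account's quota, not everyone's.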
A
Is
it
a
virtual
machine
or
a
container
or
worker,
node
and
kubernetes,
because
we
saw
exactly
the
same
thing
with
the
datadog
deployment
when
we
have
a
big
bomb,
build
that
schedule
a
bunch
of
pod
and
trigger
auto
scaling
on
aws
kubernetes
cluster
data
dogs
start
to
reach
the
rate
limit
with
the
images
so
for
data
dock.
There
are
other
way
to
fix
that
by
not
using
images
or
steamed
on
docker
hub.
We
could
have
our
mirror,
we
could
add
docker
hub
proxies,
but
on
short
term
at
least
spreading.
A
The
idea
is
to
check
the
open
source
program,
because
we
know
that
we
already
applied
to
that
program
and
we
are
currently
gathering
information
which
docker
account
of
chen
king's
related
stuff
is,
is
subject
to
this
one,
olivier
shared
an
information
with
her
van
I
by
email
that
he
never
got
answer
back
from
docker
about
jenkins
foreign
accounts.
A: Next work in progress: add an email alias for press. The question is from Gavin: he wants to add a new email alias, but the MX of jenkins.io is currently delegated to Mailgun accounts that neither Olivier, Mark, I, Gavin, nor Tyler (we learned that a few days ago) have access to. So the last person who could have access seems to be KK. We haven't heard back from Kohsuke by email, so let's wait one more week.
A: The thing is, we could always change the MX to whatever mail system, but we might lose the list of email accounts that are configured on that MX. So if we do that, if I understand correctly, Stefan (I'll let you stop me and correct me if I'm wrong), we need a kind of catch-all account, and we all need to have access, to discover the aliases again.
D: Yeah, we can receive all the email and then, by analyzing the mail coming in, understand which addresses should get an alias or not. We have to make sure that spam is not making us think that some account exists: you will always receive email addressed to james or tom or other usual names.
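One way to make that analysis robust against spam guessing common names, sketched here as an assumption rather than anything the team has built, is to only trust a local part once several distinct senders have written to it:

```python
from collections import defaultdict

def discover_aliases(messages, min_senders=3):
    """Tally catch-all traffic and return local parts seen from at least
    `min_senders` distinct senders, so spam probing random names such as
    'james@' does not create phantom aliases.

    messages: iterable of (sender, recipient) address pairs.
    """
    senders_by_alias = defaultdict(set)
    for sender, recipient in messages:
        local_part = recipient.split("@", 1)[0].lower()
        senders_by_alias[local_part].add(sender.lower())
    return sorted(a for a, s in senders_by_alias.items() if len(s) >= min_senders)
```

Any threshold is a heuristic; a real migration would still want Mailgun's authoritative account list if it can be obtained.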
A: On the other side, thanks again, Olivier, for pointing us to the correct interlocutor at the Linux Foundation. It looks like, if we want them to take care of the email MX instead of Mailgun, once we know what we want we only have to open an issue at the Linux Foundation, like we do for Jira; they have the IT service for that.
D: Maybe we can also try to get in touch with Mailgun to find a way to discover all the accounts we have now, even if we don't get any login. Yep, maybe they can at least provide the list of accounts.
A
Yep
a
good
point
with
any
answer
from
kkk.
Is
it
okay?
If
we
contact
again,
let's,
let's
say
deadline
is
next
in
for
a
structure
meeting.
A: For at least being able to safely migrate the infrastructure reports job from trusted.ci to infra.ci, and to improve the AWS exposure risk with Terraform: firstly, we don't want Terraform for AWS to be able to access the credentials of Terraform itself, and we also need that for the updatecli migration on the centralized multibranch job.
A
B
A
A
B
B
A
Yeah,
okay,
is
it
okay,
if
we
add
back
triage
or
should
we
just
remove
any
milestone.
A: Okay, cool. So I'm moving the Docker Hub "defined credential" one to the upcoming milestone, and I'm clearing the migrate-infra-reports milestone because we are blocked by other tasks; we could add it back if we are quick enough, but given our rate it doesn't make sense. Same for the switch from GitHub Actions to Jenkins for the updatecli tasks.
A: So let me clear the milestone. That one, in turn, has no milestone because it requires the credential to be defined at job level; it's sequential, so no milestone there either.
A: Okay, so I'm just double-checking the work in progress, to be sure that we agree on what can be done in the next iteration, and then we can finish with the new elements to see if we add them back. Migrating rating.jenkins.io to Azure: that means adding the managed database in the Terraform Azure project.
A
Okay,
there
is
one,
I'm
not
sure
if
it
should
be
update,
terraform
chair
tool
to
write
the
sdr
to
local
file.
That's
a
to-do
minor
to
do
after
the
aws
exposure.
A: So, as we discussed a bit in private before: I was the person in contact with Scaleway, so I'm going to send an email to the Scaleway team introducing Hervé as the person who will lead the account for now, with the Jenkins infra team's private email in CC, so that Hervé can drive the partnership, create the Scaleway account, see if we have the credits, and start the work on Terraforming all of this autonomously. And we planned the Oracle part all at once.
A: A hackish solution, more than hackish, where an external job will check and export a JSON with that time. Because not only do we have to watch whether a job fails, we also have to watch whether a job was scheduled: if the job wasn't scheduled, then you don't get a failure notification. So we need to watch the jobs, and we might have the same issue on infra.ci as well, and eventually release.ci. So it's a multi-step process, but we have to.
A
Not
that
he's
doing
a
good
job
at
that.
But
yeah
I
mean
that's,
not
sustainable.
So
thanks
daniel,
for
that
there
have
been
some
details
on
that
part.
I
propose
that
we
put
that
as
after
the
current
priority,
but
still
I
keeping
it
in
the
upcoming
because,
as
soon
as
you
have
some
time,
thinking
about
how
to
implement
it
there
asking
daniel
for
more
details.
Checking
the
existing
elements
could
be
a
great
help
for
information.
There
is
an
existing
process
on
datadog
that
watches
the
last
updated
time
of
the
update
center
json
file.
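The freshness check just mentioned boils down to comparing a generation timestamp embedded in the JSON against the current time. A minimal sketch, assuming for illustration a `generationTimestamp` field expressed in epoch milliseconds (the actual update-center payload's field name and format may differ):

```python
import json
import time

def is_stale(payload, max_age_seconds=4 * 3600, now=None):
    """Return True when the payload's generation time is older than
    max_age_seconds. 'generationTimestamp' (epoch ms) is an assumed field."""
    data = json.loads(payload)
    generated_ms = data["generationTimestamp"]
    now = time.time() if now is None else now
    return now - generated_ms / 1000.0 > max_age_seconds

# Synthetic payload generated at t = 1,000,000 s:
fresh = json.dumps({"generationTimestamp": 1_000_000 * 1000})
print(is_stale(fresh, max_age_seconds=3600, now=1_000_500))  # False (500 s old)
print(is_stale(fresh, max_age_seconds=3600, now=1_005_000))  # True (5000 s old)
```

The same "is it older than N?" shape also covers the scheduling gap raised earlier: a job that never ran produces no failure, but its last-success timestamp still goes stale.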
A
Yep,
I
don't
know
yep,
so
let
me
update
the
not
just
after.
I
don't
want
to
take
too
much
of
your
time.
A
The
issue
has
been
created
with
details.
It's
just
four
information
and
on
my
side
I
will
add
that
issue
the
add
private
kate's
aks
cluster.
I
don't
add
it
to
the
current
milestone.
A
I
want
to
first
drive
stefan
on
the
right,
the
rating,
but
that
one
will
be
exactly
the
same,
so
we
can
also
have
another
opportunity
to
work
together.
I
feel
like
I
will
have
a
lot
of
work
after
that.
Yes,.