From YouTube: 2020 11 17 LTS Certification Overview
Description
Jenkins long-term support (LTS) release certification testing overview, presented by Oliver Gondza
See https://docs.google.com/document/d/1fgHnn2iD_Oms26o78nFQIaUiar4mDNe4-1FAEhU5IFE/edit?usp=sharing
C: I'll definitely try. I see that you put together quite a nice list, and I see that many of you have joined in the meantime, so hi everyone. If any of you have questions or remarks, I would be happy to be interrupted at any moment to answer them. Apart from that, I'll try to give you as concise a rundown as I possibly can. So, the LTS certification, very briefly: our LTS releases move in four-week cycles. We get two weeks for backporting and two weeks for testing and verification. This is actually documented at full length.

C: Let me attach a doc where this is documented. What we're going to focus on is specifically how this is certified. Since I am the single person who does the backporting, that would perhaps be interesting to cover at the end, if we get some extra time, because it's definitely knowledge that needs to be transferred to whoever is going to be doing this once I'm no longer the release officer.
C: You know this very well, because you've been contributing to it for years and years. We are basically inviting pretty much everyone, and I'm pretty sure CloudBees has been working very hard on this to provide us with feedback; actually, a lot of problems were caught this way by some of your colleagues.

C: So this is how we're receiving these results from the upstream, and at the same time, for instance, we've been running our own internal verification based on the ATH, and we contributed to these results as well, the same as we encourage other people to do. That is what we've been doing.

C: As I said, I'll get back to that. Okay, so one other thing is that the release officer role was not only about doing the backporting and orchestrating this feedback collection, but also about keeping the whole thing going: scheduling all these events in the calendar, making sure things happen at particular times, initiating new LTS cycles, and so on.

C: Again, I can also cover that if there is interest. Right, so Mark, I'm going to use your notes to keep me on track. So, where does the certification process run? Actually in multiple places, and due to the distributed nature of what we're doing and how we're doing it, it's sort of hard to get a complete picture, so we just rely on people to contribute their own results.
C: I was looking for a more structured way to do this. In the past we used a wiki page and it didn't work very well, in my opinion, and I haven't found a better tool to get everybody who is interested, or who has an interesting result, to report it back, so we stick with good old email. The certification runs both on our ci.jenkins.io instance...

C: ...and that's where we're running the full ATH suite against the LTS. It's not only the certification; it's actually a full-blown CI, so even during backporting we're running the same verification testing, which gives us early feedback even before we cut the release candidate, and so on. So this is the one place where the advantage of doing it is that everybody can see the results, can investigate failures, can spot the problems, and so on.

C: There were a couple of ideas in the past to move this into one of our environments — maybe Red Hat, maybe CloudBees, or somewhere else — and do it downstream, but the obvious obstacle is that it would be sort of hidden: not everybody would have access to the results, or the possibility to investigate and work on fixing the issues, which has been one of the most tedious and time-consuming parts of maintaining the acceptance test harness. So I felt really strongly about keeping this upstream.

C: That way we can lower the barrier to entry and get new contributors to actually fix and update the tests, because that has been a real struggle. So yeah, that would be the upstream environment where it runs, and besides that there are, I guess, a number of internal environments.
C: I know that Mark and his team are doing some intensive exploratory testing on these releases, which is very useful, and besides that there are some other environments, I presume inside CloudBees, that I don't have much information on.

C: Speaking of the part that we've been doing at Red Hat, we've been testing this exclusively on OpenStack. On the host we're actually using RHEL 7 for this — and previous versions before that, of course — so it was tested either on a VM or inside a container: the ATH container, not the Jenkins container. Those were the two environments we've been covering, and we've been sort of comparing the results with what we've seen upstream, what we've seen internally, and so on, because of the complexity of the whole test suite and the fragility of the Selenium UI testing approach — those of you who have played with it know what I'm talking about. So there were quite a lot of false positives and problems that weren't really related to any bugs at all.
C: Well, OpenStack was not that much involved in it, simply because the ATH does not interact with it in any way; it just happens to be Jenkins agents running in OpenStack. So the point is that it involves VMs with the actual OS. Right, got it.

C: Yeah, this is what we've been doing. Historically we did this even upstream, before we moved to ci.jenkins.io — wow, maybe four or five years back actually — and we found that the results are a lot more reliable on virtual machines than anything we ever get in the containerized world.
D: Maybe that's a dumb question — it's getting late here, so maybe I just didn't follow correctly. Are you saying that using the VMs and the OpenStack stack is something you're still doing nowadays, or was that in the past, before you migrated to ci.jenkins.io?
C: Yeah, sorry, I might have misspoken. Internally we've been using both approaches — running it on a VM, and running it inside the ATH container — and what we have verified is that the pure VM approach delivered more stable and predictable results compared to the containerized one. The remark about ci.jenkins.io was really a side note: even there, some years ago, we were using pure virtual machines before we moved to containers, and we also observed the instability there.
A: Thanks, Oliver. So the question is: are you still running it? I think I hear that yes, you are still running it. Is that likely to continue, or is Red Hat likely to say, hey, we're going to stop running those tests internally — so that as a community we would effectively lose one resource that could have found problems for us?
C: That's a very good question. One of the reasons why I'm giving up the release officer position is that I just cannot predict my allocation. I would definitely love to stay in the loop and continue to be one of the people contributing results — found regressions and possible problems. I would definitely love to do that, but I prefer not to promise something I won't be able to deliver, right.
C: Yeah, when it comes to allocation, I would say this is definitely a lot more expensive in terms of human time than machine time, right? Somebody needs to be looking at it, keeping the ATH in good shape, dealing with infrastructure issues, random glitches, incompatibilities between page objects and plugin versions. So it's a lot more expensive on that side, and I honestly cannot quite imagine anyone saying no, no more resources for running these tests.

C: So folks, once again, I'd like to encourage you: if you have any questions, feel free to shoot.
E: So regarding this OpenStack and Red Hat setup and so on — it's still something that we cannot see? I mean, we probably don't interact with it, but we cannot see it even with read-only access or anything like that?
C: That's the problem with this kind of downstream, internal testing. I think it was James who was doing something quite similar inside CloudBees, and as I said, I've got pretty much zero information about it except for the fact that he sometimes reported that he found a problem. So this was one of the issues, and one of the reasons why I'm a huge advocate for having this upstream.
D: Yeah, I'm just wondering actually, these days — as far as I remember from the past, and it's very vague now, I think you were looking at the open-source ATH in the CI, or at the ATH results, and then with your expertise you would actually know yourself which failing tests should be considered and which ones should be ignored.

D: Somehow — and that's my bad, because I should have prepared this call a bit more — I remember that back in the day, most of the time the acceptance test harness executions would basically be red all the time, like never really green, because something was always failing, and you would be one of the rare people knowing which tests were actual concerns and which ones are known to be failing all the time, to be flaky, whatever.

D: So those are not really a concern at all, ever — which, if this is still the case (and that's what I'm looking at right now), might become more and more of a concern, because we may indeed lose that expertise that is only in your brain: you need it to know which tests should be looked into and which tests should be ignored because they've been flaky forever.
C: That's a good point, and in a sense you're right. We never managed, after easily five or six years of effort, to actually get the ATH stabilized, which I blame on lack of time, on the inherent nature of Selenium, and on the fact that we are trying to verify something like — I don't know, I'm pulling numbers out of my sleeve — something like 40 independently developed components, whose maintainers don't really know how they are being tested and what effect a particular change is going to have.

C: So we usually just find out that somebody released a new version of a plugin that changed the UI a little bit, and a test started failing; it's sort of tedious in that way. This is part of what I mentioned: it requires pretty much constant attention and adjustment to the plugins, to the UIs, and to whatever else is specific. So it very often happens that some test starts failing simply because something has changed.

C: Usually it's the UI, but there can be other things that change and need adjusting. So it's not quite that we would know that, say, some small fraction of tests is just constantly broken, unfixable, or simply fragile. We have ways to deal with that — by actually fixing them — but it requires time, it requires somebody to have a look at it, and meanwhile these tests keep getting broken.
D: Yeah, maybe it would make sense, actually — do you think it would be a good idea, especially for people who may not be aware of what's going on or how it works, to share what we're talking about on the screen? I actually opened it just now, so I guess it might be useful, especially for people watching the recording, more than for us.

D: In the end — you mean the results? Yeah, this is just what we're talking about right now. If we look at the situation right now, we see indeed that we don't really often, or ever, have it green — and again, I hope it doesn't sound like a reproach, because that's not the case at all; I absolutely understand that we all have limited time and everything — so it's just to kind of make a checkpoint on this.

D: So I was looking at it just a few seconds ago, and that's what I was referring to: the number of tests failing. And I assume that somebody has to know which ones have been failing forever, which ones may be new, and so which ones probably need to be looked at, right?
C: Yeah, I 100% get your point. It's fair to say that we rarely, if ever, see these 100% successful — let's not lie to ourselves. So yeah, you're right. As I said, the thing is that, for example, you see a lot of — let's say — Job DSL plugin tests failing; without examining them closely, presumably this is because of a single problem that broke at some point in time and needs to be looked at, with either the test or the plugin adjusted.

C: Once in a while somebody has to actually adjust the test so it matches the code base of these plugins, and it's sort of a constant struggle because of the way this is architected: we're trying to verify, let's say, the entire ecosystem — to verify that a user who installs Jenkins with all these plugins is actually going to get something functional. So in the end we're verifying quite a lot of components which, how shall I put it, were not integrated before.

C: Nobody has done any checking or verification of that, so there are various reasons why this can break, most prominently because the components are independently developed and released by individual maintainers.
D: Right, and so it may be an interesting question to ask you: if you had more time, if you had multiple people available or whatever, what would you advise people to focus on to improve the process in general — maybe improve the LTS delivery process?

D: Maybe the test aspects, or maybe focus on the test aspect first. And I was thinking that maybe at the end of the meeting or so we would ask you how you feel about the whole LTS process: what would you keep, what would you actually change, accelerate, whatever. I guess you have a lot of ideas.
C: Actually I put a paragraph close to the end of the agenda, "future of LTS testing", where I have a couple of suggestions, and I guess we're thinking quite along the same lines in the end. So would you like to hop right onto that, or would you like to keep it for the end?
A: I'm great with going there. If there's interest, let's go there. I will beg for the privilege to come back and ask specifically about tables-to-divs, but I think Baptiste's question is a good one, so let's go there, and then I'll make sure I can come back to my questions later.
D: Yeah, that's fine by me. To be honest, I'm also trying to abide by Oliver's request, which is to keep asking questions to keep things going.
E: Oliver, regarding the ATH — if I recall, James... I actually saw the commits as well, so it seems he's now somehow leading the project, the ATH, in a way, right?
C: Anyway, there is supposed to be a team of maintainers. I was the most active one, but I know that Antonio has stepped in, and, pretty much as everywhere else with components in Jenkins, we haven't been very rigid about it. So there's a number of people who have the release permissions and have been doing the releases. I remember that a couple of people from CloudBees got release permissions. I guess, as you said, James was doing multiple things — definitely merging, not sure about releasing... actually yes, he did.

C: A couple of other people from Red Hat should have the release permissions as well. What makes sense to me is to have a couple of people — and that's what I tried to achieve, and to a large extent failed at — a group of people who would, as with pretty much any component, be permitted to merge and release the ATH as needed, but it's fairly informal at this point.
C: Right, another good question. From that standpoint the ATH has two roles: it contains the tests that actually verify the product, but at the same time it's a framework for composing similar tests that can live in other components. So you can use the ATH as a dependency and use the framework and the page objects to write your own tests. We actually use this in a couple of other plugins, so you can pull it in and use the same mechanisms.
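As an illustration of that framework role, here is a minimal sketch of what a page-object-based test built on the ATH as a dependency might look like. The class and method names (AbstractJUnitTest, WithPlugins, FreeStyleJob, startBuild().shouldSucceed()) are recalled from the acceptance-test-harness APIs and may differ in detail from the current code base; the job and test names are made up for the example.

    import org.jenkinsci.test.acceptance.junit.AbstractJUnitTest;
    import org.jenkinsci.test.acceptance.junit.WithPlugins;
    import org.jenkinsci.test.acceptance.po.FreeStyleJob;
    import org.junit.Test;

    // Hypothetical example test: exercises Jenkins through the browser using
    // the ATH's page objects rather than calling internal Java APIs directly.
    @WithPlugins("git")            // plugin(s) installed into the Jenkins under test
    public class ExampleAcceptanceTest extends AbstractJUnitTest {

        @Test
        public void freestyle_job_builds_successfully() {
            // 'jenkins' is the root page object provided by the harness.
            FreeStyleJob job = jenkins.jobs.create(FreeStyleJob.class, "example");
            job.configure();
            job.addShellStep("echo hello");   // drives the configuration form via the UI
            job.save();
            job.startBuild().shouldSucceed(); // waits for the build and asserts its result
        }
    }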
C: Right, so I guess how to actually use this in a plugin would be a topic for another time. I spent quite a lot of time putting it into better shape a year or two ago, but yeah, that would call for some documentation effort or something like that.

C: Thank you, and thanks for the question. Just covering the rest of the list: can the LTS certification be run on a weekly release? Not only can it, it actually is — when we run the acceptance test harness, if I'm not mistaken, its CI is actually running against the weekly.
D: Not really... well yeah, exactly — but only the smoke tests, meaning the ones that have been qualified as really non-flaky. People should be aware that this set is very, very limited on purpose, because we never want it to be flaky. But yeah, we should probably consider fixing and de-flaking the rest and growing this suite, because in the end it makes the weeklies, plus the LTSes, stabler over time. It's a never-ending effort.
C: Right, yeah, that's true. As I look at it, I see it run over a thousand tests, which I believe feels like the entire test suite, but yeah. I recall that besides the flakiness there was also another problem, and that was the resource utilization on ci.jenkins.io, because there was quite a lot of this: we tested everything twice, for Java 8 and Java 11, and there was both the certification of the LTS and this sort of CI that used the latest core, and so on.

C: So yeah, I remember the infra team was not very happy about how many resources this was using, because the entire test suite takes quite a lot of time to run, and at times it's picky about resources while not using them very efficiently, because this is UI testing. So there was another consideration there — how to actually do this resource-wise — and I remember that, as you said, there was a push to use only the smoke tests for this.
A: Okay, so you're thinking that — when I look at the execution time of the master branch on ci.jenkins.io for Jenkins core, it says there that it is in fact running the ATH, but it doesn't take nearly the 8 or 10 hours that it takes to run the ATH elsewhere. So what have I misunderstood?
C: That's a good point — so Baptiste and Mark, you're talking about the same thing, right? That is the Jenkins repository CI, which is actually only running the smoke tests. On the stable branches, for the LTS, I actually replaced the smoke-test category with every test, so on the stable branches, after backporting, we're running the entire test suite.
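For context, the smoke subset mentioned here is selected with JUnit categories: a test opts in by carrying the category annotation, and the CI job then filters on that category (or, on the stable branches, drops the filter to run everything). A rough sketch, assuming the ATH exposes a SmokeTest category class — the exact package and class names are recalled from memory and may differ:

    import org.junit.Test;
    import org.junit.experimental.categories.Category;
    import org.jenkinsci.test.acceptance.junit.AbstractJUnitTest;
    import org.jenkinsci.test.acceptance.junit.SmokeTest;

    // Marked as part of the small, deliberately non-flaky smoke subset that the
    // weekly/core CI runs; a full LTS certification run would ignore the filter.
    @Category(SmokeTest.class)
    public class CoreSanitySmokeTest extends AbstractJUnitTest {

        @Test
        public void jenkins_starts_and_runs_a_trivial_job() {
            jenkins.jobs.create().startBuild().shouldSucceed();
        }
    }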
C: What you're talking about is the CI for the Jenkins core. Let me just give you the link — it's this thing. So yeah, it's a bit counterintuitive, but in fact we are verifying the latest ATH against the latest core; it's just not running as part of the Jenkins core CI itself, partially because that would be too fragile and pretty much every change would be flagged as broken.
C: Let me paste it into the chat of the Zoom meeting. So, you've got a couple of individual use cases, and all of them require something different. Presuming that you use this for testing your plugin, you definitely want these tests to run with the dependency versions as they are hard-coded in your POM.

C: So obviously you want this pinned, but it's not a requirement from the ATH standpoint. Precisely when we are verifying the LTS, we sort of want the latest versions from the upstream to be used. Imagine that we release a new LTS version today: we obviously want it to work for the users, or for the customers, with whatever plugin versions are out there, right? It doesn't make much sense to test it with last year's versions, even if that would give us more stable and predictable tests.
D: Yeah, absolutely, absolutely. I'm just thinking that we all know how hard it is when things suddenly break from one day to the next for an unrelated reason, and I'm wondering whether we could have something in the middle, which would be, say, having a pom.xml or something that pins every single version we're using in the ATH, but then enabling Dependabot to provide a controlled and constant flow of updates.
C: Yeah, I would have to give it more thought to see how this could possibly work, but I can definitely describe the current state: we just grab from the update center whatever is available for the given Jenkins version — either the latest, or the latest LTS that we test. So yeah, I guess that's something to think about.

C: I see some issues with how to actually implement that, partially because there's a huge number of plugins involved, and there are also a lot of plugins that are being installed but not necessarily tested, right? Like, I don't know, say the JDK tool installer.

C: Let's presume there are no tests for it, but it's one of these bundled or dependency plugins, so it would get in there anyway. So yeah, I see it as somewhat hard to implement — how to actually get all of this pinned — but that's definitely something that can be considered as a next step in the ATH's evolution. And just to finish my thought: the reason I shared this contributing guide is that there is this list of recognized use cases.

C: Oftentimes when we get into these discussions, people tend to forget what the other use cases for the entire ATH are, and try to make things behave sanely only in the use cases they care about. So this is sort of a complete list of all the recognized use cases, so that these kinds of changes can be discussed without forgetting the other stuff.
C: Right, so let me go through the doc to make sure we're not forgetting anything. What is the storage requirement? I wouldn't say it's demanding in that regard; it's more CPU- and memory-demanding, as you can imagine — it's the whole suite: Jenkins, the browsers, Selenium — everything has its footprint. So it's somewhat memory- and CPU-heavy, but I wouldn't say that much in terms of storage.
C: The thing is that, given the nature of UI testing, there is plenty of waiting. It's very hard to write such a test to be synchronous, and there's a lot of waiting when you interact through the UI in such a constrained environment: you click a link and you have to wait for the page to be in the right state. So there's a lot of waiting and, obviously, a lot of timeouts.

C: When something doesn't happen in time, it can be a symptom of a problem, but at the same time it can be just slowness or something like that. So we ended up never parallelizing this on a single machine, to avoid the tests interfering with each other, because there's quite a lot going on, and that kind of interference can easily happen and be very, very unpredictable.
C: What is really relevant for this is that there are a couple of modes for executing the ATH, and for specifying a browser, which are somewhat noteworthy, so I'll do a very short detour into what these are. The docs are all referenced from the README. There's "selecting a browser", which is sort of misleading, because it doesn't only specify which browser to use; there are a couple of, I would say, quite substantial differences — notably the Firefox and Chrome container options.

C: What these do is wrap the X frame buffer (Xvnc) and the Firefox instance inside a container, and if you run a number of tests in parallel, each test gets its own container where the UI is running, so it will hopefully be impossible for them to clash.
D: Yeah, plus trying to actually, for instance, run everything on the weeklies in advance and continuously, to avoid taking everything in the face at the last minute, when the LTS baseline has finally been chosen and every test gets executed at that moment — instead of having tried to fix them continuously, every week, every day, as things start to fail. It's hard, because it's actually a lot of work; this ecosystem is huge.
C: Right, yeah. The entire ATH approach that I've been leading for years — the goal was exactly that, but it turned out to require consistent attention and work just to keep these tests up and running, and historically we've been quite short on cycles; and, for reasons that are absolutely understandable, it wasn't very attractive for people to join in and help fix, analyze, and resolve a situation like this.

C: Okay, so let's have a look at the rest, so we can spend the time on the future of LTS testing. Who triages the failures? I guess we briefly covered that: in the upstream CI it's pretty much whoever comes around, so not necessarily the maintainers, but whoever contributes to this.
C: I must say that CloudBees has been doing the lion's share of the work there; quite a lot of people have contributed, fixed, and improved these tests over time. I also know that Ullrich Hafner has been working very hard on these on his side. So yeah, that's for the upstream — just keeping the test suite healthy. When it comes to the LTS certification, that's the mailing-list approach I described, every four weeks.

C: Speaking of failure investigation, the ATH is built in a way that it can usually be invoked on your own desktop — there is a way to run it locally. Again, the documentation is somewhat complex and elaborates on how to actually do this, but in the vast majority of cases we are able to rerun single tests locally and iterate on that, or even pause a test and investigate; a fair part of that is documented.
C: Other failure modes that are not so frequent are various changes in versioning and so on, where, for whatever reason, a test in the ATH runs with slightly different versions of plugins — maybe because of dependencies, optional dependencies, or, in the past, the bundled plugins.

C: Right, and speaking of the duration — well, it's been a while since I've actually seen this executed end to end, but last time I checked it was somewhere close to 12 hours. Since then we've done some reduction, which is a project that kick-started, I guess, a year or so ago, when we — and I think this is also in the contributing doc — sort of specified which tests do and do not belong in the acceptance test harness.
C: Let me see — yeah, "test contribution". The idea is that while we're interested in having reasonable coverage, we definitely cannot have tests for every plugin and all the use cases; even thinking about code coverage is close to insane, given how expensive and slow the test run is. So we sort of specified what is expected to be in and what is not, so that tests for plugins that don't have enough installations, or that simply occupy too much time...

C: ...we try to reduce or move away, which we have partially succeeded at. So again, the idea was to keep this constrained and prevent the ATH from bloating, and hopefully keep the runtime, the scope, and the amount of code in there reasonable and easier to maintain.

C: But I guess further reduction is still desirable. So that would be the full runtime; when it's running in parallel, it uses something like 10 splits, so they usually fit into three hours — I mean the slowest one; it's not just 12 hours divided by 10, it usually takes a lot longer than the ideal case, but this is roughly the time estimate.
C: Moving further: what types of issues does the ATH-based LTS certification detect? That's a good point. Oftentimes we run into the situation where it's an incompatibility — I would say in a substantial majority of the cases it's an incompatibility between the test suite and the plugin. When the UI changes, users might be surprised or used to something else, but it's not necessarily considered a regression from any reasonable standpoint, yet the tests are going to break anyway.

C: So the vast majority of the problems the LTS certification detects are these incompatibilities, or some flakes in the environment, which no amount of effort we were ever able to dedicate to this has resolved. I consider this inherent to Selenium and the entire UI-verification approach. So the problems it does detect are usually something like that.
A: So when you say incompatibility — I like that one, because there is a current failure in a plugin I maintain that is due to a library being loaded, prior to the plugin's own tests, in a version older than the one the plugin needs. It's a valid condition, it's a valid case, and the solution is to restart Jenkins after installing the plugin under test. So it surfaced those kinds of things for you as well? It's not just me that's benefited from the ATH.
D: Yeah, I was thinking along the same lines. I think from an LTS standpoint it's typically hard to use the so-called PCT, the Plugin Compatibility Tester, but in the case you're mentioning, Mark, it might be that it's easier to use and less flaky — less of a full-blown Selenium-related thing — because it's pure Maven. PCT is basically a tool that will look into your dependency tree, bump everything using the update center data so the latest versions are used, and run the Maven tests the same way.

D: So it's going to be more Maven-based than Selenium-based, and probably a bit more stable. But I was thinking — on the CloudBees side it's different, but on the Jenkins side we have 1,700 plugins, plus the CloudBees ones on top, so we can't really run 1,700 PCT runs of Maven tests on everything constantly. The right way would be to actually do this per plugin, but I don't know — it's a quality thing more than just an LTS thing, I guess.
C: Right, yeah — my visibility into the Plugin Compatibility Tester is practically nil, though. What actually can help is drawing a line somewhere, like we did for the LTS: we require that at least one percent of all Jenkins installations have a plugin installed for it to be considered something the ATH is run against, which I guess could eliminate a lot of these potential runs. It depends on where we put the bar, but yeah.
D: Some non-sentiment-based criteria, somehow — things like last release date plus number of installs, something like that, a mix — to pick what to test, like Pipeline, because basically if Pipelines are broken, Jenkins is effectively broken, so yeah, we probably want to detect that.
E: Oliver, a brief summary regarding statistics, just to get a sense across all the releases that you've been dealing with and leading: how often do you see regressions, and how often are those regressions just UI changes — just to have overall numbers that give us a sense of how it goes?
C: Well, when it comes to the LTSes themselves, I don't keep the numbers, but from the mailing list we should be able to find that out. My gut feeling is that in roughly one in three or four of the RC testing rounds we discover that something stopped working; it doesn't always require further backporting.

C: Sometimes it's just a plugin that is incompatible, or something that can be fixed on the plugin side, and we get to rush it in — merging it and releasing it. Oftentimes it's a glitch that just happened and people need to do something: reconfigure this, stop using that, or something like that, so it gets documented in the upgrade guide...

C: ...that we produce with every LTS — Mark has been doing a great job composing these lately — a short warning or guidance for people on what to do and what to change. So yeah, I would say one in three or one in four requires something. When it comes to the failed tests, as I said, the vast majority of the failures we observe are some kind of incompatibilities or flakes.

C: This is somewhat determined by how much time we preventively invest into maintaining the ATH. I mean, if somebody were looking at the failures every day and spending, say, an hour on actually fixing them — which is not that much of an allocation — we would discover these incompatibilities fairly soon, they would get fixed, and they wouldn't pollute the statistics anymore; but that is quite far from the reality of the past years.

C: So you see quite a lot of issues being reported, and the vast majority of these are really just cases where a plugin has diverged at some point and the ATH hasn't been adjusted yet.
C: Right, but still, the statistics are not very encouraging in this regard anyway, because when something breaks, something like nine times in ten it's just an incompatibility that is otherwise harmless, and one case in ten is some actual bug or actual problem.

C: Right, so thanks for the questions, that was really good — we covered a lot of things I hadn't even thought of covering, so it definitely helped me complete this, and thank you, Mark, for helping to capture all of this in the docs; good job there. I suggested a couple of things that should be done about the future of LTS testing, with the ATH or without it. One of them...

C: ...we briefly scratched: that would be simplification and reduction, to cut runtime, to cut resources, to cut the feedback time.
C: One of the reasons why we don't do this as part of PR testing is that people probably don't want to wait three hours for a pull request to get a green light, even if it were all stable and all that. So that's one possible way to go, though the improvement there is only linear: even if we sacrificed half of the tests, or made them run twice as fast, it would still only go from...

C: ...three hours to one and a half, and it would be a heck of an effort to actually get there. So it's sort of questionable — it's a linear improvement, and I guess that's all it is. Another thing is that what we do with the ATH is becoming less and less relevant to modern Jenkins.
C: Simply put, we are testing the UI. Pulling data from it, or observing it, is still as valid as before, but configuring Jenkins by clicking through the UI to put things in there — mostly configuration — is becoming a less and less popular use case. So that's another thing to consider: this is what we are encouraging users to do, and speaking for Red Hat...

C: ...this is what we are pushing very, very hard on: moving every internal user of Jenkins that way, so that it gets configured without touching anything. The complexity of putting things into Jenkins forms through page objects is just immense, and ideally it should be needed less and less every year. So one of the things is that I believe we're not putting enough emphasis on JCasC and Job DSL configuration; simply put, the UI is not used as much as before.
C: So that's something that I guess would benefit it and make this a lot more relevant. Also, it's only verifying the WAR file — I mean, there are a couple of other ways to run this, and it's configurable, but what we are focusing on, I guess, is only how the Jenkins WAR runs, not necessarily how it runs inside the Jenkins container, which — again, sorry, I don't have the numbers at hand — I guess is being used more and more often, and that's probably something we would like to verify as well, maybe eventually even switching to it almost entirely.
C: This is all Linux-based. There were fixes and improvements coming from James in the past, so I guess he might have more information, but essentially no investment has been made by us on Windows. We obviously work with the contributors and try to make sure that it keeps running, but...
C: I see. So, as I said, I've been running this project for — jeez, six, seven years I guess, quite a long time — and honestly it's been on the back burner for the vast majority of that time. So there are definitely things to improve, and definitely do not hesitate to reach out to me.
C: I would be more than happy to provide my guidance and experience about what can be done with this, and I'm currently in a state where I'm almost certain that something needs to be done with it, partially for the reasons I've already mentioned. I was glad that the PCT was mentioned, which I was looking at as a possible alternative.

C: Maybe I'm not going to say a replacement, but a possible thing to converge towards — to actually figure out how exactly we would like to certify the LTS. And also — and this was really more of a wild idea than anything else — composing something utterly different, based on containers, JCasC, Job DSL, and some other form of verification that would be free of Selenium, which would hopefully save us some of the problems, and it would address the fact that configuration through the UI is becoming less and less relevant.
C: Whether that would be verification through some Groovy scripts, which feels sort of white-boxy and not quite nice, or alternatively using an HTTP client — what is it called, the Jenkins client CLI — the Java-based HTTP client for Jenkins, to actually start builds, verify states, get logs, and things like that. So that's more of a wild idea about how to advance further and avoid a lot of the hassle that we currently have — that we have always had — with Selenium.
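To make that idea concrete, a Selenium-free check could drive a JCasC/Job DSL-provisioned instance purely over Jenkins' REST endpoints. A minimal sketch using the JDK's built-in HTTP client; the URL, job name, and credentials are placeholders, and a real check would also wait for the queued build to finish (and supply a CSRF crumb if not authenticating with an API token):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class HttpSmokeCheck {
        public static void main(String[] args) throws Exception {
            String base = "http://localhost:8080";                 // Jenkins under test (placeholder)
            String auth = "Basic " + Base64.getEncoder()
                    .encodeToString("admin:api-token".getBytes()); // placeholder credentials

            HttpClient client = HttpClient.newHttpClient();

            // Trigger a build of a job that JCasC / Job DSL already provisioned.
            HttpRequest trigger = HttpRequest.newBuilder(URI.create(base + "/job/smoke/build"))
                    .header("Authorization", auth)
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            int queued = client.send(trigger, HttpResponse.BodyHandlers.discarding()).statusCode();
            System.out.println("build trigger returned HTTP " + queued);

            // Inspect the last build's result via the JSON API instead of scraping the UI.
            HttpRequest status = HttpRequest.newBuilder(
                            URI.create(base + "/job/smoke/lastBuild/api/json?tree=result"))
                    .header("Authorization", auth)
                    .GET()
                    .build();
            String body = client.send(status, HttpResponse.BodyHandlers.ofString()).body();
            System.out.println("last build: " + body);
        }
    }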
C: Yeah, folks, definitely thanks a lot — you brought up quite a lot of interesting questions. The thing is, as I said, I would like somebody else to take this over; for now I guess it would be the set of contributors we currently have. If James would like to step in and become the main maintainer, I would be more than happy. It's not that I'm leaving for good — I should still be around, can still help with things and provide some guidance and some historical context.
A: Great. Well, just to share: Gareth Evans, who is on the call, has agreed to take on a piece of this as part of his responsibility in the Jenkins community team — the community team at CloudBees. Gareth has the unfortunate privilege of reporting to me, and therefore he gets to listen as I try to guide and steer, so you'll probably be pinged by Gareth or by me as we look at how this should evolve.
A: Anything else, Oliver?

C: Not for me, guys.

A: All right, thanks everyone. I will post the recording after it's processed — processing usually takes on the order of 30 minutes to an hour. I'll also place a link to the recording and to these notes in a comment to the Jenkins developers list, so we have them archived. Thanks very much, everybody. Thank you.