From YouTube: OMR Architecture Meeting 20200507
Description
Agenda:
* Improvements to OMR CI pipeline (#5067) [ @janvrany ]
B
So, what am I going to talk about? You know, all these problems: it's all based on certain assumptions, and if the assumptions don't hold (and I'm not saying they should hold, or that they're the correct thing to do), I'm just saying those are the assumptions I started from when I began thinking about how to improve stuff.
B
So the first thing is: I assume that we want the Jenkins pipelines to be usable for everyone. Obviously, nothing is ever usable for everyone, but at least make them usable for me, as someone who is always outside the main team and maybe wants to run the CI for whatever reason.
B
Another thing I was thinking and assuming is that we want the pipeline to be a bit more modular, so we can build more complex pipelines. Right now we just build to test pull requests, that's one kind of build, so for every pull request we build, and we can also build master or whatever branch. But I can think about more.
B
Pipelines like: what if we want a pipeline that runs not on every commit, but from time to time, every week maybe, I don't know, it doesn't matter. A pipeline that, for example, runs the address sanitizer and the undefined behavior sanitizer on all platforms and collects the results. What if we want to test more? Because, for example, right now on each platform, or target I should say, we are testing pretty much only one configuration, but there are a lot of other things you can configure.
B
Compressed pointers come to mind, and maybe there are others, so maybe we want to test that this compile-time configurability still works. For example, you want to test on RISC-V or ARM that both compressed and uncompressed pointers work, and things like this. Maybe we want that, maybe we don't; it's up for discussion. The other thing that was initially a bit confusing to me is that the builds, at least as of today, differ quite a lot: there are different options specified, and some builds are built without some of the tools.
B
I assume that, you know, this is the situation as it is now for technical reasons, but in the long run we want to unify those and have the build on all targets be pretty much the same, using pretty much the same configuration. And even if that's not possible, we want (or I want) it to be obvious where a build differs from what is considered the main or standard build. Anyway, I don't know whether you agree that this is something desirable or not, but please just...
B
Then the more fundamental problem that I see is that the build, I mean the whole pipeline, needs a target to be specified as a parameter in Jenkins. That works, but I found the setup a bit awkward: you define a choice parameter with only one choice, and then you kind of never really change the value of the parameter.
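The awkward setup described above can be sketched roughly like this (a hypothetical minimal Jenkinsfile, not the actual OMR job definition; the parameter name and value are invented):

```groovy
// Hypothetical sketch of the pattern described above: a choice
// parameter offering exactly one choice. When Jenkins triggers the
// build automatically, it uses the first (and only) choice, so the
// parameter is effectively a constant.
pipeline {
    agent any
    parameters {
        choice(name: 'TARGET',
               choices: ['linux_x86-64'],   // only one choice; never changed
               description: 'Build target (effectively fixed)')
    }
    stages {
        stage('Build') {
            steps {
                echo "Building for ${params.TARGET}"
            }
        }
    }
}
```

Declarative syntax is used here only for brevity; the real jobs may be written differently.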
B
You kind of depend on the behavior of Jenkins: when the build is triggered automatically, it chooses the first choice. That is actually why there is only one choice, at least as I understand the design. So it's a bit awkward to set up; that's the first problem. The second problem: it doesn't really work when you use a multibranch pipeline on Jenkins, which is something that, you know, reads all the branches of OMR and builds all the branches.
B
There are possibly filters, but essentially it builds all the branches, which is what I use on my Jenkins. So I have a master branch pipeline, one for each pull request, and then it builds everything for me, you know, on my setup. But with build parameters you cannot do it; you cannot simply have this multibranch behavior.
B
What I'm using this for is that in my own fork I have, let's say, a state-of-the-art RISC-V branch, which contains all my code, including the code that is not yet merged. Now, this is not really that important, but it was more important initially, you know, to test this on some other projects.
B
I don't really understand why, but that's a different story. So this is essentially the RISC-V stuff with a few other changes, and I want to test that. And then there is, you know, a branch for every pull request, and I test all these branches, mainly because I work on RISC-V and there is no RISC-V slave, so I push it there, then see whether my RISC-V slave builds it and whether all tests pass, and if they do and everything is okay-ish, then I open the pull request.
A
Yeah, that's the difference: the workflow that you're describing is different than what has been implemented, right? Because what's there right now is really: you create a PR, and then some committer can come along and launch some extra testing on it, or automatic testing happens as soon as you check it in. You're asking more about how a developer can perform testing on sort of work-in-progress commits prior to creating pull requests? Yeah.
A
I'd say it's different. I mean, as a contributor, if you don't have access to the breadth of hardware that others do... Obviously you want to do testing of your work before committing it or turning it around. Anybody else in the community that wants to work on, let's say, RISC-V doesn't have access to RISC-V hardware either, so they may want to have some means of doing some pre-testing before creating a pull request. So I don't think it's that unreasonable a workflow.
B
Well, if I thought it was unreasonable, I wouldn't even talk about it, right? And obviously I see everything from my perspective, and that's probably different from the perspective of most of you, so I agree. But anyway, that workflow where, in order to get stuff tested, some OMR committer has to step in and, you know, poke Jenkins, this doesn't really work for me. Maybe it's just because, as I said, there is no RISC-V...
B
...slave. But still, I prefer the PRs to be tested automatically before actually opening them and asking people to review, because I make a lot of mistakes, and this gives me a kind of assurance, to a certain degree, that it's not only my machine on which it works; there is some other setup, which is more under control. I mean, it's not a development machine, it's my CI.
C
Being able to reuse some of the CI infrastructure we currently have... it harks back, for me, to when we were primarily using Travis CI for testing. I know a few developers, including myself, also enabled Travis CI in our forks of OMR, so that when we pushed a change to our fork it would run the same tests on the fork, without us having to open a PR directly. So we could keep a good speed of development and testing for ourselves because, first of all, it would use...
C
...Travis's shared resources, as opposed to those of Eclipse OMR, and it also let a user incrementally test the changes in a pull request. That's not something we can do as a routine with Jenkins, and if I understood correctly, what Jan is proposing could take us to something like that.
B
Yeah, essentially that's what I'm proposing. And you know, Travis and the other hosted services, that's all great, and it's really easy to set up, much easier than setting up your own Jenkins instance. The problem, at least for me, is that there is no hope that in the foreseeable future there will be RISC-V there. Same for PowerPC, 64-bit ARM, things like this. So it essentially works only for x86.
D
As background, as a comparison: at IBM we've always had internal build farms, and we've always had the notion of personal builds, which use the same infrastructure, except that you have, say, a hundred developers all launching the same build and basically supplying their own fork and branch as parameters to get the build they want. It's not entirely different from what you're doing; the difference is that you have the whole build farm to yourself, and you have it set up so that it's testing every single branch automatically.
B
Well, the thing is, people at IBM obviously have access to the IBM CI infrastructure, so they can use these personal builds; all the others cannot. The other use case I was thinking about is that, you know, my client is actually working on a RISC-V chip, and he made some measurements, and I don't really know the details, but as far as I know he proposes some...
A
I mean, I think there are some good ideas here, and I think some fairly noble intentions. There are some complications, I think, that are going to make this difficult to implement in practice. I'm not saying they're impossible, but it's difficult. One of which is: if there is some scheme in place to use the hardware that's already in the farm that we have for Eclipse OMR...
B
No, no, no, I think there is a great misunderstanding. What I am proposing is not that the IBM build farm or the Eclipse build farm or any build farm will be open for every developer to build their own stuff. This is for when you have your own: you are, you know, taking on the headache of maintaining your own farm, for example because you are a company that depends on this.
B
It's twofold. First of all, the structure is quite big, and each build that is described there is kind of self-contained. So it's a complete set of all the parameters that control various different things, and this is difficult to read, and there is a lot of redundancy, and that is actually why it's so difficult to read. Whenever I looked into the structure, what I usually wanted to see is what's happening and also how the builds differ for, I don't know...
B
Maybe we can do better. And also, if we just kind of describe what the main, standard build is and then describe the deviations from it, then it would be clear where they differ and what has to be worked on in case we want to make them as similar as possible. So those are the issues with it, and...
B
So that, but that's the minor part. The other thing was to get rid of the choice parameter. I did that by a simple trick: I essentially have, you know, a pipeline script for each target, a different file, like it was before, but instead of having, you know, the whole full pipeline inside that file, and therefore having a lot of duplication and the build descriptions scattered over multiple files...
B
Okay, so that's the trick for how I got rid of the build parameter. Now, if you look at the latest version of the OMR Groovy file from my pull request, you will see that it also works with the current setup, you know, with the build parameter. So if you use it, you can still use that file, and that's because at the end of the file there is...
B
I tried to do this by defining a DSL for specifying these builds and then using, you know, kind of inheritance, or classes, or I don't know what to call it, to actually define what the default is; then each build either takes the default or takes the default and tweaks a few things. So I am not yet done; I mean, there is more I can factor out already, but...
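A rough illustration of the defaults-plus-tweaks idea being described (the names and options here are invented for illustration, not taken from the actual pull request):

```groovy
// Hypothetical sketch: a default build specification that each
// target either takes as-is or copies and overrides.
def defaultBuild = [
    buildSystem : 'cmake',
    cmakeArgs   : '-DOMR_COMPILER=ON',
    runTests    : true,          // tests run and are collected by default
]

// Each target states only how it deviates from the default, which
// also gives an obvious place to document *why* it deviates.
// In Groovy, `map1 + map2` yields a new map with map2's entries
// overriding map1's.
def builds = [
    'linux_x86-64'  : defaultBuild,
    'linux_x86'     : defaultBuild + [
        // 32-bit build: flags differ from the standard build
        cmakeArgs : '-DOMR_COMPILER=ON -DOMR_ENV_DATA32=ON',
    ],
    'linux_riscv64' : defaultBuild + [
        // cross-compile only: no tests on the build machine
        runTests  : false,
    ],
]
```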
B
...a more detailed specification of, you know, what the parameters to CMake are, what the build command is. There is nothing like "collect tests", because, as I said, by default we want to run the tests and collect them. Obviously, this can also be simplified more. At the top level it's still an array; I didn't get that far in removing that, but you see that the builds are replaced by a new CMake build, which is the default, unless you want to do something different.
B
So, for me, this is easier to read, in the sense that it's clear how the 32-bit Intel build differs from the standard build, and it also offers an obvious place to document why we are doing something different, if it needs to be documented. For example, I was puzzled why we are using Ninja, and I thought this was some mistake for historical reasons, or from the people who wrote the pipelines when they were in individual files, and only when I asked did I find out that this is actually on purpose.
B
All right, so this is what I have right now. I asked Dario today to try it on the Eclipse infrastructure to see whether it works there, because this very code obviously works on my CI. But unfortunately it doesn't work on the Eclipse CI, because there are some issues with the sandbox, so this is something that has to be fixed or worked around one way or another. I have to...
B
A couple of years ago, when I started building stuff on my CI and started writing scripts for different projects, the sandbox cost me so much pain and complicated things so much that I just approved the particular methods to be invoked within it and forgot about it. So that is why mine runs but the Eclipse one doesn't, and I understand that maybe the administrators wouldn't be so happy to approve it there, so I need to find out how to do it in a sandbox-friendly way. Maybe not, maybe... well.
B
Right, we can discuss that, if we decide we should go this way, or I can try to find a workaround; you know, that's a technical detail right now. But unfortunately, I was kind of hoping I could show that it works with the current setup without any changes, but it doesn't, which I didn't imagine, I have to say.
B
So really, you should not approve it yet. I don't know; I am not an expert in that area, and I don't really understand why it is so tricky. But, as I said, I have a few ideas for how to get around it. If the workaround works, then we can discuss it, and if the workaround doesn't work, then yeah, maybe we can just allow it anyway. That's one of the things on the to-do list.
B
...specifying that. Then there is an interesting thing: at least in the compiler code, we use this SKIP_ON for stuff that doesn't really work on the platform, because it's not implemented, or it's known buggy, or for whatever reason. We use that directly inside the Google Test code, whereas I think on Windows and somewhere else...
C
...filters are actually being used for debugging, locally, by developers: when they're running the tests locally, if you find a failure, the Google Test filters will allow you to run just that one failing test. So that's really what filters are for, and we should not be using them to specify which tests are not run at all.
B
Yeah, I think that's, you know, a good idea; as you said, it's a slightly different story. My point was that now there are actually three ways of skipping tests, because I also noticed, I think in the JitBuilder component, that some tests are actually not even compiled, depending on the platform, in the CMake files. That cost me a few surprises. But I mean, I understand the reasons; it would just be nice to have only one mechanism.
B
But maybe I can just have it there and, you know, push it to some other issue or something like that; that's all open to discussion. The other thing is, it would be nice to refactor the ARM and AArch64 jobs, or at least provide the CMake versions of them, because, as far as I can see, now the ARM and AArch64 jobs only cross-compile, so we are not running tests. So I was thinking more about following what I did for RISC-V.
B
Then we would have a common infrastructure for how to do cross-compilation on a CI, which also requires, you know, documenting the whole setup, because, for example, for the RISC-V CMake cross-compile build you need to specify the sysroot, you know, where all the libraries, the RISC-V libraries, are, and things like this. And obviously the sysroot is a path on the file system, but in order to make it more usable for, quote-unquote, everyone, the sysroot should not be hard-coded inside...
B
...inside the pipeline itself; it should be configuration at the Jenkins instance level, which is something I tried to do in the RISC-V build. The sysroot is not hard-coded there; it's taken from an environment variable, and the environment variable is then defined at the Jenkins level, which allows me to actually move it freely around.
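A minimal sketch of that arrangement, with illustrative names (RISCV_SYSROOT, the agent label, and the toolchain-file path are assumptions, not the actual configuration):

```groovy
// Hypothetical sketch: the pipeline reads the sysroot location from
// an environment variable defined at the Jenkins level (for example
// under "Manage Jenkins > Configure System > Global properties", or
// per node), instead of hard-coding a file-system path here.
pipeline {
    agent { label 'riscv-cross' }
    stages {
        stage('Cross-compile') {
            steps {
                // Fail early with a clear message if the instance
                // is not configured.
                sh '''
                    : "${RISCV_SYSROOT:?RISCV_SYSROOT must be set on the Jenkins instance}"
                    cmake -DCMAKE_SYSROOT="$RISCV_SYSROOT" \
                          -DCMAKE_TOOLCHAIN_FILE=cmake/toolchains/riscv64.cmake \
                          ..
                '''
            }
        }
    }
}
```

Moving the path into instance-level configuration is what makes the same script usable on anyone's Jenkins, whatever their file-system layout.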
B
So this is configuration under the Jenkins instance. Sorry, that brings me to the last point: what is not yet done is documenting how to actually use these scripts, you know, what the setup is, something like what Adam did for the current OMR Groovy pipeline, you know, how to define the parameters that ought to be there. So this documentation should also cover, you know, what the variables are that have to be defined in order to find the sysroot for cross-compilation and things like this. Yeah, that's pretty much it.
A
The one thing I guess I wanted to bring up in front of the whole group here is to see whether or not there are any sort of fundamental objections to the path that you're on here and to making the kinds of changes that you are proposing. I mean, personally, I think some of the generalization ideas that you have are all good to have. But I'll leave it open to anybody, maybe Adam in particular: if you've got any objections to what he's actually trying to accomplish here.
B
...the point is whether, you know, it's more work, not only for me but also for you, because then I would bother you with, you know, "do this, do that, poke-test this". So the question is whether it's worth it or not. I think it is, otherwise I wouldn't bring it up, but there are others. If we are all on the same wavelength, then yeah, let's slowly continue and try to polish what is really a rough draft.
E
Notwithstanding that the experimental pull request that you put together doesn't in fact work yet, I was scanning through the pull request and the changes that you made, and I think it does make the files a little bit more readable and easier to sort of understand how it all fits together. So, you know, I'll defer to Adam here, because he works with the files on a more regular basis and understands the needs for all the various pieces.
B
Just one comment: I mean, I did the commits more, let's say, for Adam, to understand the individual steps. So maybe, you know, some of the commits should be folded, some of the commits should be polished in one way or another. They were structured in a way to demonstrate the development of the ideas, so whether it will be, you know, that bunch of commits or just two commits, that's something we'll see as we polish it, I would say.
A
Okay, a fewer number of commits is generally easier than asking you later to break it up into many commits. So I guess, at this point, I don't think we're asking you to do any more work to break it up, but there's potentially the need to squash some of those commits later, so maybe that'll come through on the review.