From YouTube: OMR Architecture Meeting 20211125
Description
Agenda:
* Vector IL opcode redesign
A: Welcome, everyone, to the November 25th edition of the OMR Architecture Meeting. Today we have one topic: Gita Koblents will be talking about re-architecting the vector IL opcodes. So Gita, can you take it away?
B: Sure. So today I'll present a redesign of the vector IL. Of course, I have to mention first that we already had a presentation on this a few years ago. This is a new one based on the previous proposal, but it changed so much that I essentially started from scratch; only one slide remains from the previous presentation, so it's basically a completely new redesign. First I'll talk about motivation, then the proposal and examples of the proposal, and a summary. And of course, at the end, feel free to ask questions, and hopefully we'll have a discussion.

So the motivation is like the previous presentation, with some amendments. Since then, we are considering more vector lengths, and we would like to support more architectures: Intel, Power, and Arm, as before. Of course it will help others too; it can be extended.
B: There are several species, essentially vector sizes, and there's one called "preferred species", which can have any length supported on a platform. So of course we should be able to support different sizes in one compilation, and the current approach is 128-bit only: we have some support for auto-SIMD, so a 128-bit length is implied, only one length, and the data types describe the vector layout precisely, since we don't have many.
B: Essentially, we listed all possible combinations of scalar types that fit into 128 bits, and the opcodes are what we call "typeless", like vadd or vconst: the data type can be derived from the children, from the symbol reference, or it can actually be cached on the node, for example for the const. And there are, of course, limitations. Only one length is supported right now; a new enum entry for every possible combination has to be added manually, and of course with every new length the set grows considerably; and the current typeless implementation actually has some issues. Maybe they're fixable, maybe not, but there are definitely issues with the current implementation, namely when we derive the type.
B: First, let's cover our existing data structures. We have the Node class, which refers to the ILOpCode class, and in turn the ILOpCode class has just one member: the enum ILOpCodes. So there's a one-to-one correspondence; you can go from the enum to the class and from the class to the enum with conversion operators, so they're almost identical, and they're interchangeable due to all kinds of operators being defined. The enum is the one that has around 600 enumerators. We don't explicitly encode anything; each one is just listed and identified by number. But conceptually, from each enum value we can derive the result type, the operation, and the source type, in the most general case. For example, for i2f the result type is f, for float.
B: E is the set of all vector element types, and so far this is just all the known scalar types that we already have, from int8 to double. There may actually be others, but these are the ones we already have. S is the set of vector lengths, for example 64 and 128; I will define the full set in a second. And then there's a set of operations, in general terms: not opcodes, but operations like load, add, subtract vectors, and so on. So again, in the most general case, the data types...
B: The set of all possible data types is the product of E and S, elements and sizes. And for the set of all possible opcodes, as I said, three things are important: result type, source type, and operation, so it's the product of three sets. "Source type" is maybe not the best name, but for a conversion, say, it's clear what the source and the result are. For, say, a vector load, it's not the first child in the tree; it's the vector type involved.
B: An opcode might return the same type or a different one; a reduction, for example, might be from a different type. And then we would like to impose some constraints. Basically, for example, the optimizer should be agnostic to the number of vector elements and should not even ask for it, except maybe when creating a new temporary; generally it doesn't need to know any specific length, and it should work for any length that we currently support or will support in the future...
B: ...without any modifications, I would say. Also, just due to the sheer number of all these combinations, it needs to be structured. First of all, we should not ask anywhere, I think, for a specific combination, a specific member of this set, but group them: maybe by operation, by type, by length; it depends on the platform. But we definitely need to structure it somehow in order to maintain the code.
B: So, as I said, there are two constraints, and these are the sets that we need to represent in an opcode. There are really only three ways to represent such sets in a program, in any program, but in OMR specifically we also, of course, need to consider the existing code in OMR...
B: ...the existing implementation. First, we can represent them as classes. For example, for the data type D we can reuse the DataType class that we have; it already has essentially the element type, and we'd add the size to it as another instance variable. For the opcode we already have a class; we'd just split it into three variables. Right now we just have the opcode, but we would have the two types and the operation. This is, I would say, the theoretically proper approach, object-oriented and so on, but in practice it implies a lot of changes to OMR and the downstream projects.
B: Another approach is to essentially hash these values into one number: we can represent each tuple or triple as one unique integral value. There is some mapping function from the triple to an integer, but then we also need a function that, given that unique number, can get each member back out of it.
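The mapping and its inverse can be sketched roughly like this; the field widths, names, and shift amounts here are illustrative assumptions for the explanation, not OMR's actual encoding:

```cpp
#include <cstdint>

// Pack the (operation, result type, source type) triple into one
// unique integral value. Layout here is an assumption: two bytes of
// operation, one byte each for the two types.
inline uint32_t encodeTriple(uint32_t operation, uint8_t resultType, uint8_t sourceType) {
    return (operation << 16) | (uint32_t(resultType) << 8) | uint32_t(sourceType);
}

// The inverse mapping: recover each member of the triple from the number.
inline uint32_t decodeOperation(uint32_t v)  { return v >> 16; }
inline uint8_t  decodeResultType(uint32_t v) { return uint8_t((v >> 8) & 0xFF); }
inline uint8_t  decodeSourceType(uint32_t v) { return uint8_t(v & 0xFF); }
```

Since each member occupies its own group of bits, encoding followed by decoding always round-trips.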
B: So yeah, this function is not hard to create; you basically encode each member into a group of bits. One disadvantage of this approach, which we discussed, is debugging: it's not easy to understand the meaning of that integral value, what opcode exactly it refers to. And of course we need to make it compatible with the existing enums: it has to be the same size as the existing enum.
B: I will show examples. So those are approaches A and B. For approach C, we can just list all the set members in the form of enums, as we do right now, in a dense form, and they can be referred to in the source and seen in a debugger. But the part about being directly referred to in the source definitely violates constraints one and two that I mentioned before. The size, of course, will be smaller...
B: ...though I would say at least a thousand new opcodes would need to be added. We can also consider automatically generating the enums and encoding the triple into the enum value, say some enum like vload equals some encoded value, but then it sort of becomes similar to B; I would consider it part of B, maybe with some modifications. So C becomes close to B in some ways.
B: So we had a lot of discussions with my colleagues, and this proposal is B; in this presentation I'm proposing B. All the following slides are about proposal B.
B: So basically: the classes are nice, but they mean a lot of changes; encoding is easier. It sort of treats vector lengths almost like a variable, a member of some class, because we're going to have many and we treat them as variables, essentially like a bit field. And C can be modified, but to distinguish it: C is essentially the current approach of listing all the enums.
E: So if there are two sources for an operation, for example, I guess, a mask for a masked operation, how would this represent those operations?
B: Good question. Say, for example, a mask operation; it's actually a very good question. A source would be a vector, which in itself would be a tuple. So, for example, a vector of four ints.
B: Okay, more in these terms: for example, a vector of length 128 consisting of ints; that will be the source type. The mask will be of mask type. It was going to be a vector, but we might need to have a separate type for masks. For now, let's assume a mask is like a vector as well.
B: It will essentially be a vector of a different type. It's a very good example. The result will also be four elements, but they will be of type... actually, one second, it depends; we need to look at the library. It will return also four elements, but they will be sort of boolean.
B: In the library, a mask of four will be four bytes, I think. But in practice, in the hardware, we don't have that; our mask operations return the result in a vector register.
B: Yeah, very good question. So, any questions about this? Basically, if you don't consider masks, and I would say even with masks, it's a result type, a source type, and an operation. For a reduction, a typical example: the source would be, I don't know, a floating-point vector, the operation would be vector reduction, and the result type would be floating point.
B
So
yeah
yeah,
even
in
the
pan
of
any
proposal,
we
need
to
consider
sv
right
a
c.
It's
different,
a
bit
from
other
architectures.
It
introduces
something
called
scalable
vectors
and
there
are
a
couple
rules,
but
in
most
general
case
you
can
have
vectors
of
128
bits
up
to
2k
and
in
128
bit
increments.
B: So you can have at least 16 lengths there, but only one length at a time. The length can be configured by the implementation, I think maybe by a call, and it can be queried at some point. It's clear that static compilers basically don't know what it is, so they have to generate essentially dynamic...
B: ...code, like a dynamic stack, and even temporaries of unknown type. But for a just-in-time compiler, it can be queried at runtime, during compilation. I couldn't figure out exactly; theoretically, I think the configured length can even be changed while the process is running, at certain points, but I think you just call some functions.
B: The proposal is to just take the union of all the architectures that we want to support and all the known languages and libraries that we would currently at least like to support. So: Z supports 128 bits; Intel, three lengths; and Arm, the three I will present...
B: ...plus one variable length, because we don't even know what it is before we start running, only at runtime. In addition to those three lengths, there is also a 64-bit length. Even though we're probably not going to use it, since we don't have 64-bit vector registers, I think it's good to represent it as a separate type, because maybe half of a 128-bit vector register can be used.
B: We can use just one byte to represent all the combinations: say, the upper four bits for the vector length (as I said, we need three, but maybe one more is for the future) and the lower four bits for the existing scalar types, the data types that we have. The last one of those is NumTypes, so we can use a static_assert; for NumTypes itself, actually, that might be incorrect. Basically, we need a static_assert that the existing scalar types fit into four bits, and if in the future we need more space, then I think the solution is just to make the DataTypes enum bigger, if we want to encode more. And then getDataType will return this enum.
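A minimal sketch of this one-byte layout, assuming placeholder element types and length indices (these are not OMR's actual TR::DataTypes values):

```cpp
#include <cstdint>
#include <cassert>

// Stand-in for the existing scalar types; NumTypes is the sentinel.
enum DataTypes : uint8_t { Int8, Int16, Int32, Int64, Float, Double, NumTypes };

// The static_assert discussed: existing scalar types must fit into the
// lower four bits of the byte.
static_assert(NumTypes <= 16, "scalar types must fit into four bits");

// Upper four bits: vector length index (0 meaning scalar here);
// lower four bits: element type.
inline uint8_t makeVectorType(uint8_t lengthIndex, DataTypes elementType) {
    assert(lengthIndex > 0 && lengthIndex < 16);
    return uint8_t((lengthIndex << 4) | uint8_t(elementType));
}

inline uint8_t   vectorLengthIndex(uint8_t t) { return t >> 4; }
inline DataTypes vectorElementType(uint8_t t) { return DataTypes(t & 0x0F); }
```

If more scalar types ever appear, the fix is exactly what the talk says: widen the underlying type and the assert catches the overflow at compile time.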
B: And what about supported vector lengths and types? I think that's easy to control, because there will be a limited number of places where we actually create vector types: the expansion for the vector library, or auto-SIMD. There we'll just be asking the codegen which combinations are supported, which we already do in some form. And then, when we convert one type to another in some transformation, again, I think it's similar to the other types.
B: So that's the data type. Basically, the bottom line is that a combination of element type and vector length can be encoded in one byte. Now, what about the opcode? The opcode right now is stored in a variable, the enum, and its size is currently four bytes, so we can encode all the opcodes into four bytes, because the data type is one byte...
B
Two
bytes
should
be
enough
because
we
have
right
now
around
600.
We
don't
plan
to
add
more
and
then
we
need,
as
I
said,
two
data
types.
So
it's
like
one
for
the
result,
one
for
source
and
two
bytes
of
operations.
So
four
and
and
again,
if
it's
at
some
point
not
enough,
we
just
need
to
make
that
enum
bigger
and
again.
B: We need to assert that the existing operations fit into two bytes, and it's also very important to static_assert, of course, that the whole enum fits, because we essentially need to fit all the encoded values there. So we need to assert that it fits, and maybe even specify the size of the enum explicitly.
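The asserts and the explicit enum size could look something like the following; all enum names and counts here are placeholders standing in for OMR's real ones:

```cpp
#include <cstdint>
#include <type_traits>

// Stand-in for the abstract vector operations enum.
enum VectorOperations : uint16_t { vload, vstore, vadd, vsub, NumVectorOperations };

// The opcode enum with its size specified explicitly: four bytes, the
// same size it has today, so encoded values remain compatible.
enum ILOpCodes : uint32_t { /* ...existing ~600 opcodes... */ LastScalarOp = 600 };

// The compile-time checks discussed: the operation set must fit into
// its two bytes, and the enum must stay four bytes wide.
static_assert(NumVectorOperations <= 0xFFFF, "operations must fit into two bytes");
static_assert(sizeof(ILOpCodes) == 4, "opcode enum must stay four bytes");
static_assert(std::is_same<std::underlying_type<ILOpCodes>::type, uint32_t>::value,
              "underlying type is specified explicitly");
```

With a fixed underlying type, any encoded 32-bit value is a valid value of the enum type, which is what lets the hidden vector opcodes coexist with the listed scalar ones.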
B: I will show an example. Similar to the data types, we can continue getting the value of a vector opcode with the getOpCodeValue method we already have, exactly the same way as for the other opcodes, and use them: compare them, set them on the node, and so on. We just won't be able to refer to them explicitly in the source; they're sort of hidden enums. And vector opcodes can be...
B: ...queried. You can ask whether an opcode is a vector opcode, or get the result type, the source type, and the operation; in this case the operation will be a separate enum, and I will show an example. It will be used to create new vector opcodes, and also in the code, maybe for grouping them by operation, by add, by reduction, and so on.
B
I
will
show
example
and
yeah
and
of
course,
as
you
know
like,
essentially
this
value
is
used
for
referring
to
different
tables
and
dispatching
all
kind
of
handlers
and
that
code
will
have
to
be
changed
like,
but,
hopefully,
somehow
encapsulated
like
say
for
vector
of
code.
U,
in
order
to
get
handler
for
vector
of
coding
in
special
case
and
regarding
rt,
I
think
again,
it's
similar
to
other
code,
and
you
know
instructions
we'll
need
to
invalidate
method
if
compiled
in
vector
compiled
in
actual
vector
length
don't
match.
B
B
B
B
B: ...a unique value, essentially, in that context. So here we now have those old vector types, and we'll remove them; for consistency, only the unknown-vector type will remain, and then there will essentially be invisible data types, which will be generated on the fly. For opcodes it's very similar: we'll remove all the vector opcodes that we have here right now...
B
Instead,
we'll
have
new
enum
vector
operations.
I
would
call
it
not
opcos.
Of
course
it's
something
not
code.
It's
already
like
it's
a
triple
red
operation.
It's
just
a
sort
of
abstract
operation,
so
probably
better
to
call
it
vector
operations
and
that
will
be
used
to
create
vector,
op
codes
right
and
then
create
all
these
separate
parts
of
it
results.
Source
operation.
A: That query for isVector: there can't be a LastOMROp that you compare against, right? That would have to be the number of opcodes from OMR plus any other downstream project. So, for example, if OpenJ9 were to define its own opcodes, which it does, it's after that.
B: All right, we can always come back to that. Examples: yeah, so, for example, as I was saying, for data types we can use getDataType as before. Even if it's a vector, or it's not a vector, it will return the correct value.
B: You know, the results will be equivalent: we can still compare different opcodes, sorry, data types, here. For example, if it's a vector, then it's not going to match; if it's an int, it will match.
B
If
you
want
some
extra
code
for
vectors,
then
we'll
just
add
some
extra
code
like,
but
usually
in
most
of
the
cases
like
you
know,
say
if
you
know
as
a
first
step,
we
might
not
even
need
to
add
such
special
cases
if
we
want
to
optimize
something
right
and
of
course,
if
we
want
to
generate
code,
evaluators
we'll
need
to
add
this
type
of
statements,
first
check
if
it's
vector
and
then
get
the
type.
B: In switch statements it's similar: we can still use node->getDataType() even if it's a vector. It all works the same way; we just add extra code for vectors.
B: So those were the data type examples. Node creation, how to create nodes: before, we would pass an explicit data type, but if you want to create a vector, you create the type when you need it. In this case I specify not the vector length but the number of elements; we can decide which API is more intuitive.
B: And if the type is a vector, we call createVectorOpCode. This will definitely be a separate function that creates the opcode; that's the whole idea, it has to be encapsulated. The encoding has to be in a separate function, createVectorOpCode. In the case of a load, as I mentioned, the result and source types are the same, and the vector operation is vload, or the indirect load. So that's the node creation example, and then using the opcode is similar to using the data types.
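The creation path described here might look roughly like this; this is a self-contained mock, and every name (OpCode, createVectorOpCode, createVectorLoad, the enum values) is a stand-in, not the real OMR API:

```cpp
#include <cstdint>

// Abstract vector operations; values are illustrative.
enum VectorOperation : uint16_t { vload = 1, vadd = 2 };

// A mock opcode wrapping the encoded value, with the query methods the
// talk mentions: operation, result type, source type.
struct OpCode {
    uint32_t value;
    VectorOperation operation() const { return VectorOperation(value >> 16); }
    uint8_t resultType() const { return uint8_t((value >> 8) & 0xFF); }
    uint8_t sourceType() const { return uint8_t(value & 0xFF); }
};

// The encoding lives in exactly one place; callers only ever go
// through createVectorOpCode.
inline OpCode createVectorOpCode(VectorOperation op, uint8_t resultType, uint8_t sourceType) {
    return OpCode{(uint32_t(op) << 16) | (uint32_t(resultType) << 8) | sourceType};
}

// For a load, result and source type are the same, as stated above.
inline OpCode createVectorLoad(uint8_t vectorType) {
    return createVectorOpCode(vload, vectorType, vectorType);
}
```

The point of the wrapper is the one made in the talk: the rest of the compiler compares and stores opcodes without ever seeing how they are encoded.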
B: Maybe we'll combine these two checks together, actually; we're probably going to have something like isVectorOpCode, and then check the element type and do something special for vectors. It's similar for switches: we don't need to rewrite a switch, we essentially just add a default case. So those are the code examples. And in the trace file...
B
It
will
look
like
this.
It's
a
suggestion
right
like
to
print
real
print
if,
if
source
and
the
result
are
the
same,
just
print
the
source,
if
it's
conversion,
it's
like
this
will
be
sourced.
This
will
be
result.
B
What
I
sort
of
like
about
it
is
that
it's
a
conceptual
it's
very
close
to
the
current.
I
would
say:
non-vector,
of
course,
regular
of
codes
in
a
sense
that
all
information
is
kept
in
the
op
code
yeah.
It
may
be
encoded
differently
right,
but
the
encoding
is
the
separate
function.
You
know,
we
don't
even
see
how
it
is
encoded.
Usually
we
don't
even
look
there
right.
B
We
just
know
that
our
op
code
uniquely
represents
sort
of
contains
all
information
we
need
and
in
coding
and
sort
of
like
travels,
find
like
throughout
the
optimizer
and
all
the
transformations
and
all
the
information
is
preserved
in
the
hope,
code
and
yeah
and,
of
course,
I
generated
on
the
fly.
So
we
don't
need,
we
don't
generate
like
the
ones
that
we
don't
need,
because
there
are
quite
a
few
of
them
right
and
I
would
say
another
good
part
is
like
yeah
sort
of
minimum
changes
to
existing
code
non-vector
code.
B
I
would
say
no,
of
course,
if
you
release
somebody
really
also,
you
can
create
like
reference
to
very
specific
vector
type
or
op
code,
but
generally
there
is
no
like
enums
that
can
know
that
might
be
used
in
a
source,
just
only
like
general
queries
right,
and
I
think,
because
it's
so
close
to
current
approach,
we
can
consider
like
somehow
extending
it
or
optimizing
it.
For
example,
right
now
right,
most
of
the
op
codes
result
in
source
type
is
the
same,
for
example
right.
G: I have a question about getDataType, because you were saying that, with this proposal, any values outside the range of the enum data type cannot be used to index into the various tables.
G: So doesn't that mean there is a risk that, with this change, existing code that currently calls getDataType on something and then just straight uses that value to look something up in a table would start breaking?
B
Check
all
the
places
kind
of
the
hope
is
that
we
don't
have
that
many
places
all
right,
for
example,
for
op
codes.
We
know
all
the
tables
right
and
usually
we
use
some
general
sort
of
dispatch
code
for
data
types.
I
also,
I
don't
think
we
have
many
places
when
built
tables
indexed
by
data
type,
but
yeah
that's
sort
of
it's
where
we
need
to
be
careful.
Yeah.
G
Yes,
among
others,
also
just
like,
if
I
can
easily
imagine
a
case
in
omr
itself,
right
where
we
might
grab
the
data
type
of
something
stored
in
a
variable
and
then
way
later
on,
use
it
in
some
way
that.
G
B
G
G: Now the problem is you're going to have to change all the existing code that might currently have a problem like this: anywhere you're now manipulating the data type, you could all of a sudden have a value that is outside the range, and everywhere that code is used, which isn't necessarily using getDataType directly, would need that kind of assert, or masking, or whatever.
G: Well, not necessarily; at least at the OMR level, we have to assume that vector types are going to show up, because the whole point of this is that we implement the support in a downstream project.
G: Yeah, well, at least for OMR, I guess that's something we can check by just scanning through the code. The concern, then, is whether there are any non-obvious cases, because of possible indirections like I said before, where you're storing something in a variable, then passing that on somewhere, and then that variable gets used to index somewhere. It's possible to find out, but it's possibly going to take some time and some work; and then there are the downstream projects.
G
Right,
but
the
thing
is:
how
can
we
know
what
tables
there
are
right?
You
could
have
like
even
just
a
simple
switch
case,
for
example,
and
you
that
doesn't
have
a
name.
You
can't
really
look
that
up.
G
G
B
D: There's also a danger that, if you're not changing the NumTypes value, then even downstream consumers that wrote safe code, code that asserts or checks against NumTypes, could run into trouble. I guess they'll still detect an actual value that is greater than NumTypes in an assert, so that's good, but we can't know what downstream consumers may have created, what tables they may have created and just index into. That's, I guess...
G: ...even in OMR there might be a table buried deep inside, I don't know, VP or something, that everyone has forgotten about, and finding those could be tricky.
B: In this case the problem is tables, because essentially the difference, if you go back to approaches B and C, is this: C is what I would call a dense enum, which you can use as an index, while here in B it's almost an unknown value; you cannot use it as an index. So that's what makes the difference for tables, I would say.
D
The
problem
is
because
sometimes
isn't
changing
right,
so
if
somebody
is
indexing
into
a
table,
they're
probably
checking
that
the
size
of
the
table
that
they've
got
is
like,
if
they're
going
to
assert
on
something
it
would
be
a
static,
assert
that
the
size
of
the
table
has
num
types
in
it
and
we're
not
in
this
scheme.
Num
types
doesn't
change
so
that
doesn't
break
but
then
later
on,
you
index
into
it
with
something
that
has
a
value
greater
than
num
types.
B
I
I
B
D
B
I: Or just rename it to something else, some name that doesn't suggest that it's all-encompassing...
I: ...of all of the type values. And then, if somebody had a static_assert, you're setting off that static_assert, because it can't find NumTypes anymore.
D: Which means renaming NumTypes, or some other similar kind of thing. Sorry.
I: It would also cause an error on any array that was statically declared to have size NumTypes, right.
G: Yeah, if downstream projects are not checking this, I don't know that there is a good way to avoid breaking them.
D: I know, but there's an argument that that's already brittle code.
B: So I guess that addresses that concern, because for those tables, most likely, there's a static_assert, and then we'll know.
B: Renaming it obviously will break something, but apart from that, just adding this is not going to break anything; creating all these extra functions would not break anything.
G: Right, yeah, no, I know, but I guess what I'm saying is that whatever we do for log files would probably also have to happen in KCA, if we wanted KCA to be able to handle this.
K: Yeah, what KCA does today is kind of, you know, not complete, I would say.
K: Yeah, that would be ideal, but even today it does sort of the wrong thing if you supply it with wrong data. It doesn't have built-in knowledge of the IL, so you have to supply it with header files and things like that.
D: One of the areas where I had a moderate concern was working in a native debugger like gdb and looking at a node structure. Right now, if you print out a TR::Node structure, at least if you're working with an OMR debug build, you can actually get the opcode values: you can print out the structure and it will see them and print out what the opcode is. But in this scenario, that will turn into...
B: So do you think that's a big concern? Maybe we can think of something, like somehow generating those names for debug builds.
D: I'm not going to be the one working with this very much; at least recently it hasn't been me, so the team is going to have to deal with this more than I am. So I'm willing to swallow my concern about that if no one else is concerned about it, and it doesn't seem like there's a widespread concern about it.
D: So, given that, I guess I'm not going to argue for C over B in that case; I think B is a fine approach, as long as we address the concerns that were raised earlier by Leo. And I think Devin's suggestion is a good one: do it consistently for data type and opcode, because I think you have the same problem with both of those.
D: More or less. I don't know how much more of a decision we need to make here; I mean, you're going to have to make a thousand more decisions as you're going through and implementing this. Unless somebody really wants to object to B, which I haven't heard anybody getting upset about, but please do get upset if you are upset about it, tell us that you're upset about it. Otherwise, I would say just move forward.
G: My preference for C is to do more with it than just that, because, among other things, it would also solve a lot of the debuggability issues. Sure, you could just generate the values so that they match whatever encoding format has been proposed here, but you would also have the enum name associated with each value, so looking in gdb would still work.
B: Yes, I think that part is important. Should they be sequential values or not? That makes, I think, the biggest difference. When I said to clearly distinguish A, B, and C, by C I mostly meant sequential numbers; I just added how C can be sort of merged with B. But C, in the most general sense, is sequential numbers. So I think that's what we need to decide: what those enums should be.
D: Okay, that's the difference you see between C and B? That's not how I was interpreting it. That was not how I was interpreting it either, and I would not recommend using sequential numbers. I would assume you'd do the encoding, and then the tool would just generate the enum to match that encoding, so that you could still extract the fields that you're talking about, because I think it is valuable to be able to do that.
B: I definitely meant that C is sequential, but what I was saying is, if we encode, then it becomes close to B. Basically, then we'll look at it as B, but with added debugging abilities. So we can have B and somehow think about how we can generate those names, but in essence you still need some functions that encode them and decode them.
B: I'm not completely sure; NumTypes will have a very different meaning then. It will be sort of the maximum encoded value; it's not going to be the number of enum entries. It will be something like 0xFFFF, or INT_MAX, or UINT_MAX, so it will have a different meaning, because it's very important to distinguish a dense enum from an encoded enum. So that's very important.
B: But for the data type, please don't forget SVE: there will be around 100 combinations. It's not a huge number, but generating hundreds of combinations, and it will grow in the future, will be a considerable increase, I think, in the data types.
B: Tables, that's, I think, the main problem, and what I'm saying is that the only way to be able to reference them is to list them all: make them a regular enum without values, and the compiler will give them sequential numbers, which makes them indexes; you can use them as an index, and the compiler will take care of it. But as soon as you encode it...
D: Leo is here and can speak for himself, but I don't think he wants the ability to use a data type or an opcode as an index. I think he's concerned about people who have used it as an index getting broken when we make this change.
G: So the difference is that... sorry, go ahead, Mark; go ahead, Leo. To me, the difference here is that when you're thinking about the work to implement, say, B, you have to think about the work it's also going to take to update the existing code that gets broken, which is potentially code both in OMR and in downstream projects.
G: Right, but that is the concern here: we have the potential to break downstream users, and we should be trying to do everything possible to avoid that.
G: Okay, yes, that particular problem, yes; there are others. The question about C was for a different concern. I'm not saying that automatically generating the enum is going to magically make the issue of indexing go away, but it still has value, in my opinion.
G: What I just said is that we can't, if we go with this encoded approach, but we should still try to mitigate the other issues that are not necessarily related to indexing.
B: I think most debugging, as far as I'm aware, does not happen on debug builds; "debug build" is maybe a bit of a misnomer.
B: We would like to use the extra tool to generate all the possible values, just kind of isolate it and generate all the values. But then, as Mark was actually saying, to make it consistent, the tool should be generating those functions too, because we want to keep it consistent: all the functions that we need, which is actually maybe just two functions, the ones that encode and decode the values.
G: So, just playing devil's advocate here: if you made the tool also generate those functions, you could make the tool generate the enum sequentially, and make it generate tables such that those functions just turn into a lookup into a table, and you get back the same property values. You wouldn't be able to decode it manually, but you wouldn't do that anyway, because you could just call the functions to do it.
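This devil's-advocate alternative can be sketched as follows; a generator would emit a dense, sequential enum plus side tables, so the decode functions become lookups. All names and table contents here are invented for illustration:

```cpp
#include <cstdint>

// Dense, sequential enum: only the combinations that actually exist
// are generated, so it can be used directly as a table index.
enum VectorOpCodes : uint16_t { vloadInt32, vloadDouble, vaddInt32, NumVectorOpCodes };

// Tool-generated property tables, indexed by the sequential enum.
// Encoded values: 0 = load, 1 = add; 0 = Int32, 1 = Double.
static const uint8_t operationOf[NumVectorOpCodes]   = { 0, 0, 1 };
static const uint8_t elementTypeOf[NumVectorOpCodes] = { 0, 1, 0 };

// The decode "functions" are now just table lookups.
inline uint8_t getOperation(VectorOpCodes op)   { return operationOf[op]; }
inline uint8_t getElementType(VectorOpCodes op) { return elementTypeOf[op]; }
```

The trade-off matches the point raised in the discussion: the values stay dense and debugger-friendly, at the cost of the tool having to generate the property tables rather than deriving fields by bit masking.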
I: They wouldn't need to be nearly as huge as the space of opcodes that we're reserving, or considering reserving, for non-sequential bit-packed values, because most of the combinations of values that we could pack are garbage and we would not generate them, sure.
I: That was what Leo was bringing up, yes. Yeah, I did say "playing devil's advocate" at the beginning.
B: We can certainly discuss that, but then you need to generate the property tables, and, say, do you want to generate them manually or with the tool? I think it would not be easy.
B: And not only tables, I think; then you cannot use this directly, you'd have to add all these enums, if you cannot decode these things like that.
D: I think adding some structure to the enum helps avoid the memory bloat in a fairly simple way. But I'd also like to suggest that maybe we don't need to decide this today, because if we're in the world of B, and all we're deciding is whether or not there's a set of names that define the enum for the higher values, then that's kind of something we could do, or not, to start with.
B: Exactly; actually, that's what I was trying to say in the summary. If we see that the only difference between B and C is the ability to debug, maybe we can decide on that later. We need to create these functions anyway, and if that's the only difference, we can decide later whether to generate those names. Maybe yes.
B: Which leaves us with the issue of indexing tables, of course, but that's sort of hard to avoid, I think.
A: Okay, so let's have, I guess, a follow-on issue from you to track the work as it proceeds, and if there is a future discussion around debugging that needs to happen, we can have that happen at a later meeting or in an issue.
B: Yeah, and I'll just reiterate: it's an expandable approach, I think; we can always optimize things more, and it's sort of a minimal change to the existing code, I think, compared to the other approaches.
A: It's a hard topic. Like you mentioned at the beginning, this has been ongoing for almost four years now, so I'm glad you were able to take it to this point and put a serious proposal forward. Thanks for the efforts there.
A: Okay, I'm going to end the call now. Thanks, everyone, for your participation; our next scheduled call is in two weeks. Thanks, everyone. Bye.