From YouTube: 01 Introduction to Codee tools Shift Left Performance
Let's move on to the real content, so let's see what Codee is and what these three words actually mean: shift left performance. Remember that shift left is about automating the inspection of the code of an application. So you will see that, among development software tools, you can find different tools that automate code inspection for different purposes.
Okay, so essentially, what does shift left performance mean? Shift left: automate code inspection, and so save the developer a lot of time by automating repetitive, time-consuming tasks that can be automated. And performance: the specialization in reporting insights and information that will help you modify your code to improve its performance when running on a computer.
So, the first pillar is the software to automate code inspection of real applications. The second pillar is the knowledge, and writing down that knowledge in simple rules that everyone from the novice developer to the expert developer can really understand. We can have a developer who is an expert in optimizing for memory but a novice programmer for GPUs, or the other way around. So having this open catalog of knowledge is really interesting both for the experts and for the novices, for all the newcomers to performance optimization techniques.
Later in this presentation you will be seeing a live demonstration of Codee using the Pi example, the same example that we proposed for the lab, so that you can have a first experience of how to use Codee and what the output of Codee, the reports of Codee, look like. Later you can do your own hands-on by using the same example and following the sequence of commands that we propose and that our teammate Ulises will be using in the live demo.
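The Pi example referred to here is, in essence, a single-loop numerical integration. The lab code may differ in details, but a minimal sketch of the usual midpoint-rule formulation looks like this (the function name `compute_pi` is just an illustrative choice):

```c
#include <math.h>

/* Approximate pi by integrating 4/(1+x^2) over [0,1] with the
 * midpoint rule using n rectangles. */
double compute_pi(long n) {
    double h = 1.0 / (double)n;
    double sum = 0.0;
    for (long i = 0; i < n; i++) {
        double x = h * ((double)i + 0.5);  /* midpoint of rectangle i */
        sum += 4.0 / (1.0 + x * x);
    }
    return h * sum;
}
```

This one hot loop is what the reports in the lab will point at.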
Okay. So, what explains why Codee is relevant now, and is expected to continue to be relevant in the future, is the ever-growing size and complexity of software projects. It is something that you can see in the HPC community: if we compare application codes that started years or decades ago with the size of the application codes we are managing today, the project size and complexity has never stopped growing, and even now it is growing faster than ever, because we need to solve more problems, and faster.
Just as an example, consider a reference software project like the software powering the F-35 fighter jet of Lockheed Martin: it was estimated to have around 25 million lines of code. But if we look at the average connected car that we see on the streets today, and that some of us could be driving in the future, it is estimated to have more than 150 million lines of code.
We as users are demanding more and more functionality in software, from our smartphones to simulation codes running on supercomputers. And the second driver is essentially the availability of modern hardware, from smartphones to supercomputers: the same multi-cores, the same instruction set architectures, the same vector instructions that are powering our supercomputers are powering the processors that we use in our smartphones.
So what we see is that more and more functionality, and users demanding this functionality running faster with a great user experience, is what motivates the need for a tool like Codee that enables automating code inspection, looking at big source code projects through the performance lens. So, moving forward: one of the key things is to have software that runs fast enough.
We may have simulation codes that used to run for hours and, by using GPUs, run in minutes. But we may also want to improve the user experience of a post-processing tool that a scientist might be using. There we don't need such a huge compute capacity, but we still need to improve the performance of the visualization part of a tool that is provided to the computational scientist or to a chemistry engineer as part of the workflow.
So it is key to have software that runs fast on modern computers, and the hardware is here to help us. We have all seen the great increase in performance, for instance from Cori to Perlmutter. While preparing this course, we have seen the same examples running on Cori and running on Perlmutter, and just by recompiling, the code runs faster. So the hardware is here to help, but the hardware is not the whole solution to this problem, and this is why we are here today.
Another use is even more specialized: compliance. For some software to be legally allowed to be deployed as part of a product that, for instance, a doctor uses in a hospital, the software needs to be certified as compliant with coding standards. So static code analyzers have also traditionally been used to enforce the compliance of application code with given coding standards. What we have seen is that all of these tools have enabled shifting left, that is, automating code inspection, for security, bugs and compliance.
But none of these tools so far has enabled shifting left performance, and performance is a critical part for our community. This is what makes Codee unique: it takes the benefits of static code analysis tools, providing these reports very fast while you are developing your code, and it gives you insights related to performance: how to improve the performance of your code, and how to find bugs, so that the code runs faster while running correctly. Code that runs faster but produces incorrect results is useless. So Codee, for the first time, enables shifting left performance. This means automating the inspection of the source code of real applications, providing insights from the point of view of performance: how to improve the performance, and how to detect defects and issues related to performance in our application codes.
So the first question is: who can actually use Codee? Codee can be used by anyone, in any area, developing code written in C/C++ today, and in the near future we are working hard to also provide Fortran support. This is why we say that everything you will see today will, in the near future, also be available for the Fortran programming language, which is very widely used in our community.
So, to prove this, what we have been doing during the last months of the last year is selecting reference application codes that are used in different application domains, from audio codecs to compression tools, to simulation tools like SPEC, to the Linux kernel, to simulation codes in astrophysics like HACCmk, which is part of the CORAL benchmarks of the DOE. The purpose here was not really to improve the performance of those codes; it was to prove that Codee can parse, analyze and produce the report for codes from 10 lines of code up to codes like the Linux kernel, which has almost one and a half million lines of code, and provide that information in anywhere from seconds to just a few minutes of computation on a laptop. So here you can see LAME, zip, SPEC, GROMACS, FFmpeg, the Linux kernel, together with the well-known NAS Parallel Benchmarks and the MATMUL example we will be using today in the labs.
All of these codes have been analyzed in seconds to minutes by Codee. This gives you an idea of the production level, the maturity, of the tool. Any project written in C/C++, and Fortran, can be analyzed, and we can produce the performance optimization report.
So what are the benefits? These benefits are probably already very clear, but I want to emphasize the main ones. First, you will be able to deliver applications faster: you will be able to develop your application in less time, and the applications will also run faster on modern low-power hardware, on modern processors.
Second, by automating many of the tasks that you, or an expert, would otherwise do manually, scanning thousands or millions of lines of code by hand, you can get in just a few minutes a report that would take hours, days or weeks of manual code inspection. So it will definitely enable saving many, many hours in the software development process, and in the end, saving costs in the software development process.
If you look at the fundamental challenges you need to address when you are optimizing your code, from running faster on a sequential processor, to running faster on a multi-core processor, to running faster on a GPU, in the end you need to address essentially three main challenges. First, memory traffic control: we all know that the interaction between the CPU, the central processing unit, and the memory system, the whole memory hierarchy of main memory and cache memories, is key for performance in modern systems.
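As a tiny illustration of why this memory traffic matters (this example is mine, not from the slides): traversing a C matrix row by row walks memory contiguously and reuses each cache line fully, while traversing it column by column strides through memory. Both loops compute the same sum; only the access pattern, and hence the traffic through the memory hierarchy, differs:

```c
#define N 128

/* Row-major traversal: consecutive iterations touch consecutive
 * memory addresses, so each cache line is fully reused. */
double sum_rowwise(double a[N][N]) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major traversal: consecutive iterations jump N*8 bytes,
 * touching a different cache line almost every access. */
double sum_colwise(double a[N][N]) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}
```

For matrices larger than the cache, the row-wise version is typically several times faster even though both do the same arithmetic.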
Second, if we focus on the processor capabilities: server-class processors that are powering supercomputers, and even smartphones, have very advanced instruction set architectures, very advanced instructions that you need to use efficiently for your application to run faster. The vectorization part is particularly interesting. We typically believe that compilers do everything possible to vectorize our code.
We will see later, in the September/October second part of the training, that by using the static analysis from Codee you can change your code to help the compiler do an even better job than it can do by default, just by writing your code in a vectorization-friendly manner. And finally, the third challenge: multi-threading.
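A classic example of helping the compiler vectorize (my own illustration, not code from the talk): in plain C the compiler must assume the two pointers below may alias, which can block vectorization; the standard C99 `restrict` qualifier promises they do not overlap, so the loop can be turned into vector instructions without runtime overlap checks:

```c
/* SAXPY-style loop: y[i] += a * x[i].
 * The restrict qualifiers promise x and y do not overlap, which
 * lets the compiler vectorize the loop safely. */
void saxpy(int n, float a, const float *restrict x, float *restrict y) {
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}
```

The computation is unchanged; the qualifier only gives the compiler the aliasing guarantee it cannot prove on its own.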
When we say multithreading here, we mean creating multiple threads that run on the CPU side and cooperate to run the code in parallel correctly, but also multiple threads on the GPU: we have thousands of threads inside the GPU, and we need to guarantee that all the threads spawned on the GPU cooperate in a timely manner, and in a correct manner, to produce the correct result.
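On the CPU side, the standard way to spawn such cooperating threads in C is an OpenMP parallel loop. The sketch below (my illustration, not code from the talk) uses a `reduction` clause so each thread's partial sum is combined correctly instead of all threads racing on one shared variable; compiled without OpenMP support the pragma is simply ignored and the loop runs serially with the same result:

```c
/* Sum an array using multiple CPU threads. The reduction(+:sum)
 * clause gives each thread a private partial sum and combines
 * them at the end, avoiding a data race on the accumulator. */
double parallel_sum(const double *a, long n) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += a[i];
    return sum;
}
```

Getting exactly this kind of clause right, for every variable in a real loop, is the error-prone manual work that the tool aims to automate.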
Look at steps one, two and three: they are not about parallelism. They are not about GPUs, multi-threading or vectorization. They are just about writing your sequential code in a way that is hardware-friendly, in a manner that is performance-friendly. For instance, step number one is sequential scalar optimization: write your program in a way that enables the use of the advanced instructions in your processor. Step number two:
Typically in your loops, try to simplify the control flow. If you can avoid having a branch, an if-then-else or a switch statement within a loop, please avoid it, because having that conditional control flow in your loops will only prevent many hardware optimizations and compiler optimizations from happening automatically. So minimizing and simplifying the control flow of your applications is important for CPU and GPU vectorization, multi-threading and offloading. Step number three: focus on sequential memory optimization.
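A small before/after illustration of step two (my own example, not from the slides): the first loop decides per element whether to accumulate, while the second replaces the branch with arithmetic, giving straight-line loop bodies that hardware predication and compiler vectorization handle much better:

```c
/* Branchy version: a conditional inside the loop body. */
double sum_positive_branchy(const double *a, long n) {
    double s = 0.0;
    for (long i = 0; i < n; i++) {
        if (a[i] > 0.0)
            s += a[i];
    }
    return s;
}

/* Branch-free version: the comparison yields 0 or 1, which is used
 * as a multiplier, so the loop body has no conditional control flow. */
double sum_positive_branchless(const double *a, long n) {
    double s = 0.0;
    for (long i = 0; i < n; i++)
        s += a[i] * (a[i] > 0.0);
    return s;
}
```

Both functions compute exactly the same result; only the control flow differs.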
Finally, once we make these kinds of optimizations in our code, it is usually recommended to go on to steps four, five and six, and these are different flavors or different types of parallelism: from vectorization, which is typically well supported by compilers, although we can do a lot to help the compilers do an even better job, to multi-threading, to offloading to the GPUs. In the end, offloading code to the GPU is really very challenging, and typically you need to address programming challenges of the kinds addressed in steps one to five.
Okay, so remember, we said it is important to have software that shifts left performance: software that automates the inspection of the code to produce a report with information that is relevant for us as programmers, to understand which are the problems in terms of performance and how to fix them. So the question is: what is the knowledge base? What is the set of rules that we need to consider to ensure that our application code follows performance best practices?
We call them recommendations, opportunities, defects and remarks. These are the different types of actions, or insights about the code, that Codee can provide that are relevant and very important for performance optimization. And the software is intended for experts and also for novice programmers.
So it is quite a lot of information. You can browse the catalog by these categories: recommendations, opportunities, defects and remarks. But another important capability of the software is that it allows you to navigate the catalog from the point of view of the stages of the performance optimization roadmap.
So, moving forward: for those of you who are familiar with static code analyzers, this will probably be very easy to understand. For those of you using a static code analyzer for the first time, it is important to understand how a static code analyzer behaves, and this is what we summarize with the seven points that you see on the right of this slide. First, a static code analyzer scans and inspects the source code of your application without executing it. This is very important for productivity reasons.
Second, what is the output of this code inspection? It produces a report of human-readable, actionable recommendations that locate where the problem is, explain how to fix it, and describe a solution, or at least put you on the path to solving that problem. So: scan, and produce a report. Third, compliance, in particular for performance: it is important that all the insights, all the actions, help to promote performance optimization best practices that are aligned with the six stages of the performance optimization roadmap: sequential optimizations, simplifying the control flow, efficient usage of memory, vectorization, multi-threading and offloading. So: compliance with the performance optimization roadmap.
Fourth, where can you use that information, on what platforms? In general, today we provide a set of hardware-agnostic rules that you can apply on any modern microprocessor: x86, Arm or POWER processors can all benefit from applying the rules recommended by Codee, and also accelerators. You will see here today how you can use these capabilities to very quickly create code that runs fast on the GPU.
The two last items we will not be addressing in this training series at NERSC during this year, so focus on: scan, report, compliance for optimization, and providing automated fixes, in this case for GPUs. So this was a quick introduction; remember, everything can be summarized in three words: shift, left and performance.
As you see here on the screen, it provides you with a table. It is a kind of screening of your application: it reports, for the different files or modules of your application, how many lines of code were actually analyzed, how long it took to analyze that part of the code, typically something in the seconds time frame, and how many actions were found. Here you can see that Codee found more than 100 actions in an image-processing project, a Canny edge filter, that consists of more or less 1,000 lines of code.
From these 140 actions, the next information that you get in this entry-level report is the split of all the actions across the six stages of the performance optimization roadmap. Look at the columns from left to right, starting with scalar optimization: serial/scalar means using the processor instruction set efficiently.
If you follow an action here, for instance by copying and pasting one of the suggested commands, and you compile and run both the non-optimized code and the optimized code, then if you do things properly you will be able to reduce the runtime of your application, in this case from 14 seconds to 8.5 seconds, just by adopting and implementing one of the actions recommended by the software. And this is an iterative process that you follow, guided by all the insights provided by the performance optimization report.
So, in the end, if you focus properly on the hot spots, on the key actions that are relevant for your application, you will be able to reduce the runtime of your application. This is one of the NAS Parallel Benchmarks, CG (Conjugate Gradient), running faster, in this case on an AMD Ryzen processor.
Finally, to finish this part of the presentation: here you can see the six stages of the performance optimization roadmap, from rows two, three, four down to six. You can see how these stages apply to microprocessors, microcontrollers or other devices like GPUs, and on the right-hand side you can see examples of codes that run faster thanks to implementing Codee's performance optimization recommendations, with codes running from 17% faster to 96% faster.
Of course, this depends very much on the type of application, the programming environment, the runtime and the hardware; in the end, all of these elements have an impact on the actual performance of your application. But at least what this provides you is a systematic, more predictable pathway to performance optimization, following the performance optimization roadmap from stage 1, sequential optimizations, to stage 6, offloading to GPUs.
Okay, here you have the typical iterative process to optimize performance. You start from the source code of your application, and you are typically recommended to profile your application first. Don't try to optimize your application by brute force, optimizing every single loop; do it in a smarter manner.
So once you have these hot spots, Codee can help you with the rest. Codee will not do profiling for you, it will not help you in doing profiling; for that you have excellent profiling tools available at NERSC, or available in other development tools. But look here: we provide six typical use cases that you can follow once you have identified and profiled your application.
Once you have the hottest spots of your application: in the labs we will probably be seeing use cases one and two. Use cases three to six, which deal with more real applications, build systems, compilers and vectorization, will probably be addressed in the second part of the training series in September and October. With use cases one and two, we have enough information to speed up applications running on multicore CPUs or on GPUs.
Okay, and this is all. I hope I have been able to convey how Codee can help in different ways: first, by providing software that shifts left performance, automated code inspection specializing in performance.
So I'm going to stop here with this part, and now I'm going to hand it over to my colleague Ulises. Ulises, if you are ready? Ulises will be providing you a live demonstration of Codee using the Pi example, the same source code that you will be using in the first lab, before providing you with step-by-step instructions to reproduce what you are going to see in the live demonstration.
Let's make a brief check, and here it is: the simplest sequential version of Pi. Let's now analyze it. pwreport is the entry-level tool that Codee provides to do the static code analysis.
And now it shows a third table, which lists the loops of the given source file categorized by difficulty. This is useful for users, who can start by focusing on the low-difficulty loops. In this case it makes little sense, because there is only one loop and it is a very simple file, but when we are analyzing big projects it is usually a good idea to start by analyzing and focusing on the low-difficulty loops. And now we are going to follow the second suggestion.
That is, to invoke pwreport again, this time with the report of actions. Because now we know that we have four actions, and the type of those actions, but we want to list them all individually. And now this fails, because Codee...
And right now we can see that this is the only loop it has, and it has three opportunities: multi-threading, SIMD and offloading. The SIMD opportunity is associated with remark 11, which tells us that the vectorization cost model of Codee states the loop may benefit from vectorization, so it is not conclusive about whether it would improve the performance or not. In this case you have to try it if you want, but in this course we are going to focus on the offloading opportunities.
The first one is for generating OpenMP pragmas and the second one for generating OpenACC pragmas. We are going to invoke both, because we want to test them both, but we are not going to use the in-place flag; we are going to use -o, so we don't rewrite the same input file and we generate a new file instead. It will be called pi_omp.c, and it was successfully generated. Now we are going to do the same for OpenACC.
Good, thank you, thank you Ulises. So, essentially, what you have seen is one of the intended uses of Codee: using it to produce the entry-level report with all of these metrics, high-level numbers in terms of the number of actions actually found in your code, and how, by using the suggestions of the tool, the tool guides you through different command-line invocations that you can use to understand each of the actions and what type of action it is.
What this whole process guarantees is that, whenever a particular loop is identified as an opportunity, Codee guarantees that the semantics of the code are such that it can be executed in parallel, either using vectorization, multi-threading or offloading, depending on the opportunity that has been identified. This also guarantees that Codee can provide you source code rewriting capabilities to annotate the code with OpenMP and OpenACC pragmas. So this is one of the use cases that we described in the presentation, and ideally this is where we want to be.
If, for any of our codes, we succeed in identifying many of these opportunities, that is where we want to be, because we can unlock the source code rewriting capabilities to quickly generate OpenMP- or OpenACC-enabled code for the host CPU or for the GPU, just by changing some of the parameters in the pwdirectives invocations.