Many embedded systems require hard or soft real-time execution that must meet rigid timing constraints. Further complicating the issue is that, for a variety of reasons, most of these same embedded systems have very limited processing power; it is not uncommon for them to use an 8-bit or 16-bit processor operating at 10 MHz or less.

Real-time systems theory advocates using an appropriate scheduling algorithm and performing a schedulability analysis prior to building the system. Adherence to this theory alone does not lead to working embedded systems, however, and thus the theory is often dismissed by practitioners. Practitioners, on the other hand, spend days, if not weeks, testing and debugging hard-to-find and difficult-to-replicate problems because their system is not performing to specifications. Often these problems are related to the system's timing, because functional testing was done using good tools and the system usually produces a correct response.

There exists a balance between theory and practice, where proper design of real-time code enables real-time analysis of it. Systematic techniques for measuring execution time can then be used alongside the guidelines provided by real-time systems theory to help an engineer design, analyze, and, if necessary, quickly fix timing problems in real-time embedded systems.

This series of two articles discusses techniques for measuring and optimizing real-time code, and for analyzing performance by correlating the measurements with the real-time specifications through use of real-time systems theory. Since this paper is directed towards practitioners, simple rules of thumb that encapsulate the knowledge of complex theories and proofs are presented.

Several other activities of the development process can also benefit from estimating and measuring execution time using the methods described here. These include debugging hard-to-find timing errors that result in hiccups in the system, estimating the processing needs of software, and determining the hardware needs when enhancing the functionality of an existing system or reusing code in subsequent generations of embedded systems.

Many different methods exist to measure execution time, but there is no single best technique. Rather, each technique is a compromise between multiple attributes, such as resolution, accuracy, granularity, and difficulty.

Resolution represents the limitations of the timing hardware. For example, a stopwatch measures with a 0.01 sec resolution, while a logic analyzer might be able to measure with a resolution of 50 nsec.

Accuracy is the closeness of the value measured using a given method to the actual time that a perfect measurement would have obtained. If a particular measurement is repeated several times, there is usually some amount of error in the measurements. Thus, measurements could yield answers of the form x +/- y; in this case, y is the accuracy of the measurement x.

Granularity is the part of the code that can be measured, and it is usually specified in a subjective manner. For example, coarse-granularity (also called coarse-grain) methods would generally measure execution time on a per-process, per-procedure, or per-function basis. In contrast, a method with fine granularity (also called fine-grain) can be used to measure the execution time of a loop, a small code segment, or even a single instruction.

Difficulty subjectively defines the effort needed to obtain measurements. A method that requires instrumentation such as a logic analyzer and filtering of data to obtain the answers is considered hard. A method that requires the user to simply run the code, which then produces an instant answer or a table of results, is considered easy. Typically, software-only methods are easier but yield only coarse-grain results. Hardware-assisted methods are hard, but they can provide fine-grain results with high accuracy. It is important to note that some fine-grain techniques can also be used to perform coarse-grain measurements, although the effort in doing so could be much greater than using a coarse-grain method.