$Id: README,v 1.2 2001/08/10 19:23:11 vipin Exp $

This is the README file for the JitterTest (and friends)
program.

This program measures the jitter that a real-time task
experiences under "standard" Linux.

More particularly, it measures the effect on such a task
of background JFFS file system activity.

The jitter is measured in milliseconds (ms) from the
expected time of arrival of a signal from a periodic
timer (set by the task) to when the task actually
receives the signal.

This jitter is then stored in the file specified
(or the default output file "jitter.dat").

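The measurement idea can be sketched in Python (JitterTest itself is a C program driven by a periodic timer signal; this is only an illustration, and the period and cycle count below are placeholder values):

```python
import time

def measure_jitter(period_s=0.1, cycles=5):
    """Record how late each periodic wakeup actually happens.

    As in JitterTest, jitter is the delay, in milliseconds, from the
    expected wakeup time to when the task actually runs.
    """
    jitters_ms = []
    expected = time.monotonic() + period_s
    for _ in range(cycles):
        # Sleep until the next expected wakeup (standing in for the
        # periodic timer signal the real program waits on).
        delay = expected - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        jitters_ms.append((time.monotonic() - expected) * 1000.0)
        expected += period_s
    return jitters_ms
```
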
The data may also be sent to the console by writing to
/dev/console (see the help options). This is highly
desirable, especially if you have redirected your
console to the serial port and are storing it as a
minicom log on another computer for later analysis
using the tools provided here.

This is particularly useful if you have a serial
console and are outputting "interesting" info from
inside some kernel task or driver (or something as
simple as a "df" command running periodically and
redirecting its output to the console).

One "interesting" thing that I have measured is the
effect of FLASH chip erases on the jitter of a
real-time task.

One can do that by putting a printk at the beginning
of the flash erase routine in the MTD flash chip
driver.

Now you will get jitter data interspersed with flash
sector erase events. Other printks can also be placed
at suspected jitter-causing locations in the system.



EXECUTING THE PROGRAM "JitterTest"

You may specify a file to be read by the program every
time it wakes up (every cycle). This file is created
and filled with some junk data. The purpose of this is
to test the jitter of the program if it were reading
from, say, a JFFS (Journaling Flash File System) file
system.

By specifying the complete paths of the read and write
(output) files you can test the jitter a POSIX-type
real-time task will experience under Linux under
various conditions.

These can be as follows:

1. Output file on a RAM file system, no input file.

In this case you would presumably generate other
"typical" background activity for your system and
examine the worst-case jitter experienced by a task
that is neither reading from nor writing to a file
system.

Other cases could be:

2. Output to a RAM fs, input from a JFFS-type fs:

This is especially useful to test the proper operation
of erase-suspend behaviour in JFFS file systems (with
an MTD layer that supports it).

In this test you would generate some background
write/erase activity that triggers chip erases. Since
this program is reading from the same file system, you
can contrast these latencies with those obtained when
the writes go to the same fs.

3. Both reads and writes to (or just writes to) a JFFS
file system:

Here you would test for the latencies experienced by a
program if it were writing to (and optionally also
reading from) a JFFS fs.




Grabbing a kernel profile:

This program can also conditionally grab a kernel
profile. Specify --grab_kprofile on the command line as
well as a "threshold" parameter (see the help options
via -?).

Any jitter greater than this "threshold" will cause the
program to read /proc/profile and dump it to a local
file with an increasing number in its name. It will
also output that filename to the specified console
file. This lets you correlate, later, the various
profiles with the data in your console file and with
what was going on at that time.

These profile files may then be examined by running
them through ksymoops.

Make sure you specify "profile=2" on the kernel command
line when you boot the kernel if you want to use this
functionality.

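The threshold logic can be sketched in Python (the numbered-file naming scheme and the "profile" prefix here are assumptions for illustration, not the real program's names):

```python
import shutil

_profile_count = 0

def maybe_grab_profile(jitter_ms, threshold_ms,
                       profile_path="/proc/profile", prefix="profile"):
    """Snapshot the kernel profile when jitter exceeds the threshold.

    Returns the name of the saved file, or None if below threshold.
    """
    global _profile_count
    if jitter_ms <= threshold_ms:
        return None
    _profile_count += 1
    out_name = "%s.%04d" % (prefix, _profile_count)
    shutil.copyfile(profile_path, out_name)
    # The real program also logs this filename to the console so the
    # profile can later be correlated with the console log.
    print("jitter %s ms > %s ms; saved %s" % (jitter_ms, threshold_ms, out_name))
    return out_name
```
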


Signalling the JFFS[2] GC task:

You can also force this program to send a
SIGSTOP/SIGCONT to the JFFS (or JFFS2) GC task by
specifying --siggc <pid> on the command line.

This will let you investigate the effect of forcing the
GC task to wake up and do its work when you are not
writing to the fs, and of forcing it to sleep when you
do want to write to the fs.

These are just various tools to investigate the
possibility of achieving minimal read/write latency
when using JFFS[2].

You need to manually do a "ps aux", look up the PID of
the GC thread, and provide it to the program.

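The PID lookup and signalling can be sketched in Python (the helper names and the GC thread name pattern, e.g. jffs2_gcd_mtd1, are assumptions for illustration):

```python
import os
import signal
import subprocess

def find_gc_pid(name="jffs2_gcd", ps_output=None):
    """Return the PID from the first "ps aux" line matching the GC
    thread name, or None if no such line exists."""
    if ps_output is None:
        ps_output = subprocess.run(["ps", "aux"], capture_output=True,
                                   text=True).stdout
    for line in ps_output.splitlines():
        if name in line:
            return int(line.split()[1])  # PID is the second "ps aux" column
    return None

def pause_gc(pid):
    # What --siggc arranges before this program writes to the fs.
    os.kill(pid, signal.SIGSTOP)

def resume_gc(pid):
    os.kill(pid, signal.SIGCONT)
```
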



EXECUTING THE PROGRAM "plotJittervsFill"

This program is a post-processing tool that extracts
the jitter times printed by the JitterTest program into
its console log file, as well as the data printed by
the "df" command.

This "df" data happens to be in the console log because
you will run the shell script fillJffs2.sh on a console
while you are doing your jitter test.

This shell script copies a specified file to another
specified file at a programmable interval (in seconds).
It also does a "df" and redirects the output to
/dev/console, where it is mixed with the output from
JitterTest.

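fillJffs2.sh itself is a shell script; the loop it performs is roughly the following (Python sketch, with the period and file names as placeholders):

```python
import shutil
import subprocess
import time

def fill_loop(src, dst, period_s, iterations):
    """Copy src over dst every period_s seconds and report filesystem
    usage, as fillJffs2.sh does with "cp" and "df"."""
    for _ in range(iterations):
        shutil.copyfile(src, dst)
        # The script sends this to /dev/console so it lands in the same
        # serial log as the JitterTest output.
        print(subprocess.run(["df"], capture_output=True, text=True).stdout)
        time.sleep(period_s)
```
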
All this console data is stored on another computer,
since it is all being output to the serial port once
you have redirected the console to the serial port
(that is the only guaranteed way not to lose any
console log or printk data).

You can then run this saved console log through this
program and it will output a text file with the %fill
in one column and the corresponding jitter values in
the second. gnuplot can then produce a beautiful plot
of this resulting file, showing you graphically the
jitter encountered at different fill percentages of
your JFFS[2] fs.

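The extraction step can be sketched in Python (the exact console-log line formats are not shown in this README, so the patterns matched below are assumptions for illustration):

```python
import re

def extract_fill_vs_jitter(console_log):
    """Pair each jitter sample with the most recent df fill percentage.

    Assumed formats: a df data line ending in something like
    "50% /mnt/jffs2", and a JitterTest line containing "jitter: N ms".
    Returns (fill_percent, jitter_ms) rows, one per jitter sample,
    ready to be written out as the two-column file gnuplot plots.
    """
    rows = []
    current_fill = None
    for line in console_log.splitlines():
        df_match = re.search(r"(\d+)%\s+\S+$", line)
        if df_match:
            current_fill = int(df_match.group(1))
            continue
        jit_match = re.search(r"jitter:\s*(-?\d+(?:\.\d+)?)\s*ms", line)
        if jit_match and current_fill is not None:
            rows.append((current_fill, float(jit_match.group(1))))
    return rows
```
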


OTHER COMMENTS:

Use the "-w BYTES" command line parameter to simulate
your test case. Not everyone has the same requirements.
Someone may want to measure the jitter of JFFS2 with
500 bytes being written every 500 ms. Others may want
to measure the system performance when writing 2048
bytes every 5 seconds.

RUNNING MULTIPLE INSTANCES:

Things get really interesting when you run multiple
instances of this program *at the same time*.

You could have one instance running as a real-time task
(by specifying the priority with the -p command line
parameter), not interacting with any fs, or at the very
least not reading from or writing to JFFS[2].

At the same time you could have another instance
running as a regular task (by not specifying any
priority) but reading from and writing to JFFS[2].

This way you can easily measure the blocking
performance of the real-time task while another
non-real-time task interacts with JFFS[2] in the
background.

You get the idea.


WATCH OUT!

Be particularly careful about running this program as a
real-time task AND writing to JFFS[2]. Any blocking
will cause your whole system to block until any garbage
collection initiated by this task's writes completes. I
have measured these blocks to be on the order of 40-50
seconds on a reasonably powerful 32-bit embedded
system.