How to Evaluate Embedded Software Testing Tools

You Can’t Evaluate a Test Tool by Reading a Data Sheet

All data sheets look pretty much alike. The buzzwords are the same: “Industry Leader”, “Unique Technology”, “Automated Testing”, and “Advanced Techniques”. The screen shots are similar: “Bar Charts”, “Flow Charts”, “HTML reports” and “Status percentages”. It is mind numbing.

What is Software Testing?

All of us who have done software testing realize that testing comes in many flavors. For simplicity, we will use three terms in this paper:

  • System Testing
  • Integration Testing
  • Unit Testing

Everyone does some amount of system testing, in which they exercise the application in some of the same ways the end users will. Notice that we said “some” and not “all.” One of the most common causes of applications being fielded with bugs is that unexpected, and therefore untested, combinations of inputs are encountered by the application in the field.

Not as many folks do integration testing, and even fewer do unit testing. If you have done integration or unit testing, you are probably painfully aware of the amount of test code that has to be generated to isolate a single file or group of files from the rest of the application. At the most stringent levels of testing, it is not uncommon for the amount of test code written to be larger than the amount of application code being tested. As a result, these levels of testing are generally applied to mission and safety critical applications in markets such as aviation, medical device, and railway.

What Does “Automated Testing” Mean?

It is well known that unit and integration testing done manually is very expensive and time consuming; as a result, every tool sold into this market trumpets “Automated Testing” as a benefit. But what is “automated testing”? Automation means different things to different people. To many engineers the promise of “automated testing” means that they can press a button and get either a “green check” indicating that their code is correct, or a “red x” indicating failure.

Unfortunately this tool does not exist. More importantly, if this tool did exist, would you want to use it? Think about it. What would it mean for a tool to tell you that your code is “Ok”? Would it mean that the code is formatted nicely? Maybe. Would it mean that it conforms to your coding standards? Maybe. Would it mean that your code is correct? Emphatically No!

Completely automated testing is not attainable nor is it desirable. Automation should address those parts of the testing process that are algorithmic in nature and labor intensive. This frees the software engineer to do higher value testing work such as designing better and more complete tests.

The logical question to be asked when evaluating tools is: “How much automation does this tool provide?” This is the large gray area and the primary area of uncertainty when an organization attempts to calculate an ROI for tool investment.

Anatomy of Test Tools

Test Tools generally provide a variety of functionality. The names vendors use will be different for different tools, and some functionality may be missing from some tools. For a common frame of reference, we have chosen the following names for the “modules” that might exist in the test tools you are evaluating:

Parser: The parser module allows the tool to understand your code. It reads the code and creates an intermediate representation for it (usually in a tree structure), essentially what a compiler front end does. The output, or “parse data”, is generally saved in an intermediate language (IL) file.

CodeGen: The code generator module uses the “parse data” to construct the test harness source code.

Test Harness: While the test harness is not strictly part of the tool, the decisions made in the test harness architecture affect all other features of the tool, so the harness architecture is very important when evaluating a tool.

Compiler: The compiler module allows the test tool to invoke the compiler to compile and link the test harness components.

Target: The target module allows tests to be easily run in a variety of runtime environments including support for emulators, simulators, embedded debuggers, and commercial RTOS.

Test Editor: The test editor allows the user to use either a scripting language or a sophisticated graphical user interface (GUI) to set up preconditions and expected values (pass/fail criteria) for test cases.

Coverage: The coverage module allows the user to get reports on what parts of the code are executed by each test.

Reporting: The reporting module allows the various captured data to be compiled into project documentation.

CLI: A command line interface (CLI) allows further automation of the use of the tool, allowing the tool to be invoked from scripts, make, etc.

Regression: The regression module allows tests that are created against one version of the application to be re-run against new versions.

Integrations: Integrations with third-party tools can be an interesting way to leverage your investment in a test tool. Common integrations are with configuration management, requirements management tools, and static analysis tools.

Later sections will elaborate on how you should evaluate each of these modules in your candidate tools.

Classes of Test Tools / Levels of Automation

Since not all tools include all of the functionality or modules described above, and because there is a wide difference between tools in the level of automation provided, we have created the following broad classes of test tools. Candidate test tools will fall into one of these categories.

“Manual” tools generally create an empty framework for the test harness, and require you to hand-code the test data and logic required to implement the test cases. Often, they will provide a scripting language and/or a set of library functions that can be used to do common things like test assertions or create formatted reports for test documentation.

“Semi-Automated” tools may put a graphical interface on some of the automated functionality provided by a “manual” tool, but will still require hand-coding and/or scripting in order to test more complex constructs. Additionally, a “semi-automated” tool may be missing some of the modules that an “automated” tool has, such as built-in support for target deployment.

“Automated” tools will address each of the functional areas or modules listed in the previous section. Tools in this class will not require manual hand-coding and will support all language constructs as well as a variety of target deployments.

Subtle Tool Differences

In addition to comparing tool features and automation levels, it is also important to evaluate and compare the test approach used. Subtle differences in approach can hide latent defects in the tool, so it is important to not just load your code into the tool, but also to try to build some simple test cases for each method in the class that you are testing. Does the tool build a complete test harness? Are all stubs created automatically? Can you use the GUI to define parameters and global data for the test cases, or are you required to write code as you would if you were testing manually?

In a similar way, target support varies greatly between tools. Be wary if a vendor says: “We support all compilers and all targets out of the box”. These are code words for: “You do all the work to make our tool work in your environment”.

How to Evaluate Test Tools

The following few sections will describe, in detail, information that you should investigate during the evaluation of a software testing tool. Ideally you should confirm this information with hands-on testing of each tool being considered.

Since the rest of this paper is fairly technical, we would like to explain some of the conventions used. For each section, we have a title that describes an issue to be considered, a description of why the issue is important, and a “Key Points” section to summarize concrete items to be considered.

Also, while we are talking about conventions, we should make note of terminology. The term “function” refers to either a C function or a C++ class method; “unit” refers to a C file or a C++ class. Finally, please remember that almost every tool can somehow support the items mentioned in the “Key Points” sections; your job is to evaluate how automated, easy to use, and complete the support is.

Parser and Code Generator

It is relatively easy to build a parser for C; however it is very difficult to build a complete parser for C++. One of the questions to be answered during tool evaluation should be: “How robust and mature is the parser technology”? Some tool vendors use commercial parser technology that they license from parser technology companies and some have homegrown parsers that they have built themselves. The robustness of the parser and code generator can be verified by evaluating the tool with complex code constructs that are representative of the code to be used for your project.

Key Points:

– Is the parser technology commercial or homegrown?
– What languages are supported?
– Are tool versions for C and C++ the same tool or different?
– Is the entire C++ language implemented, or are there restrictions?
– Does the tool work with our most complicated code?

The Test Driver

The Test Driver is the “main program” that controls the test. Here is a simple example of a driver that will test the sine function from the standard C library:



#include <math.h>
#include <stdio.h>

int main () {
    float local;
    local = sin (90.0);
    if (local == 1.0) printf ("My Test Passed!\n");
    else printf ("My Test Failed!\n");
    return 0;
}


Although this is a pretty simple example, a “manual” tool might require you to type (and debug) this little snippet of code by hand; a “semi-automated” tool might give you some sort of scripting language or simple GUI to enter the stimulus value for sine. An “automated” tool would have a full-featured GUI for building test cases, integrated code coverage analysis, an integrated debugger, and integrated target deployment.

You may have noticed that this driver has a bug: the sin function expects its input angle in radians, not degrees.

Key Points

– Is the driver automatically generated or do I write the code?
– Can I test the following without writing any code:
– Testing over a range of values
– Combinatorial Testing
– Data Partition Testing (Equivalence Sets)
– Lists of input values
– Lists of expected values
– Exceptions as expected values
– Signal handling
– Can I set up a sequence of calls to different methods in the same test?

Stubbing Dependent Functions

Building replacements for dependent functions is necessary when you want to control the values that a dependent function returns during a test. Stubbing is a really important part of integration and unit testing, because it allows you to isolate the code under test from other parts of your application, and more easily stimulate the execution of the unit or sub-system of interest.

Many tools require manual generation of test code to make a stub do anything more than return a static scalar value (e.g., return 0;).

Key Points

– Are stubs automatically generated, or do you write code for them?
– Are complex outputs supported automatically (structures, classes)?
– Can each call of the stub return a different value?
– Does the stub keep track of how many times it was called?
– Does the stub keep track of the input parameters over multiple calls?
– Can you stub calls to the standard C library functions like malloc?

Test Data

There are two basic approaches that “semi-automated” and “automated” tools use to implement test cases. One is a “data-driven” architecture, and the other is a “single-test” architecture.

For a data-driven architecture, the test harness is created for all of the units under test and supports all of the functions defined in those units. When a test is to be run, the tool simply provides the stimulus data across a data stream such as a file handle or a physical interface like a UART.

For a “single-test” architecture, each time a test is run the tool will build the test driver for that test, and compile and link it into an executable. A couple of points on this: first, the extra code generation, compiling, and linking required by the single-test method will take more time at test execution; second, you end up building a separate test harness for each test case.

These architectural differences mean that a candidate tool might appear to work for some nominal cases but might not work correctly for more complex tests.

Key Points

– Is the test harness data driven?
– How long does it take to execute a test case (including any code generation and compiling time)?
– Can the test cases be edited outside of the test tool IDE?
– If not, have I done enough free play with the tool with complex code examples to understand any limitations?

Automated Generation of Test Data

Some “automated” tools provide a degree of automated test case creation. Different approaches are used to do this. The following paragraphs describe some of these approaches:

Min-Mid-Max (MMM) tests stress a function at the bounds of its input data types. C and C++ code often does not protect itself against out-of-bound inputs: the engineer has some functional range in mind and frequently does not guard against values outside that range.

Equivalence Classes (EC) tests create “partitions” for each data type and select a sample of values from each partition. The assumption is that values from the same partition will stimulate the application in a similar way.

Random Values (RV) tests will set combinations of random values for each of the parameters of a function.

Basic Paths (BP) tests use basis path analysis to examine the unique paths that exist through a procedure. BP tests can automatically create a high level of branch coverage.

The key thing to keep in mind when thinking about automatic test case construction is the purpose that it serves. Automated tests are good for testing the robustness of the application code, but not the correctness. For correctness, you must create tests that are based on what the application is supposed to do, not what it does do.

Compiler Integration

The point of the compiler integration is two-fold. One point is to allow the test harness components to be compiled and linked automatically, without the user having to figure out the compiler options needed. The other point is to allow the test tool to honor any language extensions that are unique to the compiler being used. Especially with cross-compilers, it is very common for the compiler to provide extensions that are not part of the C/C++ language standards. Some tools use the approach of #defining these extensions to null strings. This very crude approach is especially bad because it changes the object code that the compiler produces. For example, consider the following global extern with a GCC attribute:

extern int MyGlobal __attribute__ ((aligned (16)));

If your candidate tool does not maintain the attribute when defining the global object MyGlobal, then code will behave differently during testing than it will when deployed because the memory will not be aligned the same.

Key Points

– Does the tool automatically compile and link the test harness?
– Does the tool honor and implement compiler-specific language extensions?
– What type of interface is there to the compiler (IDE, CLI, etc.)?
– Does the tool have an interface to import project settings from your development environment, or must they be manually imported?
– If the tool does import project settings, is this import feature general purpose or limited to specific compilers or compiler families?
– Is the tool integrated with your debugger to allow you to debug tests?

Support for Testing on an Embedded Target

In this section we will use the term “Tool Chain” to refer to the total cross development environment including the cross-compiler, debug interface (emulator), target board, and Real-Time Operating System (RTOS). It is important to consider if the candidate tools have robust target integrations for your tool chain, and to understand what in the tool needs to change if you migrate to a different tool chain.

Additionally, it is important to understand the automation level and robustness of the target integration. As mentioned earlier, if a vendor says, “We support all compilers and all targets out of the box,” they mean, “You do all the work to make our tool work in your environment.”

Ideally, the tool that you select will allow for “push button” test execution where all of the complexity of downloading to the target and capturing the test results back to the host is abstracted into the “Test Execution” feature so that no special user actions are required.

An additional complication with embedded target testing is hardware availability. Often, the hardware is being developed in parallel with the software, or there is limited hardware availability. A key feature is the ability to start testing in a native environment and later transition to the actual hardware. Ideally, the tool artifacts are hardware independent.

Key Points

– Is my tool chain supported? If not, can it be supported? What does “supported” mean?
– Can I build tests on a host system and later use them for target testing?
– How does the test harness get downloaded to the target?
– How are the test results captured back to the host?
– What targets, cross compilers, and RTOS are supported off-the-shelf?
– Who builds the support for a new tool chain?
– Is any part of the tool chain integration user configurable?

Test Case Editor

Obviously, the test case editor is where you will spend most of your interactive time using a test tool. If there is true automation of the previous items mentioned in this paper, then the amount of time attributable to setting up the test environment and the target connection should be minimal. Remember what we said at the start: you want to use the engineer’s time to design better and more complete tests.

The key element to evaluate is how hard it is to set up test inputs and expected values for non-trivial constructs. All tools in this market provide some easy way to set up scalar values. For example, does your candidate tool provide a simple and intuitive way to construct a class? How about an abstract way to set up an STL container, like a vector or a map? These are the things to evaluate in the test case editor.

As with the rest of this paper there is “support” and then there is “automated support”. Take this into account when evaluating constructs that may be of interest to you.

Key Points

– Are allowed ranges for scalar values shown?
– Are array sizes shown?
– Is it easy to set Min and Max values with tags rather than values? This is important to maintain the integrity of the test if a type changes.
– Are special floating point numbers supported (e.g. NaN, +/- Infinity)?
– Can you do combinatorial tests (vary 5 parameters over a range and have the tool do all combinations of those values)?
– Is the editor “base aware” so that you can easily enter values in alternate bases like hex, octal, and binary?
– For expected results, can you easily enter absolute tolerances (e.g. +/- 0.05) and relative tolerances (e.g. +/- 1%) for floating point values?
– Can test data be easily imported from other sources like Excel?

Code Coverage

Most “semi-automated” tools and all “automated” tools have some code coverage facility built in that allows you to see metrics which show the portion of the application that is executed by your test cases. Some tools present this information in table form. Some show flow graphs, and some show annotated source listings. While tables are good as a summary, if you are trying to achieve 100% code coverage, an annotated source listing is the best. Such a listing will show the original source code file with colorations for covered, partially covered, and uncovered constructs. This allows you to easily see the additional test cases that are needed to reach 100% coverage.

It is important to understand the impact of the added instrumentation on your application. There are two considerations: one is the increase in size of the object code, and the other is the run-time overhead. It is important to understand whether your application is memory limited, real-time limited, or both. This will help you focus on which item is most important for your application.

Key Points

– What is the code size increase for each type of instrumentation?
– What is the run-time increase for each type of instrumentation?
– Can instrumentation be integrated into your “make” or “build” system?
– How are the coverage results presented to the user? Are there annotated listings with a graphical coverage browser, or just tables of metrics?
– How is the coverage information retrieved from the target? Is the process flexible? Can data be buffered in RAM?
– Are statement, branch (or decision) and MC/DC coverage supported?
– Can multiple coverage types be captured in one execution?
– Can coverage data be shared across multiple test environments (e.g. can some coverage be captured during system testing and be combined with the coverage from unit and integration testing)?
– Can you step through the test execution using the coverage data to see the flow of control through your application without using a debugger?
– Can you get aggregate coverage for all test runs in a single report?
– Can the tool be qualified for DO-178B and for Medical Device intended use?

Regression Testing

There should be two basic goals for adopting a test tool. The primary goal is to save time testing. If you’ve read this far, we imagine that you agree with that! The secondary goal is to allow the created tests to be leveraged over the life cycle of the application. This means that the time and money invested in building tests should result in tests that are re-usable as the application changes over time and easy to manage under configuration control. The major things to evaluate in your candidate tool are what specific items need to be “saved” in order to run the same tests in the future, and how the re-running of tests is controlled.

Key Points

– What file or files need to be configuration managed to regression test?
– Does the tool have a complete and documented Command Line Interface (CLI)?
– Are these files plain text or binary? This affects your ability to use a diff utility to evaluate changes over time.
– Do the harness files generated by the tool have to be configuration managed?
– Is there integration with configuration management tools?
– Create a test for a unit, now change the name of a parameter, and re-build your test environment. How long does this take? Is it complicated?
– Does the tool support database technology and statistical graphs to allow trend analysis of test execution and code coverage over time?
– Can you test multiple baselines of code with the same set of test cases automatically?
– Is distributed testing supported to allow portions of the tests to be run on different physical machines to speed up testing?


Reporting

Most tools will provide similar reporting. Minimally, they should create an easy-to-understand report showing the inputs, expected outputs, actual outputs, and a comparison of the expected and actual values.

Key Points

– What output formats are supported? HTML? Text? CSV? XML?
– Is it simple to get both a high level (project-wide) report as well as a detailed report for a single function?
– Is the report content user configurable?
– Is the report format user configurable?

Integration with Other Tools

Regardless of the quality or usefulness of any particular tool, all tools need to operate in a multi-vendor environment. A lot of time and money has been spent by big companies buying little companies with the idea of offering “the tool” that will do everything for everybody. The interesting thing is that most often with these mega tool suites, the whole is a lot less than the sum of the parts. It seems that companies often take 4-5 pretty cool small tools and integrate them into one bulky and unusable tool.

Key Points

– Which tools does your candidate tool integrate with out-of-the-box, and can the end-user add integrations?

Additional Desirable Features for a Testing Tool

The previous sections all describe functionality that should be in any tool that is considered an automated test tool. In the next few sections we will list some desirable features, along with a rationale for the importance of the feature. These features may have varying levels of applicability to your particular project.

True Integration Testing / Multiple Units Under Test

Integration testing is an extension of unit testing. It is used to check interfaces between units and requires you to combine the units that make up some functional process. Many tools claim to support integration testing by linking the object code for real units with the test harness. This method builds multiple units into the test harness executable but provides no ability to stimulate the functions within these additional units. Ideally, you would be able to stimulate any function within any unit, in any order, within a single test case. Testing the interfaces between units will generally uncover a lot of hidden assumptions and bugs in the application. In fact, integration testing may be a good first step for those projects that have no history of unit testing.

Key Points

– Can I include multiple units in the test environment?
– Can I create complex test scenarios for these classes where we stimulate a sequence of functions across multiple units within one test case?
– Can I capture code coverage metrics for multiple units?

Dynamic Stubbing

Dynamic stubbing means that you can turn individual function stubs on and off dynamically. This allows you to create a test for a single function with all other functions stubbed (even if they exist in the same unit as the function under test). For very complicated code, this is a great feature and it makes testing much easier to implement.

Key Points

– Can stubs be chosen at the function level, or only the unit level?
– Can function stubs be turned on and off per test case?
– Are the function stubs automatically generated (see items in the previous section)?

Library and Application Level Thread Testing (System Testing)

One of the challenges of system testing is that the test stimulus provided to the fully integrated application may require a user pushing buttons, flipping switches, or typing at a console. If the application is embedded the inputs can be even more complicated to control. Suppose you could stimulate your fully integrated application at the function level, similar to how integration testing is done. This would allow you to build complex test scenarios that rely only on the API of the application.

Some of the more modern tools allow you to test this way. An additional benefit of this mode of testing is that you do not need the source code to test the application. You simply need the definition of the API (generally the header files). This methodology allows testers an automated and scriptable way to perform system testing.

Agile Testing and Test Driven Development (TDD)

Test Driven Development promises to bring testing into the development process earlier than ever before. Instead of writing application code first and then your unit tests as an afterthought, you build your tests before your application code. This is a popular new approach to development and enforces a test first and test often approach. Your automated tool should support this method of testing if you plan to use an Agile Development methodology.

Bi-directional Integration with Requirements Tools

If you care about associating requirements with test cases, then it is desirable for a test tool to integrate with a requirements management tool. If you are interested in this feature, it is important that the interface be bi-directional, so that when requirements are tagged to test cases, the test case information such as test name and pass / fail status can be pushed back to your requirements database. This will allow you to get a sense of the completeness of your requirements testing.

Tool Qualification

If you are operating in a regulated environment such as commercial aviation or Class III medical devices then you are obligated to “qualify” the development tools used to build and test your application.

The qualification involves documenting what the tool is supposed to do and tests that prove that the tool operates in accordance with those requirements. Ideally a vendor will have these materials off-the-shelf and a history of customers that have used the qualification data for your industry.

Key Points

– Does the tool vendor offer qualification materials that are produced for your exact target environment and tool chain?
– What projects have successfully used these materials?
– How are the materials licensed?
– How are the materials customized and approved for a particular project?
– If this is an FAA project, have the qualification materials been successfully used to certify to DO-178B Level A?
– If it is an FDA project, have the tools been qualified for “intended use”?


Conclusion

Hopefully this paper provides useful information that helps you navigate the offerings of test tool vendors. The relative importance of each of the items raised will be different for different projects. Our final suggestions are:

– Evaluate the candidate tools on code that is representative of the complexity of the code in your application
– Evaluate the candidate tools with the same tool chain that will be used for your project
– Talk to long-term customers of the vendor and ask them some of the questions raised in this paper
– Ask about the tool technical support team. Try them out by submitting some questions directly to their support (rather than to their sales representative)

Finally, remember that most every tool can somehow support the items mentioned in the “Key Points” sections. Your job is to evaluate how automated, easy to use, and complete the support is.

What is GIS Software?

GIS, or graphic information system software, can help businesses in many ways. The label GIS applies to a large group of different type of software that can help different businesses in different ways. GIS software can be defined as any software used develop, analyze, manage, or view data from a digital map. As the industry ages, it is becoming increasingly specialized, with various software companies developing niche products to meet the specific needs of certain types of businesses.

GIS software falls under one of a few different categories. Typically, the end users will deal with what is termed Desktop GIS, which describes the software used to access and modify any GIS information the company possesses. This is also usually the point of entry for any new data. Those that are simply viewing and utilizing information would use a GIS Viewer, while those whose jobs were to analyze the data would use a GIS Analyst. To manipulate or change the data, the user would utilize a GIS Editor. Alternatively, new data may be collected using a Mobile GIS, which would see duty out in the field. End users also may access GIS data via the Internet or a corporate Intranet using what is referred to as a Web GIS Client. This may take the form of a thin client, which only enables the user to view the data, or a thick client which is much the same as a Desktop GIS application. An application that accesses this information over a corporate network may be referred to as a Server GIS as well.

Behind the scenes, companies may either maintain their own servers and databases to store and organize GIS data, utilize another company’s data, or possibly hire an external agent to manage their information. Spatial database management systems are the applications that store the data, though they also may have some functionality similar to a Desktop GIS or Web GIS Client. Web map Servers are used to distribute the data over the Internet as maps, and these may either be downloaded or viewed in a browser or other client. The Web map Server is what the Web GIS Client connects to in order to retrieve the requested data.

Also available are various specialized additions to the GIS software, termed Libraries or Extensions, that provide extra functionality that may not be necessary for every end user. In this way, businesses are not forced to pay for bloated and confusing software that does not actually do anything to serve their needs. Instead, they can opt to purchase a streamlined core product and whatever libraries they may need. Because additional features that would not be used are not even present, libraries, when used correctly, can also dramatically cut down on the time and costs of training employees to use the GIS software.

Any business that makes use of, or could make use of, digital maps of any kind could benefit from GIS software. Many businesses already use something similar, but could see improved results with specialized GIS software tailored to their company's needs. With libraries and extensions, future needs can be addressed when and if they arise, rather than a company spending money on software and training just in case those features are used in the future.

Promoting Literacy in School Libraries in Sierra Leone


The heart of information literacy is contained within the definitions used to describe it. Traditionally, librarians have given 'library induction' or 'library skills training' in a limited role: library users need to know where the catalogue is, what the services are and, most importantly, where the enquiry desk is. This is not to diminish the value of traditional library induction, but libraries and information are changing. The provision of information through a library in its traditional form has gone through radical alterations. Already, in most library and information organisations, staff are adjusting their services to the provision of new media and new means of access to information. Thus librarians are now talking about social inclusion, opportunity, life-long learning, the information society and self-development.

A plethora of definitions of information literacy abound in books, journal papers and on the web. Some of these definitions centre on the activities of information literacy, i.e. identifying the skills needed for successful literate functioning. Other definitions are based on the perspective of the information literate person, i.e. trying to outline the concept of information literacy itself. Deriving a single definition is therefore a complex process of collecting together a set of ideas as to what might be, should be, or may be considered a part of information literacy. For example, Weber and Johnston (2002) defined information literacy as the adoption of appropriate information behaviour to obtain, through whatever channel or medium, information well fitted to information needs, together with a critical awareness of the importance of wise and ethical use of information in society. The American Library Association (2003) defined information literacy as a set of skills needed to find, retrieve, analyze, and use information, while CLIP (2004) defined it as knowing when and why one needs information, where to find it, and how to evaluate, use and communicate it in an ethical manner. Succinctly, these definitions imply that information literacy requires not only knowledge but also skills in:

• recognising when information is needed;
• knowing what resources are available;
• locating information;
• evaluating information;
• using information;
• using information ethically and responsibly;
• communicating or sharing information;
• managing information.

Given the variety of definitions and their implied explanations, information literacy can be understood as a cluster of abilities that an individual can employ to cope with, and to take advantage of, the unprecedented amount of information that surrounds us in our daily life and work.


Sierra Leone’s current educational system is composed of six years of formal primary education, three years of Junior Secondary School (JSS), three years of Senior Secondary School (SSS) and four years of tertiary education (6-3-3-4). (The Professor Gbamanja Commission’s Report of 2010 recommended an additional year for SSS, making it 6-3-4-4.) The official age for primary school pupils is between six and eleven years. All pupils at the end of class six are required to take and pass the National Primary School Examinations, designed by the West African Examinations Council (WAEC), to enable them to proceed to secondary school, which is divided into Junior Secondary School (JSS) and Senior Secondary School (SSS). Each part has a final examination: the Basic Education Certificate Examinations (BECE) for the JSS, and the West African Senior Secondary School Certificate Examinations (WASSCE) for the SSS, both conducted by WAEC. Successful candidates of the WASSCE are admitted to tertiary institutions based on the number of subjects passed (GoSL, 1995).

The curriculum of primary schools emphasizes communication competence and the ability to understand and manipulate numbers. At the JSS level, the curriculum is general and comprehensive, encompassing the whole range of knowledge, attitudes and skills in the cognitive, affective and psychomotor domains. The core subjects of English, Mathematics, Science and Social Studies are compulsory for all pupils. At the SSS level, the curriculum is determined by its nature (general or specialist) and its particular objectives. Pupils are offered a set of core (compulsory) subjects with optional subjects based on their specialization. Teaching is guided by the teaching syllabuses and influenced by the external examinations that pupils are required to take at the end of the 3- or 4-year course. English is the language of instruction (GoSL, 1995).

The country's two universities, three polytechnics, and two teacher training colleges are responsible for the training of teachers in Sierra Leone. The Universities Act of 2004 provides for private universities so that these institutions, too, can help in the training of teachers. Programs range from the Teacher Certificate offered by the teacher training colleges to the Masters in Education offered by the universities. Pre-service certification of teachers is the responsibility of the National Council for Technical, Vocational and Other Academic Awards (NCTVA). There is also an in-service teacher training program (the Distance Education Program) conducted in part to reduce the number of untrained and unqualified teachers, especially in rural areas.


In Sierra Leone, as in most parts of the developing world, literacy involves the ability to read, write and handle numbers: the ability to function effectively in life contexts. A literate person is associated with the possession of skills and knowledge and the capacity to apply them within his local environment. For instance, a literate person is expected to be able to apply chemical fertilizer to his crops, fill in a loan form, determine the proper dosage of medicine, calculate cash cropping costs and profits, glean information from a newspaper, make out a bank deposit slip and understand instructions and basic human rights.

Literacy is at the heart of the country's development goals and human rights (World Bank, 2007). Wherever practised, literacy activities are part of national and international strategies for improved education, human development and well-being. According to the 2013 United Nations Human Development Index, Sierra Leone has a literacy rate of 34%; implicitly, Sierra Leone is an oral society. Oral societies rely heavily on memory to transmit their values, laws, history, music and culture, whereas the written word allows infinite possibilities for transmission and therefore for active participation in communication. These possibilities are what make the goal of literacy crucial in society.

In academic parlance literacy hinges on the printed word. Most pupils are formally introduced to print when they encounter schoolbooks. School teachers in Sierra Leone continue to use textbooks in their teaching activities to convey content area information to pupils. It cannot be denied that pupils neither maximise their learning potential nor read at the levels necessary for understanding the type of materials teachers would like them to use. Thus the performance of pupils at internal and public examinations is disappointing. Further, pupils' continued queries in the library demonstrate that they not only lack basic awareness of the resources available in their school libraries but also do not understand the basic rudiments of how to source information and materials from these institutions. What is more worrisome is that pupils do not use appropriate reading skills and study strategies in learning. There is a dearth of reading culture in schools, and this situation cuts across the fabric of society. In view of the current support of the Ministry of Education, Science and Technology (MEST) for establishing literacy standards in schools, this situation has proved frustrating, as teachers do not know how better to help pupils achieve this goal. Thus they look to the school librarians to play a more proactive role.


In everyday situations school pupils are expected to be able to identify and seek the information they need. Providing a variety of reading and writing experiences using varied materials in the school library can help develop pupils' literacy (Roe, Stoodt-Hill and Burns, 2004). The mode of assessment in schools in Sierra Leone includes class exercises, tests, written and practical assignments, as well as written examinations to see pupils through to their next levels. These pupils, for example, need to read content books and supplementary materials in school for homework.

Pupils have even more literacy needs in their activities outside school. They need to read signs found in their communities, job applications, road maps and signs, labels on food and medicine, newspapers, public notices, bank statements, bills and many other functional materials. Failure to read and understand these materials can result in committing traffic violations, having unpleasant reactions to food or medicine, becoming lost, losing employment opportunities and missing desirable programs. Equally, pupils need to write letters to their relatives and loved ones, instructions to people who are doing things for them, notes to themselves about tasks to be completed, phone messages for colleagues and many other items. Mistakes in these activities can have negative effects on them. Good literacy skills are especially important to pupils who plan to pursue higher education. The job market in the country also calls for pupils to be literate: most jobs advertised these days require people who have completed their JSS, and workers need to be able to understand graphic aids, categorize information, and skim and scan to locate information. The nature of reading in the workplace generally involves locating information for immediate use and inferring information for problem solving. The reading and writing of a variety of documents like memos, manuals, letters, reports and instructions are necessary literacy skills in the workplace.


School libraries in Sierra Leone are perceived as an integral aspect of the country's educational system. These institutions bring together four major components of the school community: the materials, pupils, teachers and library staff. The main purpose of establishing these institutions in schools is to complement the teaching/learning process, if not to support the curriculum. This purpose is achieved in two ways: by providing pupils with the means of finding whatever information they need, and by developing in pupils the habit of using books both for information and for pleasure. Pupils need information to help them with the subjects they learn in school. The textbooks they use and the notes they take in class can be an excellent foundation, and may also be sufficient for revision purposes, but they are not enough to enable pupils to write good essays of their own or to carry out group projects. School libraries are expected to complement this effort and are therefore perceived as learning centres.

Pupils also need information on subjects not taught in school. School libraries are looked upon as places where pupils find information to help them in their school studies and personal development. Through these institutions, pupils' habit of using libraries for life-long education is developed, and school libraries can also be used to improve pupils' reading skills. In the school community both pupils and teachers use school libraries for leisure and recreational purposes and for career advancement. The culture of society is also transmitted through the use of school libraries. Because of the important role school libraries play in the country's educational system, they are organised in such a way that pupils as well as teachers can rely upon them for support in the teaching/learning process. Most of these institutions are managed by a full-time staff member, often supervised by a senior teacher. Staff use varied methods to promote their use, including user education.


A pre-requisite for the development of autonomous pupils through flexible, resource-based learning approaches is that pupils master a set of skills which gradually enable them to take control of their own learning. The current emphasis in teaching in schools in Sierra Leone has shifted from a “teacher-centred” to a “pupil-centred” approach, requiring pupils to “learn how to learn” for themselves, so that the integration of process skills into the design of the school curriculum becomes crucial (GoSL, 1995). It is in this area of “learning” or “information literacy” skills that one can most clearly see the inter-relationship between the school curriculum and the school library. For pupils to become independent users of information, it is vital that they are given the skills to learn how to find information, how to select what is relevant, and how to use it in the best way possible for their own particular needs, and that they take responsibility for their own learning. As information literates, pupils will be able to manage information skilfully and efficiently in a variety of contexts. They will be capable of weighing information carefully and wisely to determine its quality (Marcum, 2002). Pupils do recognise that having good information is central to meeting the opportunities and challenges of day-to-day living. They are also aware of the importance of researching across a variety of sources and formats to locate the best information to meet particular needs.

Literacy activities in schools in Sierra Leone are the responsibility of content area teachers, reading consultants and school librarians. Of these, the role of the school librarian is paramount. As a specialist, the school librarian is expected to provide assistance to pupils and teachers alike by locating materials in different subjects and at different reading levels, and by making available materials that can be used for motivation and background reading. The school librarian is also expected to provide pupils with instruction in locating strategies related to the library, such as doing online searches and skimming through printed reference materials. The librarian is expected to display printed materials within his purview and to write specialised bibliographies and lists of addresses on specific subjects at the request of teachers. He should be able to provide pupils with direct assistance in finding and using appropriate materials; recreational reading can be fostered by the librarian's book talks or attractive book displays on high-interest topics like HIV/AIDS, child abuse, child rights, human rights and poverty alleviation. In view of this, the fundamental qualities expected of a good school librarian include knowledge of his collection and how to access it; the ability to understand the needs of his users, especially pupils; the ability to communicate with pupils and adult users; and knowledge of information skills and how to use information.


Pupils’ success in school depends to a large extent upon their ability to access, evaluate and use information. Providing access to information and resources is a long-standing responsibility of the school librarian. The school librarian should provide the leadership and expertise necessary to ensure that the library becomes integral to the instructional program of the school. In school the librarian is the information specialist, teacher and instructional consultant. He is the interface responsible for guiding pupils and teachers through the complex information resources housed in his library (Lenox and Walker, 1993). He is relied upon to assist and guide numerous users seeking to use and understand the resources and services of the library. In this respect the school librarian should inculcate in these users such skills as manual and online searching for information, use of equipment, and the development of critical skills for the organization, evaluation and use of information and ideas as an integral part of the curriculum (Lonsdale, 2003). The school librarian should be aware of the range of available information retrieval systems, identify those most suitable to the needs of pupils, and provide expertise in helping them become knowledgeable, if not comfortable, in their use. Since no library is self-sufficient, the school librarian can network with information agencies, lending/renting materials and/or using electronic devices to transmit information (Tilke, 1998; 2002).

As an information specialist, the school librarian should be able to share his expertise with those who may wish to know what information sources and/or learning materials are available to support a program of work. Such consultation should be offered to the whole school through the curriculum development committee or to individual subject teachers. The school librarian should take the lead in developing pupils' information literacy skills by being involved in school curriculum planning and providing a base of resources to meet its needs. He should be aware of key educational initiatives and their impact on teaching and learning; he should be familiar with teaching methods and learning styles in school; above all, he should maintain an overview of information literacy programmes within the school (Herring, 1996; Kuhlthau, 2004).

Kuhlthau (2004) opined that information seeking is a primary activity of life and that pupils seek information to deepen and broaden their understanding of the world around them. When, therefore, information in school libraries is placed in the larger context of learning, pupils' perspective becomes an essential component of information provision. The school librarian should ensure that the skills, knowledge and attitudes concerning information access, use and communication are an integral part of the school curriculum. Information skills are crucial in the life-long learning process of pupils. In the short term, the school librarian provides a means of achieving learning objectives within the curriculum; in the long term, information skills have a direct impact on individual pupils' ability to deal effectively with a changing environment. Therefore the school librarian should work in concert with teachers and administrators to define the scope and sequence of the information skills relevant to the school curriculum and ensure their integration throughout the instructional programs (Tilke, 2002; Birks and Hunt, 2003). Pupils should be encouraged to realise their potential as informed citizens who think critically and solve problems. In view of the relationship between the curriculum and the school library, the librarian should serve on the curriculum committee, ensuring that information access skills are incorporated into subject areas. The school librarian's involvement in curriculum development will permit him to provide advice on the use of a variety of instructional strategies, such as learning centres and problem-solving software, effective in communicating content to pupils (Herring, 1996; Birks and Hunt, 2003).

Literacy can be actively developed as pupils gain access to specific resources and demonstrate understanding of their functionality along with effective searching skills. In this regard pupils should be given basic instruction in the library, its facilities and services, and their subsequent use. Interactive teaching methods aimed at information literacy education should be conducted for the benefit of pupils. Teaching methods could include a variety of aids, such as quizzes and worksheets of differing complexity levels, to actively engage pupils in learning library skills and improving their information literacy. Classes should be divided into small groups so that pupils can have hands-on experience using library resources. Where Internet services are available in the library, online tutorials should be provided. Post-session follow-up action will ensure that pupils consolidate their hands-on experience using library resources. Teaching methods should be constantly evaluated to identify flaws and improve on them.

Further, the school librarian should demonstrate willingness to support and value pupils in their use of the library through the provision of readers' guides, brochures, bookmarks and library handbooks/guides; computerization of the collection; helpful guiding throughout the library; and the regular holding of book exhibitions and book fairs. Since there are community radio stations in the country, the school librarian could buy air time to report library activities, resources and services. He can also communicate with pupils through updates in newspapers. Pupils could be encouraged to contribute articles on library development, book reviews and information about opening times and services. The school librarian could help pupils to form book and reading clubs, and organize book weeks and book talks using visiting speakers and renowned writers to address pupils. Classes could also be allowed to visit the library to facilitate use. More importantly, the school librarian should provide assistance to pupils in the use of technology to access information outside the library. He should offer pupils opportunities related to new technology, the use and production of varied media formats, and laws and policies regarding information. In order to build a relevant resource base for the school community, the librarian should constantly carry out needs assessments, comparing changing demands to available resources.

The Internet is a vital resource for promoting literacy in the school library. The school librarian should ensure that the library has a website that serves as a guide to relevant and authoritative sources and as a tool for learning, whereby pupils and teachers are given the opportunity to share ideas and solutions (Herring, 2003). Through the Internet, pupils can browse the library website to learn how to search and to develop information literacy skills. In order for pupils to tap up-to-date sources from the Net, the school librarian should constantly update the home page, on a daily basis if necessary. Simultaneously, the school librarian should make available to pupils and teachers sheets and guides to assist them in carrying out their own independent research. He should give hands-on training to users so that they can share ideas with others through the formation of “lunch time” or “after school” support groups. Such activities can help pupils to develop ideas and search for information for a class topic or assignment.

Even the location of the library has an impact on promoting literacy in school. The library should be centrally located, close to the maximum number of teaching areas. It should be able to seat at least ten per cent of the school's pupils at any given time and have a wide range of resources vital for the teaching and learning programs offered in school. The library should be characterised by good signage for the benefit of pupil and teacher users, with up-to-date displays that enhance the literacy skills of pupils and stimulate their intellectual curiosity.


Indeed, the promotion of literacy should be integral to the school curriculum, and the librarian should play a leading role in ensuring that the skills, knowledge and attitudes related to information access are inculcated in pupils and teachers alike as paramount users of the school library. But the attainment of this goal depends on a supportive school administration, always willing and ready to assist the library and its programs financially. To make the librarian more effective, he should be given capacity building to meet the challenges of changing times.


American Library Association (2003). ‘Introduction to information literacy.’
Birks, J. & Hunt, F. (2003). Hands-on information literacy activities. London: Neal-Schumann.
CLIP (2004).’Information Literacy: definition.’
GoSL (2010). Report of the Professor Gbamanja Commission of Inquiry into the Poor Performance of Pupils in the 2008 BECE and WASSCE Examinations (Unpublished).
___________(1995). New Education policy for Sierra Leone. Freetown: Department of Education.
Herring, James E. (1996). Teaching information skills in schools. London: Library Association Publishing.
__________________ (2003).The Internet and information skills: a guide for teachers and librarians. London: Facet Publishing.
Kuhlthau, C. C. (2004). Seeking meaning: a process approach to library and information services. 2nd ed. London: Libraries Unlimited.
Lenox, M. F. & Walker, M. L. (1993). ‘Information literacy in the education process.’ The Educational Forum, 52(2): 312-324.
Lonsdale, Michael (2003). Impact of school libraries on student achievement: a review of research. Camberwell: Australian Council of Educational Research.
Marcum, J. W. (2002). ‘Rethinking information literacy.’ Library Quarterly, 72: 1-26.
Roe, Betty D., Stoodt-Hill & Burns, Paul C. (2004). Secondary school literacy instruction: the content areas. Boston: Houghton Mifflin Company.
Tilke, A. (1998). On-the-job sourcebook for school librarians. London: Library Association.
_________ (2002). Managing your school library and information service: a practical handbook. London: Facet Publishing.
Weber, S. & Johnston, B. (2002). ‘Assessment in the information literate university.’ Workshop, 1st International Conference on IT and Information Literacy, 20th-22nd March 2002, Glasgow, Scotland. Parallel Session 3, Thursday 21st March 2002.
World Bank (2007). Education in Sierra Leone: present challenges, future opportunities. Washington, DC: World Bank.