Tool for Calculating Retrieval Model Effectiveness with Standard Retrieval Models

September 2nd, 2011

The software can calculate the effectiveness of the following standard retrieval models.

(1) Normalized-TFIDF.

(2) TFIDF.

(3) OKAPI-BM25.

(4) Language Modeling (Bayesian Smoothing).

(5) Language Modeling (Jelinek Mercer).

(6) Language Modeling (TwoStage Smoothing).

(7) Language Modeling (Absolute Discounting).

(8) SMART Retrieval Model.

 

(A) Input File Format

The software requires the following three input files in order to run:

1. fullText99.txt

2. ItemsetProcessing99.txt

3. Settings.txt

 

1. fullText99.txt

This file contains the document vectors. The 99 in the file name is the unique ID of the vector file. Each vector holds the term information of one document, with weights stored in term-frequency format. Each term entry consists of two fields: the term's numeric ID and its frequency in the document. Every vector ends with the End Header Tag (-17 -17).

Example:

1 12 2 4 3 2 4 7 -17 -17
8 14 9 7 21 7 35 6 -17 -17
1 8 9 7 35 6 21 6 -17 -17

The above example contains three document vectors. Each term ID is immediately followed by that term's frequency in the document, and -17 -17 marks the End Header Tag of each vector.
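
As an illustration only, the following Python sketch shows one way the vector format above could be parsed; the helper name read_vectors is ours and not part of the tool.

# Minimal sketch: parse fullText99.txt into a list of dicts mapping
# term ID -> term frequency, assuming the format described above.
def read_vectors(path):
    vectors = []
    with open(path) as f:
        for line in f:
            tokens = [int(t) for t in line.split()]
            vector = {}
            # Walk the (term ID, frequency) pairs until the -17 -17 end tag.
            for term_id, freq in zip(tokens[0::2], tokens[1::2]):
                if term_id == -17:
                    break
                vector[term_id] = freq
            vectors.append(vector)
    return vectors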

 

2. ItemsetProcessing99.txt

This file contains the total number of documents and the highest term ID used in fullText99.txt. For the above example, the content of this file should look like this:

Example:

3
35

In the above, 3 indicates that there are three vectors in the fullText99.txt file, while 35 is the highest numeric term ID appearing in fullText99.txt.
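
Continuing the sketch, and assuming the read_vectors helper above, ItemsetProcessing99.txt could be derived from fullText99.txt as follows.

# Sketch: write the vector count and the highest term ID, as described above.
def write_itemset_processing(vectors, path="ItemsetProcessing99.txt"):
    max_term_id = max(term_id for vec in vectors for term_id in vec)
    with open(path, "w") as f:
        f.write(f"{len(vectors)}\n{max_term_id}\n")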

 

3. Settings.txt

This file contains the settings used when running the code. It consists of the following nine fields; a sample file is sketched after the list.

1. Query Topic File Name

2. Total number of topics in Query Topic File Name.

3. Relevance Judgements of Query Topic.

4. Recall rank cutoff level.

5. Precision rank cutoff level.

6. Just write "not required".

7. Just write "not required".

8. Just write "not required".

9. Just write "not required".
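
For illustration, a Settings.txt for the example files in this post might look like the following, assuming one field per line; the file names topics99.txt and qrels99.txt and the cutoff levels 50 and 10 are placeholders, not names or values required by the tool.

topics99.txt
3
qrels99.txt
50
10
not required
not required
not required
not required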

3.1. Query Topics File

This file contains the topics used for calculating the effectiveness of the retrieval models. Each line represents one topic and consists of two parts. The first part is the topic text, represented by term numeric IDs. The second part is the End Header Tag, represented by -17.

Example:

1 2 3 -17
9 21 35 -17
1 9 35 21 -17

The above example file contains three queries. -17 represents the End Header Tag of each query, while the other numbers are the query term IDs.
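
A brief Python sketch (again, not part of the tool) of how such a topics file could be read:

# Sketch: return a list of topics, each a list of term IDs, dropping the -17 tag.
def read_topics(path):
    topics = []
    with open(path) as f:
        for line in f:
            term_ids = [int(t) for t in line.split()]
            topics.append([t for t in term_ids if t != -17])
    return topics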

 

3.2. Relevance Judgement of Topic Queries

This file contains the relevance judgements of the topic queries. Each line represents a judged document of a topic query and gives the ID of a vector in the fullText99.txt file. A line containing -17 indicates the end of the relevance judgements for the given topic.

Example:

6
10
7523
578
-17
98
42
68
-17

The above example file contains the relevance judgements of two topic queries. The numeric IDs are vector IDs in fullText99.txt, and -17 marks the end of each topic's judgements. The first topic has four relevance judgements (6, 10, 7523, 578) and the second topic has three (98, 42, 68).
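
A similar sketch reads the relevance-judgement file into one list of relevant vector IDs per topic, treating each -17 line as the end of a topic.

# Sketch: split the judgement file on -17 lines, one list per topic.
def read_qrels(path):
    per_topic, current = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            if int(line) == -17:      # end of this topic's judgements
                per_topic.append(current)
                current = []
            else:
                current.append(int(line))
    return per_topic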

 

(B) Effectiveness Measures

The effectiveness of the retrieval models is calculated with the following measures; a sketch of how they can be computed follows the list.

1. Recall

2. Precision

3. Mean Average Precision

4. b-pref
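
For reference, the sketch below (our own, not part of the tool) shows how recall, precision, and average precision at a rank cutoff could be computed for one topic; mean average precision is the mean of average precision over all topics. b-pref additionally needs judged non-relevant documents, so it is left out here.

# Sketch: ranking is a list of vector IDs ordered by decreasing score,
# relevant is the set of relevant vector IDs for the topic, k is the cutoff.
def precision_at(ranking, relevant, k):
    return sum(1 for d in ranking[:k] if d in relevant) / k

def recall_at(ranking, relevant, k):
    return sum(1 for d in ranking[:k] if d in relevant) / len(relevant)

def average_precision(ranking, relevant):
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant)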

 

(C) Program Arguments

The software accepts the following command-line arguments.

1. Directory location where fullText99.txt, ItemsetProcessing99.txt, and Settings.txt are saved.

2. Unique ID of vector file. For example, if your vector file name is fullText99.txt, then just put 99.

3. Not required. Just type 0.

4. The total number of vectors in the fullText99.txt.

5. Not required. Just type 0.

6. Retrieval Model ID. The available IDs are:

    (1) Normalized-TFIDF.

    (2) OKAPI-BM25.

    (3) Language Modeling (Bayesian Smoothing).

    (4) Language Modeling (Jelinek Mercer).

    (5) Language Modeling (Absolute Discounting).

    (6) Language Modeling (TwoStage Smoothing).

    (7) TFIDF.

    (8) SMART Retrieval Model.

7. Parameter value of the retrieval model. For Normalized-TFIDF, TFIDF, and the SMART Retrieval Model, just type 1, since these retrieval models do not need a parameter value. For OKAPI-BM25, select a value of b between 0 and 1. For Language Modeling (Bayesian Smoothing), select a value of µ between 50 and 10000. For Language Modeling (Jelinek Mercer), Language Modeling (Absolute Discounting), and Language Modeling (TwoStage Smoothing), select a value of λ between 0 and 1.

8. Not required. Just type 0.

 

Download Code:

Example:

./code //my_doc_collection// 99 0 75000 0 2 .75 0

Here //my_doc_collection// is the directory containing the input files, 99 is the vector file ID, 75000 is the total number of vectors, 2 selects OKAPI-BM25, and .75 is its b parameter value; the remaining 0 arguments are the "not required" fields.
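
To sweep a parameter value, the program can be driven from a small script; the sketch below assumes the executable and argument order shown in the example above and varies the OKAPI-BM25 b parameter.

import subprocess

# Sketch: run OKAPI-BM25 (model ID 2) for several values of its b parameter,
# reusing the directory, vector file ID, and vector count from the example.
for b in (0.25, 0.5, 0.75, 1.0):
    subprocess.run(
        ["./code", "//my_doc_collection//", "99", "0", "75000", "0", "2", str(b), "0"],
        check=True,
    )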