Abstract: In this paper, a new algorithm for recognizing handwritten Hindi digits is proposed. The proposed algorithm combines the topological characteristics of the given digits with their statistical properties to extract a set of features used for digit classification. A total of 10,000 handwritten digits are used in the experiments: 1100 digits are used for training and another 5500 unseen digits are used for testing. The recognition rate reached 97.56%, with a substitution rate of 1.822% and a rejection rate of 0.618%.
Abstract: A new approach for form document representation using the maximal grid of its frameset is presented. Using image processing techniques, a scanned form is transformed into a frameset composed of a number of cells. The maximal grid is the grid that encompasses all the horizontal and vertical lines in the form and can be easily generated from the cell coordinates. The number of cells from the original frameset, included in each of the cells created by the maximal grid, is then calculated. Those numbers are added for each row and column generating an array representation for the frameset. A novel algorithm for similarity matching of document framesets based on their maximal grid representations is introduced. The algorithm is robust to image noise and to line breaks, which makes it applicable to poor quality scanned documents. The matching algorithm renders the similarity between two forms as a value between 0 and 1. Thus, it may be used to rank the forms in a database according to their similarity to a query form. Several experiments were performed in order to demonstrate the accuracy and the efficiency of the proposed approach.
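The row-and-column array representation described above can be illustrated with a small sketch (the function names and the rectangle-based frameset encoding are hypothetical simplifications, not the authors' implementation):

```python
# Sketch of the maximal-grid array representation (hypothetical names).
# A frameset is a list of cells, each an axis-aligned rectangle (x0, y0, x1, y1).

def maximal_grid(cells):
    """All distinct vertical and horizontal line coordinates in the frameset."""
    xs = sorted({x for x0, y0, x1, y1 in cells for x in (x0, x1)})
    ys = sorted({y for x0, y0, x1, y1 in cells for y in (y0, y1)})
    return xs, ys

def grid_array(cells):
    """Count original cells covering each maximal-grid cell, then sum the
    counts along rows and columns to obtain the array representation."""
    xs, ys = maximal_grid(cells)
    counts = [[0] * (len(xs) - 1) for _ in range(len(ys) - 1)]
    for i in range(len(ys) - 1):
        for j in range(len(xs) - 1):
            # An original cell "contains" this grid cell if it spans it fully.
            counts[i][j] = sum(1 for x0, y0, x1, y1 in cells
                               if x0 <= xs[j] and xs[j + 1] <= x1
                               and y0 <= ys[i] and ys[i + 1] <= y1)
    row_sums = [sum(row) for row in counts]
    col_sums = [sum(col) for col in zip(*counts)]
    return row_sums + col_sums
```

For a frameset of two side-by-side unit cells, `grid_array([(0, 0, 1, 1), (1, 0, 2, 1)])` yields `[2, 1, 1]`: one row containing two covered grid cells, and two columns containing one each.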
Abstract: In this paper, a new algorithm for recognizing partially occluded objects is introduced. The proposed algorithm first searches for three matched connected lines in both the occluded and the model objects; the lines to their left and right in both objects are then marked as matched as long as they have the same distance-ratio and angle relations to the last matched connected lines. The process is repeated until no further three matched connected lines are found. The ratio_test is then performed to detect scattered matched points and lines. The new algorithm is invariant to translations, rotations, reflections, and scale changes, and has a computational complexity of O(m·n).
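The invariance claimed above rests on comparing distance ratios and angles between consecutive line segments rather than raw coordinates, since both quantities survive translation, rotation, reflection, and uniform scaling. A minimal sketch of such a comparison (hypothetical helper names; this is not the paper's ratio_test):

```python
import math

def ratio_and_angle(p0, p1, p2):
    """Length ratio and unsigned angle between segments p0-p1 and p1-p2.
    Both are preserved by translation, rotation, reflection, and uniform
    scaling, so they can be compared across an occluded and a model object."""
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    ratio = n2 / n1
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    angle = math.acos(max(-1.0, min(1.0, cos_a)))  # clamp for rounding safety
    return ratio, angle

def segments_match(a, b, tol=1e-6):
    """True when two point triples agree in distance ratio and angle."""
    ra, aa = ratio_and_angle(*a)
    rb, ab = ratio_and_angle(*b)
    return abs(ra - rb) < tol and abs(aa - ab) < tol
```

For example, the triple `((0, 0), (1, 0), (1, 1))` matches `((0, 0), (0, 2), (-2, 2))`, a rotated and doubled copy, because both have a length ratio of 1 and a 90° turn.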
Abstract: Application areas such as medical imaging or satellite imaging often store large collections of similar images. Lossless compression techniques are usually needed in such critical applications. Previous research has introduced the centroid method, which benefits from the inter-image redundancy, i.e., the set redundancy. In this paper, a new algorithm is proposed as an extension of the centroid method. Experimental results with two sets of CT and MRI brain images demonstrate the efficiency and superiority of the proposed algorithm with respect to compression ratio.
Abstract: The need for lossless data compression in medical imaging is becoming essential. Medical image databases often store large collections of similar images. Traditional compression techniques have focused on exploiting the redundancy present in individual images, ignoring the set redundancy, which is the inter-image redundancy. Previous research has introduced the centroid method, which benefits from the set redundancy. In this paper, a new algorithm is proposed as an extension of the centroid method combined with the quadtree structure widely used to represent binary images. Experimental results with two sets of CT and MRI brain images demonstrate the efficiency and superiority of the proposed algorithm with respect to compression ratio.
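The centroid idea underlying the two abstracts above can be sketched as follows: the per-pixel average (centroid) of the image set is stored once, and each image is replaced by its difference from that centroid; for similar images these residuals are small and typically compress better losslessly. This is a simplified illustration with hypothetical names, not the authors' algorithm, and the subsequent quadtree coding of the residuals is omitted:

```python
def centroid(images):
    """Per-pixel rounded mean of a set of equally sized images
    (each image flattened to a list of integer pixel values)."""
    n = len(images)
    return [round(sum(px) / n) for px in zip(*images)]

def to_residuals(images):
    """Exploit set redundancy: replace each image by its difference
    from the centroid, which is stored once for the whole set."""
    c = centroid(images)
    return c, [[p - cp for p, cp in zip(img, c)] for img in images]

def from_residuals(c, residuals):
    """Lossless reconstruction: add the centroid back to every residual."""
    return [[r + cp for r, cp in zip(res, c)] for res in residuals]
```

For three similar two-pixel "images" `[[10, 20], [12, 22], [14, 24]]`, the centroid is `[12, 22]` and the residuals `[-2, -2], [0, 0], [2, 2]` reconstruct the originals exactly.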
Abstract: The computation of optical flow is an important part of a diverse range of applications. However, existing optical flow algorithms tend to be either very accurate but slow, or very fast but highly inaccurate; none combines both accuracy and efficiency. Among these algorithms is the phase-based Fleet and Jepson algorithm. Although it has been shown to produce relatively accurate results, it cannot be exploited in many real-life applications due to its relatively long run time. The goal of this paper is to combine the accuracy of the phase-based optical flow algorithm by Fleet and Jepson with the parallelism and high-performance capabilities of FPGAs, providing an accurate and efficient optical flow algorithm for FPGA-based applications.
Abstract: Words have always been important carriers of information, and they convey many aspects of the images in which they are embedded. Despite the many approaches that have been proposed to separate text from images, very few of them handle Arabic script. This paper presents a technique to extract Arabic words from a variety of colored images with complex backgrounds. To accomplish the task, we have chosen the Connected Components (CC) approach. It starts with the breakdown of the RGB image into tiny homogeneous regions using the watershed transform, followed by region merging. The resulting CCs are aggregated into blocks, some of which are candidate words. Each block is then condensed into a single vector holding the values of its features. The features generally describe the geometrical nature of the Arabic script and include a set of invariant moments. The final decision on whether to classify a block as an Arabic word is left to a support vector machine (SVM) before the words are passed to OCR software. The system showed promising results, achieving an accuracy rate of ≈67%.