We present a holistic technique for recognition of text in cursive scripts using printed Urdu ligatures as a case study. Convolutional neural networks (CNNs) are trained on high-frequency ligature clusters for feature extraction and classification. A query ligature presented to the system is first divided into primary and secondary ligatures that are separately recognized and later associated in a postprocessing step to recognize the complete ligature. Experiments are carried out using transfer learning on pretrained networks as well as by training a network from scratch. The technique is evaluated on ligatures extracted from two standard databases of printed Urdu text, Urdu printed text image (UPTI) and Center of Language Engineering (CLE), as well as by combining the ligatures of the two datasets. The system realizes high recognition rates of 97.81% and 89.20% on the UPTI and CLE databases, respectively.
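As a rough illustration of the transfer-learning setup described above, the sketch below fine-tunes a pretrained CNN on ligature-class image crops. The backbone choice (ResNet-18), class count, image size and directory layout are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: transfer learning for ligature classification.
# All dataset paths and the class count are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_LIGATURE_CLASSES = 2000  # assumed number of high-frequency ligature clusters

# ImageNet-style preprocessing expected by the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # ligature crops are usually grayscale/binary
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder of ligature crops, one subfolder per ligature class.
train_set = datasets.ImageFolder("ligatures/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

# Pretrained backbone with its final classifier replaced for the ligature classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_LIGATURE_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```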
Segmentation of constituent shapes is a vital yet challenging step in any automated sketch analysis and interpretation system. Conventional stroke-level sketch segmentation techniques perform well on a single shape; their performance, however, degrades on cluttered multi-object samples. Object-level techniques, on the other hand, rely on proximity-based perceptual grouping for shapes with disjoint constituent parts, which requires high-level semantic knowledge. To overcome these challenges, we propose the use of state-of-the-art convolutional object detectors for the detection and segmentation of hand-drawn shapes from offline samples of a neuropsychological drawing test, the Bender Gestalt Test (BGT). Experiments with different combinations of convolutional meta-architectures and feature extractors show that such networks can be successfully employed for sketch segmentation even with limited training data and resources. Among all network combinations evaluated in this study, Faster R-CNN with a ResNet-101 feature extractor outperforms the others, achieving precision, recall and F-measure values of 92.93%, 95.24% and 94.07%, respectively.
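The sketch below illustrates how such a convolutional detector can be set up for this task with torchvision. It uses the readily available Faster R-CNN with a ResNet-50 FPN backbone as a stand-in for the paper's ResNet-101 variant; the class count and the image path are hypothetical.

```python
# Hedged sketch: off-the-shelf Faster R-CNN adapted for hand-drawn shape detection.
import torch
import torchvision
from torchvision.io import read_image, ImageReadMode
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.transforms.functional import convert_image_dtype

NUM_BGT_CLASSES = 10  # assumed: 9 BGT figures + background

# Start from COCO-pretrained weights and swap the box predictor head so the
# detector can be fine-tuned on the hand-drawn shape classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_BGT_CLASSES)

# Inference on a scanned test sheet (path is hypothetical).
model.eval()
image = convert_image_dtype(read_image("bgt_sample.png", mode=ImageReadMode.RGB), torch.float)
with torch.no_grad():
    detections = model([image])[0]

# Keep confident detections; each remaining box localizes one drawn figure.
keep = detections["scores"] > 0.5
boxes = detections["boxes"][keep]
labels = detections["labels"][keep]
```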
Shape drawing tests are widely used by practitioners to assess the neuropsychological condition of patients. Most of these neuropsychological figure drawing tests comprise a set of figures drawn on a single sheet of paper, which are inspected for the presence or absence of certain properties and scored accordingly. An automated scoring system for such a test requires the extraction and identification of a particular shape from the set of figures as a vital preprocessing step. This paper presents a system for effective segmentation and recognition of shapes for a well-known clinical test, the Bender Gestalt Test (BGT). Segmentation is based on connected component analysis, morphological processing and spatial clustering, while recognition is carried out using shape context matching. Experiments on offline images of hand-drawn samples contributed by different subjects yield promising segmentation and classification results, validating the ideas put forward in this study.
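The following sketch outlines one plausible form of that segmentation pipeline with OpenCV and SciPy: binarization, morphological cleanup, connected component analysis, and spatial clustering of component centroids into figures. Kernel sizes, thresholds and the clustering distance are illustrative guesses rather than the paper's settings.

```python
# Hedged sketch: connected components + morphology + spatial clustering
# to segment individual figures from a scanned drawing sheet.
import cv2
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

image = cv2.imread("bgt_sheet.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path

# Binarize (strokes become foreground) and close small gaps in the strokes.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cleaned = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Connected component analysis yields one centroid per stroke blob.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
blobs = [(stats[i], centroids[i]) for i in range(1, n)
         if stats[i, cv2.CC_STAT_AREA] > 30]  # drop speckle noise

# Spatially cluster blob centroids so disjoint parts of one figure
# (e.g., rows of dots) are grouped into a single shape.
points = np.array([c for _, c in blobs])
clusters = fcluster(linkage(points, method="single"), t=80.0, criterion="distance")

# Each cluster's bounding box is a candidate figure passed on to recognition
# (shape context matching in the paper).
for cid in np.unique(clusters):
    members = [blobs[i][0] for i in np.flatnonzero(clusters == cid)]
    x0 = min(s[cv2.CC_STAT_LEFT] for s in members)
    y0 = min(s[cv2.CC_STAT_TOP] for s in members)
    x1 = max(s[cv2.CC_STAT_LEFT] + s[cv2.CC_STAT_WIDTH] for s in members)
    y1 = max(s[cv2.CC_STAT_TOP] + s[cv2.CC_STAT_HEIGHT] for s in members)
```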
In this paper we present a novel method for multilingual artificial text extraction from still images. We propose a lexicon-independent, block-based technique that employs a combination of spatial transforms and texture-, edge- and gradient-based operations to detect unconstrained textual regions in still images. Finally, morphological and geometrical constraints are applied for fine localization of the textual content. The proposed method was evaluated on two standard and three custom-developed datasets comprising a wide variety of images with artificial text occurrences in five different languages, namely English, Urdu, Arabic, Chinese and Hindi.
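A minimal sketch of this kind of block-based, gradient-driven localization is given below using OpenCV. The thresholds and geometric constraints are assumed values for illustration, not the parameters used in the paper.

```python
# Hedged sketch: gradient magnitude -> morphological closing -> geometric filtering
# to propose candidate text blocks in a still image.
import cv2
import numpy as np

image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical still image

# Gradient magnitude highlights the dense, high-contrast edges typical of artificial text.
gx = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=3)
magnitude = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Threshold and close horizontally so individual characters merge into text blocks.
_, edges = cv2.threshold(magnitude, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
blocks = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Simple geometric constraints prune non-text regions (too small, too tall, too sparse).
candidates = []
contours, _ = cv2.findContours(blocks, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    fill = cv2.contourArea(contour) / float(w * h)
    if w > 2 * h and h > 8 and fill > 0.4:  # text lines tend to be wide, short and dense
        candidates.append((x, y, w, h))
```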