Abstract: Given a list W = [w1,…,wn] of n positive integer symbol weights and a list L = [l1,…,ln] of n corresponding integer codeword lengths, it is required to find the new list L when a new value x is inserted into, or an existing value is deleted from, the list of weights W. The presented algorithm uses the given information about the weights and their corresponding levels to perform the required update; no other knowledge of the original Huffman tree is assumed. Instead of rebuilding the Huffman tree, the new algorithm redistributes the weights among the levels to obtain the new Huffman code. By using the information of the weights and their levels efficiently, the updated Huffman code can in many special cases be generated with lower complexity than reconstructing the Huffman tree from scratch. In this paper, we present an updating algorithm that requires linear time in many practical cases rather than the O(n log n) needed for reconstructing the Huffman tree. We also give a practical O(n log n) implementation of our algorithms.
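The redistribution algorithm itself is not reproduced in the abstract. As a point of reference, the following is a minimal sketch in Python, with illustrative names not taken from the paper, of the O(n log n) from-scratch baseline the update algorithm competes with: computing the length list L from the weight list W by a standard heap-based Huffman construction.

import heapq

def huffman_lengths(weights):
    """Codeword lengths for positive integer weights via a standard
    O(n log n) heap-based Huffman construction -- the from-scratch
    baseline that the update algorithm above avoids in many cases."""
    n = len(weights)
    if n == 1:
        return [1]                   # lone symbol: one-bit code by convention
    parent = [0] * (2 * n - 1)       # internal nodes take indices n..2n-2
    heap = [(w, i) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    nxt = n
    while len(heap) > 1:
        w1, a = heapq.heappop(heap)  # two smallest subtree weights
        w2, b = heapq.heappop(heap)
        parent[a] = parent[b] = nxt
        heapq.heappush(heap, (w1 + w2, nxt))
        nxt += 1
    # Parents have larger indices than their children, so one reverse
    # sweep computes each node's depth; a leaf's depth (its level in
    # the Huffman tree) is its codeword length.
    depth = [0] * (2 * n - 1)        # root (index 2n-2) has depth 0
    for i in range(2 * n - 3, -1, -1):
        depth[i] = depth[parent[i]] + 1
    return depth[:n]

For example, huffman_lengths([1, 1, 2, 2]) yields [2, 2, 2, 2], one optimal assignment of levels for these weights.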
Abstract: Segment trees are among the important data structures in computer science. A segment tree is, in principle, a static structure; that is, we cannot insert a new segment once the structure is built. In this paper we present an enhanced model of this data structure. The suggested model is a parallel dynamic one that provides the flexibility of adding new segments to the structure efficiently and improves the performance of the query operation as well. The analysis of this parallel model on the CREW PRAM using O(log n) processors shows an improvement in the time complexity of the initial construction of the tree from O(n log n) to O(n), where n is the number of segments. The query operation also improves from O(log n + k) to O(log n/log log n + k), where k is the number of reported segments. Finally, the new segment insertion feature is achieved with a time complexity of O(log n + m), where m is the maximum number of segments covering the same point. This proves to be very efficient, especially when handling large amounts of data.
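The CREW PRAM construction and the parallel insertion procedure are beyond a short sketch. For orientation, here is a minimal sequential sketch in Python, with illustrative names and assuming half-open segments [lo, hi) with lo < hi, of the classical structure the paper enhances: a segment tree built in O(n log n) that answers stabbing queries (report all k segments covering a point) in O(log n + k).

import bisect

class SegmentTree:
    """Minimal sequential segment tree for stabbing queries over
    half-open segments [lo, hi).  This is the static baseline the
    paper's parallel dynamic model improves on."""

    def __init__(self, segments):
        self.segments = segments
        # Sorted endpoints define the elementary intervals
        # [xs[j], xs[j+1]) that the leaves represent.
        self.xs = sorted({x for seg in segments for x in seg})
        leaves = max(1, len(self.xs) - 1)
        self.size = 1
        while self.size < leaves:
            self.size *= 2
        # store[node] lists the segments whose canonical cover
        # includes that node's range of elementary intervals.
        self.store = [[] for _ in range(2 * self.size)]
        for idx, (lo, hi) in enumerate(segments):
            l = bisect.bisect_left(self.xs, lo)
            r = bisect.bisect_left(self.xs, hi)
            self._insert(1, 0, self.size, l, r, idx)

    def _insert(self, node, nlo, nhi, l, r, idx):
        if r <= nlo or nhi <= l:            # no overlap with this node
            return
        if l <= nlo and nhi <= r:           # canonical node: store here
            self.store[node].append(idx)
            return
        mid = (nlo + nhi) // 2
        self._insert(2 * node, nlo, mid, l, r, idx)
        self._insert(2 * node + 1, mid, nhi, l, r, idx)

    def stab(self, x):
        """Report all segments [lo, hi) with lo <= x < hi."""
        out = []
        if not self.xs or x < self.xs[0] or x >= self.xs[-1]:
            return out
        j = bisect.bisect_right(self.xs, x) - 1   # leaf interval of x
        node, nlo, nhi = 1, 0, self.size
        while True:                               # root-to-leaf walk
            out.extend(self.segments[i] for i in self.store[node])
            if nhi - nlo == 1:
                return out
            mid = (nlo + nhi) // 2
            if j < mid:
                node, nhi = 2 * node, mid
            else:
                node, nlo = 2 * node + 1, mid

For example, SegmentTree([(1, 5), (2, 8), (6, 9)]).stab(3) reports (1, 5) and (2, 8). Each segment is stored at O(log n) canonical nodes, which is what makes both the storage and the query bound logarithmic.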