HKU Scholars Hub (http://hub.hku.hk): the DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
- MEGAHIT v1.0: A fast and scalable metagenome assembler driven by advanced methodologies and community practices (http://hdl.handle.net/10722/230205)
Authors: LI, D; Luo, R; Liu, CM; Leung, HCM; Ting, HF; Sadakane, KUNIHIKO; Yamashita, HIROSHI; Lam, TW
Abstract: The study of metagenomics has benefited greatly from low-cost, high-throughput sequencing technologies, yet the tremendous amount of data generated makes analyses such as de novo assembly consume excessive computational resources. In late 2014 we released MEGAHIT v0.1 (together with a brief note, Li et al. (2015) [1]), the first NGS metagenome assembler that can assemble genome sequences from metagenomic datasets of hundreds of gigabase-pairs (bp) in a time- and memory-efficient manner on a single server. The core of MEGAHIT is an efficient parallel algorithm for constructing succinct de Bruijn graphs (SdBG), implemented on a graphics processing unit (GPU). The software has been well received by the assembly community, and there is interest in adapting the algorithms to integrate popular assembly practices so as to improve assembly quality, as well as in speeding up the software using better CPU-based algorithms (instead of GPU). In this paper we first describe the details of the core algorithms in MEGAHIT v0.1, and then present the new modules that upgrade MEGAHIT to version 1.0, which gives better assembly quality, runs faster, and uses less memory. For the Iowa Prairie Soil dataset (252 Gbp after quality trimming), the assembly quality of MEGAHIT v1.0, compared with v0.1, improves significantly: a 36% increase in assembly size and 23% in N50. More interestingly, MEGAHIT v1.0 is no slower than before, even when running with the extra modules. This is primarily due to a new CPU-based algorithm for SdBG construction that is faster and requires less memory. Using CPU only, MEGAHIT v1.0 can assemble the Iowa Prairie Soil sample in about 43 h, reducing the running time of v0.1 by at least 25% and memory usage by up to 50%. With its smaller memory footprint, MEGAHIT v1.0 can process even larger datasets.
The Kansas Prairie Soil sample (484 Gbp), the largest publicly available dataset, can now be assembled using no more than 500 GB of memory in 7.5 days. The assemblies of these datasets (and other large metagenomic datasets), as well as the software, are available at the website https://hku-bal.github.io/megabox.
Fri, 01 Jan 2016 00:00:00 GMT
- Approximating frequent items in asynchronous data stream over a sliding window (http://hdl.handle.net/10722/93223)
Authors: Chan, HL; Lam, TW; Lee, LK; Ting, HF
Abstract: In an asynchronous data stream, the data items may be out of order with respect to their original timestamps. This paper gives a space-efficient data structure to maintain such a data stream so that it can approximate the frequent item set over a sliding time window with sufficient accuracy. Prior to our work, Cormode et al. [3] had the best solution, with space complexity O((1/ε) log W log(εB/log W) min{log W, 1/ε} log U), where ε is the given error bound, W and B are parameters of the sliding window, and U is the number of all possible item names. Our solution reduces the space to O((1/ε) log W log(εB/log W)). We also unify the study of synchronous and asynchronous data streams by quantifying the delay of the data items. When the delay is zero, our solution matches the space complexity of the best solution for synchronous data streams [8]. © 2010 Springer-Verlag Berlin Heidelberg.
Description: LNCS v. 5893 is Proceedings of the 7th International Workshop, WAOA 2009; WAOA Session 7
Thu, 01 Jan 2009 00:00:00 GMT
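To make the sliding-window problem above concrete, here is a minimal exact baseline in Python (illustrative only: it uses linear space, whereas the paper's contribution is a far more space-efficient approximate data structure; the class and method names are my own). It stores all arrivals, tolerates out-of-order timestamps, and reports the items meeting a frequency threshold within the window.

```python
from collections import Counter

class ExactWindowCounter:
    """Exact, linear-space baseline for frequent items over a sliding
    time window; items may arrive out of timestamp order (asynchronous)."""

    def __init__(self, window):
        self.window = window
        self.arrivals = []  # (timestamp, item) pairs, in arrival order

    def add(self, timestamp, item):
        self.arrivals.append((timestamp, item))

    def frequent(self, now, threshold):
        """Items occurring at least `threshold` times in [now-window+1, now]."""
        lo = now - self.window + 1
        counts = Counter(it for ts, it in self.arrivals if lo <= ts <= now)
        return {it for it, c in counts.items() if c >= threshold}
```

The contrast with the paper is the point: this baseline keeps every item, while the cited data structures answer the same queries approximately in polylogarithmic space.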
- Escaping a grid by edge-disjoint paths (http://hdl.handle.net/10722/45612)
Authors: Chan, WunTat; Chin, Francis YL; Ting, HingFung
Abstract: We study the edge-disjoint escape problem in grids: given a set of n sources in a two-dimensional grid, the problem is to connect all sources to the grid boundary using a set of n edge-disjoint paths. Departing from the conventional approach, which reduces the problem to a network flow problem, we solve it by ensuring that no rectangle in the grid contains more sources than outlets, a necessary and sufficient condition for the existence of a solution. Based on this condition, we give a greedy algorithm that finds the paths in O(n²) time, which is faster than previous approaches. The problem has applications in point-to-point delivery, VLSI reconfiguration and package routing.
Sat, 01 Jan 2000 00:00:00 GMT
- Requirement-based data cube schema design (http://hdl.handle.net/10722/93414)
Authors: Cheung, David W; Zhou, Bo; Kao, Ben; Lu, Hongjun; Lam, Tak Wah; Ting, Hing Fung
Abstract: On-line analytical processing (OLAP) requires efficient processing of complex decision support queries over very large databases. It is well accepted that pre-computed data cubes can help reduce the response time of such queries dramatically. A very important design issue of an efficient OLAP system is therefore the choice of the right data cubes to materialize. We call this problem the data cube schema design problem. In this paper we show that the problem of finding an optimal data cube schema for an OLAP system with limited memory is NP-hard. As a more computationally efficient alternative, we propose a greedy approximation algorithm cMP and its variants. Algorithm cMP consists of two phases. In the first phase, an initial schema consisting of all the cubes required to efficiently answer the user queries is formed. In the second phase, cubes in the initial schema are selectively merged to satisfy the memory constraint. We show that cMP is very effective in pruning the search space for an optimal schema. This leads to a highly efficient algorithm. We report the efficiency and the effectiveness of cMP via an empirical study using the TPC-D benchmark. Our results show that the data cube schemas generated by cMP enable very efficient OLAP query processing.
Fri, 01 Jan 1999 00:00:00 GMT
- The Greedier the Better: An Efficient Algorithm for Approximating Maximum Independent Set (http://hdl.handle.net/10722/89029)
Authors: Lau, HY; Ting, HF
Abstract: The classical greedy heuristic for approximating maximum independent set is simple and efficient. It achieves a performance ratio of (Δ + 2)/3, where Δ is the maximum node degree of the input graph. All known algorithms for the problem with better performance ratios are much more complicated and less efficient. In this paper, we propose a natural extension of the greedy heuristic. It is as simple and as efficient as the classical greedy heuristic. By a careful analysis of the structure of the intermediate graphs manipulated by our heuristic, we prove that the performance ratio improves to (Δ + 3)/3.25.
Mon, 01 Jan 2001 00:00:00 GMT
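The classical greedy heuristic the abstract refers to, repeatedly taking a vertex of minimum current degree and deleting it together with its neighbours, can be sketched as follows (a plain implementation of the classical heuristic, not of the paper's extension; the function name is my own):

```python
def greedy_mis(adj):
    """Minimum-degree greedy heuristic for independent set.
    adj: dict mapping each vertex to the set of its neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    independent = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # min-degree vertex
        independent.add(v)
        removed = adj.pop(v) | {v}               # delete v and its neighbours
        for u in list(adj):
            if u in removed:
                adj.pop(u, None)
            else:
                adj[u] -= removed
    return independent
```

On a path a-b-c it picks both endpoints; on a star it picks all the leaves, matching the intuition that low-degree vertices are cheap to take.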
- Improved online algorithms for 1-space bounded 2-dimensional bin packing (http://hdl.handle.net/10722/125685)
Authors: Zhang, Y; Chen, J; Chin, FYL; Han, X; Ting, HF; Tsin, YH
Abstract: In this paper, we study 1-space bounded 2-dimensional bin packing and square packing. A sequence of rectangular items (respectively, square items) arrives over time and must be packed into square bins of size 1×1; 90°-rotation of an item is allowed. When an item arrives, we must pack it into an active bin immediately, without any knowledge of future items. The objective is to minimize the total number of bins used to pack all the items in the sequence. In the 1-space bounded variant, there is only one active bin for packing the current item. If the active bin does not have enough space to pack the item, it must be closed and a new active bin opened. Our contributions are as follows. For 1-space bounded 2-dimensional bin packing, we propose an online packing strategy with competitive ratio 5.155, improving on the previous 8.84-competitive bound; the lower bound on the competitive ratio is also improved from 2.5 to 3. Furthermore, we study 1-space bounded square packing, a special case of the bin packing problem. We give a 4.5-competitive packing algorithm, and prove that the lower bound on the competitive ratio is at least 8/3. © 2010 Springer-Verlag.
Fri, 01 Jan 2010 00:00:00 GMT
- New results on on-demand broadcasting with deadline via job scheduling with cancellation (http://hdl.handle.net/10722/93388)
Authors: Chan, WT; Lam, TW; Ting, HF; Wong, PWH
Abstract: This paper studies the on-demand broadcasting problem with deadlines. We give the first general upper bound and improve existing lower bounds on the competitive ratio of the problem. The novelty of our work is the introduction of a new job scheduling problem that allows cancellation. We prove that the broadcasting problem can be reduced to this scheduling problem. This reduction frees us from the complication of the broadcasting model and allows us to work on a conceptually simpler model for upper bound results. © Springer-Verlag Berlin Heidelberg 2004.
Thu, 01 Jan 2004 00:00:00 GMT
- A new upper bound 2.5545 on 2D online bin packing (http://hdl.handle.net/10722/152469)
Authors: Han, X; Chin, FYL; Ting, HF; Zhang, GC; Zhang, Y
Abstract: 2D Online Bin Packing is a fundamental problem in computer science, and the determination of its asymptotic competitive ratio has received considerable research attention. In a long series of papers, the lower bound on this ratio was improved from 1.808 and 1.856 to 1.907, and the upper bound reduced from 3.25, 3.0625, 2.8596 and 2.7834 to 2.66013. In this article, we lower the upper-bound record to 2.5545. Our idea for the improvement is as follows. In 2002, Seiden and van Stee [Seiden and van Stee 2003] proposed an elegant algorithm called H⊗C, which combines the Harmonic algorithm H and the Improved Harmonic algorithm C, for the two-dimensional online bin packing problem, and proved that the algorithm has an asymptotic competitive ratio of at most 2.66013. Since the best known online algorithm for one-dimensional bin packing is the Super Harmonic algorithm [Seiden 2002], a natural question to ask is: could a better upper bound be achieved by using the Super Harmonic algorithm instead of the Improved Harmonic algorithm? However, as mentioned in Seiden and van Stee [2003], the previous analysis framework does not work. In this article, we give a positive answer to this question. A new upper bound of 2.5545 is obtained for 2-dimensional online bin packing. The main idea is to develop new weighting functions for the Super Harmonic algorithm and new techniques to bound the total weight in a rectangular bin. © 2011 ACM.
Sat, 01 Jan 2011 00:00:00 GMT
- A decomposition theorem for maximum weight bipartite matchings (http://hdl.handle.net/10722/43656)
Authors: Kao, MY; Lam, TW; Sung, WK; Ting, HF
Abstract: Let G be a bipartite graph with positive integer weights on the edges and without isolated nodes. Let n, N, and W be the node count, the largest edge weight, and the total weight of G. Let k(x, y) be log x / log(x²/y). We present a new decomposition theorem for maximum weight bipartite matchings and use it to design an O(√n·W/k(n, W/N))-time algorithm for computing a maximum weight matching of G. This algorithm bridges a long-standing gap between the best known time complexity of computing a maximum weight matching and that of computing a maximum cardinality matching. Given G and a maximum weight matching of G, we can further compute the weight of a maximum weight matching of G − {u} for all nodes u in O(W) time. © 2001 Society for Industrial and Applied Mathematics.
Mon, 01 Jan 2001 00:00:00 GMT
- Constrained pairwise and center-star sequences alignment problems (http://hdl.handle.net/10722/217295)
Authors: Zhang, Y; Chan, J; Chin, FYL; Ting, HF; Ye, D; Zhang, F; Shi, J
Fri, 01 Jan 2016 00:00:00 GMT
- An almost linear time and O(n log n + e) messages distributed algorithm for minimum-weight spanning trees (http://hdl.handle.net/10722/151783)
Authors: Chin, F; Ting, HF
Abstract: A distributed algorithm is presented that constructs the minimum-weight spanning tree of an undirected connected graph with distinct edge weights and distinct node identities. Initially, each node knows only the weight of each of its adjacent edges. When the algorithm terminates, each node knows which of its adjacent edges are edges of the tree. For a graph with n nodes and e edges, the total number of messages required by the algorithm is at most 5n log n + 2e, and each message contains at most one edge weight or one node identity plus 3 + log n bits. The time complexity is at most O(nG(n)) time units, and a worst case requiring O(nG(n)) time is also possible.
Tue, 01 Jan 1985 00:00:00 GMT
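For contrast with the distributed setting of the record above, the object being computed can be shown with a standard centralised algorithm. This is Kruskal's algorithm with path-compressing union-find, not the paper's message-passing algorithm:

```python
def kruskal(n, edges):
    """Centralised reference computation of the minimum-weight spanning
    tree. edges: (weight, u, v) triples with distinct weights; nodes are
    0..n-1. Returns the set of chosen tree edges."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = set()
    for w, u, v in sorted(edges):          # scan edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # joins two components: keep it
            parent[ru] = rv
            tree.add((w, u, v))
    return tree
```

The distributed algorithm produces the same tree, but with each node knowing only its own adjacent edges and learning its tree edges through message exchange.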
- Offline and online algorithms for single-minded selling problem (http://hdl.handle.net/10722/294277)
Authors: Zhang, Y; Chin, FYL; Poon, SH; Ting, HF; Xu, D; Yu, D
Abstract: Given a seller with k types of items and n single-minded buyers, i.e., each buyer is interested only in a particular bundle of items, to maximize the revenue the seller must assign some amount of bundles to each buyer with respect to the buyer's accepted price. Each buyer is associated with a value function giving the accepted unit bundle price the buyer is willing to pay for x bundles. In this paper, we assume that bundles can be sold fractionally. The single-minded item selling problem is proved to be NP-hard. Moreover, we give an approximation algorithm for it. For the online version, i.e., buyers come one by one and a decision must be made immediately on the arrival of each buyer, a competitive algorithm is given, where h is the highest unit item price among all buyers.
Wed, 01 Jan 2020 00:00:00 GMT
- Improved On-line Stream Merging: from a Restricted to a General Setting (http://hdl.handle.net/10722/93472)
Authors: Chan, WT; Lam, TW; Ting, HF; Wong, WH
Mon, 01 Jan 2001 00:00:00 GMT
- RENET: A Deep Learning Approach for Extracting Gene-Disease Associations from Literature (http://hdl.handle.net/10722/274108)
Authors: Wu, Y; Luo, R; Leung, HCM; Ting, HF; Lam, TW
Abstract: Over one million new biomedical articles are published every year. Efficient and accurate text-mining tools are urgently needed to automatically extract knowledge from these articles to support research and genetic testing. In particular, the extraction of gene-disease associations is the most widely studied task. However, existing text-mining tools for extracting gene-disease associations have limited capacity, as each sentence is considered separately. Our experiments show that the best existing tools, such as BeFree and DTMiner, achieve a precision of 48% and a recall rate of 78% at most. In this study, we designed and implemented a deep learning approach, named RENET, which considers the correlation between the sentences in an article to extract gene-disease associations. Our method significantly improves the precision and recall rate, to 85.2% and 81.8%, respectively. The source code of RENET is available at https://bitbucket.org/alexwuhkucs/gda-extraction/src/master/.
Tue, 01 Jan 2019 00:00:00 GMT
- A Decomposition Theorem For Maximum Weight Bipartite Matchings with Applications To Evolutionary Trees (http://hdl.handle.net/10722/93115)
Authors: Kao, MY; Lam, TW; Ting, HF; Sung, WK
Fri, 01 Jan 1999 00:00:00 GMT
- Computing the Unrooted Maximum Agreement Subtree in Sub-quadratic Time (http://hdl.handle.net/10722/88990)
Authors: Lam, TW; Sung, WK; Ting, HF
Mon, 01 Jan 1996 00:00:00 GMT
- Online bin packing problem with buffer and bounded size revisited (http://hdl.handle.net/10722/261201)
Authors: Zhang, MH; Han, X; Lan, Y; Ting, HF
Abstract: In this paper we study online bin packing with a buffer and bounded item size: items have sizes within (α, 1/2], where 0 ≤ α < 1/2, and there is a buffer of constant size. Each time a new item is given, it can be stored in the buffer temporarily or packed into some bin; once packed into a bin, it cannot be repacked. When the input ends, the items remaining in the buffer must be packed into bins too. In our setting, at most one bin is open at any time, i.e., only that bin can accept new items, and all other bins are closed. This problem was first studied by Zheng et al. (J Combin Optim 30(2):360–369, 2015), who proposed a 1.4444-competitive algorithm and a lower bound of 1.3333 on the competitive ratio. We improve the lower bound from 1.3333 to 1.4230, and the upper bound from 1.4444 to 1.4243.
Sun, 01 Jan 2017 00:00:00 GMT
- An O(n log n)-Time Algorithm for the Maximum Constrained Agreement Subtree Problem for Binary Trees (http://hdl.handle.net/10722/93273)
Authors: Peng, Z; Ting, HF
Thu, 01 Jan 2004 00:00:00 GMT
- A unified analysis of hot video schedulers (http://hdl.handle.net/10722/93127)
Authors: Chan, WT; Lam, TW; Ting, HF; Wong, WH
Abstract: In this paper we consider the notion of relative competitive analysis, which is a simple generalization of the conventional competitive analysis and extra-resource analysis for on-line algorithms. We apply this analysis to study on-line schedulers for stream merging in two different video-on-demand (VOD) systems, which are based on two common approaches, namely, piggybacking and skimming. Our new analysis, in its simplest form, reveals a 3-competitive algorithm for stream merging based on skimming as well as piggybacking. This improves all previous results [4, 8]. We also show how to obtain guarantee on the performance improvement based on adding extra resources, and more interestingly, we provide a unified methodology to compare piggybacking and skimming. We believe that our result gives a clue to system designers for choosing desirable configurations.
Tue, 01 Jan 2002 00:00:00 GMT
- Allowing Mismatches In Anchors For Whole Genome Alignment (http://hdl.handle.net/10722/88991)
Authors: Yiu, SM; Chan, PY; Lam, TW; Sung, WK; Ting, HF; Wong, PWH
Mon, 01 Jan 2007 00:00:00 GMT
- Approximated distributed minimum vertex cover algorithms for bounded degree graphs (http://hdl.handle.net/10722/129585)
Authors: Zhang, Y; Chin, FYL; Ting, HF
Abstract: In this paper, two distributed algorithms for the minimum vertex cover problem are given. In the unweighted case, we propose a 2.5-approximation algorithm with round complexity O(Δ), where Δ is the maximum degree of G, improving the previous 3-approximation result with the same round complexity O(Δ). For the weighted case, we give a 4-approximation algorithm with round complexity O(Δ). © 2010 Springer-Verlag Berlin Heidelberg.
Description: LNCS v. 6196 is the Conference Proceedings of COCOON 2010
Fri, 01 Jan 2010 00:00:00 GMT
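As a point of reference for the approximation ratios above, the textbook centralised 2-approximation for unweighted vertex cover takes both endpoints of a greedily built maximal matching. This is the classical sequential baseline, not either of the paper's distributed algorithms:

```python
def matching_vertex_cover(edges):
    """Classical 2-approximation for unweighted vertex cover: scan the
    edges, and whenever an edge is uncovered, add both its endpoints."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered
            cover.update((u, v))               # take both endpoints
    return cover
```

Both endpoints of each matched edge go in, and any optimal cover must contain at least one endpoint of each matched edge, which gives the factor of 2.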
- Cavity matchings, label compressions, and unrooted evolutionary trees (http://hdl.handle.net/10722/43657)
Authors: Kao, MY; Lam, TW; Sung, WK; Ting, HF
Abstract: We present an algorithm for computing a maximum agreement subtree of two unrooted evolutionary trees. It takes O(n1.5 log n) time for trees with unbounded degrees, matching the best known time complexity for the rooted case. Our algorithm allows the input trees to be mixed trees, i.e., trees that may contain directed and undirected edges at the same time. Our algorithm adopts a recursive strategy exploiting a technique called label compression. The backbone of this technique is an algorithm that computes the maximum weight matchings over many subgraphs of a bipartite graph as fast as it takes to compute a single matching. © 2000 Society for Industrial and Applied Mathematics.
Sat, 01 Jan 2000 00:00:00 GMT
- Sleep with guilt and work faster to minimize flow plus energy (http://hdl.handle.net/10722/93390)
Authors: Lam, TW; Lee, LK; Ting, HF; To, IKK; Wong, PWH
Abstract: In this paper we extend the study of flow-energy scheduling to a model that allows both sleep management and speed scaling. Our main result is a sleep management algorithm called IdleLonger, which works online for a processor with one or multiple levels of sleep states. The design of IdleLonger is interesting; among other features, it may force the processor to idle or even sleep even though new jobs have already arrived. IdleLonger works in both clairvoyant and non-clairvoyant settings. We show how to adapt two existing speed scaling algorithms, AJC [15] (clairvoyant) and LAPS [9] (non-clairvoyant), to the new model. The adapted algorithms, when coupled with IdleLonger, are shown to be O(1)-competitive clairvoyant and non-clairvoyant algorithms for minimizing flow plus energy on a processor that allows sleep management and speed scaling. The above results are based on the traditional model with no limit on processor speed. If the processor has a maximum speed, the problem becomes more difficult, as the processor, once overslept, cannot rely on unlimited extra speed to catch up on the delay. Nevertheless, we are able to enhance IdleLonger and AJC so that they remain O(1)-competitive for flow plus energy under the bounded speed model. Non-clairvoyant scheduling in the bounded speed model is left as an open problem. © 2009 Springer Berlin Heidelberg.
Description: LNCS v. 5555 is Proceedings of the 36th International Colloquium, ICALP 2009; Session A3 - Scheduling
Thu, 01 Jan 2009 00:00:00 GMT
- An efficient online algorithm for square detection (http://hdl.handle.net/10722/93467)
Authors: Leung, HF; Peng, Z; Ting, HF
Abstract: A square is a string of characters that can be divided into two identical substrings. The problem of detecting squares in a string finds applications in many areas such as bioinformatics and data compression. In this paper, we give the first efficient online algorithm for the problem. Given any input string, our algorithm reports the first square in the string using O(n log² n) time, where n is the position in the string where the square ends. This time complexity is only a factor of O(log n) larger than that of an optimal offline algorithm. © Springer-Verlag Berlin Heidelberg 2004.
Thu, 01 Jan 2004 00:00:00 GMT
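The problem statement can be illustrated with a brute-force online detector (an O(n³) illustration of the task only, nothing like the paper's O(n log² n) algorithm; the function name is my own): after reading each character, it checks every even-length suffix for being a square and reports the position where the first square ends.

```python
def first_square_end(stream):
    """Naive online square detector. Reads characters one at a time and
    returns the 1-based position where the first square (a substring of
    the form XX) ends, or None if the stream contains no square."""
    s = []
    for ch in stream:
        s.append(ch)
        n = len(s)
        for half in range(1, n // 2 + 1):
            # does the suffix of length 2*half split into two equal halves?
            if s[n - 2 * half : n - half] == s[n - half : n]:
                return n
    return None
```

For "abcbc" the first square is "bcbc", detected as soon as the fifth character arrives; that online behaviour, reporting at the moment the square completes, is exactly what the paper's algorithm achieves efficiently.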
- Dynamic offline conflict-free coloring for unit disks (http://hdl.handle.net/10722/61203)
Authors: Chan, JWT; Chin, FYL; Hong, X; Ting, HF
Abstract: A conflict-free coloring for a given set of disks is a coloring of the disks such that for any point p on the plane there is a disk, among those covering p, whose color differs from that of all the other disks that cover p. In the dynamic offline setting, a sequence of disks is given; we must color the disks one by one according to the order of the sequence and maintain the conflict-free property at all times for the disks colored so far. This paper focuses on unit disks, i.e., disks with radius one. We give an algorithm that colors a sequence of n unit disks in the dynamic offline setting using O(log n) colors. The algorithm is asymptotically optimal because Ω(log n) colors are necessary to color some set of n unit disks for any value of n [9]. © 2009 Springer Berlin Heidelberg.
Description: LNCS v. 5426 is Proceedings of the 6th International Workshop, WAOA 2008; WAOA – Audimax A
Thu, 01 Jan 2009 00:00:00 GMT
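The conflict-free property defined above can be checked at a single point with a few lines of Python (a checker for the definition only, not a coloring algorithm; the function name and disk representation are my own):

```python
from collections import Counter

def conflict_free_at(point, disks):
    """Check the conflict-free property at one point: among the unit
    disks covering it, some colour must appear on exactly one of them.
    disks: list of (cx, cy, colour) triples, each a disk of radius 1."""
    px, py = point
    covering = [c for cx, cy, c in disks
                if (px - cx) ** 2 + (py - cy) ** 2 <= 1.0]
    if not covering:
        return True   # nothing covers the point, so nothing to violate
    return any(k == 1 for k in Counter(covering).values())
```

A full conflict-free coloring is one for which this check succeeds at every point of the plane; the paper shows O(log n) colors suffice for any sequence of n unit disks in the dynamic offline setting.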
- Time and space efficient algorithms for constrained sequence alignment (http://hdl.handle.net/10722/93048)
Authors: Peng, ZS; Ting, HF
Abstract: In this paper, we study the constrained sequence alignment problem, a generalization of the classical sequence alignment problem with the additional constraint that some characters in the alignment must be positioned at the same columns. The problem finds important applications in bioinformatics. Our major result is an O(ℓn²)-time and O(ℓn)-space algorithm for constructing an optimal constrained alignment of two sequences, where n is the length of the longer sequence and ℓ is the length of the constraint. Our algorithm matches the best known time complexity and reduces the best known space complexity by a factor of n. We also apply our technique to design time- and space-efficient heuristic and approximation algorithms for aligning multiple sequences. © Springer-Verlag Berlin Heidelberg 2005.
Sat, 01 Jan 2005 00:00:00 GMT
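The classical, unconstrained problem that this work generalises is global sequence alignment, solvable by the standard Needleman-Wunsch dynamic program in O(n²) time and, as below, O(n) space by keeping only two rows. This sketch shows the base problem only, with illustrative scoring parameters, not the paper's constrained algorithm:

```python
def alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of strings a and b via the classical
    Needleman-Wunsch recurrence, using two DP rows (O(min-row) space)."""
    prev = [j * gap for j in range(len(b) + 1)]   # row for empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i * gap]                            # aligning a[:i] with ""
        for j, cb in enumerate(b, 1):
            sub = prev[j - 1] + (match if ca == cb else mismatch)
            cur.append(max(sub,                    # substitute/match
                           prev[j] + gap,          # gap in b
                           cur[j - 1] + gap))      # gap in a
        prev = cur
    return prev[-1]
```

The constrained variant additionally forces chosen character pairs into the same columns; the paper achieves that in O(ℓn²) time and O(ℓn) space on top of this kind of recurrence.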
- Dictionary matching with uneven gaps (http://hdl.handle.net/10722/214759)
Authors: Hon, WK; Lam, TW; Shah, R; Thankachan, SV; Ting, HF; Yang, Y
Abstract: A gap-pattern is a sequence of sub-patterns separated by bounded sequences of don't-care characters (called gaps). A one-gap-pattern is a pattern of the form P[α,β]Q, where P and Q are strings drawn from alphabet Σ and [α,β] gives lower and upper bounds on the gap size g, the number of don't-care characters between P and Q. The dictionary matching problem with one gap is to index a collection of one-gap-patterns so as to identify all substrings of a query text T that match any one-gap-pattern in the collection. Let D = {P_i[α_i,β_i]Q_i | 1 ≤ i ≤ d} be such a collection of d patterns, and let n = Σ_{i=1}^{d} (|P_i| + |Q_i|). Let γ and λ be two parameters defined on D as follows: γ = |{j | j ∈ [α_i,β_i], 1 ≤ i ≤ d}| and λ = |{α_i, β_i | 1 ≤ i ≤ d}|. Specifically, γ is the total number of gap lengths possible over all patterns in D, and λ is the number of distinct gap boundaries across all the patterns. We present a linear-space solution (i.e., O(n) words) for answering a dictionary matching query on D in time O(|T|·γ·log λ·log d + occ), where occ is the output size. The query time can be improved to O(|T|·γ + occ) using O(n + d^{1+ε}) space, where ε > 0 is an arbitrarily small constant. Additionally, we show a compact/succinct space index offering a space-time trade-off. In the special case where the parameters α_i and β_i are the same for all patterns, our results improve upon the work of Amir et al. [CPM, 2014]. We also explore several related cases where gaps can occur at arbitrary locations and where the gap can be induced in the text rather than the pattern.
Description: LNCS v. 9133 entitled: Combinatorial Pattern Matching: 26th Annual Symposium, CPM 2015 ... Proceedings
Thu, 01 Jan 2015 00:00:00 GMT
- Variable-size rectangle covering (http://hdl.handle.net/10722/61165)
Authors: Chin, FYL; Ting, HF; Zhang, Y
Abstract: In wireless communication networks, optimal use of directional antennas is very important. The directional antenna coverage (DAC) problem is to cover all clients with the smallest number of directional antennas. In this paper, we consider the variable-size rectangle covering (VSRC) problem, a transformation of the DAC problem. There are n points above the base line y = 0 of the two-dimensional plane. The target is to cover all these points with the minimum number of rectangles, such that the dimension of each rectangle is not fixed but its area is at most 1, and the bottom edge of each rectangle lies on the base line y = 0. In some applications, the number of rectangles covering any position in the plane is bounded, so we also consider the variant in which each position is covered by no more than two rectangles. We give two polynomial-time algorithms for finding the optimal covering. Further, we propose two 2-approximation algorithms with lower running times (O(n log n) and O(n)). © 2009 Springer Berlin Heidelberg.
Description: LNCS v. 5573 is conference proceedings of the 3rd International Conference, COCOA 2009
Thu, 01 Jan 2009 00:00:00 GMT
- Approximating frequent items in asynchronous data stream over a sliding window (http://hdl.handle.net/10722/152499)
Authors: Ting, HF; Lee, LK; Chan, HL; Lam, TW
Abstract: In an asynchronous data stream, the data items may be out of order with respect to their original timestamps. This paper studies the space complexity required by a data structure to maintain such a data stream so that it can approximate the set of frequent items over a sliding time window with sufficient accuracy. Prior to our work, the best solution was given by Cormode et al. [1], who gave an O((1/ε) log W log(εB/log W) min{log W, 1/ε} log |U|)-space data structure that can approximate the frequent items within an ε error bound, where W and B are parameters of the sliding window, and U is the set of all possible item names. We give a more space-efficient data structure that requires only O((1/ε) log W log(εB/log W) log log W) space. © 2011 by the authors.
Sat, 01 Jan 2011 00:00:00 GMT
- Non-clairvoyant scheduling for weighted flow time and energy on speed bounded processors (http://hdl.handle.net/10722/140791)
Authors: Chan, SH; Lam, TW; Lee, LK; Ting, HF; Zhang, P
Abstract: We consider the online scheduling problem of minimizing total weighted flow time plus energy in the dynamic speed scaling model, where a processor can scale its speed dynamically between 0 and some maximum speed T. In the past few years this problem has been studied extensively under the clairvoyant setting, which requires the size of a job to be known at release time [1, 4, 5, 8, 15, 18–20]. For the non-clairvoyant setting, despite its practical importance, progress has been relatively limited. Only recently has an online algorithm, LAPS, been shown to be O(1)-competitive for minimizing (unweighted) flow time plus energy in the infinite speed model (i.e., T = ∞) [11, 12]. This paper makes two contributions to non-clairvoyant scheduling. First, we resolve the open problem of whether the unweighted result for LAPS can be extended to the more realistic model with bounded maximum speed. Second, we show that another non-clairvoyant algorithm, WRR, is O(1)-competitive when weighted flow time is concerned. Note that WRR is not as efficient as LAPS for scheduling unweighted jobs, as WRR has a much bigger constant hidden in its competitive ratio.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/10722/1407912011-01-01T00:00:00Z
- Online tree node assignment with resource augmentationhttp://hdl.handle.net/10722/152467Title: Online tree node assignment with resource augmentation
Authors: Chan, JWT; Chin, FYL; Ting, HF; Zhang, Y
Abstract: Given a complete binary tree of height h, the online tree node assignment problem is to serve a sequence of assignment/release requests, where an assignment request, with an integer parameter 0≤i≤h, is served by assigning a (tree) node of level (or height) i and a release request is served by releasing a specified assigned node. The node assignments have to guarantee that no node is simultaneously assigned to two unreleased assignment requests, and every leaf-to-root path of the tree contains at most one assigned node. With reassignments of assigned nodes allowed, the target of the problem is to minimize the number of assignments/reassignments, i.e., the cost, to serve the whole sequence of requests. This online tree node assignment problem is fundamental to many applications, including OVSF code assignment in WCDMA networks, buddy memory allocation and hypercube subcube allocation. Most of the previous results focus on how to achieve good performance when the same amount of resource is given to both the online and the optimal offline algorithms, i.e., one tree. In this paper, we focus on resource augmentation, where the online algorithm is allowed to use more trees than the optimal offline algorithm. By using different approaches, we give (1) a 1-competitive online algorithm, which uses (h+1)/2 trees and is optimal because (h+1)/2 trees are required by any online algorithm to match the cost of the optimal offline algorithm with one tree; (2) a 2-competitive algorithm with 3h/8+2 trees; (3) an amortized 8/3-competitive algorithm with 11/4 trees; (4) a general amortized (4/3+α)-competitive algorithm with (11/4+4/(3α)) trees, for 0<α≤4/3. © 2010 Springer Science+Business Media, LLC.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/10722/1524672011-01-01T00:00:00Z
- Online problems for frequency assignment and OVSF code assignment in wireless communication networkshttp://hdl.handle.net/10722/127351Title: Online problems for frequency assignment and OVSF code assignment in wireless communication networks
Authors: Chan, JWT; Chin, FYL; Ting, HF; Zhang, Y
Thu, 01 Jan 2009 00:00:00 GMThttp://hdl.handle.net/10722/1273512009-01-01T00:00:00Z
- An optimal algorithm for global termination detection in shared-memory asynchronous multiprocessor systemshttp://hdl.handle.net/10722/43635Title: An optimal algorithm for global termination detection in shared-memory asynchronous multiprocessor systems
Authors: Leung, HF; Ting, HF
Abstract: In the literature, the problem of global termination detection in parallel systems is usually solved by message passing. In shared-memory systems, this problem can also be solved by using exclusively accessible variables with locking mechanisms. In this paper, we present an algorithm that solves the problem of global termination detection in shared-memory asynchronous multiprocessor systems without using locking. We assume a reasonable computation model in which concurrent reading does not require locking and concurrent writing different values without locking results in an arbitrary one of the values being actually written. For a system of n processors, the algorithm allocates a working space of 2n + 1 bits. The worst case time complexity of the algorithm is n + 2√n + 1, which we prove is the lower bound under a reasonable model of computation. © 1997 IEEE.
Wed, 01 Jan 1997 00:00:00 GMThttp://hdl.handle.net/10722/436351997-01-01T00:00:00Z
- Approximation algorithms for the partial assignment problemhttp://hdl.handle.net/10722/294278Title: Approximation algorithms for the partial assignment problem
Authors: Gao, G; Ning, L; Ting, HF; Xu, Y; Zhang, Y; Zou, Y
Abstract: In the partial assignment problem, a seller has an item set M = {i_1, i_2, ..., i_m}; the amount of each item is exactly one. There are n buyers N = {b_1, b_2, ..., b_n}; each buyer b_p has a preferred bundle B_p ⊆ M and a value function f_p(·). Assume that each item must be sold integrally and thus can be assigned to at most one buyer. In previous works, buyers are often considered to be single-minded, i.e., a buyer is assigned either the whole preferred bundle or nothing. In this paper, we consider a more general and realistic model in which a buyer can be partially satisfied, i.e., buyer b_p can derive some positive value if the seller assigns a subset of b_p's preferred bundle. However, there may be an exponential number of subsets; to tackle this, a value oracle is used: the value f_p(S_p), for buyer b_p and S_p ⊆ B_p, is obtained by querying the oracle. The objective is to assign items to buyers such that the total value is maximized, i.e., max Σ_{p=1}^{n} f_p(S_p). We first show that in this model, maximizing the total value is NP-hard. We then propose provably efficient approximation algorithms for general and submodular value functions respectively. If the value functions are non-negative, monotone and normalized, a 1/√m-approximation can be achieved. If the value functions are submodular, the total value can be approximated within a factor of (1 − 1/e). © 2020 Elsevier B.V. All rights reserved.
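The (1 − 1/e) factor for monotone submodular objectives is the guarantee of the classic greedy argument of Nemhauser, Wolsey and Fisher. As an illustrative sketch (not the paper's assignment algorithm, which additionally handles the per-buyer bundle constraints), greedy maximization of a monotone submodular set function under a cardinality constraint looks like this:

```python
def greedy_submodular(items, k, f):
    """Greedily maximize a non-negative monotone submodular set function f
    subject to |S| <= k.  Repeatedly adds the element with the largest
    marginal gain; achieves a (1 - 1/e)-approximation for such f."""
    chosen = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for x in items - chosen:
            gain = f(chosen | {x}) - f(chosen)  # marginal value of x
            if gain > best_gain:
                best, best_gain = x, gain
        if best is None:  # no remaining element adds value
            break
        chosen.add(best)
    return chosen
```

For example, with a coverage function (value = number of distinct elements covered), the greedy rule picks the set covering the most new elements at each step; coverage functions are monotone and submodular, so the guarantee applies.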
Wed, 01 Jan 2020 00:00:00 GMThttp://hdl.handle.net/10722/2942782020-01-01T00:00:00Z
- MegaPath: Low-Similarity Pathogen Detection from Metagenomic NGS Datahttp://hdl.handle.net/10722/274110Title: MegaPath: Low-Similarity Pathogen Detection from Metagenomic NGS Data
Authors: Li, D; Leung, HCM; Wong, CK; Zhang, Y; Law, WC; Xin, Y; Luo, R; Ting, HF; Lam, TW
Abstract: Detecting the pathogen, the causal bacterium or virus, of an infection such as pneumonia is an important step in diagnosis. Traditional methods of pathogen detection are time-consuming, as an infectious disease may be caused by a large range of pathogens, which must be checked one by one. This causes delay in treatment or even mistreatment of patients. Unbiased next-generation sequencing (NGS) can detect DNA fragments (reads) of all species in a metagenomic sample containing a mixture of different species. These NGS reads can be classified into different taxa by comparing them with a collection of reference genome sequences, and pathogens can be detected if some reads match them. In clinical diagnosis, it is important that a classifier detect a significant number of reads supporting the potential pathogens and report as few false classifications as possible, so as to give the pathogen a high abundance rank. Otherwise, the pathogen cannot be distinguished from background noise, and it will take doctors a long time to go through a long list of candidates to verify its existence. Existing metagenomic classifiers do not perform well at detecting low-similarity pathogens, i.e., pathogens whose genomes are not similar to the reference. This is because most classifiers detect pathogens by constructing a characteristic profile (e.g., of k-mers) for each reference and assign reads to species by comparing them with the profiles. When the characteristic profile does not match the genome of a low-similarity pathogen, this approach fails and results in many incorrect or nonspecific classifications. Some tools assign reads to reference sequences by local or semi-global alignment. The analysis time is long (over 4 hours for a typical dataset of 1 Gb), but more reads from the pathogen can be assigned correctly. However, the alignment scores of reads are still low for a low-similarity pathogen.
These reads cannot be assigned to the pathogen specifically, so the number of reads supporting the pathogen remains too low. To detect low-similarity pathogens, we introduce MegaPath for NGS-based pathogen detection. There are two major contributions. First, instead of assigning each read to a reference sequence one by one, MegaPath analyzes all aligned reads globally to determine a subset of reads with confident alignments. MegaPath then reassigns non-specifically aligned reads to species with confident alignments, and discards unconfident alignments to avoid potential false classifications. This increases the number of reads supporting the pathogen and reduces the number of false positive assignments. Second, MegaPath adopts a fast alignment-based approach using an enhanced maximum-exact-match prefix seeding strategy and a SIMD-accelerated Smith-Waterman algorithm. Consider a metagenomic NGS sample of cerebrospinal fluid (CSF) [1] as an example, where the similarity of the pathogen to the reference is 18.7%. Centrifuge [2] and Kraken [4], based on characteristic profiles, detect 31 and 6 reads from the pathogen respectively; the abundance rank of the pathogen is 710 and 384 respectively, so the doctor needs to go through a list of 300+ species to find the pathogen. With an alignment process taking 4 hours, SURPI [3] can detect 76 reads for the pathogen, and its rank goes up to 245. With better alignment tools and global analysis of reads, MegaPath takes less than one hour to detect 608 reads for the pathogen, and its rank is 33. Thus, MegaPath has the best performance among existing software, with a reasonable running time. Experimental results for more datasets can be found in the full paper. In addition to detecting pathogens with known reference sequences, MegaPath can also detect pathogens without any similar DNA-level sequences in the reference database, using de novo assembly and protein alignment.
Mon, 01 Jan 2018 00:00:00 GMThttp://hdl.handle.net/10722/2741102018-01-01T00:00:00Z
- Online algorithms for 1-space bounded multi dimensional bin packing and hypercube packinghttp://hdl.handle.net/10722/147128Title: Online algorithms for 1-space bounded multi dimensional bin packing and hypercube packing
Authors: Zhang, Y; Chin, FYL; Ting, HF; Han, X
Abstract: In this paper, we study 1-space bounded multi-dimensional bin packing and hypercube packing. A sequence of items arrives over time; each item is a d-dimensional hyperbox (in bin packing) or hypercube (in hypercube packing), and the length of each side is no more than 1. These items must be packed without overlapping into d-dimensional hypercubes with unit length on each side. In d-dimensional space, any two dimensions i and j define a plane P_ij. When an item arrives, we must pack it into an active bin immediately without any knowledge of the future items, and 90°-rotation on any plane P_ij is allowed. The objective is to minimize the total number of bins used for packing all the items in the sequence. In the 1-space bounded variant, there is only one active bin for packing the current item. If the active bin does not have enough space to pack the item, it must be closed and a new active bin opened. For d-dimensional bin packing, an online algorithm with competitive ratio 4^d is given. Moreover, we consider d-dimensional hypercube packing, and give a 2^(d+1)-competitive algorithm. These two results are the first study of 1-space bounded multi-dimensional bin packing and hypercube packing. © 2012 The Author(s).
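The single-active-bin constraint is easiest to see in one dimension, where closing the active bin whenever the next item does not fit is exactly the classic Next-Fit rule. This is a minimal sketch of that restriction only; the paper's d-dimensional algorithms additionally exploit rotations and careful placement within the active bin:

```python
def one_space_bounded_pack(sizes):
    """Pack items with sizes in (0, 1] into unit-capacity bins under the
    1-space bounded rule: only one bin is active, and if an item does not
    fit, the active bin is closed permanently and a new one is opened.
    In one dimension this is Next-Fit.  Returns the number of bins used."""
    opened = 0     # bins opened so far
    load = 0.0     # load of the current active bin
    for s in sizes:
        if opened == 0 or load + s > 1.0:
            opened += 1    # close the active bin (if any), open a new one
            load = 0.0
        load += s
    return opened
```

Because a closed bin can never be reopened, the online algorithm may waste space an offline packer would use, which is why the competitive ratios in this bounded-space setting (4^d here) are much larger than in the unbounded-space model.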
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/10722/1471282013-01-01T00:00:00Z
- GLProbs: Aligning multiple sequences adaptivelyhttp://hdl.handle.net/10722/204716Title: GLProbs: Aligning multiple sequences adaptively
Authors: YE, Y; Cheung, DWL; Wang, YD; Yiu, SM; Zhan, Q; Lam, TW; Ting, HF
Thu, 01 Jan 2015 00:00:00 GMThttp://hdl.handle.net/10722/2047162015-01-01T00:00:00Z
- A constant-competitive algorithm for online OVSF code assignmenthttp://hdl.handle.net/10722/127350Title: A constant-competitive algorithm for online OVSF code assignment
Authors: Chin, FYL; Ting, HF; Zhang, Y
Abstract: Orthogonal Variable Spreading Factor (OVSF) code assignment is a fundamental problem in Wideband Code-Division Multiple-Access (W-CDMA) systems, which plays an important role in third generation mobile communications. In the OVSF problem, codes must be assigned to incoming call requests with different data rate requirements, in such a way that they are mutually orthogonal with respect to an OVSF code tree. An OVSF code tree is a complete binary tree in which each node represents a code associated with the combined bandwidths of its two children. To be mutually orthogonal, each leaf-to-root path must contain at most one assigned code. In this paper, we focus on the online version of the OVSF code assignment problem and give a 10-competitive algorithm (where the cost is measured by the total number of assignments and reassignments used). Our algorithm improves the previous O(h)-competitive result, where h is the height of the code tree, and also another recent constant-competitive result, where the competitive ratio is only constant under amortized analysis and the constant is not determined. We also improve the lower bound of the problem from 3/2 to 5/3. © 2008 Springer Science+Business Media, LLC.
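The orthogonality condition (at most one assigned code on every leaf-to-root path) can be checked directly on the code tree. Below is a minimal validity check, using the standard array indexing of a complete binary tree (root = 1, children of node i are 2i and 2i+1); this only verifies an assignment and is not the competitive assignment algorithm of the paper:

```python
def is_orthogonal(assigned, height):
    """Return True iff no leaf-to-root path of a complete binary tree of
    the given height contains more than one assigned node.  `assigned`
    is a set of node indices (root = 1; children of i are 2i and 2i+1)."""
    for leaf in range(2 ** height, 2 ** (height + 1)):
        count, node = 0, leaf
        while node >= 1:          # walk from the leaf up to the root
            if node in assigned:
                count += 1
            node //= 2
        if count > 1:
            return False
    return True
```

For instance, in a tree of height 2, assigning the two children of the root is orthogonal, while assigning the root together with any leaf is not, since that leaf's path to the root would contain two assigned codes.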
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/10722/1273502010-01-01T00:00:00Z
- A simple and economical method for improving whole genome alignmenthttp://hdl.handle.net/10722/245817Title: A simple and economical method for improving whole genome alignment
Authors: MAI, H; Lam, TW; Ting, HF
Sun, 01 Jan 2017 00:00:00 GMThttp://hdl.handle.net/10722/2458172017-01-01T00:00:00Z
- An O(n log n)-time algorithm for the maximum constrained agreement subtree problem for binary treeshttp://hdl.handle.net/10722/152367Title: An O(n log n)-time algorithm for the maximum constrained agreement subtree problem for binary trees
Authors: Peng, Z; Ting, H
Abstract: This paper introduces the maximum constrained agreement subtree problem, which is a generalization of the classical maximum agreement subtree problem. This new problem is motivated by the understood constraint when we compare the evolutionary trees. We give an O(n log n)-time algorithm for solving the problem when the input trees are binary. The time complexity of our algorithm matches the fastest known algorithm for the maximum agreement subtree problem for binary trees. © Springer-Verlag 2004.
Thu, 01 Jan 2004 00:00:00 GMThttp://hdl.handle.net/10722/1523672004-01-01T00:00:00Z
- All-cavity maximum Matchingshttp://hdl.handle.net/10722/93016Title: All-cavity maximum Matchings
Authors: Kao, MY; Lam, TW; Sung, WK; Ting, HF
Abstract: Let G = (X, Y, E) be a bipartite graph with integer weights on the edges. Let n, m, and N denote the vertex count, the edge count, and an upper bound on the absolute values of the edge weights of G, respectively. For a vertex u in G, let G_u denote the graph formed by deleting u from G. The all-cavity maximum matching problem asks for a maximum weight matching in G_u for all u in G. This problem finds applications in optimal tree algorithms for computational biology. We show that the problem is solvable in O(√n m log(nN)) time, matching the currently best time complexity for merely computing a single maximum weight matching in G. We also give an algorithm for a generalization of the problem where both a vertex from X and one from Y can be deleted. The running time is O(n² log n + nm). Our algorithms are based on novel linear-time reductions among problems of computing shortest paths and all-cavity maximum matchings.
Research supported in part by NSF Grant CCR-9531028.
Research supported in part by RGC (The Research Grants Council of Hong Kong) Grants 338/065/0027 and 338/065/0028.
Wed, 01 Jan 1997 00:00:00 GMThttp://hdl.handle.net/10722/930161997-01-01T00:00:00Z
- Online pricing for bundles of multiple itemshttp://hdl.handle.net/10722/185123Title: Online pricing for bundles of multiple items
Authors: Zhang, Y; Chin, FYL; Ting, HF
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/10722/1851232014-01-01T00:00:00Z
- A 5-competitive on-line scheduler for merging video streamshttp://hdl.handle.net/10722/93185Title: A 5-competitive on-line scheduler for merging video streams
Authors: Chan, WT; Lam, TW; Ting, HF; Wong, WH
Mon, 01 Jan 2001 00:00:00 GMThttp://hdl.handle.net/10722/931852001-01-01T00:00:00Z
- Near Optimal Algorithms for Online Maximum Weighted b-Matchinghttp://hdl.handle.net/10722/203639Title: Near Optimal Algorithms for Online Maximum Weighted b-Matching
Authors: Ting, HF; Xiang, X
Abstract: We study the online maximum weighted b-matching problem, in which the input is a bipartite graph G = (L, R, E, w). Vertices in R arrive online, and each vertex in L can be matched to at most b vertices in R. Assume that the edge weights in G are no more than w_max, which may not be known ahead of time. We show that a randomized algorithm Greedy-RT has competitive ratio Ω(1 / ∏_{j=1}^{log* w_max − 1} log^(j) w_max). We can improve the competitive ratio to Ω(1 / log w_max) if w_max is known to the algorithm when it starts. We also derive an upper bound of O(1 / log w_max), suggesting that Greedy-RT is near optimal. Deterministic algorithms are also considered, and we present a near-optimal algorithm Greedy-D which is 1/(1 + 2ξ(w_max + 1)^(1/ξ))-competitive, where ξ = min{b, ⌈ln(1 + w_max)⌉}. We propose a variant of the problem called the online two-sided vertex-weighted matching problem, and give a modification of the randomized algorithm Greedy-RT, called Greedy-vRT, specially for this variant. We show that Greedy-vRT is also near optimal.
Description: Lecture Notes in Computer Science, vol. 8497 entitled: Frontiers in Algorithmics: 8th International Workshop, FAW 2014, Zhangjiajie, China, June 28-30, 2014: proceedings
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/10722/2036392014-01-01T00:00:00Z
- PnpProbs: A better multiple sequence alignment tool by better handling of guide treeshttp://hdl.handle.net/10722/235314Title: PnpProbs: A better multiple sequence alignment tool by better handling of guide trees
Authors: YE, Y; Lam, TW; Ting, HF
Fri, 01 Jan 2016 00:00:00 GMThttp://hdl.handle.net/10722/2353142016-01-01T00:00:00Z
- An efficient algorithm for optimal linear broadcast routing with capacity constraintshttp://hdl.handle.net/10722/93283Title: An efficient algorithm for optimal linear broadcast routing with capacity constraints
Authors: Ting, HF; Wong, WH; Yau, MH
Sun, 01 Jan 1995 00:00:00 GMThttp://hdl.handle.net/10722/932831995-01-01T00:00:00Z
- Online algorithm for 1-space bounded multi-dimensional bin packinghttp://hdl.handle.net/10722/141245Title: Online algorithm for 1-space bounded multi-dimensional bin packing
Authors: Zhang, Y; Chin, FYL; Ting, HF; Han, X; Chang, Z
Abstract: In this paper, we study 1-space bounded multi-dimensional bin packing. A sequence of items arrives over time; each item is a d-dimensional hyperbox, and the length of each side is no more than 1. These items must be packed without overlapping into d-dimensional hypercubes with unit length on each side. In d-dimensional space, any two dimensions i and j define a plane P_ij. When an item arrives, we must pack it into an active bin immediately without any knowledge of the future items, and 90°-rotation on any plane P_ij is allowed. The objective is to minimize the total number of bins used for packing all the items in the sequence. In the 1-space bounded variant, there is only one active bin for packing the current item. If the active bin does not have enough space to pack the item, it must be closed and a new active bin opened. For this problem, we give an online algorithm with competitive ratio 4^d, which is the first study of 1-space bounded d-dimensional bin packing. © 2011 Springer-Verlag.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/10722/1412452011-01-01T00:00:00Z
- A tight analysis of most-requested-first for on-demand data broadcasthttp://hdl.handle.net/10722/93178Title: A tight analysis of most-requested-first for on-demand data broadcast
Authors: Hung, RYS; Ting, HF
Abstract: This paper gives a complete and tight mathematical analysis of the performance of the Most-Requested-First algorithm for on-demand data broadcast. The algorithm is natural and simple, yet its performance is surprisingly good in practice. We derive tight upper and lower bounds on MRF's competitiveness and thus reveal the exact competitive ratios of the algorithm under different system configurations. We prove that the competitive ratio of MRF is exactly 3 − ℓ/d when the start-up delay d is a multiple of the page length ℓ; otherwise the ratio is 3. © Springer-Verlag Berlin Heidelberg 2006.
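The Most-Requested-First rule itself is one line: whenever the channel becomes free, broadcast the page with the most pending requests. A simplified discrete-time sketch, assuming unit-length pages and ignoring the start-up delay analyzed in the paper:

```python
def mrf_schedule(arrivals, horizon):
    """Most-Requested-First in discrete time with unit-length pages.
    `arrivals[t]` lists the page ids requested at time t.  At each step
    the page with the most pending requests is broadcast, serving all of
    its pending requests at once.  Returns the broadcast order."""
    pending = {}   # page id -> number of waiting requests
    order = []
    for t in range(horizon):
        for page in arrivals.get(t, []):
            pending[page] = pending.get(page, 0) + 1
        if pending:
            best = max(pending, key=lambda p: pending[p])
            order.append(best)
            del pending[best]  # one broadcast serves every waiting request
        # if no requests are pending, the channel idles this step
    return order
```

The appeal of the rule, as the abstract notes, is that a single broadcast satisfies every request waiting for that page, so greedily serving the largest group is a natural heuristic; the paper quantifies exactly how far it can be from optimal.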
Sun, 01 Jan 2006 00:00:00 GMThttp://hdl.handle.net/10722/931782006-01-01T00:00:00Z
- Approximation algorithms for the selling with preferencehttp://hdl.handle.net/10722/294276Title: Approximation algorithms for the selling with preference
Authors: Li, P; Hua, Q; Hu, Z; Ting, HF; Zhang, Y
Abstract: We consider a market mechanism to sell two types of products, A and B, to a set of buyers I = {1, 2, …, n}. The amounts of the products are m_A and m_B respectively. Each buyer i has his own information, including his budget, preference and utility function. On collecting the information from all buyers, the market maker determines the price of each product and allocates some amount of product to each buyer. The objective of the market maker is to design a mechanism that maximizes the total utility of the buyers while satisfying the semi market equilibrium. In this paper, we show that this problem is NP-hard and give an iterative algorithm with approximation ratio 1.5. Moreover, we introduce a PTAS for the problem, i.e., a (1+ϵ)-approximation algorithm with running time O(2^(1/ϵ) + n log n) for any positive ϵ.
Wed, 01 Jan 2020 00:00:00 GMThttp://hdl.handle.net/10722/2942762020-01-01T00:00:00Z
- Competitive analysis of most-request-first for scheduling broadcasts with start-up delayhttp://hdl.handle.net/10722/88914Title: Competitive analysis of most-request-first for scheduling broadcasts with start-up delay
Authors: Hung, RYS; Ting, HF
Abstract: In this paper, we give a tight and complete mathematical analysis of the Most-Request-First algorithm for scheduling on-demand broadcasts with start-up delay. The algorithm is natural and simple, yet its practical performance is surprisingly good. We derive tight upper and lower bounds on its competitive ratio under different system configurations. Our results reveal an interesting relationship between the start-up delay and the competitiveness of the algorithm. © 2008 Elsevier Ltd. All rights reserved.
Tue, 01 Jan 2008 00:00:00 GMThttp://hdl.handle.net/10722/889142008-01-01T00:00:00Z