Title: Markov-Decision-Based Call Admission Control with State Information of Adjacent Cells in Road-based Cellular Networks
Authors: Hsien-Chung Tsao (曹賢宗); Wei-kuo Liao (廖維國)
Institute of Communication Engineering
Keywords: Call Admission Control; Cellular Networks; Markov Decision
Issue Date: 2007
Abstract: We study the call admission control problem in cellular networks under fixed channel allocation (FA) when the state information of neighboring cells is taken into consideration. For each cell, under certain assumptions, we model the system as a Markov chain with two-dimensional states, where the first dimension represents the 50 states of the base cell and the second dimension represents the 300 states of the adjacent cells. As a result, the model becomes a two-dimensional Markov chain with 15000 states in total. The problem of minimizing a linear objective function of the new-call blocking and handoff dropping probabilities can then be formulated as a Markov Decision Process (MDP). However, the enormous number of states makes inverting the transition probability matrix (15000×15000, i.e., 2.25×10^8 entries) computationally prohibitive, which complicates applying the policy-iteration method of the MDP framework to our problem. To overcome this, we use a state aggregation method that groups together states reachable from one another within a few steps. After aggregation, the model involves only 66 states in total and can be solved by policy iteration. Finally, we show that our policy is easy to derive and achieves a lower average cost than the well-known Guard Channel policy. However, channel borrowing with directional locking (BDCL) yields a better call dropping rate than fixed channel allocation, so we further augment the above MDP model with a One-Step policy in order to obtain better performance.
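The core numerical step described above is solving the aggregated 66-state MDP with the policy-iteration method. Below is a minimal sketch in Python of average-cost policy iteration (Howard's algorithm) for a generic finite MDP; the 3-state example, its transition matrices, and the cost weights are hypothetical placeholders for illustration only, not the thesis's actual road-based model.

import numpy as np

def policy_iteration(P, c, max_iter=100):
    """Average-cost policy iteration for a finite unichain MDP (sketch).
       P[a][s, s'] : transition probability from s to s' under action a.
       c[s, a]     : immediate cost, e.g. a weighted penalty for blocking a
                     new call or dropping a handoff (weights are hypothetical).
       Returns (policy, average cost g)."""
    n_actions, n_states = len(P), c.shape[0]
    policy = np.zeros(n_states, dtype=int)            # start: always action 0
    for _ in range(max_iter):
        # Policy evaluation: solve h + g*1 = c_pi + P_pi h with h[0] = 0.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = np.array([c[s, policy[s]] for s in range(n_states)])
        A = np.eye(n_states) - P_pi
        A[:, 0] = 1.0                                 # column for the gain g
        x = np.linalg.solve(A, c_pi)                  # x = [g, h_1, ..., h_{n-1}]
        g, h = x[0], np.concatenate(([0.0], x[1:]))
        # Policy improvement: greedy action with respect to the bias h.
        Q = np.stack([c[:, a] + P[a] @ h for a in range(n_actions)], axis=1)
        new_policy = Q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return policy, g

# Tiny hypothetical example: 3 aggregated states, 2 actions
# (0 = reject new calls, 1 = accept), with made-up transitions and costs.
P = [np.array([[0.7, 0.2, 0.1],
               [0.3, 0.5, 0.2],
               [0.2, 0.3, 0.5]]),
     np.array([[0.5, 0.4, 0.1],
               [0.2, 0.5, 0.3],
               [0.1, 0.3, 0.6]])]
c = np.array([[1.0, 0.2],     # c[s, a]: made-up blocking/dropping costs
              [0.8, 0.5],
              [0.3, 0.9]])
pi, g = policy_iteration(P, c)
print("policy:", pi, "average cost:", g)

The policy-evaluation step solves one linear system per iteration, which is exactly the computation that becomes tractable once the 15000-state chain has been aggregated down to 66 states.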
URI: http://140.113.39.130/cdrfb3/record/nctu/#GT009213615
http://hdl.handle.net/11536/70557
Appears in Collections: Thesis


Files in This Item:

  1. 361501.pdf
