Full metadata record
DC Field | Value | Language
dc.contributor.author | 李佳蓉 | en_US
dc.contributor.author | Lee, Chia-Jung | en_US
dc.contributor.author | 蔡錫鈞 | en_US
dc.contributor.author | Tsai, Shi-Chun | en_US
dc.date.accessioned | 2014-12-12T01:21:47Z | -
dc.date.available | 2014-12-12T01:21:47Z | -
dc.date.issued | 2009 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT079117564 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/40300 | -
dc.description.abstract | We discuss the problem of extracting randomness from various classes of random sources. First, we consider multiple independent sources. Given two independent sources, we construct an extractor via a generalized leftover hash lemma, and we then extend the construction to extract randomness from more independent sources. Notably, our extractor still extracts randomness even when all but one of the sources are exposed. We also apply the extractor to a cryptographic problem in which a group of parties, without using ideal local randomness, must agree over an insecure channel on a secret key for group communication. We further consider independent-symbol sources: such a source consists of $n$ independent symbols, each over $\{0,1\}^d$, with the only guarantee being that the source as a whole has min-entropy $k$. We give a deterministic extractor that extracts about $\Omega(\log k)$ random bits for any $n, d, k \in \mathbb{N}$, and we show that when $k \geq \log^c n$ for some constant $c > 0$, almost all of the randomness in the source can be extracted. Moreover, whenever the min-entropy satisfies $k = \omega(d + \log(n/\varepsilon))$, there exists a deterministic extractor that extracts $m = k - O(\log(1/\varepsilon))$ bits, where $\varepsilon$ is the error. Finally, we show that any extractor for bit-fixing sources, seeded or not, must suffer a min-entropy loss of $k - m = \Omega(\log(1/\varepsilon))$; this extends a result of Radhakrishnan and Ta-Shma for general weak sources. We then proceed in the other direction, seeking a more general class of sources that still admits deterministic extraction. Here we consider conditional sources $(f(X)\,|\,X)$, where $X$ is a distribution over $\{0,1\}^{n_1}$ and $f$ is some function. We say that the conditional distribution $(f(X)\,|\,X)$ has computational min-entropy $k$ if, when the input $x$ is sampled from $X$, any circuit of size $2^k$ guesses the value of $f(x)$ correctly with probability at most $2^{-k}$. We first show that one cannot extract even a single random bit from one source having only computational min-entropy. We then show that extraction is possible from one source with only computational min-entropy together with another source with statistical min-entropy, and, furthermore, from two sources that each have only computational min-entropy. This can be viewed as extending the study of multi-source extractors from the statistical setting to the computational setting. We reduce the task of constructing such extractors to a problem in learning theory: learning linear functions under an arbitrary distribution in the adversarial-noise model, and we give a learning algorithm for this problem. Finally, we consider computational independent-symbol sources. Like independent-symbol sources, a computational independent-symbol source consists of $n$ independent symbols $(f_1(X_1)\,|\,X_1), \ldots, (f_n(X_n)\,|\,X_n)$, where each $f_i(X_i)$ is distributed over $\{0,1\}^d$ and, when the input $x_i$ is sampled from $X_i$, any circuit of size $s$ guesses $f_i(x_i)$ correctly with probability at most $2^{-k_i}$ for some $k_i \leq d$, with $k_1 + \cdots + k_n = k$. We extend the hardcore set lemma to show that our extractor for independent-symbol sources also works for computational independent-symbol sources. In fact, this result for computational independent-symbol sources implies an extended XOR lemma. We also give an upper bound on the size of a binary hardcore set in any black-box construction. | zh_TW
dc.description.abstract | In this thesis, we consider the problem of extracting randomness from several classes of random sources. First, we consider multiple independent sources. With two independent sources, we give an explicit extractor via a generalized leftover hash lemma, and we extend the construction to extract randomness from more independent sources. One nice feature is that the extractor still works even with all but one source exposed. Moreover, we apply our extractor to a cryptographic task in which a group of parties want to agree on a secret key for group communication over an insecure channel, without using ideal local randomness. We also consider independent-symbol sources, which consist of a sequence of $n$ independent symbols from $\{0,1\}^d$, where the only randomness guarantee is that the source as a whole has min-entropy $k$. We give an explicit deterministic extractor which extracts about $\Omega(\log k)$ bits for any $n, d, k \in \mathbb{N}$. When $k \geq \log^c n$ for some constant $c > 0$, we can extract almost all the randomness. Moreover, we show the existence of a non-explicit deterministic extractor which extracts $m = k - O(\log(1/\varepsilon))$ bits whenever $k = \omega(d + \log(n/\varepsilon))$. Finally, we show that even for bit-fixing sources, any extractor, seeded or not, must suffer an entropy loss of $k - m = \Omega(\log(1/\varepsilon))$; this generalizes a lower bound of Radhakrishnan and Ta-Shma on extraction from general sources. We then turn in the other direction and look for a more general class of sources from which seedless extraction is still possible. The sources we consider have the form of a conditional distribution $(f(X)\,|\,X)$, for some function $f$ and some distribution $X$; we say such a source has computational min-entropy $k$ if any circuit of size $2^k$ can predict $f(x)$ correctly with probability at most $2^{-k}$ given input $x$ sampled from $X$. We first show that no seedless extractor can extract from a single source of this kind. We then show that extraction becomes possible if we are allowed a seed which is weakly random (instead of perfectly random) but contains some statistical min-entropy, or even a seed which is not random at all but contains some computational min-entropy. This can be seen as a step toward extending the study of multi-source extractors from the traditional statistical setting to a computational setting. We reduce the task of constructing such extractors to a problem in learning theory: learning linear functions under an arbitrary distribution with adversarial noise. For this problem, we provide a learning algorithm, which may be of independent interest. Finally, we consider computational independent-symbol sources, which consist of $n$ mutually independent parts $(f_1(X_1)\,|\,X_1), \ldots, (f_n(X_n)\,|\,X_n)$, each $f_i(X_i)$ of length $d$, such that for each $i$, given input $x_i$ sampled from $X_i$, any circuit of size $s$ can predict $f_i(x_i)$ with probability at most $2^{-k_i}$ for some $k_i \leq d$, where the $k_i$ sum to $k$. We generalize the well-known hardcore set lemma to show that our extractor for independent-symbol sources still works for computational independent-symbol sources. In fact, this result implies a generalization of the well-known XOR lemma. In addition, we provide an upper bound on the size of a binary hardcore set in any black-box construction. (Illustrative statements and sketches of these objects follow the metadata record below.) | en_US
dc.language.iso | en_US | en_US
dc.subject | seedless extractors | zh_TW
dc.subject | multiple independent sources | zh_TW
dc.subject | independent-symbol sources | zh_TW
dc.subject | computational extractors | zh_TW
dc.subject | learning linear functions | zh_TW
dc.subject | computational independent-symbol sources | zh_TW
dc.subject | seedless extractors | en_US
dc.subject | multiple independent sources | en_US
dc.subject | independent-symbol sources | en_US
dc.subject | computational extractors | en_US
dc.subject | learning linear functions | en_US
dc.subject | computational independent-symbol sources | en_US
dc.title | Design and Analysis of Seedless Extractors | zh_TW
dc.title | Seedless Extractors: Constructions and Analysis | en_US
dc.type | Thesis | en_US
dc.contributor.department | Institute of Computer Science and Engineering | zh_TW
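
The abstract above is built around a few standard objects. For orientation, the block below records the usual definitions of min-entropy, statistical distance, and extractors, together with the textbook form of the leftover hash lemma that the thesis's two-source construction generalizes. These are standard statements, not quotations from the thesis itself.

```latex
% Standard definitions (textbook forms, not quoted from the thesis).
% A source X over \{0,1\}^n has min-entropy k when no outcome is too likely:
\[
  H_\infty(X) \;=\; \min_{x}\, \log_2 \frac{1}{\Pr[X = x]} \;\ge\; k.
\]
% Ext extracts m bits with error \varepsilon if, for every X with
% H_\infty(X) \ge k, its output is \varepsilon-close to uniform:
\[
  \Delta\bigl(\mathrm{Ext}(X),\, U_m\bigr)
  \;=\; \frac{1}{2}\sum_{z\in\{0,1\}^m}\Bigl|\Pr[\mathrm{Ext}(X)=z]-2^{-m}\Bigr|
  \;\le\; \varepsilon.
\]
% Leftover hash lemma: for a universal family \mathcal{H} of hash functions
% h\colon\{0,1\}^n\to\{0,1\}^m and any X with H_\infty(X)\ge m+2\log(1/\varepsilon),
% the pair (h, h(X)), with h uniform in \mathcal{H}, is \varepsilon-close to
% (h, U_m). The entropy loss 2\log(1/\varepsilon) is of the same order as the
% Radhakrishnan--Ta-Shma lower bound k - m = \Omega(\log(1/\varepsilon))
% cited in the abstract.
```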
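The thesis's two-source extractor goes through a generalized leftover hash lemma. As a minimal hedged illustration of a two-source extractor in the same spirit (the classical Chor-Goldreich inner-product extractor, not the thesis's construction), the sketch below outputs one nearly unbiased bit from two independent n-bit sources whose min-entropy rates exceed 1/2; the function name is ours.

```python
# Illustrative two-source extractor: inner product modulo 2 (Chor-Goldreich).
# Extracts ONE almost-uniform bit from two independent n-bit sources,
# provided each has min-entropy rate above 1/2. A classical example of the
# kind of object the abstract studies, not the thesis's construction.

def inner_product_extractor(x: int, y: int, n: int) -> int:
    """Return <x, y> mod 2, viewing x and y as n-bit vectors over GF(2)."""
    return bin((x & y) & ((1 << n) - 1)).count("1") % 2

# Tiny sanity check: with both sources fully uniform (maximal min-entropy),
# the output bit should be very close to unbiased.
if __name__ == "__main__":
    import random
    n, trials = 16, 100_000
    ones = sum(
        inner_product_extractor(random.getrandbits(n), random.getrandbits(n), n)
        for _ in range(trials)
    )
    print(f"empirical Pr[output = 1] = {ones / trials:.3f}")  # ~0.5
```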
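The abstract reduces computational-source extraction to learning linear functions under an arbitrary distribution with adversarial noise. The sketch below only illustrates that problem statement: it recovers a hidden parity from noisy labeled samples by exhaustive search, which is feasible only for tiny n and is not the learning algorithm the thesis provides; for simplicity the adversarial noise is modeled here as random label flips.

```python
# Brute-force illustration of the learning problem named in the abstract:
# given samples (x, y) with y = <a, x> mod 2 for an unknown parity a, except
# that up to an eta fraction of labels are corrupted, output the parity with
# the best agreement. Exhaustive search over all 2^n candidates; feasible
# only for tiny n, and NOT the thesis's algorithm.
import random

def parity(a: int, x: int) -> int:
    """Inner product of a and x modulo 2."""
    return bin(a & x).count("1") % 2

def best_parity(samples: list[tuple[int, int]], n: int) -> int:
    """Return the candidate parity maximizing agreement with the samples."""
    return max(
        range(1 << n),
        key=lambda a: sum(parity(a, x) == y for x, y in samples),
    )

if __name__ == "__main__":
    n, m, eta = 8, 400, 0.1
    secret = random.getrandbits(n)
    samples = []
    for _ in range(m):
        x = random.getrandbits(n)     # an arbitrary distribution in general
        y = parity(secret, x)
        if random.random() < eta:     # noise, modeled as random flips here
            y ^= 1
        samples.append((x, y))
    print("recovered secret parity:", best_parity(samples, n) == secret)
```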
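Finally, the abstract generalizes the hardcore set lemma and derives a generalized XOR lemma. For reference, here is the usual statement being generalized, with the circuit-size loss stated only up to polynomial factors; again this is the standard form, not the thesis's generalization.

```latex
% Impagliazzo's hardcore set lemma (standard form). If f\colon\{0,1\}^n\to\{0,1\}
% is \delta-hard for circuits of size s (every such circuit errs on at least
% a \delta fraction of inputs), then there is a hardcore set H\subseteq\{0,1\}^n
% with |H|\ge\delta 2^n such that for every circuit C of size
% s' = s\cdot\mathrm{poly}(\varepsilon,\delta),
\[
  \Pr_{x\in H}\bigl[C(x) = f(x)\bigr] \;\le\; \frac{1}{2} + \varepsilon.
\]
% On H, the bit f(x) behaves like a near-fair coin, so the XOR of t
% independent evaluations rapidly approaches an unbiased bit; this is the
% standard route from hardcore sets to Yao's XOR lemma, and it is why an
% extractor for computational independent-symbol sources (which combines
% many weakly unpredictable symbols) implies a generalization of that lemma.
```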
Appears in Collections: Thesis


Files in This Item:

  1. 756401.pdf
