Improvements to A-Priori
- Bloom Filters
- Park-Chen-Yu (PCY) Algorithm
- Multistage Algorithm
- Approximate Algorithms
- Compacting Results

Aside: Hash-Based Filtering
- Simple problem: I have a set S of one billion strings of length 10.
- I want to scan a larger file F of strings and output those that are in S.
- I have 1 GB of main memory, so I can't afford to store S in memory.

Solution – (1)
- Create a bit array of 8 billion bits, initially all 0's.
- Choose a hash function h with range [0, 8*10^9), and hash each member of S to one of the bits, which is then set to 1.
- Filter the file F by hashing each string and outputting only those that hash to a 1.

Solution – (2)
[Figure: each string of file F is hashed by h into the bit array (e.g., 0010001011000); only strings that hash to a 1 pass the filter.]

Solution – (3)
- As at most 1/8 of the bit array is 1, only 1/8th of the strings not in S get through to the output.
- If a string is in S, it surely hashes to a 1, so it always gets through.
- Can repeat with another hash function and bit array to reduce the false positives by another factor of 8.
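To make the scheme concrete, here is a minimal Python sketch of the one-hash filter. The array size, the SHA-1-based hash, and the function names are illustrative choices, not part of the original slides:

    # Sketch of hash-based filtering with a single hash function.
    import hashlib

    NBITS = 8 * 10**9  # 8 billion bits = 1 GB of main memory

    def h(s):
        """Hash a string to a bit position in [0, NBITS)."""
        digest = hashlib.sha1(s.encode()).digest()
        return int.from_bytes(digest[:8], "big") % NBITS

    def build_filter(S):
        """One pass over S: set one bit per member."""
        bits = bytearray(NBITS // 8)          # all bits initially 0
        for s in S:
            pos = h(s)
            bits[pos // 8] |= 1 << (pos % 8)  # set bit pos to 1
        return bits

    def filter_file(F, bits):
        """Scan F, emitting only strings whose bit is 1: everything in S
        survives, and most strings not in S are dropped."""
        for s in F:
            pos = h(s)
            if bits[pos // 8] & (1 << (pos % 8)):
                yield s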

Solution – Summary
- Each filter step costs one pass through the remaining file F and reduces the fraction of false positives by a factor of 8.
  - Actually by 1/(1 - e^(-1/8)) ≈ 8.5.
- Repeat passes until few false positives remain.
- Either accept some errors, or check the remaining strings.
  - E.g., divide the surviving part of F into chunks that fit in memory and make a pass through S for each.

Aside: Throwing Darts
- A number of times we are going to need to deal with the problem: if we throw k darts into n equally likely targets, what is the probability that a given target gets at least one dart?
- Example: targets = bits, darts = hash values of elements.

Throwing Darts – (2)
- The probability that one dart misses a given target is 1 - 1/n, so the probability that all k darts miss it is (1 - 1/n)^k = ((1 - 1/n)^n)^(k/n) ≈ e^(-k/n).
- Thus the probability that a target gets at least one dart is about 1 - e^(-k/n).

Throwing Darts – (3)
- If k << n, then e^(-k/n) can be approximated by the first two terms of its Taylor expansion: 1 - k/n. The probability of at least one dart is then approximately k/n.
- Example: 10^9 darts, 8*10^9 targets.
  - True value: 1 - e^(-1/8) = .1175.
  - Approximation: 1 - (1 - 1/8) = .125.
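A quick numeric check of these two values (an illustrative aside):

    import math

    k, n = 10**9, 8 * 10**9
    exact = 1 - (1 - 1/n)**k      # exact probability a given target is hit
    approx = 1 - math.exp(-k/n)   # 1 - e^(-1/8) = 0.1175...
    taylor = k / n                # first-order Taylor value: 0.125

    print(exact, approx, taylor)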

Improvement: Superimposed Codes (Bloom Filters)
- We could use two hash functions, and hash each member of S to two bits of the bit array.
- Now, around 1/4 of the array is 1's.
- But we transmit a string in F to the output only if both its bits are 1, i.e., only 1/16th are false positives.
  - Actually (1 - e^(-1/4))^2 = 0.0493.

Superimposed Codes – (2)
- Generalizes to any number of hash functions.
- The more hash functions, the smaller the probability of a false positive.
- Limiting factor: eventually, the bit vector becomes almost all 1's. Almost anything hashes to only 1's.

Aside: History
- The idea is attributed to Bloom (1970).
- But I learned the same idea as "superimposed codes" at Bell Labs, which I left in 1969.
- Technically, the original paper on superimposed codes (Kautz and Singleton, 1964) required uniqueness: no two small sets have the same bitmap.
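A minimal sketch of the k-hash-function generalization; salting a single digest to get k positions, and the constants, are illustrative assumptions:

    import hashlib

    NBITS = 8 * 10**9
    K = 2  # number of hash functions

    def positions(s, k=K):
        """k bit positions for string s, derived by salting one digest."""
        for i in range(k):
            digest = hashlib.sha1(f"{i}:{s}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % NBITS

    def add(bits, s):
        for pos in positions(s):
            bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(bits, s):
        """Never false for a member of S; with K = 2 and |S| = 10^9,
        about (1 - e^(-1/4))^2 of non-members slip through."""
        return all(bits[pos // 8] & (1 << (pos % 8)) for pos in positions(s))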

PCY Algorithm – An Application of Hash-Filtering
- During Pass 1 of A-Priori, most memory is idle.
- Use that memory to keep counts of buckets into which pairs of items are hashed.
  - Just the count, not the pairs themselves.

Needed Extensions to Hash-Filtering
- Pairs of items need to be generated from the input file; they are not present in the file.
- We are not just interested in the presence of a pair, but we need to see whether it is present at least s (support) times.

PCY Algorithm – (2)
- A bucket is frequent if its count is at least the support threshold.
- If a bucket is not frequent, no pair that hashes to that bucket could possibly be a frequent pair.
- On Pass 2, we only count pairs that hash to frequent buckets.

Picture of PCY
[Figure: on Pass 1, main memory holds the item counts and the hash table of bucket counts; on Pass 2, it holds the frequent items, the bitmap derived from the hash table, and the counts of candidate pairs.]

PCY Algorithm – Before Pass 1: Organize Main Memory
- Space to count each item.
  - One (typically) 4-byte integer per item.
- Use the rest of the space for as many integers, representing buckets, as we can.

PCY Algorithm – Pass 1

    FOR (each basket) {
        FOR (each item in the basket)
            add 1 to item's count;
        FOR (each pair of items) {
            hash the pair to a bucket;
            add 1 to the count for that bucket
        }
    }
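The same pass as a runnable Python sketch; NUM_BUCKETS and the use of the built-in hash on a sorted pair are illustrative assumptions:

    from itertools import combinations

    NUM_BUCKETS = 50_000_000  # as many counters as fit after the item counts

    def pcy_pass1(baskets):
        """One pass over the baskets: count items, and count hashed pairs."""
        item_counts = {}
        bucket_counts = [0] * NUM_BUCKETS
        for basket in baskets:
            for item in basket:
                item_counts[item] = item_counts.get(item, 0) + 1
            for pair in combinations(sorted(basket), 2):
                bucket_counts[hash(pair) % NUM_BUCKETS] += 1
        return item_counts, bucket_counts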

Observations About Buckets
1. A bucket that a frequent pair hashes to is surely frequent.
   - We cannot use the hash table to eliminate any member of this bucket.
2. Even without any frequent pair, a bucket can be frequent.
   - Again, nothing in the bucket can be eliminated.

Observations – (2)
3. But in the best case, the count for a bucket is less than the support s.
   - Now, all pairs that hash to this bucket can be eliminated as candidates, even if the pair consists of two frequent items.
- Thought question: under what conditions can we be sure most buckets will be in case 3?

PCY Algorithm – Between Passes
- Replace the buckets by a bit-vector: 1 means the bucket is frequent; 0 means it is not.
- 4-byte integers are replaced by bits, so the bit-vector requires 1/32 of memory.
- Also, decide which items are frequent and list them for the second pass.

PCY Algorithm – Pass 2
- Count all pairs {i, j} that meet the conditions for being a candidate pair:
  1. Both i and j are frequent items.
  2. The pair {i, j} hashes to a bucket number whose bit in the bit vector is 1.
- Notice all these conditions are necessary for the pair to have a chance of being frequent.
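A sketch of the between-passes compression and the Pass 2 candidate test, continuing the conventions of the Pass 1 sketch:

    from itertools import combinations

    def to_bitmap(bucket_counts, s):
        """Replace each 4-byte count by one bit: 1 = frequent bucket."""
        return [1 if c >= s else 0 for c in bucket_counts]  # conceptually bits

    def pcy_pass2(baskets, item_counts, bitmap, s):
        """Count only pairs of frequent items that hash to a frequent bucket."""
        frequent = {i for i, c in item_counts.items() if c >= s}
        pair_counts = {}
        for basket in baskets:
            items = sorted(i for i in basket if i in frequent)
            for pair in combinations(items, 2):
                if bitmap[hash(pair) % len(bitmap)]:
                    pair_counts[pair] = pair_counts.get(pair, 0) + 1
        return {p: c for p, c in pair_counts.items() if c >= s}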

Memory Details
- Buckets require a few bytes each.
  - Note: we don't have to count past s.
  - The number of buckets is O(main-memory size).
- On the second pass, a table of (item, item, count) triples is essential (why?).
  - Thus, the hash table must eliminate 2/3 of the candidate pairs for PCY to beat A-Priori.

Multistage Algorithm
- Key idea: after Pass 1 of PCY, rehash only those pairs that qualify for Pass 2 of PCY.
- On the middle pass, fewer pairs contribute to buckets, so there are fewer false positives: frequent buckets with no frequent pair.

Multistage Picture
[Figure: Pass 1 holds the item counts and the first hash table; Pass 2 holds the frequent items, Bitmap 1, and the second hash table; Pass 3 holds the frequent items, Bitmaps 1 and 2, and the counts of candidate pairs.]

Multistage – Pass 3
- Count only those pairs {i, j} that satisfy these candidate pair conditions:
  1. Both i and j are frequent items.
  2. Using the first hash function, the pair hashes to a bucket whose bit in the first bit-vector is 1.
  3. Using the second hash function, the pair hashes to a bucket whose bit in the second bit-vector is 1.
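The Pass 3 test as a small illustrative predicate; h1 and h2 stand for the two independent hash functions:

    def is_candidate(pair, frequent_items, bitmap1, bitmap2, h1, h2):
        """All three conditions must hold for {i, j} to be counted on Pass 3."""
        i, j = pair
        return (i in frequent_items and j in frequent_items
                and bitmap1[h1(pair) % len(bitmap1)]   # first-pass bucket frequent
                and bitmap2[h2(pair) % len(bitmap2)])  # second-pass bucket frequent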

Important Points
- The two hash functions have to be independent.
- We need to check both hashes on the third pass.
  - If not, we would wind up counting pairs of frequent items that hashed first to an infrequent bucket but happened to hash second to a frequent bucket.

Multihash
- Key idea: use several independent hash tables on the first pass.
- Risk: halving the number of buckets doubles the average count. We have to be sure most buckets will still not reach count s.
- If so, we can get a benefit like multistage, but in only 2 passes.

Multihash Picture
[Figure: on Pass 1, main memory holds the item counts plus the first and second hash tables; Pass 2 proceeds as in PCY, with one bitmap per table.]
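A sketch of the multihash first pass, with two tables splitting the memory that one PCY table would use; the salted second hash is an illustrative stand-in for an independent hash function:

    from itertools import combinations

    HALF = 25_000_000  # two tables split the memory one PCY table used

    def multihash_pass1(baskets):
        """Pass 1 with two hash tables: each pair is counted in both."""
        item_counts, t1, t2 = {}, [0] * HALF, [0] * HALF
        for basket in baskets:
            for item in basket:
                item_counts[item] = item_counts.get(item, 0) + 1
            for pair in combinations(sorted(basket), 2):
                t1[hash(pair) % HALF] += 1
                t2[hash(("salt",) + pair) % HALF] += 1  # second, independent-ish hash
        return item_counts, t1, t2

On Pass 2, a pair is counted only if both of its buckets were frequent, one bitmap per table.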

Extensions
- Either multistage or multihash can use more than two hash functions.
- In multistage, there is a point of diminishing returns, since the bit-vectors eventually consume all of main memory.
- For multihash, the bit-vectors occupy exactly what one PCY bitmap does, but too many hash functions makes all counts > s.

All (Or Most) Frequent Itemsets in ≤ 2 Passes
- A-Priori, PCY, etc., take k passes to find frequent itemsets of size k.
- Other techniques use 2 or fewer passes for all sizes:
  - Simple algorithm.
  - SON (Savasere, Omiecinski, and Navathe).
  - Toivonen.

Simple Algorithm – (1)
- Take a random sample of the market baskets.
- Run A-Priori or one of its improvements (for sets of all sizes, not just pairs) in main memory, so you don't pay for disk I/O each time you increase the size of itemsets.
  - Be sure you leave enough space for counts.

Main-Memory Picture
[Figure: main memory holds a copy of the sample baskets plus space for counts.]

Simple Algorithm – (2)
- Use as your support threshold a suitable, scaled-back number.
  - E.g., if your sample is 1/100 of the baskets, use s/100 as your support threshold instead of s.
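The sampling step as a sketch; run_apriori is a hypothetical stand-in for any in-memory frequent-itemset miner:

    import random

    def simple_algorithm(baskets, s, fraction=0.01):
        """Mine a random sample in main memory with a scaled-down threshold."""
        sample = [b for b in baskets if random.random() < fraction]
        scaled_s = s * fraction               # e.g., s/100 for a 1% sample
        return run_apriori(sample, scaled_s)  # hypothetical in-memory miner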

Simple Algorithm – Option
- Optionally, verify that your guesses are truly frequent in the entire data set by a second pass.
- But you don't catch sets frequent in the whole but not in the sample.
  - A smaller threshold, e.g., s/125, helps catch more truly frequent itemsets.
  - But it requires more space.

SON Algorithm – (1)
- Repeatedly read small subsets of the baskets into main memory and perform the first pass of the simple algorithm on each subset.
- An itemset becomes a candidate if it is found to be frequent in any one or more subsets of the baskets.

SON Algorithm – (2)
- On a second pass, count all the candidate itemsets and determine which are frequent in the entire set.
- Key "monotonicity" idea: an itemset cannot be frequent in the entire set of baskets unless it is frequent in at least one subset.

SON Algorithm – Distributed Version
- This idea lends itself to distributed data mining.
- If baskets are distributed among many nodes, compute frequent itemsets at each node, then distribute the candidates from each node.
- Finally, accumulate the counts of all candidates.
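A two-pass SON sketch; run_apriori is again a hypothetical in-memory miner, each chunk is assumed to fit in main memory, and chunks must be re-iterable (e.g., a list of lists):

    def son(chunks, s, num_chunks):
        """Pass 1: mine each chunk with a scaled threshold to get candidates.
        Pass 2: count every candidate over all chunks; keep the truly frequent."""
        candidates = set()
        for chunk in chunks:
            local = run_apriori(chunk, s / num_chunks)  # hypothetical miner
            candidates |= {frozenset(c) for c in local}
        counts = dict.fromkeys(candidates, 0)
        for chunk in chunks:
            for basket in chunk:
                items = set(basket)
                for c in candidates:
                    if c <= items:
                        counts[c] += 1
        return {c for c, n in counts.items() if n >= s}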

Toivonen's Algorithm – (1)
- Start as in the simple algorithm, but lower the threshold slightly for the sample.
  - Example: if the sample is 1% of the baskets, use s/125 as the support threshold rather than s/100.
- Goal is to avoid missing any itemset that is frequent in the full set of baskets.

Toivonen's Algorithm – (2)
- Add to the itemsets that are frequent in the sample the negative border of these itemsets.
- An itemset is in the negative border if it is not deemed frequent in the sample, but all its immediate subsets are.

Example: Negative Border
- ABCD is in the negative border if and only if:
  1. It is not frequent in the sample, but
  2. All of ABC, BCD, ACD, and ABD are.
- A is in the negative border if and only if it is not frequent in the sample.
  - Because the empty set is always frequent.
  - Unless there are fewer baskets than the support threshold (silly case).
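Computing the negative border, as a sketch; itemsets are represented as frozensets, and the candidate-generation scheme is an illustrative choice:

    from itertools import combinations

    def negative_border(frequent, all_items):
        """Itemsets not frequent in the sample whose immediate subsets all are."""
        freq = set(frequent) | {frozenset()}  # the empty set counts as frequent
        candidates = {fs | {x} for fs in freq for x in all_items if x not in fs}
        return {
            c for c in candidates
            if c not in freq
            and all(frozenset(t) in freq for t in combinations(c, len(c) - 1))
        }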

Picture of Negative Border
[Figure: the negative border (singletons, doubletons, tripletons, ...) forms a layer just outside the frequent itemsets from the sample.]

Toivonen's Algorithm – (3)
- In a second pass, count all candidate frequent itemsets from the first pass, and also count their negative border.
- If no itemset from the negative border turns out to be frequent, then the candidates found to be frequent in the whole data are exactly the frequent itemsets.

Toivonen's Algorithm – (4)
- What if we find that something in the negative border is actually frequent?
  - We must start over again!
- Try to choose the support threshold so the probability of failure is low, while the number of itemsets checked on the second pass fits in main memory.

If Something in the Negative Border Is Frequent . . .
[Figure: the same lattice, but a frequent itemset has broken through the negative border. How far does the problem go?]

Theorem
- If there is an itemset that is frequent in the whole, but not frequent in the sample, then there is a member of the negative border for the sample that is frequent in the whole.

Proof
- Suppose not; i.e.:
  1. There is an itemset S frequent in the whole but not frequent in the sample, and
  2. Nothing in the negative border is frequent in the whole.
- Let T be a smallest subset of S that is not frequent in the sample.
- T is frequent in the whole (S is frequent + monotonicity).
- T is in the negative border: T is not frequent in the sample, yet every immediate subset of T is (else T would not be "smallest").
- So T is a member of the negative border that is frequent in the whole, contradicting (2).

Compacting the Output
- Maximal frequent itemsets: no immediate superset is frequent.
- Closed itemsets: no immediate superset has the same count (> 0).
  - Stores not only frequent information, but exact counts.

Example: Maximal/Closed

    Itemset    Count    Maximal (s=3)    Closed
    A          4        No               No
    B          5        No               Yes
    C          3        No               No
    AB         4
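Classifying counted itemsets per these definitions, as a sketch; `counts` maps frozensets to support counts and is assumed to include every itemset with a nonzero count:

    def classify(counts, all_items, s):
        """Map each itemset to (is_maximal, is_closed)."""
        out = {}
        for itemset, count in counts.items():
            supersets = [itemset | {x} for x in all_items if x not in itemset]
            is_maximal = count >= s and all(counts.get(u, 0) < s for u in supersets)
            is_closed = all(counts.get(u, 0) != count for u in supersets)
            out[itemset] = (is_maximal, is_closed)
        return out

For example, A above is neither maximal nor closed because AB is frequent and has the same count, 4.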
