--- Log opened Sat Sep 10 00:00:12 2011
00:10 <blackburn> broken in 0.9
00:21 <@sonney2k> I really thought I had checked these
00:23 <blackburn> sonney2k: will check some more tags tomorrow
00:24 <blackburn> and btw I have to integrate SuperLU as soon as possible
00:24 <blackburn> our LLE is slower than the scikits-learn one
00:24 <blackburn> it is a blocker for my possible paper about our implementations
00:25 <@sonney2k> heh :)
00:25 <@sonney2k> go ahead
00:25 <@sonney2k> and the GNB needs fixing too
00:26 <blackburn> a lot of things to fix
00:26 <blackburn> next week will be hard, but I hope it will go smoother after that
00:26 <blackburn> there will be a development kick-off at my job
00:27 <blackburn> see you
00:27 -!- blackburn [~blackburn@] has quit [Quit: Leaving.]
02:05 -!- serialhex [] has joined #shogun
04:52 -!- in3xes [~in3xes@] has joined #shogun
09:49 -!- blackburn [~blackburn@] has joined #shogun
10:19 -!- in3xes [~in3xes@] has quit [Ping timeout: 258 seconds]
<CIA-3> shogun: Sergey Lisitsyn master * r41f17ea / src/configure : Added SuperLU detection -
12:28 <@sonney2k> blackburn, ok
12:29 <blackburn> sonney2k: ok for what?
12:29 <@sonney2k> no worries - just stay in the team
12:29 <@sonney2k> what is the SuperLU stuff?
12:29 <blackburn> sonney2k: a sparse direct solver
12:29 <blackburn> I'm currently integrating it into ARPACK
12:52 <blackburn> sonney2k: the problem is that LLE uses a sparse weight matrix
12:52 <blackburn> and ARPACK provides a reverse communication interface, which makes it possible to plug in a sparse solver
12:52 <blackburn> that's how the sklearn guys did it
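For context: in ARPACK's reverse communication loop, the library returns control to the caller whenever it needs a matrix-vector product, and in shift-invert mode that product is y = (A - sigma*I)^-1 x, which is exactly where a sparse direct solver such as SuperLU plugs in. The toy sketch below (plain Python, hypothetical names, not the ARPACK API or shogun's code) shows the shift-invert idea on a dense 2x2 matrix: repeatedly solving the shifted system converges to the eigenvalue of A closest to the shift.

```python
import math

def inverse_iteration(a, sigma, iters=50):
    """Toy shift-invert iteration: repeatedly solve (A - sigma*I) y = x
    to converge on the eigenvalue of A closest to sigma.  A is a dense
    2x2 matrix here; with ARPACK the solve is the step the caller
    performs inside the reverse-communication loop (e.g. via SuperLU)."""
    # shifted matrix B = A - sigma*I
    b = [[a[0][0] - sigma, a[0][1]],
         [a[1][0], a[1][1] - sigma]]
    det = b[0][0] * b[1][1] - b[0][1] * b[1][0]
    x = [1.0, 0.0]
    for _ in range(iters):
        # "solver" step: y = B^{-1} x (Cramer's rule for the 2x2 case)
        y = [( b[1][1] * x[0] - b[0][1] * x[1]) / det,
             (-b[1][0] * x[0] + b[0][0] * x[1]) / det]
        norm = math.hypot(y[0], y[1])
        x = [y[0] / norm, y[1] / norm]
    # Rayleigh quotient x'Ax recovers the eigenvalue of A itself
    ax = [a[0][0] * x[0] + a[0][1] * x[1],
          a[1][0] * x[0] + a[1][1] * x[1]]
    return ax[0] * x[0] + ax[1] * x[1]
```

With A = [[2,1],[1,2]] (eigenvalues 1 and 3) and shift 0.5, the iteration converges to the eigenvalue 1; ARPACK's loop follows the same pattern but only asks the caller for the solve step.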
14:45 -!- mrsrikanth [~mrsrikant@] has joined #shogun
<CIA-3> shogun: Sergey Lisitsyn master * rf34ce2c / src/configure : Fixed typo in configure -
16:39 -!- mrsrikanth [~mrsrikant@] has quit [Quit: Leaving]
19:03 <blackburn> sonney2k: hello
19:49 <@sonney2k> blackburn, yes?
19:50 <@sonney2k> blackburn, I like the 3rd one
19:50 <@sonney2k> who painted it?
19:50 <blackburn> sonney2k: me, who else could have?
19:51 <@sonney2k> I guess we should have a vote on the mailing list
19:51 <@sonney2k> or even a call for logos if someone thinks he can do better
19:52 <blackburn> sonney2k: do you know a good sparse matrix multiplication lib or so?
19:56 <blackburn> sonney2k: I need a matrix-matrix dot product
19:57 <@sonney2k> what is a matrix-matrix dot product?
19:57 <blackburn> I have a sparse matrix W and have to compute W'W
19:57 <blackburn> I realized it is the only bottleneck
20:00 <blackburn> I guess it's faster to write it myself
20:00 <@sonney2k> blackburn, maybe it is not that difficult
20:00 <@sonney2k> if you assume indices are sorted for sparse vectors
20:01 <@sonney2k> you could keep track of all indices in a row
20:01 <blackburn> sonney2k: I would write a very specialized version of this product
20:01 <blackburn> well I did it already, but got it wrong
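The product under discussion, M = W'W, has entries M[i][j] equal to the dot product of columns i and j of W, so only the non-zero rows shared by the two columns contribute. A minimal pure-Python sketch of such a specialized product, assuming W is stored column-wise as lists of (row, value) pairs; the function name and storage layout are illustrative, not shogun's actual code:

```python
def sparse_wtw(cols, n_cols):
    """Compute M = W'W for a sparse matrix W given as
    cols = {col_index: [(row, value), ...]}.
    Returns only non-zero entries as a dict {(i, j): value}."""
    result = {}
    for i in range(n_cols):
        # turn column i into a dict for O(1) lookup by row index
        ci = dict(cols.get(i, []))
        for j in range(i, n_cols):  # W'W is symmetric: upper triangle only
            s = 0.0
            for row, v in cols.get(j, []):
                if row in ci:
                    s += ci[row] * v
            if s != 0.0:
                result[(i, j)] = s
                result[(j, i)] = s
    return result
```

For W = [[1, 0], [2, 3]] stored as `{0: [(0, 1.0), (1, 2.0)], 1: [(1, 3.0)]}`, this yields W'W = [[5, 6], [6, 9]]; only shared non-zero rows are ever touched, which is the point of tracking per-column index lists.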
20:59 <@sonney2k> yeah it is not too easy
21:00 <blackburn> sonney2k: did it with std::list, is that ok?
21:02 <blackburn> sonney2k: finally our LLE became faster than sklearn's
21:02 <@sonney2k> blackburn, can't you use DynArray?
21:02 <blackburn> sonney2k: I guess I can
21:05 <blackburn> 8.37s shogun LLE
21:05 <blackburn> 11.59s scikits-learn LLE
21:08 <blackburn> sonney2k: DynArray has a pretty big granularity
21:10 <blackburn> I need a list with constant-time insertion
21:11 <@sonney2k> you can adjust the granularity though
21:12 <blackburn> sonney2k: I have N (number of examples) lists with no a priori known sizes
21:13 <@sonney2k> isn't that a bit too much?
21:13 <@sonney2k> N lists?!
21:14 <blackburn> sonney2k: how else can I store the non-zero indices?
21:14 <@sonney2k> ahh, number of examples. misread that
21:15 <blackburn> sonney2k: so is it ok to use std::list?
21:16 <@sonney2k> still no - but what are you doing?
21:16 <@sonney2k> what is in the list?
21:16 <blackburn> sonney2k: the indices of the non-zero elements of the columns
21:18 <@sonney2k> blackburn, but then you could use DynArray and set the granularity to the number of non-zero elements in the row you multiply with (or subsets of it)
21:19 <blackburn> sonney2k: ok will try
21:27 <blackburn> sonney2k: done
21:27 <blackburn> a little slower
21:28 <blackburn> but without your hateful std haha
21:28 <@sonney2k> which granularity size did you use?
21:28 <blackburn> well, twice the k parameter
21:28 <@sonney2k> m_k?
21:29 <blackburn> typically there are <m_k non-zero elements
21:29 <blackburn> exactly m_k in a row
21:29 <@sonney2k> I mean you know the number of elements in a row
21:29 <@sonney2k> so that is m_k?
21:29 <@sonney2k> why m_k * 2?
21:29 <blackburn> the number of non-zero elements in a row
21:29 <@sonney2k> I mean it can only become smaller
21:30 <blackburn> sonney2k: no, in a column it can be larger
21:30 <@sonney2k> blackburn, yes, but not in the product
21:30 <@sonney2k> then it is the intersection of row/column indices
21:31 <blackburn> I do W'W
21:31 <@sonney2k> the number of nnz components!?
21:31 <blackburn> so a column is multiplied by a column
21:32 <blackburn> using DynArray costs 0.4s :)
21:33 <blackburn> granularity doesn't affect speed at all
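For reference, the "granularity" being tuned here is the chunk size by which shogun's DynArray grows its backing store, so an append only reallocates once per chunk rather than once per element; choosing it near the expected per-column nnz (here ~m_k) means most columns need at most one or two growths. A minimal illustration of the idea in Python; the class and method names are made up, not shogun's DynArray API:

```python
class ChunkedArray:
    """Growable array that expands its backing store in fixed-size
    chunks ("granularity"), similar in spirit to shogun's DynArray.
    Illustrative only: names and layout are not shogun's."""

    def __init__(self, granularity=16):
        self.granularity = granularity
        self.capacity = granularity
        self.data = [None] * granularity
        self.size = 0

    def append(self, value):
        if self.size == self.capacity:
            # grow by one granularity-sized chunk: one reallocation
            # per chunk instead of one per appended element
            self.capacity += self.granularity
            self.data.extend([None] * self.granularity)
        self.data[self.size] = value
        self.size += 1

    def to_list(self):
        return self.data[:self.size]
```

Appending 10 elements with granularity 4 triggers only two growths (capacity 4 -> 8 -> 12); a linked list avoids reallocation entirely but pays per-node allocation and poor cache locality, which is why the array version was competitive above.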
21:35 <blackburn> I would parallelize that too
21:35 <@sonney2k> how is that possible then?! I mean if you used a huge granularity, DynArray must be faster than any list
21:36 <blackburn> no idea, maybe a wrong measurement
21:56 -!- blackburn [~blackburn@] has quit [Read error: No route to host]
21:56 -!- blackburn [~blackburn@] has joined #shogun
<CIA-3> shogun: Sergey Lisitsyn master * rf8944e0 / (2 files): Beautified dimreduction examples -
<CIA-3> shogun: Sergey Lisitsyn master * r613d9dc / (2 files): Improved performance of locally linear embedding -
<CIA-3> shogun: Sergey Lisitsyn master * r7e0438e / src/shogun/preprocessor/LocallyLinearEmbedding.cpp : Removed unnecessary includes -
<CIA-3> shogun: Sergey Lisitsyn master * r5ef91ed / src/shogun/preprocessor/KernelLocallyLinearEmbedding.cpp : Updated KLLE -
--- Log closed Sun Sep 11 00:00:17 2011