--- Log opened Mon Apr 02 00:00:13 2012
--- Day changed Mon Apr 02 2012
harshit_okay.doing that ..00:00
n4nd0sonney2k, blackburn new pull request :)00:00
blackburnit seems I stare to github whole day long hah00:00
blackburnn4nd0: I'll review it in a min00:01
n4nd0blackburn: thank you man, but no hurries, it is kind of late already00:01
@sonney2kharshit_, I am starting to fall asleep - but I want to see the updated output before I do so please do it now if you can00:01
n4nd0blackburn: just tell me a general opinion if so00:01
blackburnn4nd0: do you prefer commenting code here or at github? ;)00:02
@sonney2kn4nd0, cool figures00:02
n4nd0blackburn: let's do it at github better00:02
n4nd0sonney2k: thank you :D00:02
n4nd0sonney2k: did you see the one I prepared for multiclass svm?00:03
n4nd0sonney2k: I didn't expect that flower shape haha00:04
@sonney2kI should check whether my son recognizes this as flower :D00:05
@sonney2ksaying bluuu bluu (flower == blume in german)00:05
n4nd0blomma in Swedish00:05
n4nd0there are similarities00:06
blackburnsonney2k: your son is kind of talking already?!00:06
blackburnn4nd0: are you sure it works only for euclidean?00:07
@sonney2kmight be an illusion but I think he starts to recognizing flowers - at least every time I got close to one he is making this weird bluuu  bluee sound00:08
n4nd0blackburn: I didn't check it :S00:08
blackburnsonney2k: cool00:08
harshit_sonney2k: hey figured that out .. it was not the problem in the values , it was the problem with the way i was printing them00:09
harshit_the actual problem was something else00:09
-!- PhilTillet [] has joined #shogun00:10
@sonney2kharshit_, so enlighten us!00:10
harshit_actually problem was that i was resetting value of t to 0 in function line_search_linear00:13
harshit_where as it needed to be calculated by using its previous value00:14
harshit_Just need some more testing then i'll send you the final test results on both c++ and matlab00:15
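The bug harshit_ describes above, resetting the step size t to 0 inside line_search_linear instead of carrying over its previous value, can be sketched in Python. The function name mirrors the one mentioned in the chat, but the body is a hypothetical backtracking search for illustration, not shogun's actual NewtonSVM code:

```python
def line_search_linear(f, x, d, t_prev):
    """Hypothetical backtracking line search along direction d.

    The step size is warm-started from t_prev (its value from the
    previous Newton iteration) rather than reset to 0 -- resetting
    it was the bug described above.
    """
    t = t_prev if t_prev > 0 else 1.0  # seed from the previous call
    while f(x + t * d) > f(x) and t > 1e-12:
        t *= 0.5  # backtrack until the step decreases f
    return t

# usage: minimize f(x) = x^2 starting from x = 4, direction d = -x
f = lambda x: x * x
t = 0.0
x = 4.0
for _ in range(10):
    t = line_search_linear(f, x, -x, t)
    x = x + t * (-x)
```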
shogun-buildbotbuild #652 of libshogun is complete: Failure [failed compile]  Build details are at  blamelist:, shelhamer@imaginarynumber.net00:18
blackburnah yes btw it fails00:19
blackburnI'll take care00:24
CIA-64shogun: Soeren Sonnenburg master * ra2a559c / testsuite/python_modular/ : add -m option to tester to show only missing tests -
@sonney2knite folks00:27
blackburnsonney2k: nite00:27
n4nd0good night00:27
@sonney2kharshit_, good to hear - I think we will need to compare newton svm from C++ to some bigger data set once you are confident again that it works00:28
harshit_sonney2k : wait00:28
harshit_for 2 min00:28
harshit_see here :
blackburnno way :D00:28
harshit_is he gone ?00:29
harshit_then please you have a look00:29
n4nd0I had enough for today too guys00:30
n4nd0good night00:30
-!- Vuvu [~Vivan_Ric@] has quit [Remote host closed the connection]00:30
-!- romi_ [~mizobe@] has quit [Quit: Leaving]00:30
blackburnn4nd0: nite00:30
harshit_good bye n3nd000:30
n4nd0harshit_: idk if I can help you with that, what is the problem?00:31
harshit_problem is with precision of float64_t00:31
n4nd0mmm no idea what's going on there00:32
blackburnyeah hard to digest00:32
-!- n4nd0 [] has left #shogun []00:32
-!- Vuvu [~Vivan_Ric@] has joined #shogun00:34
blackburnharshit_: hey but h's are different!00:36
harshit_I think i have messed up a lil with the label values00:37
harshit_in sleep :(00:37
blackburnhah no worries00:38
harshit_dont knw what i am doing, I think i should continue tmrw00:38
harshit_maybe everything requires a clean test again00:38
CIA-64shogun: Sergey Lisitsyn master * ra17ad85 / (3 files): Fixes for SG_ADD conversion -
harshit_blackburn: hey do you think liblinear + lbp features + wiking preprocessor will make a good proposal00:39
harshit_c5.0 + liblinear00:39
blackburnc5.0 is much more time demanding and important for me00:39
harshit_may be i should submit both , what say ?00:40
blackburnfeel free if you want to do that :)00:41
-!- av3ngr [av3ngr@nat/redhat/x-mtbcichayakoymjc] has joined #shogun00:42
harshit_btw would you also apply for gsoc ?00:42
harshit_if so then what project would you be working on ?00:45
blackburnyes, for multitask learning00:48
-!- PhilTillet [] has quit [Ping timeout: 252 seconds]00:48
harshit_transfer learning is really nice field :) would be nice to see it in shogun00:52
harshit_good bye, going to sleep.00:53
-!- harshit_ [~harshit@] has quit [Remote host closed the connection]00:55
blackburngood night00:57
-!- PhilTillet [] has joined #shogun01:00
-!- PhilTillet [] has quit [Ping timeout: 245 seconds]01:13
-!- PhilTillet [] has joined #shogun01:17
-!- blackburn [~qdrgsm@] has quit [Quit: Leaving.]01:20
shogun-buildbotbuild #654 of libshogun is complete: Success [build successful]  Build details are at
-!- PhilTillet [] has quit [Ping timeout: 265 seconds]01:36
-!- flxb [] has left #shogun []01:40
-!- Vuvu [~Vivan_Ric@] has quit [Quit: Leaving]06:06
-!- harshit_ [~harshit@] has joined #shogun06:43
-!- harshit_ [~harshit@] has quit [Ping timeout: 246 seconds]07:39
-!- n4nd0 [] has joined #shogun07:40
-!- harshit_ [~harshit@] has joined #shogun07:48
-!- menonnik [b4953181@gateway/web/freenode/ip.] has joined #shogun08:00
@sonney2kharshit_, for liblinear/ocas not a lot of work (maybe close to nothing) might be necessary - so implementing other dotfeatures might be sth to focus on and of course if you intend to work on trees in general - also fine08:10
harshit_sonney2k: So would you prefer liblinear + c5.008:11
harshit_or liblinear + lbp + some other features08:12
@sonney2kdotfeatures + c5.x08:12
@sonney2kor other trees08:12
harshit_ohk got it, so trees are in demand.08:13
harshit_sonney2k : for Newton SVM , it was definitely  not precision issue08:13
@sonney2kwell we don't have any trees in there - so it would be nice to have any08:13
harshit_there is some other small problem, which i am searching for last 3hrs08:14
harshit_will get you on it,As soon i'll find it08:14
@sonney2kbut other stuff is also fine - we will have to see how many slots we get and how we can divide work then so a little flexibility is necessary08:14
@sonney2kharshit_, thanks!08:14
harshit_so in trees would c5.0 will be the best to implement, the c5.0 released under gnu license08:15
harshit_or you have something else in your mind ?08:15
harshit_Also i dont want to waste the current time, So want to start working on LBP features08:16
harshit_Saw opencv code: it looks good08:17
CIA-64shogun: Evan Shelhamer master * r4f1e9e5 / src/README.developer : Cleanup doc whitespace and word wrap -
CIA-64shogun: Evan Shelhamer master * r787d681 / src/README.developer : (log message trimmed)08:17
CIA-64shogun: Revise whitespace and versioning scheme sections of the developer readme08:17
CIA-64shogun: - include notice of newline convention (LF) and how to enforce it automatically08:17
CIA-64shogun:  through git settings08:17
CIA-64shogun: - rephrase trailing whitespace caution and suggest means to automatically08:17
CIA-64shogun:  strip trailing whitespace for emacs and vim08:17
CIA-64shogun: - update versioning section with info on github and suggested workflow08:17
CIA-64shogun: Soeren Sonnenburg master * r142a356 / src/README.developer :08:17
CIA-64shogun: Merge pull request #411 from shelhamer/readme-whitespace-and-versioning08:17
CIA-64shogun: Developer Readme whitespace and versioning -
-!- menonnik [b4953181@gateway/web/freenode/ip.] has quit [Quit: Page closed]08:26
harshit_sonney2k: ^08:26
-!- gsomix [~gsomix@] has quit [Read error: Operation timed out]08:33
-!- sonne|work [~sonnenbu@] has joined #shogun08:51
-!- mridul [~Adium@] has joined #shogun08:59
-!- gsomix [~gsomix@] has joined #shogun09:04
-!- harshit_ [~harshit@] has quit [Ping timeout: 248 seconds]09:09
-!- harshit_ [~harshit@] has joined #shogun09:10
sonne|workmoin gsomix09:10
harshit_sonney2k:, have a look at line 17,18,19 in C++ output and 18,19,20 in Matlab output09:29
harshit_n4nd0 : if you are around, please have a look at
n4nd0harshit_: so what's the problem?09:31
harshit_actually precision is the problem ,09:31
harshit_as you can see in line 19 of C++09:32
harshit_and in line 21 of matlab09:32
harshit_g = gpart1- gpart209:33
harshit_g after subtraction comes out to be of order e-1709:33
n4nd0there is a little error around here09:33
-!- mridul [~Adium@] has left #shogun []09:34
n4nd0but what's g? is it the difference between the result you get with matlab and C++?09:34
harshit_yeah and thats bocz of high precision in matlab09:34
harshit_no g is just a variable used for getting the results09:34
n4nd0but the difference between C++ and matlab is ~e-17??09:35
-!- stephenlee [da18b3c4@gateway/web/freenode/ip.] has joined #shogun09:36
harshit_yeah its a very small difference but the problem is that in next line of code i compute g/h09:36
harshit_as h is also of order e-17 , g/h turn out to be some thing large09:37
harshit_and here the difference becomes prominent09:37
n4nd0I see09:38
harshit_do you think i should move on with it ..09:41
harshit_coz it really doesnt make any big impact on final result09:42
n4nd0so the final result is ok?09:42
harshit_yeah in most cases, the difference is of order of 1e-5 at most09:44
harshit_in final result09:44
n4nd0in my opinion, that is ok09:44
n4nd0but you should ask sonney2k just in case09:45
harshit_have a look here :
harshit_thats one of the big differences i have encountered09:47
harshit_do you think thats acceptable09:47
n4nd0I see09:47
n4nd0idk, I mean, there's probably no general answer to that09:47
n4nd0I guess it depends on application09:47
harshit_yeah, right may be i need to do some real tests09:48
harshit_on it09:48
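The precision issue being discussed is classic floating-point cancellation: subtracting two nearly equal quantities leaves only rounding noise of order 1e-17, and dividing two such residuals (g/h) amplifies that noise into an O(1) number. A standalone Python illustration; the names gpart1/gpart2/g/h follow the chat, not the actual NewtonSVM variables:

```python
# Two quantities that are mathematically equal but computed differently:
gpart1 = 0.1 + 0.2   # stored as the double nearest 0.30000000000000004
gpart2 = 0.3
g = gpart1 - gpart2  # ~5.6e-17: pure rounding noise, not a real value

# A denominator of the same tiny magnitude turns that noise into an
# O(1) number -- the "g/h turns out to be something large" effect:
h = 2 ** -54         # ~5.6e-17
ratio = g / h

print(g, ratio)
```

Whether a discrepancy of this origin matters depends, as n4nd0 says, on how the downstream computation uses the ratio.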
-!- n4nd0 [] has quit [Quit: Changing server]09:50
-!- n4nd0 [] has joined #shogun09:53
n4nd0harshit_: have you already thought of any particular example to test it?09:54
harshit_not yet, why ? do you have any in your mind ?09:55
n4nd0mmm I would try with classification of toy data first09:57
n4nd0if you get good results / similar to MATLAB09:57
n4nd0it will be ok I guess09:57
harshit_ohk will do it tday, lets hope for best :)09:58
-!- flxb [~cronor@] has joined #shogun10:03
-!- flxb [~cronor@] has quit [Quit: flxb]10:10
-!- flxb [~cronor@] has joined #shogun10:15
-!- flxb [~cronor@] has left #shogun []10:16
-!- flxb [] has joined #shogun10:16
-!- blackburn [5bdfb203@gateway/web/freenode/ip.] has joined #shogun10:21
blackburnoh good old sonne|work haven't seen you there for a while ;)10:22
-!- n4nd0 [] has quit [Ping timeout: 244 seconds]10:25
blackburnsonne|work: do you think development and master line separating is a good idea? I do10:28
-!- vikram360 [~vikram360@] has quit [Ping timeout: 265 seconds]10:30
sonne|workI don't understand why it is a good idea...10:32
sonne|workin my brain (at least currently) master is always the most up-to-date thing10:32
blackburnsonne|work: something like debian stable as master and debian unstable as development? :)10:34
sonne|workbut where is master in all this?10:35
sonne|workI mean what is master needed for then?10:35
sonne|workwe currently have branches for shogun 1.X etc10:35
sonne|workbut never use them (unfortunately)10:35
sonne|workso we don't really fix bugs and release 1.1.1 / 1.1.2 ...10:36
blackburnsonne|work: what is master for you?10:36
sonne|workbut always work on master10:36
sonne|workmaster is development branch for me - the thing that is most recent10:36
blackburnin this approach current master should go to development10:36
sonne|workof course it makes sense to have branches for new complicated features10:37
blackburnother way is to create stable branch10:37
sonne|worklike the (rotten) c5.0 branch10:37
blackburnone example10:37
blackburnwe wanted to release10:37
blackburnso we would separate branch from some march revision10:37
blackburnbefore this new features10:37
blackburnand merge fixes10:37
sonne|workyeah I agree to that10:40
blackburnin fact I think we are way too fast to have releases10:40
sonne|workI don't think so :D10:40
blackburnI can hardly imagine somebody would want to install old shogun10:41
sonne|workbut that is the reality!10:42
sonne|workpeople only install release versions!10:42
blackburnsonne|work: what I do not like is to create branches for each fix10:42
blackburnsonne|work: yes that'd right if we had .deb10:42
sonne|workwe should create automagically created debs / releases every night10:43
sonne|workonly when the test suite passes10:43
blackburnit would be costly for me to get into .deb packaging ;)10:43
blackburnwill you have time to manage this?10:44
blackburnhowever we have mighty gsomix!10:45
blackburngsomix: would you like to set it up later? ;)10:45
blackburnhey that's kind of good idea to add to your proposal10:46
blackburnsonne|work: do you like it?10:46
blackburngsomix: do not afraid to be overloaded with it, it would be more interesting than hunt for covertree memleaks10:48
blackburngsomix: ok even two extensions of your proposal: auto deb (with ppa probably) and auto tests improvement10:53
-!- n4nd0 [] has joined #shogun10:57
gsomixblackburn, ou.11:08
gsomixblackburn, I'll read it later. But, I agree to everything.11:09
-!- PhilTillet [] has joined #shogun11:09
blackburngsomix: and again start writing proposal, proposal would never be wrote by itself ;)11:10
gsomixblackburn, ok. Once I get back from the gym.11:11
blackburnare you used to visit gym? you never told me that before :)11:12
gsomixblackburn, in university.11:12
blackburnah that kind of11:12
gsomixClasses in physical culture.11:12
blackburnyou have 17 minutes more, get the python4 support done!11:13
gsomixMore defines!11:14
gsomix#defines, i mean11:14
blackburnah yes one more funny thing you can deal with11:14
blackburnis to add common lisp typemaps11:14
blackburnyeah why not11:14
blackburnIIRC there is no haskell support for swig11:14
blackburnwho would ever need this shit11:15
gsomixThere is part in swig documentation "Extending SWIG to support new languages".11:15
blackburnnevertheless I can hardly imagine this stuff in haskell11:16
-!- stephenlee [da18b3c4@gateway/web/freenode/ip.] has quit [Ping timeout: 245 seconds]11:17
harshit_blackburn : good news :) every thing is working perfectly now in Newton SVM11:28
harshit_hurray :D11:28
harshit_now just gonna test it on toy dataset11:28
blackburnharshit_: that's nice11:29
harshit_thanks for your support blackburn and n4nd011:29
blackburnoscar award speech ;)11:30
harshit_cant wait for my first open source contribution to be accepted11:31
blackburnis it your first PR?11:31
harshit_PR ?11:33
blackburnpull request11:33
harshit_to an open source organization - yes.11:34
blackburnI see11:34
harshit_but had some PR on private git hub repos11:34
blackburnI just thought you commited some things before11:34
-!- harshit_ [~harshit@] has quit [Ping timeout: 260 seconds]11:51
-!- PhilTillet [] has quit [Quit: Leaving]12:11
-!- PhilTillet [] has joined #shogun12:11
-!- nickon [] has joined #shogun12:13
-!- n4nd0 [] has quit [Ping timeout: 246 seconds]12:14
-!- nickon [] has quit [Client Quit]12:16
-!- nickon [] has joined #shogun12:16
-!- n4nd0 [] has joined #shogun12:27
-!- vikram360 [~vikram360@] has joined #shogun12:30
-!- PhilTillet [] has quit [Read error: Connection reset by peer]12:33
n4nd0blackburn: regarding one of the comments in th PR12:38
n4nd0blackburn: you mean that it should work with sparse features too?12:38
blackburnwhy not?12:38
blackburnall you need is distance so you don't have to check it12:38
n4nd0yes, it looks reasonable for me12:38
n4nd0blackburn: but we are just talking about simple or sparse features right?12:40
n4nd0blackburn: I am checking some of the other types of CDotFeatures though12:41
blackburnn4nd0: let the distance check it :012:42
n4nd0blackburn: do you mean that it shouldn't be checked here?12:43
blackburnyes, distance should I think12:43
n4nd0in case it cannot be done, the distance will lock it when doing get_distance_matrix12:44
blackburnit should throw SG_ERROR on init12:44
n4nd0in init yeah12:44
n4nd0I got to that part right now :D12:44
n4nd0blackburn: btw, I think I am not really familiarized with the idea of test suite and I probably should12:49
n4nd0blackburn: I understand that it's something we use to check the behaviour of the build right?12:50
n4nd0blackburn: that everything is working fine12:50
blackburnn4nd0: it compares output with previous one12:50
n4nd0blackburn: ok12:51
n4nd0blackburn: we should always be sure that the previous one is ok then :P12:51
blackburnif they are equal - life is good12:51
n4nd0this might make me look kind of stupid but ...12:52
n4nd0I think I have never run this testsuite12:52
blackburnyou don't have to12:52
n4nd0is it something we should ourselves? or done in the buildbot?12:52
blackburnwell we have to12:53
n4nd0do I have to run it or not?12:54
blackburnhah if you want to check something - yes12:54
blackburnI mean in any case I run it sometimes and Soeren does12:55
blackburntry to run it anyway12:56
blackburnjust cd to testsuite/python_modular12:56
blackburnand run tester.py12:56
n4nd0should all of them be ok??12:56
n4nd0maybe I'm missing a library or sth, I got quite a few of ERROR, some exceptions and it ended with a seg fault :O12:58
blackburnthey should but they are not ok :D12:58
n4nd0haha ok13:00
n4nd0is the seg fault normal??13:00
n4nd0last file was13:00
n4nd0 setting 1/2                ERROR13:00
blackburnit could be related to some serialization stuff we need to fix still13:00
n4nd0I have also noted setting 1/1                  NO TEST13:01
n4nd0I guess I should provide one13:01
n4nd0same for QDA13:01
blackburnyes we need to generate tests for new ones13:02
blackburnit is pretty straightforward btw13:02 examplename.py13:03 setting 1/1                  OK13:07
n4nd0blackburn: about the other comment at github13:14
n4nd0blackburn: the one related to the embed_distance idea13:14
blackburnjust do as it was done in mds and isomap13:14
n4nd0blackburn: do you have time for it a moment now?13:14
blackburnyes a little13:15
n4nd0blackburn: didn't notice that it's done like that in the others13:15
blackburnI think apply should init distance with given features and delegate all the things to embed_distance13:15
blackburnand embed_distance should do all the job13:16
n4nd0looks nice like that13:16
n4nd0I have to understand exactly what is the job of embed distance though13:16
n4nd0in one case it should do exactly as it does now13:16
blackburnyes, the only difference is distance initialization13:18
n4nd0mmm I think I don't understand that clearly13:18
n4nd0initialization is simple m_distance->init(features, features) right?13:19
blackburnembed_distance makes sense if you have distances but do not have features13:20
n4nd0no ASSERT(features) then13:21
blackburndef apply(features):13:22
blackburn  m_distance.init(features,features)13:22
blackburn  return embed_distance(m_distance)13:22
blackburndef embed_distance(distance):13:23
blackburn  all the SPE stuff assuming distance is inited13:23
n4nd0aham, I see13:23
n4nd0but should sth else be added to handle the custom distance case you talked about?13:24
blackburnyes if you want to precompute it prematurely13:24
blackburnall you need is13:24
blackburndef apply(features):13:24
gsomixhi all13:24
blackburn  m_distance.init(features,features)13:24
blackburn  dist = CustomDistance(m_distance)13:25
blackburn  return embed_distance(dist)13:25
n4nd0then is it just required another apply to handle this case?13:26
blackburnjust pass custom distance to embed_distance13:29
n4nd0blackburn: ok, I think I got it, thank you very much :)13:33
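blackburn's pseudocode above can be fleshed out into a runnable Python sketch. The method names (apply, embed_distance) and the CustomDistance precomputation follow the chat; everything else, including the stub Distance class, is invented to make the pattern self-contained and is not shogun's real API:

```python
class Distance:
    """Stub pairwise distance, initialized with features (lhs, rhs)."""
    def init(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs
    def distance(self, i, j):
        return abs(self.lhs[i] - self.rhs[j])

class CustomDistance(Distance):
    """Precomputes all pairwise distances from an initialized distance."""
    def __init__(self, dist):
        n = len(dist.lhs)
        self.lhs, self.rhs = dist.lhs, dist.rhs
        self.matrix = [[dist.distance(i, j) for j in range(n)]
                       for i in range(n)]
    def distance(self, i, j):
        return self.matrix[i][j]

class Embedder:
    def __init__(self):
        self.m_distance = Distance()

    def apply(self, features):
        # init the distance with the given features, then delegate;
        # precomputing via CustomDistance is optional
        self.m_distance.init(features, features)
        return self.embed_distance(CustomDistance(self.m_distance))

    def embed_distance(self, distance):
        # all the SPE-style work happens here, assuming `distance` is
        # already initialized -- so this entry point also works when the
        # caller has only distances and no features
        n = len(distance.lhs)
        return [[distance.distance(i, j) for j in range(n)]
                for i in range(n)]

emb = Embedder().apply([0.0, 1.0, 3.0])
```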
-!- vikram360 [~vikram360@] has quit [Read error: Connection reset by peer]13:36
-!- vikram360 [~vikram360@] has joined #shogun13:36
-!- av3ngr [av3ngr@nat/redhat/x-mtbcichayakoymjc] has quit [Quit: That's all folks!]13:54
n4nd0blackburn: ups I got a problem with git when I tried to push the new stuff14:05
n4nd0error: failed to push some refs to ''14:06
n4nd0To prevent you from losing history, non-fast-forward updates were rejected14:06
n4nd0Merge the remote changes (e.g. 'git pull') before pushing again.  See the14:06
n4nd0'Note about fast-forwards' section of 'git push --help' for details.14:06
n4nd0I think it is the same as the other day14:06
n4nd0I should just do git push --force??14:06
n4nd0or should I pull as it suggests me14:06
blackburnno better try to rebase your branch14:06
blackburnor to merge14:07
n4nd0I did rebase before the push14:07
n4nd0I think that is why this is happening14:07
blackburnn4nd0: is it a branch?14:11
blackburnn4nd0: your master is not up to date14:12
blackburntry to push your master to github14:12
blackburnand then push your branch once again14:12
blackburni.e. you rebased it locally but not on github14:12
blackburnshould work if I understand that correctly14:12
-!- wiking [] has joined #shogun14:13
-!- wiking [] has quit [Changing host]14:13
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun14:13
n4nd0I needed to pull14:14
n4nd0it was in a branch yes14:14
n4nd0but my master is up to date, I did it a few moments ago14:15
blackburnyour local master or github one?14:15
blackburnI checked your github fork and last commit was 3 days ago14:15
n4nd0ah fuck it was my local one14:16
wikingblackburn: ey14:16
n4nd0maybe it was because of that14:16
n4nd0I did the pull and solve a couple of conflicts14:16
blackburnwiking: hey14:16
blackburnn4nd0: yes keep master up to date14:17
wikingblackburn: ok so you haven't got time yet right?14:17
blackburnboth masters14:17
blackburnwiking: hmm it seems I have actually14:17
wikingsince i haven't seen a reply by you14:17
wikingor maybe it slipped my eyes14:17
blackburnI did not understand :D14:18
blackburnwiking: did you get alex' idea14:19
sonne|workguys you should really discus on the ML14:19
blackburnhah right14:19
blackburnsonne|work: but we are talking about old mail14:20
blackburnwiking: I'd suggest you to answer to ml this time14:20
blackburnbut I still do not understand the idea14:21
sonne|workwell then invite him to this irc chat and ask him online :D14:21
blackburnsonne|work: yeah I think that's the thing wiking should do ;)14:22
wikingblackburn: he is on Skype rather than irc imho14:23
sonne|workwiking: so what - send him the irc web url and grab him - tell him that I said that :)14:24
n4nd0blackburn: I updated the PR14:27
n4nd0blackburn: I think I might have screwed though :S because of the merge14:28
n4nd0blackburn: can you take a quick look and tell if it is ok or if I should do sth?14:28
n4nd0blackburn: thank you man14:28
sonne|workwiking: btw did you submit your proposal?14:30
blackburnn4nd0: allright14:30
blackburnn4nd0: one more thing to go is to convert it to rather distance than distance matrix14:30
blackburnthis means you would have to store pointer to distance in lle point14:30
blackburnerr spe point14:30
blackburni.e. all distance_matrix[i*N+j] -> distance->distance(i,j)14:31
n4nd0aham I see14:31
wikingsonne|work: not yet14:31
sonne|workbut you plan to right?14:32
wikingsonne|work: deadline is on the 7th right14:32
sonne|work6th iirc14:32
sonne|workbut you might want to iterate with your potential mentor...14:32
blackburnhurry up!14:32
blackburngsomix: and you as well14:32
sonne|work(one can update until apr. 6)14:33
wikingi'll do something till tomorrow i think14:33
n4nd0blackburn: just in SPE_COVERTREE_POINT or also in embed_distance? the change to distance->distance(i, j)14:35
blackburnn4nd0: everywhere probably14:38
blackburnoverhead of custom distance is rather slow14:38
blackburnso it is almost the same14:38
n4nd0blackburn: then covertree has to be changed, instead of SGMatrix -> CDistance14:40
n4nd0CDistance* actually14:40
blackburnyes exactly14:40
n4nd0I am with that right now then14:40
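The conversion blackburn asks for, replacing flat matrix indexing with calls on a distance object, is mechanical: every `distance_matrix[i*N+j]` becomes `distance->distance(i,j)`. Shogun's actual code is C++ with a CDistance*; this hedged Python sketch only illustrates that the two accessors agree:

```python
N = 3
points = [0.0, 2.0, 5.0]

# before: a precomputed flat, row-major distance matrix
distance_matrix = [abs(points[i] - points[j])
                   for i in range(N) for j in range(N)]

class Dist:
    """Stand-in for passing a CDistance* instead of an SGMatrix."""
    def distance(self, i, j):
        return abs(points[i] - points[j])

dist = Dist()

# the mechanical rewrite: distance_matrix[i * N + j] -> dist.distance(i, j)
same = all(distance_matrix[i * N + j] == dist.distance(i, j)
           for i in range(N) for j in range(N))
```

The point blackburn makes about overhead: with a precomputed CustomDistance behind the object, the per-call cost is close to an array lookup, so the interface change costs almost nothing.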
-!- gsomix [~gsomix@] has quit [Ping timeout: 276 seconds]15:04
-!- gsomix [~gsomix@] has joined #shogun15:10
-!- harshit_ [~harshit@] has joined #shogun15:20
n4nd0flxb: hi there15:28
flxbI have a weird error. I get Illegal instruction errors for some (!) shogun examples on some (!) nodes on our cluster. On other nodes everything works fine. The nodes should have the same setup. The examples that fail have Illegal instruction in the next line in this listing: Does anyone have any idea what this could be?15:29
n4nd0where does that Illegal instruction come from?15:32
n4nd0I mean, you redirect the output of the python scripts /dev/null right?15:33
flxbbut apparently illegal instruction errors are not redirected15:34
n4nd0I just executed the script and didn't get any of those Illegal instructions15:34
n4nd0let me check again15:34
n4nd0what do you get without /dev/null?15:35
n4nd0in particular for those that give illefal instruction15:36
flxbi get
flxbit must occur when shogun apply gets called15:39
n4nd0I think I need more hints because I cannot reproduce that illegal instruction in my machine15:42
n4nd0I can execute without problems15:42
blackburnwtf is illegal instruction15:43
n4nd0I think it is something related to the processor15:45
flxbn4nd0: it works on some nodes here too15:45
blackburnflxb: could you please paste configure output somewhere?15:45
blackburnit can be related to -march parameter of gcc15:45
harshit_n4nd0,blackburn : how can i make use of toy dataset in C++ , it doesn't seems to be in plain text15:46
blackburnwhy not to use it from python?15:47
n4nd0harshit_: what kind of format does the file have?15:48
flxbblackburn: configure output is here:
flxbblackburn: maybe it's worth mentioning that i had to change swig to swig2.0 in configure file15:49
harshit_blackburn : my python is not very good, Actually I can test it in octave.15:49
blackburnflxb: shouldn't cause it15:49
harshit_n4nd0: its *.dat and *.mat15:50
blackburnflxb: actually you could try to compile with --disable-optimization15:50
n4nd0harshit_: binary files then?15:50
harshit_not really when i open DynProg in text editor it shows some weird symbols15:51
blackburnflxb: it worths mentioning you have pretty old gcc btw15:51
n4nd0harshit_: then they are binary files ( != text files )15:51
n4nd0harshit_: I don't think there is something in shogun to read directly .dat and .mat in C++15:52
n4nd0harshit_: it's what we discussed last week15:52
harshit_have to make use of octave then15:52
n4nd0harshit_: python works good for them too15:52
harshit_I remember, Also in 2011 ideas list there were some project related to this issue15:53
flxbblackburn: should i try with Checking for GCC & CPU optimization abilities ... k8?15:53
harshit_project to enable shogun to support mat 7.0 files15:53
harshit_that seems interesting too15:53
blackburnflxb: k8? it seems you have xeon, why k8?15:53
blackburnI don't think we really need it - it is oneliner in python15:54
flxbblackburn: i did --disable-optimization, but i am now on a different node. can this be the problem?15:54
blackburnflxb: what do you mean? did you compile with disable opt?15:55
flxbblackburn: no, i did not, i will try that now15:55
blackburnI see15:55
flxbbut previously i compiled on another node15:56
blackburnflxb: are they identical?15:56
blackburnin case of optimizations enabled that could cause some troubles15:56
flxbblackburn: yes i guess that is the problem15:56
flxbone is Quad-Core AMD Opteron(tm) Processor 2378, the other one is Intel(R) Xeon(R) CPU15:57
blackburnyes could be a problem for sure15:57
blackburnthen just compile on each machine separately with optimizations enabled15:57
blackburnshould work15:57
blackburnit is some mtune or march or other hardware-related option issue15:58
flxbhow much performance drawbacks do i have if i just disable optimization?15:58
blackburndepends what are you doing15:59
blackburnhowever I haven't seen any drawback larger than 1.5x15:59
-!- Marty28 [~Marty@] has quit [Quit: ChatZilla [Firefox 11.0/20120310010446]]16:00
flxbblackburn: ok. everything works now!16:01
blackburnwhoa I don't like to work 8 hours straight :(16:10
n4nd0blackburn: lot of things to do at the job?16:12
blackburnn4nd0: no, but sitting for 8 hours..16:18
blackburnI mean it bothers to not change activities/place :)16:20
sonne|workharshit_: totally fine if you come up with an octave modular example16:27
harshit_sonne|work : done with it !16:27
harshit_test it on fm_train_real.dat16:27
sonne|workhow does it compare speed wise?16:27
harshit_same result on matlab16:27
sonne|workwell you can load any (bigger) data set16:27
harshit_which one ? dna ?16:28
sonne|workor mnist ...16:28
sonne|workIIRC someone has .mat files for that one around too16:28
harshit_ohk,but please tell me how to compare speeds16:29
sonne|worktic; toc; in matlab :)16:29
sonne|workor octave16:29
harshit_okay .. thanks16:30
sonne|workso just load the data tic; call newton svm matlab script ; toc16:30
sonne|workand then do the same under octave with your new shogun newton svm impl16:30
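In MATLAB/Octave, tic starts a wall-clock timer and toc reports the elapsed seconds around the training call. The same bracket-the-region pattern in Python, as an analogue only; train_newton_svm is a placeholder for the call being benchmarked, not a shogun function:

```python
import time

def train_newton_svm(data):
    # placeholder for the actual training call being benchmarked
    return sum(x * x for x in data)

data = list(range(100000))
start = time.perf_counter()            # tic
model = train_newton_svm(data)
elapsed = time.perf_counter() - start  # toc
```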
n4nd0sonne|work: I just submitted the application for multiclass16:30
sonne|workseen it16:31
n4nd0sonne|work: could you take it a look when you get some time and tell your opinion / things to improve?16:31
n4nd0sonne|work: thank you :)16:31
n4nd0sonne|work: it is quite based on SO's one so you can save some reading ;)16:32
n4nd0time for me to take a rest16:37
n4nd0see you later guys16:37
blackburnyeah you are too16:37
blackburnehm don't know right word16:37
blackburnnervous? ;)16:38
n4nd0mmm no16:38
n4nd0should I :P??16:38
blackburnno, I mean you spend too much time there16:38
n4nd0ah yes16:38
n4nd0see you later then16:39
blackburnsee you16:39
harshit_sonne|work : time  =  8.2211e-04, for training on fm database16:39
n4nd0blackburn: try to stand up and stretch your legs for a while :)16:39
harshit_now testing it on 20 newsgroup16:39
harshit_will that work16:39
-!- n4nd0 [] has quit [Quit: leaving]16:39
sonne|workharshit_: how big is the data set?16:40
sonne|workhow many examples / dims?16:40
blackburnn4ndo yeah I do16:40
sonne|workand please try matlab  too :)16:40
sonne|workso we can compare16:40
harshit_not really big16:41
harshit_now running on 20Newgroup which is very large16:41
sonne|workharshit_: how large?16:42
harshit_about 18000 examples and 6000 features in 20NEwsgroup16:42
sonne|workok - is this data set sparse?16:42
sonne|workif so please don't forget to convert it to dense16:42
sonne|workin matlab the cmd is16:43
sonne|workso we can compare16:43
sonne|workoctave modular has no sparse feature matrix support yet (one of gsomix's gsoc tasks :)16:43
harshit_yeah that is sparse :16:44
blackburnsonne|work: one of 10016:44
sonne|work100000 :D16:44
harshit_blackburn : having problems with 20Newsgroup on my problem16:52
harshit_its a multiclass problem16:52
harshit_which big dataset do you normally use16:53
harshit_thanks, :)16:58
-!- harshit_ [~harshit@] has quit [Ping timeout: 245 seconds]17:02
-!- genix [~gsomix@] has joined #shogun17:09
@sonney2kblackburn, well he could have just merged some labels...17:09
blackburnsonney2k: Im not sure what is you are talknig about?17:10
@sonney2kmulticlass -> binary17:10
blackburnah yes sure17:10
@sonney2kanyway is probably even better for harshit - data already is in matlab format17:11
-!- gsomix [~gsomix@] has quit [Read error: Operation timed out]17:11
-!- blackburn [5bdfb203@gateway/web/freenode/ip.] has quit [Quit: Page closed]17:32
-!- PhilTillet [] has joined #shogun18:19
-!- n4nd0 [] has joined #shogun18:35
-!- blackburn [~qdrgsm@] has joined #shogun18:43
CIA-64shogun: Soeren Sonnenburg master * rce6628c / examples/undocumented/python_modular/ : rename function to match file name -
CIA-64shogun: Soeren Sonnenburg master * ra28591d / testsuite/python_modular/ : remove pdb import from tester -
n4nd0sonney2k: aham! so it was that pdb what changed before in the tester19:09
n4nd0I executed it once and was smooth and in the second I got this gdb type interface19:09
n4nd0I was like ... what did I do?19:10
@sonney2kn4nd0, yeah I screwed up debugging the tests19:17
@sonney2kn4nd0, I went through most of the tests and checked where they differ19:18
@sonney2kand most were ok but hey I screwed up again and forgot to upload the updated test files19:18
@sonney2kwhich are now gone - bah!19:18
-!- siddharth [~siddharth@] has joined #shogun19:21
@sonney2kn4nd0, I am now trying to figure out what this QDA error is about19:22
@sonney2kQDA.cs(143,18): error CS0136: A local variable named `i' cannot be declared in this scope because it would give a different meaning to `i', which is already used in a `parent or current' scope to denote something else19:22
n4nd0what? I have never seen that19:25
n4nd0but let's check19:25
@sonney2kn4nd0, look at the buildbot19:25
@sonney2kI renamed i -> c and it will likely compile19:25
-!- siddharth [~siddharth@] has left #shogun ["Leaving"]19:26
PhilTilletsonney2k, is the kernel matrix stored columnwise or rowwise?19:28
CIA-64shogun: Soeren Sonnenburg master * r54e89fd / src/shogun/classifier/QDA.h : rename index i to c to fix clash with csharp typemap -
n4nd0sonney2k: where are those files like QDA.cs?19:29
@sonney2kn4nd0, yes all good - lets hope the buildbot is happy now19:29
@sonney2kshogun-buildbot, be happy!19:29
shogun-buildbotWhat you say!19:29
n4nd0sonney2k: let's hope so19:29
@sonney2kPhilTillet, everything in shogun is columnwise19:30
n4nd0sonney2k: ah ok, one makes the changes in the C++ source19:30
@sonney2kPhilTillet, but kernel matrix is usually not computed as one big thing (won't fit in mem...)19:30
PhilTilletwell yes but for GPUs ...19:30
PhilTilletfor a first prototype I mean19:31
@sonney2kn4nd0, yes and swig generates the csharp python etc bindings19:31
-!- genix [~gsomix@] has quit [Read error: Operation timed out]19:31
@sonney2kPhilTillet, well it is really useless to do it for kernel matrices that fit in memory19:31
@sonney2kmaybe you would rather want to speed up the compute() function? not sure what you are doing right now19:32
PhilTilletI am trying to make some opencl implementation of my dirty code19:33
n4nd0sonney2k: are the bindings normal source files? I was surprised looking at the buildbot and seeing that it identifies a QDA.cs, for example19:33
PhilTilletlike in the shogun CGaussianKernel and CKernelMachine classes19:33
PhilTilletso that I just have to replace svm->apply() with svm->ocl_apply()19:34
PhilTilletit internally copies everything to GPU mem (for now with the assumption that it will fit :D)19:34
n4nd0sonney2k: I thought we could use the code we write in C++ as libraries from the other languages, but that there was no generation with SWIG19:34
@sonney2kPhilTillet, what does it copy ? the kernel matrix?19:35
@sonney2kn4nd0, exactly19:35
PhilTilletthat was my incoming question19:35
PhilTilletshould I make another temporary matrix?19:35
PhilTilletor should I make a gpu_kernel_matrix ?19:35
PhilTilletlike an attribute19:35
@sonney2kif you precompute the kernel matrix - all the rest will take basically 0 time19:35
n4nd0sonney2k: I am reading a bit about SWIG right now to understand a bit more, thank you!19:35
PhilTilletI know19:35
PhilTilletthe rest is just roughly a matrix vector product19:36
PhilTilletbut the kernel matrix is computed at the "apply()" point right?19:37
PhilTillethmm I see19:38
PhilTilletthe kernel matrix is not necessarily features*support_vectors19:38
PhilTilletsonney2k, did not read carefully enough your question, it copies features to gpu, and caches support vectors19:43
PhilTilletthen compute labels on gpu19:43
PhilTilletand copies them back to cpu19:43
@sonney2kPhilTillet, ahh so you copy just the support vectors?19:46
PhilTilletthe features too19:46
@sonney2kwhich features?19:46
PhilTilletthe examples19:46
@sonney2kwhich examples :)19:46
PhilTilletwait I'm getting confused19:46
@sonney2kthe one to compute the output for?19:46
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun19:47
@sonney2k(support vectors are also examples - a subset of the training examples)19:47
PhilTilletoh yes right19:47
@sonney2kso test examples you mean probably19:47
PhilTilletyes, I copy the support vectors and test examples to gpu19:47
PhilTilletcompute test labels on gpu19:47
PhilTilletcopies back to cpu19:47
PhilTillet(test labels)19:48
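PhilTillet's pipeline (copy support vectors and test examples over, compute outputs, copy back) reduces, as noted later in the discussion, to a kernel matrix times the alpha vector. A plain numpy sketch of that computation; the gaussian width, alphas and bias values here are made up for illustration:

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, width=1.0):
    # X: (d, n) test examples, Y: (d, m) support vectors (column-major
    # layout, one example per column, as shogun stores features)
    d2 = ((X[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-d2 / width)

rng = np.random.default_rng(0)
X_test = rng.standard_normal((3, 5))   # 5 test examples
svs = rng.standard_normal((3, 2))      # 2 support vectors
alphas = np.array([0.5, -0.25])        # made-up SVM coefficients
bias = 0.1

K = gaussian_kernel_matrix(X_test, svs)   # shape (5, 2)
outputs = K @ alphas + bias               # one output per test example
print(outputs.shape)  # (5,)
```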
@sonney2kwhat would be more reasonable is to copy only SVs to gpu19:48
-!- harshit_ [~harshit@] has joined #shogun19:48
@sonney2kand then test example by test example to GPU mem and compute output19:48
PhilTilletI really do not think so19:48
PhilTilletwould get way less gigaflops19:48
@sonney2kI would bet that it does not make any difference19:49
PhilTilletIt does from a caching point of view19:49
PhilTilletI mean, it's the difference between a matrix matrix product and a matrix vector product19:50
PhilTilletwhen running on GPU19:50
PhilTilleteach work group caches a chunk of the matrix19:50
PhilTilleti don't really know how to explain19:51
PhilTilletwell there are multiple problems, but this is the first one19:51
PhilTilletthe work groups on gpu will cache chunks of the feature matrix19:52
n4nd0sonney2k: it looks like it failed again19:53
-!- blackburn [~qdrgsm@] has quit [Ping timeout: 246 seconds]19:53
n4nd0sonney2k: same error, same place19:53
@sonney2kPhilTillet, ok yes matrix1 * matrix2 is faster but neither matrix1 nor matrix2 fit in GPU memory19:55
PhilTilletyes, this is why there has to be some complicated tricks to do19:56
PhilTilletto do submatrix1*submatrix219:56
@sonney2kyou could even imagine test features to be streamed from disk19:56
PhilTilletshouldn't it be possible to stream it into a buffer, and when the buffer has a certain size, transfer on gpu, compute, etc19:56
@sonney2kyes exactly19:57
PhilTilletI think if we compute example by example, performance would be even worse on GPU than on CPU19:57
PhilTilletplus, for each GPU operation (memory transfer, opencl kernel initialization, etc...)19:57
PhilTilletthere is about a 50microsec time19:57
@sonney2kbut I would really focus on only doing submatrix1 stuff and use vectors on the right hand side19:57
PhilTilletso if you get a 50microsec overhead for each example19:58
PhilTilletbut the main issues is the gigaflop one :p19:58
PhilTilletright hand side is the test examples?19:59
@sonney2kif the overhead is really that huge then it is becoming tough19:59
PhilTilletI mean, why not accumulate the test examples into a sufficiently small buffer?20:00
PhilTilletok :)20:00
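The buffering idea they converge on - accumulate test examples into a fixed-size buffer and process each full block at once - can be sketched in plain numpy (on a GPU each block would be a submatrix product; the linear kernel and chunk size here are illustrative):

```python
import numpy as np

def linear_kernel(X, Y):
    # (d, n_x) x (d, n_y) -> (n_x, n_y) kernel matrix block
    return X.T @ Y

def apply_in_chunks(sv_feats, test_feats, alphas, bias, chunk=4):
    # Process test examples buffer by buffer instead of all at once
    n = test_feats.shape[1]
    out = np.empty(n)
    for s in range(0, n, chunk):
        block = test_feats[:, s:s + chunk]          # "buffer" of examples
        out[s:s + chunk] = linear_kernel(block, sv_feats) @ alphas + bias
    return out

rng = np.random.default_rng(1)
svs = rng.standard_normal((3, 2))
tests = rng.standard_normal((3, 10))
alphas = np.array([1.0, -0.5])

full = linear_kernel(tests, svs) @ alphas + 0.2     # everything at once
chunked = apply_in_chunks(svs, tests, alphas, 0.2)  # buffered
print(np.allclose(full, chunked))  # True
```

The chunked and one-shot results agree; only the memory footprint per step changes, which is the point of streaming buffers to the device.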
n4nd0sonney2k: QDA.cpp:87, I used int instead of int32_t there, do you think it can be related?20:01
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Ping timeout: 276 seconds]20:01
-!- blackburn [~qdrgsm@] has joined #shogun20:08
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun20:12
-!- gsomix [~gsomix@] has joined #shogun20:27
@sonney2kn4nd0, be patient ... the buildbot is still working20:29
blackburngsomix: here, wanted to catch me? ;)20:31
blackburnsonney2k: what's up?20:31
blackburnstrange issue with 'i'20:32
@sonney2knot strange and problem resolved20:33
blackburnsonney2k: that mail from jacob - I can hardly come up with any answer, can you?20:34
@sonney2kblackburn, no - I am not a GP expert20:34
@sonney2kolivier has to answer to this20:34
blackburnyes but he mostly listed olivier's ideas :)20:35
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Remote host closed the connection]20:35
@sonney2kno idea20:36
@sonney2kblackburn, did the buildbot get any break recently?20:39
@sonney2kseems to me it is hardly idling20:39
blackburnsonney2k: what kind of break?20:39
blackburnyeah strange20:40
blackburna few small commits probably20:40
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun20:41
CIA-64shogun: Sergey Lisitsyn master * rf1564b9 / (3 files in 3 dirs): Warnings removal -
blackburnsonney2k: have you seen git network?20:44
blackburnof shogun20:44
blackburn kind of subway20:45
blackburnjust like last year :)20:45
blackburnsonney2k: btw have you got stats of should be interesting20:46
@sonney2kblackburn, would be more impressive if we both did pull requests too20:47
blackburnsonney2k: I do for big things like edrt20:47
blackburnbut hey new branch to remove warnings..20:47
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Remote host closed the connection]20:49
harshit_blackburn: having a little error when I use a large dataset in octave: No matching function for overload20:49
blackburnwhich function?20:50
harshit_and error points to Label()20:50
harshit_on line : labels=Labels(label_train_twoclass);20:50
harshit_where label_train_twoclass is a vector20:51
harshit_having labels20:51
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun20:52
@sonney2kharshit_, what type is that?20:52
@sonney2kharshit_, does Labels([1.0, 2.0, 3.0]) work?20:52
harshit_I don't declare any type as such, labels is returned from the libsvmread() function20:53
harshit_wait i'll check20:53
harshit_No Labels([1.0,2.0,3.0]) doesn't give any error20:54
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Ping timeout: 276 seconds]20:58
blackburn skyscraper is on fire in moscow20:59
harshit_Done ! figured out the problem20:59
n4nd0blackburn: too warm around there?20:59
blackburnn4nd0: I am fortunately 1000 km far away haha21:00
n4nd0blackburn: good21:00
n4nd0blackburn: I hope it is not a big disaster21:01
blackburntoday was a plane crash in other city21:01
blackburnthat was a disaster actually21:01
blackburn38 died21:01
@sonney2kharshit_, so what was the problem?21:01
harshit_sonney2k: In octave , newton SVM is taking 0.09121:02
harshit_i just transposed labels and matrix and it worked !21:02
harshit_0.091 sec to train 60*1000 dataset21:03
harshit_dataset I used is called splice dataset21:04
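The Labels() overload failure harshit_ hit appears to be an orientation mismatch: libsvmread returns a column vector, while the modular wrapper wanted the transposed shape. A numpy analogue of the check and fix (the shapes and values here are illustrative):

```python
import numpy as np

# Column vector, as a libsvmread-style loader would return it
labels = np.array([[1.0], [-1.0], [1.0]])
assert labels.ndim == 2 and labels.shape[1] == 1

# "just transposed labels and it worked": flatten to the 1-D / row
# orientation the wrapper expects
flat = labels.T.ravel()
print(flat.shape)  # (3,)
```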
@sonney2kharshit_, can't you use a bigger data set?21:06
harshit_couldn't find one which doesn't require preprocessing21:06
@sonney2kyou can even just download a .mat file21:07
@sonney2kno work needed...21:07
harshit_great .. wait i'll run on it21:08
harshit_damn, that's a really big one!21:08
n4nd0sonney2k: do you have any other task in mind / something you would like to see done in shogun?21:10
@sonney2kn4nd0, want to learn a bit more about swig?21:11
harshit_sonney2k: I saw S3VM somewhere in liblinear, so why on shogun's homepage are semi-supervised algos crossed out21:11
harshit_at the place where all toolboxes are compared21:12
@sonney2kn4nd0, or lets better ask which topic is of interest to you?21:12
@sonney2kharshit_, liblinear has s3svm?21:12
n4nd0sonney2k: I am a bit open minded in that sense, I even prefer to learn some new stuff21:13
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun21:13
harshit_I don't remember exactly which SVM library it was, but somewhere I saw an enum where S3VM was an option for the classifier type21:13
n4nd0sonney2k: I wondered before if you had something on shell scripting to be done21:14
harshit_sonney2k : Just wondering if you want to see co-clustering by blum in shogun !21:14
n4nd0sonney2k: among the things I have done this far, I like the most QDA21:15
n4nd0sonney2k: other classifier (if there's something not around here yet!) could be good then21:15
@sonney2kn4nd0, some 'easy' decision tree then?21:16
blackburnI have my own id3 python prototype actually21:17
n4nd0sonney2k: sure, I don't know about decision trees that much21:17
n4nd0it will be good to learn about them21:17
blackburnparzen window classifier21:17
@sonney2kmaybe even start with decision stumps21:17
@sonney2kblackburn, true21:17
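The "easy" starting point sonney2k suggests, a decision stump, fits in a few lines: exhaustively pick the single (feature, threshold, sign) split that minimizes training error. An illustrative sketch, not shogun API:

```python
import numpy as np

def train_stump(X, y):
    # X: (n_features, n_examples) column-major, y in {-1, +1}
    best = (np.inf, 0, 0.0, 1)   # (error, feature, threshold, sign)
    for f in range(X.shape[0]):
        for thr in np.unique(X[f]):
            for sign in (1, -1):
                pred = sign * np.where(X[f] > thr, 1, -1)
                err = np.mean(pred != y)
                if err < best[0]:
                    best = (err, f, thr, sign)
    return best

def stump_apply(model, X):
    _, f, thr, sign = model
    return sign * np.where(X[f] > thr, 1, -1)

X = np.array([[0.1, 0.4, 0.6, 0.9],
              [1.0, 1.0, 1.0, 1.0]])   # second feature is uninformative
y = np.array([-1, -1, 1, 1])
model = train_stump(X, y)
print(stump_apply(model, X).tolist())  # [-1, -1, 1, 1]
```

Stumps like this are also the usual weak learner for boosting, which may be why they came up as a first step toward trees.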
n4nd0haha you guys definitely have a lot of ideas21:18
blackburnnot a lot, only a few (about one thousand)21:19
@sonney2kor rbf networks
blackburnARMA model haha21:20
@sonney2kblackburn, btw does KNN use covertree now?21:22
blackburnI suggested to do that a few times :)21:22
n4nd0blackburn: oh yes, that's true, you told me about that21:24
blackburnn4nd0: now you can do that I think21:24
n4nd0blackburn: ok, I will take a look at covertree in KNN then21:24
blackburn(after struggles with spe)21:24
n4nd0sonney2k: is that ok?21:25
n4nd0blackburn: sure ;)21:25
blackburnn4nd0: but will you finish spe?21:25
n4nd0blackburn: yeah!21:25
n4nd0blackburn: I said sure :P21:25
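For context on the covertree task: brute-force KNN computes every test-to-train distance, which is what a cover tree avoids by pruning the search; the predictions are the same either way. A baseline sketch (illustrative, not shogun's CKNN):

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    # Exhaustive Euclidean distance scan, O(n_test * n_train);
    # a cover tree would replace this with a tree traversal.
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_y[nearest]
    # majority vote among the k nearest training labels
    return np.array([np.bincount(v).argmax() for v in votes])

train_X = np.array([[0.0], [0.1], [1.0], [1.1]])
train_y = np.array([0, 0, 1, 1])
test_X = np.array([[0.05], [1.05]])
print(knn_predict(train_X, train_y, test_X, k=3).tolist())  # [0, 1]
```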
blackburnI believe we need to add more cats to shogun21:27
@sonney2kharshit_, so what are the timings?21:27
blackburnwe have no graphical examples with cats21:27
@sonney2kblackburn, well we have one cat - your gf :)21:28
blackburnsonney2k: looks like shogun mascot21:28
blackburnsonney2k: I pasted you msg to gf :D21:29
@sonney2kkind of a yakuza mafia shogun cat21:29
blackburnvery dangerous21:29
harshit_sonney2k: strange error! nothing comes after the first iteration of newtonSVM21:30
harshit_But everything was working fine for other datasets21:31
@sonney2kharshit_, even w/ matlab?21:32
harshit_I am working in octave for now21:32
blackburnsonney2k: is scatter mc svm a good idea still?21:32
@sonney2knot so much21:32
blackburnah, n4nd0 - nearest centroid classifier!21:33
n4nd0blackburn: haha another idea!?21:33
blackburnand one with median21:33
blackburntwo even21:33
blackburnI forgot what they call it21:33
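The nearest centroid classifier blackburn suggests represents each class by its mean (or median, for the robust variant he half-remembers) and assigns a test point to the class of the closest centroid. An illustrative numpy sketch:

```python
import numpy as np

def fit_centroids(X, y, reducer=np.mean):
    # X: (n_features, n_examples), y: integer class labels;
    # pass reducer=np.median for the median-based variant
    return {c: reducer(X[:, y == c], axis=1) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    # distance from every example to every class centroid
    d = np.stack([np.linalg.norm(X - centroids[c][:, None], axis=0)
                  for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

X = np.array([[0.0, 0.2, 5.0, 5.2],
              [0.0, 0.1, 5.0, 4.9]])
y = np.array([0, 0, 1, 1])
cents = fit_centroids(X, y)
print(predict(cents, X).tolist())  # [0, 0, 1, 1]
```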
harshit_Could that be because of less memory available to octave?21:33
@sonney2kharshit_, well you could use 1/2 of the data if the data set is too big21:34
harshit_sonney2k: it's running now with the full dataset; the problem was that I had a lot of things open earlier21:37
PhilTillethi hi21:40
CIA-64shogun: Soeren Sonnenburg master * refd2da9 / testsuite/python_modular/ : add try catch around len() -
n4nd0sonney2k: what about the QDA.cs issue? I have checked and think that the error is still there21:43
n4nd0sonney2k: but maybe I didn't check it correctly21:43
@sonney2kn4nd0, it compiled locally here...21:44
@sonney2kPhilTillet, that's a linear method - hard to imagine how GPUs can speed up anything for this21:46
n4nd0I have no idea how to check the buildbot then :S21:46
@sonney2kn4nd0, it is overly busy...21:46
PhilTilletsonney2k, what are you talking about? :p21:46
blackburnsonney2k: was it an answer for 'hi hi'?21:46
blackburnI can hardly imagine what you would answer for ho ho21:47
@sonney2kn4nd0, lets see if everything is OK tomorrow21:47
blackburnkernel laplace transformation can not map into non-euclidean space?21:47
@sonney2kblackburn, did I say that I love the shogun killer cat(tm)?21:47
@sonney2kanyway bed time for me21:48
@sonney2kcu all21:48
n4nd0good night21:48
blackburnsonney2k: yeah cats are cool :) nite21:48
-!- harshit_ [~harshit@] has quit [Read error: Connection reset by peer]21:49
-!- PSmitAalto [82e9b263@gateway/web/freenode/ip.] has quit [Ping timeout: 245 seconds]21:52
-!- harshit_ [~harshit@] has joined #shogun21:57
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Ping timeout: 276 seconds]22:03
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun22:16
-!- harshit_ [~harshit@] has quit [Read error: Connection reset by peer]22:16
shogun-buildbotbuild #196 of nightly_all is complete: Success [build successful]  Build details are at
blackburnn4nd0: gsomix vodka! ^22:17
n4nd0C# should work soon then22:18
n4nd0what does nightly stand for / mean?22:19
n4nd0I guess it must be a name used to refer to a special release or sth like that22:19
blackburnn4nd0: it builds at night ;)22:22
blackburnis built*22:22
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Remote host closed the connection]22:22
blackburnfirefox also has nightly builds and a lot of other projects too22:22
n4nd0aham, I see22:23
n4nd0so it was the obvious answer :D22:23
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun22:23
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun22:26
-!- naywhaya1e [] has joined #shogun22:27
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 260 seconds]22:30
-!- naywhayare [] has quit [Ping timeout: 260 seconds]22:30
-!- wiking_ is now known as wiking22:31
shogun-buildbotbuild #435 of csharp_modular is complete: Success [build successful]  Build details are at
-!- harshit_ [~harshit@] has joined #shogun22:36
harshit_sonney2k: done !22:37
harshit_it was not running because the value of C I set was too low22:37
harshit_now it took 7.892 sec for the whole dataset22:37
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Ping timeout: 276 seconds]22:42
blackburnsee you guys22:48
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun22:54
-!- blackburn [~qdrgsm@] has quit [Ping timeout: 252 seconds]22:56
-!- vikram360 [~vikram360@] has quit [Read error: Connection reset by peer]23:05
n4nd0good night23:13
gsomixI finished repairing the room.23:21
-!- flxb [] has quit [Quit: flxb]23:27
-!- PhilTillet [] has quit [Remote host closed the connection]23:31
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]23:46
-!- tibi_popa [tibi_popa@] has joined #shogun23:54
--- Log closed Tue Apr 03 00:00:19 2012