Comments on Tombone's Computer Vision Blog: "Deep down the rabbit hole: CVPR 2015 and beyond" (Tomasz Malisiewicz)

I prefer Caffe because it is written in C/C++; it's faster than Torch.
-- Anonymous (2016-02-26)

Convolutional neural networks have been around for a long time now. Just because the networks are now bigger and deeper doesn't mean we are any step further in understanding their inner workings. We have a general idea about their structure and what can be achieved, and there exist some tweaks, like dropout, that are known to work well. But why do they work? Your guess is as good as mine.
-- Anonymous (2016-01-12)
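For readers who haven't met the dropout tweak mentioned above, here is a minimal NumPy sketch of "inverted" dropout (the function name and toy values are illustrative, not taken from any framework discussed in the thread):

```python
import numpy as np

def dropout(x, p=0.5, rng=None, train=True):
    """Inverted dropout: zero each unit with probability p, rescale survivors."""
    if not train or p == 0.0:
        return x  # at test time the layer is the identity
    rng = rng or np.random.default_rng(0)  # fixed seed so the sketch is repeatable
    mask = rng.random(x.shape) >= p        # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)            # rescale so the expected output equals x

a = np.ones((4, 3))
out = dropout(a, p=0.5)  # surviving units become 2.0, the rest 0.0
```

The rescaling by 1/(1 - p) is what lets the same network run unchanged at test time: the expected activation during training matches the deterministic activation at inference.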
"The Video Game Engine Inside Your Head" - this fascinating title alone is enough to draw many minds deeper into the concept of virtual gaming. The Torch vs. Caffe discussion is equally important, since how new technologies get developed is the focal point.
-- Henry Barid (http://infinityleap.com/), 2015-08-31

---
Let's hire the best of the best, encourage truly great research, and stop chasing percentages.
---

Yes, this is trending all over education: http://www.theatlantic.com/education/archive/2015/08/when-success-leads-to-failure/400925/?single_page=true
-- Rob (2015-08-13)

Great summary, Tomasz! Fascinating to see the similarities between so many independent works: e.g., LSTMs and RNNs for captioning. In this environment, who wants to argue that a 0.1% improvement is really connected to some scientific insight, rather than a lucky parameter choice or data-augmentation scheme?

Regarding Arxiv, I've come over to that side. Too many times I've had a paper narrowly rejected, only to find that by the next deadline I've been scooped and then need to compare against new benchmarks or abandon the work and move on. We haven't had truly double-blind submission for a long time -- many reviewers can reliably guess the authors of the papers they are reviewing.

Interestingly, I felt industry people tended to be more positive about Arxiv than academics. There was lots of discussion about tenure decisions, bias, etc. We need to think about solutions, but the publication and review process for vision and learning simply can't stay the way it was, now that the cat is out of the bag.
-- Andy Gallagher (http://chenlab.ece.cornell.edu/people/Andy/), 2015-07-08

Yann LeCun posted a brief reply to this blog post on his Facebook feed.
Obviously he agrees that ConvNets work, and just for completeness I'm going to include his last paragraphs right here:

---
Torch is for research in deep learning, Caffe is OK for using ConvNets as a "black box" (or a grey box), but not flexible enough for innovative research in deep learning. That's why Facebook and DeepMind both use Torch for almost everything.

Tomasz concludes with "learning large ConvNets from videos without annotations is going to be big at next year's CVPR."

I agree.
---

You can follow him on Facebook (where he appears to be the most active) if you care to read his commentary on lots of recent, popular, and intellectually stimulating topics:

Yann's Facebook feed: https://www.facebook.com/yann.lecun

Regarding a modern version of "Cogito Ergo Sum" for ConvNets, I vote for "I back-propagate, therefore I think."
-- Tomasz Malisiewicz (2015-06-29)
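For anyone curious what "I back-propagate" actually computes, here is a toy sketch of the chain rule for a single sigmoid unit, checked against a finite-difference gradient (the setup and names are my own illustration, not anything from the post or Yann's reply):

```python
import numpy as np

def forward(w, x):
    # one sigmoid unit: y = sigmoid(w . x)
    z = w @ x
    return 1.0 / (1.0 + np.exp(-z))

def backward(w, x):
    # chain rule by hand: dy/dw = sigmoid'(z) * x = y * (1 - y) * x
    y = forward(w, x)
    return y * (1.0 - y) * x

w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])

# sanity-check the analytic gradient against central finite differences
eps = 1e-6
num = np.array([
    (forward(w + eps * np.eye(2)[i], x) - forward(w - eps * np.eye(2)[i], x)) / (2 * eps)
    for i in range(2)
])
assert np.allclose(backward(w, x), num, atol=1e-8)
```

Stacking many such local derivatives via the chain rule is all back-propagation does; the finite-difference check is the standard way to convince yourself a hand-derived gradient is right.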