- [R] Google AI Research "Automatic Video Creation From a Web Page" introduces URL2Video, a research prototype pipeline that automatically converts a web page into a short video, given temporal and visual constraints provided by the content owner.
- How to learn how computers and hardware work at a low level?
- If Claude Shannon were a deep learning researcher in 2020
- Anyone else annoyed about how central ML/AI is nowadays, in both technology and academia?
- Vectorized Clustering Algorithm
- [R] Facebook AI Says Class Selectivity In Neurons May Impair DNN Performance
- Since a single change to a bit unaccounted for by error-correcting codes can change an instruction into something entirely different, how is it that communication over the internet is even feasible?
[R] Google AI Research "Automatic Video Creation From a Web Page" Posted: 30 Oct 2020 07:56 PM PDT Google AI Research introduces URL2Video, a research prototype pipeline that automatically converts a web page into a short video, given temporal and visual constraints provided by the content owner.
How to learn how computers and hardware work at a low level? Posted: 30 Oct 2020 07:13 AM PDT I'm interested in computer science, but also in the engineering (hardware) side of it. I found the website teachyourselfcs(.com), the courses Nand2Tetris and CS50, and the YouTube channel Ben Eater. If I were to learn from these resources, would I understand how computers, and technology in general, work? What other resources do you recommend? I'm also interested in cybersecurity; do you know of a resource focused on how computers work from a cybersecurity perspective? Thanks
If Claude Shannon were a deep learning researcher in 2020 Posted: 30 Oct 2020 04:47 PM PDT This blog post resuscitates Claude Shannon, the founder of information theory, and gives him access to arXiv and Twitter. His skeletal fingers typed out this short paper before Shannon requested that we kindly return him to the grave. Link: https://wandb.ai/charlesfrye/reports-sandbox/reports/The-Bandwagon-2-0--VmlldzoyNjQzMTk
Anyone else annoyed about how central ML/AI is nowadays, in both technology and academia? Posted: 30 Oct 2020 04:59 PM PDT You just can't get away from it. I feel like in 20 years all CS grads will be trained in machine learning theory and methods, because it's becoming so widespread and used in every little thing. I don't have any interest in the field whatsoever, except in the mathematical theory behind things like gradient descent. But I'm looking to do a CS PhD, and many of the funding opportunities are about either crypto/privacy or AI/ML. I'm choosing my master's project from a selection of projects put forward by academics at my uni, and there are loads of really interesting healthcare ones, like natal screening, emotion recognition for therapy, and recognising tumours automatically. I would absolutely love to do one of these, even though they're not really my field (programming languages and computer architecture). But every single one requires deep learning! I feel like the longer I avoid learning ML, the worse my academic and job prospects will become. I feel like when I'm 50 I'll be seen as an old fart stuck in the past, with the young kids shaking their heads at me because I "just don't get it", like how we shake our heads at oldies who prefer typewriters to computers.
Vectorized Clustering Algorithm Posted: 30 Oct 2020 07:01 PM PDT I've just implemented a vectorized version of one of my clustering algorithms, and the results so far are quite good; the code is just 28 lines of Octave: -- UCI Iris Dataset (150 rows): runtime 0.00972605 seconds, accuracy 0.99529. -- UCI Ionosphere Dataset (351 rows): runtime 0.180258 seconds, accuracy 0.99972. -- MNIST Numerical Dataset (1000 rows): runtime 7.04497 seconds, accuracy 1. Accuracy = 1 - num_errors_per_row / num_rows. Note that the clusters are not mutually exclusive, so each row can be clustered with every other row. Code here: https://derivativedribble.wordpress.com/2020/10/30/fully-vectorized-delta-clustering/
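The author's 28-line Octave implementation is behind the link, but the general vectorization pattern the post describes — computing all pairwise distances in one shot and thresholding by a delta, instead of looping over rows — can be sketched in NumPy. This is a hypothetical illustration of that pattern, not the author's actual Delta Clustering algorithm; `delta_cluster` and the threshold `delta` are names chosen here:

```python
import numpy as np

def delta_cluster(X, delta):
    """Assign each row of X to the (overlapping) cluster of every row
    within Euclidean distance delta, with no explicit Python loops."""
    # Broadcasting builds an (n, n, d) array of row differences at once.
    diffs = X[:, None, :] - X[None, :, :]
    # (n, n) matrix of pairwise Euclidean distances.
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Boolean membership matrix: entry (i, j) is True when row j is
    # in row i's cluster. Clusters are not mutually exclusive,
    # matching the post's note.
    return dists <= delta

X = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [5.0, 5.0]])
members = delta_cluster(X, delta=0.5)
# rows 0 and 1 fall in each other's clusters; row 2 stands alone
```

The broadcasting step trades memory (an n-by-n-by-d intermediate) for speed, which is the usual cost of this style of vectorization on datasets the size of Iris or Ionosphere.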
[R] Facebook AI Says Class Selectivity In Neurons May Impair DNN Performance Posted: 30 Oct 2020 02:42 PM PDT New Facebook AI research proposes rethinking the role of class-selective neurons in deep neural networks (DNNs), arguing that DNNs can perform well, and maybe even better, without them. The researchers say the findings suggest that overreliance on such easy-to-interpret neurons and on intuition-based methods for understanding DNNs can be misleading. Here is a quick read: Facebook AI Says Class Selectivity In Neurons May Impair DNN Performance. The work highlights three germane papers: Selectivity Considered Harmful: Evaluating the Causal Impact of Class Selectivity in DNNs (arXiv link), On the Relationship Between Class Selectivity, Dimensionality, and Robustness (arXiv link), and Towards Falsifiable Interpretability Research (arXiv link).
Since a single change to a bit unaccounted for by error-correcting codes can change an instruction into something entirely different, how is it that communication over the internet is even feasible? Posted: 30 Oct 2020 11:48 AM PDT Yes, communication over the internet is almost always imperfect, but it's still surprisingly coherent. I figured that constant bombardment by electromagnetic waves from outer space (among other sources of data corruption) would distort data to a much more significant degree than what I end up seeing.
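The usual answer to the question above is layered integrity checks plus retransmission: Ethernet frames carry a CRC-32, TCP segments carry a checksum, and a corrupted packet is dropped and resent rather than acted on. The detect-and-drop idea can be sketched with Python's standard `zlib.crc32`; the `transmit`/`receive` functions here are illustrative stand-ins, not a real network API:

```python
import zlib

def transmit(payload):
    # Sender attaches a CRC-32 checksum to the payload, much as an
    # Ethernet frame carries a frame check sequence.
    return payload, zlib.crc32(payload)

def receive(payload, checksum):
    # Receiver recomputes the CRC over what actually arrived. A
    # mismatch means the packet is discarded (and, in TCP, eventually
    # retransmitted) instead of being passed up the stack.
    return zlib.crc32(payload) == checksum

packet, crc = transmit(b"some instruction bytes")
corrupted = bytes([packet[0] ^ 0x01]) + packet[1:]  # flip one bit in flight

receive(packet, crc)      # True: intact packet accepted
receive(corrupted, crc)   # False: single-bit flip detected, packet dropped
```

CRC-32 is guaranteed to detect any single-bit error (and all burst errors up to 32 bits), which is why a flipped bit almost never reaches the application as a silently changed instruction: it is caught at a lower layer and the data is resent.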
You are subscribed to email updates from Computer Science: Theory and Application.