CompSci Weekend SuperThread (October 09, 2020) | Computer Science
- CompSci Weekend SuperThread (October 09, 2020)
- [Quanta] Computer Scientists Break Traveling Salesperson Record
- Generalized matrix cross product as nested maps
- Scaling in numerical optimization
- Ideas for final year project
- What does "." mean in pseudocode? I find this yellow-highlighted pseudocode pretty hard to understand (more in comments).
- Is there anything unique about Windows 10 architecture that makes OneNote desktop able to sync across multiple instances without being signed in to the cloud?
- Google, Stanford, & MIT Top NeurIPS 2020 Accepted Papers List
- [R] ‘Farewell Convolutions’ – ML Community Applauds Anonymous ICLR 2021 Paper That Uses Transformers for Image Recognition at Scale
- Want to share Interview Preparation Courses
- Veteran Developer needs math course recommendation
- How to Gain a Computer Science Education from MIT University for FREE
CompSci Weekend SuperThread (October 09, 2020) Posted: 08 Oct 2020 06:04 PM PDT /r/compsci strives to be the best online community for computer scientists. We moderate posts to keep things on topic. This Weekend SuperThread provides a discussion area for posts that might be off-topic normally. Anything Goes: post your questions, ideas, requests for help, musings, or whatever comes to mind as comments in this thread.
[link] [comments]
[Quanta] Computer Scientists Break Traveling Salesperson Record Posted: 08 Oct 2020 04:05 PM PDT
Generalized matrix cross product as nested maps Posted: 09 Oct 2020 01:25 AM PDT I stumbled into a nifty insight yesterday that I think is interesting enough to share. I am taking a class on databases, and I was interested in whether you can implement SPJ (select/project/join) queries using matrix multiplication. Select and project are pretty obvious (just use a partial identity matrix to drop rows or columns), but joins are not. After a while I realized that the matrix cross product really takes the Cartesian product of the rows of the left matrix with the columns of the right matrix and, for each row/column pair, sums the pair-wise products. Here is a generalized version in Julia that accepts two matrices, an accumulation function, and a combination function.
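A minimal sketch of such a function (the name genmatmul and the hcat/permutedims flattening are illustrative choices, not a canonical implementation):

```julia
# Illustrative sketch: a generalized matrix product as nested maps.
# `acc` folds results together (e.g. +) and `comb` combines each
# row/column pair element-wise (e.g. *).
function genmatmul(acc, comb, A, B)
    @assert size(A, 2) == size(B, 1)
    rows = map(1:size(A, 1)) do i
        map(1:size(B, 2)) do j
            # Pair row i of A with column j of B, combine pair-wise,
            # then fold with the accumulation function.
            reduce(acc, map(comb, A[i, :], B[:, j]))
        end
    end
    # Flatten the singly-nested array of arrays back into a matrix.
    permutedims(hcat(rows...))
end
```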
(I am all ears if anyone can suggest a better way to flatten a singly-nested array of arrays in Julia.) You can then compute the ordinary matrix cross product.
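For example, assuming the genmatmul sketch above, with the usual (+, *) pair:

```julia
A = [1 2; 3 4]
B = [5 6; 7 8]

genmatmul(+, *, A, B)
# 2×2 Matrix{Int64}:
#  19  22
#  43  50
```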
The above gives an answer that is almost the same as Julia's built-in A * B.
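The same skeleton also hints at the join story: over Boolean indicator matrices, swapping in | for accumulation and & for combination makes each entry test whether a row of R and a column of S share a key (an illustrative example, again assuming the sketch above):

```julia
R = Bool[1 0 1; 0 1 0]   # R[i, k]: left tuple i carries join key k
S = Bool[0 1; 0 1; 1 0]  # S[k, j]: right tuple j carries join key k

genmatmul(|, &, R, S)    # entry (i, j): do tuples i and j share a key?
# 2×2 Matrix{Bool}:
#  1  1
#  0  1
```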
[link] [comments]
Scaling in numerical optimization Posted: 08 Oct 2020 06:44 AM PDT I am trying to approximate a symmetric tensor X, whose values are in the range [1e-7, 1e-4], by a symmetric tensor decomposition of lower rank. For this I am using the L-BFGS-B method in SciPy's optimization package. The relative errors (based on the Frobenius norm) I am obtaining are quite big (greater than 1). After some research I have come to the conclusion that there is a need for scaling, as my tensor is 'poorly scaled'. When I multiply all values of X by 1e7, I do obtain a smaller relative error (in the range [1e-4, 1e-3]), but only when the modes of X have a small dimension in comparison with the chosen rank of the symmetric tensor decomposition. Since I am not that familiar with the concept of scaling in numerical optimization: is there a better way to solve this scaling problem, so that I can obtain a small relative error even when the dimension of the modes of X is large in comparison with the chosen rank? I am also doing some research on the existence of a low-rank approximation, as one may not even exist. While this may be the case, reproducing the first experiment of Sherman & Kolda (2020) does not give me the same order of magnitude of relative error, which suggests there are still improvements to be made to my implementation, of which I think only the scaling aspect can be improved.
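For what it's worth, here is a minimal, illustrative sketch of the rescaling idea (the rank-r symmetric CP model, the synthetic X, and all names below are assumptions, not the actual implementation):

```python
# Illustrative sketch of scaling before L-BFGS-B, not the poster's code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, r = 10, 3                                   # mode dimension and target rank (assumed)

# A symmetric 3rd-order tensor with tiny, poorly scaled entries.
A_true = rng.uniform(0.001, 0.01, size=(n, r))
X = np.einsum('ir,jr,kr->ijk', A_true, A_true, A_true)

scale = 1.0 / np.abs(X).max()                  # bring entries up to O(1)
Xs = scale * X

def loss(v):
    A = v.reshape(n, r)
    T = np.einsum('ir,jr,kr->ijk', A, A, A)    # rank-r symmetric CP model
    return np.sum((T - Xs) ** 2)

res = minimize(loss, rng.standard_normal(n * r), method='L-BFGS-B')

# Undo the scaling: Xs = scale * X, so divide each factor by scale^(1/3).
A_hat = res.x.reshape(n, r) / scale ** (1 / 3)
X_hat = np.einsum('ir,jr,kr->ijk', A_hat, A_hat, A_hat)
print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))   # Frobenius relative error
```

[link] [comments]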
Ideas for final year project Posted: 08 Oct 2020 10:48 PM PDT I am a final year student doing a computer science degree at a UK university. I have to decide on a final year project very soon and I'm not entirely sure what to do. I hate essay writing, as I am more of a practical person who would like to make something like an app/website instead of doing a research-focused project. Does anyone have any decent ideas for a final year software engineering project, such as apps/websites to make? Something that would be useful but not too difficult/overcomplicated to build. Thanks! :-) [link] [comments]
What does "." mean in pseudocode? I find this yellow-highlighted pseudocode pretty hard to understand (more in comments). Posted: 09 Oct 2020 12:15 AM PDT
Is there anything unique about Windows 10 architecture that makes OneNote desktop able to sync across multiple instances without being signed in to the cloud? Posted: 09 Oct 2020 02:13 AM PDT How is OneNote desktop able to sync across multiple instances without being signed in to the cloud? Apparently there are tons of people who don't even know that OneNote does this, and they can't imagine that it's possible in other desktop apps. This is what was needed/asked, and still is needed: multiple apps that do this. I have tried a bunch of them, and they don't. SaaS/cloud is eating the world. Even though SaaS/cloud has won and desktop development is extremely lacking, do you know of anything helpful that meets these needs? Please reply with something helpful. Thanks! [link] [comments]
Google, Stanford, & MIT Top NeurIPS 2020 Accepted Papers List Posted: 08 Oct 2020 11:23 AM PDT After months marred by controversies over poorly explained desk-rejects and other problematic aspects of its review process, the 34th Conference on Neural Information Processing Systems (NeurIPS 2020) has finally released its list of accepted papers. With 38 percent more paper submissions than in 2019, this has been another record-breaking year for NeurIPS. A total of 1,903 papers were accepted, compared to 1,428 last year. The NeurIPS 2020 Program Chairs report that 12,115 paper abstracts were submitted, leading to 9,467 full submissions. After 184 submissions were withdrawn by authors or rejected for violations such as being non-anonymous or exceeding the maximum page count, a further 11 percent were desk-rejected, leaving 8,186 papers assigned to reviewers. The NeurIPS 2020 paper acceptance rate is 20.1 percent, slightly lower than last year's 21 percent. Here is a quick read: Google, Stanford, & MIT Top NeurIPS 2020 Accepted Papers List [link] [comments]
[R] ‘Farewell Convolutions’ – ML Community Applauds Anonymous ICLR 2021 Paper That Uses Transformers for Image Recognition at Scale Posted: 08 Oct 2020 03:07 PM PDT A new research paper, An Image Is Worth 16×16 Words: Transformers for Image Recognition at Scale, has the machine learning community both excited and curious. With Transformer architectures now being extended to the computer vision (CV) field, the paper suggests the direct application of Transformers to image recognition can outperform even the best convolutional neural networks when scaled appropriately. Unlike prior works using self-attention in CV, the scalable design does not introduce any image-specific inductive biases into the architecture. Here is a quick read: 'Farewell Convolutions' – ML Community Applauds Anonymous ICLR 2021 Paper That Uses Transformers for Image Recognition at Scale The paper An Image Is Worth 16×16 Words: Transformers for Image Recognition at Scale is available on OpenReview.
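To make the title concrete, here is a tiny, illustrative NumPy sketch (not from the paper) of how an image becomes a sequence of flattened 16×16 patch tokens, the "words" the Transformer attends over:

```python
# Illustrative sketch: an image becomes a sequence of flattened
# 16x16 patch "words" for a standard Transformer.
import numpy as np

def image_to_patch_tokens(img, patch=16):
    """Split an (H, W, C) image into flattened patch vectors."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    return (img.reshape(H // patch, patch, W // patch, patch, C)
               .transpose(0, 2, 1, 3, 4)
               .reshape(-1, patch * patch * C))

tokens = image_to_patch_tokens(np.zeros((224, 224, 3)))
print(tokens.shape)  # (196, 768): 196 "words", each a 768-dim vector
```

In the paper these patch vectors are then linearly projected and fed, together with position embeddings, to an otherwise standard Transformer encoder.

[link] [comments]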
Want to share Interview Preparation Courses Posted: 08 Oct 2020 06:41 AM PDT I have organized some of the best interview preparation courses like:
And some other courses. DM me if you are interested in these courses. [link] [comments]
Veteran Developer needs math course recommendation Posted: 08 Oct 2020 07:14 AM PDT I've been developing for 20+ years, and do not have a CompSci degree. However, I'd like to take some math courses, but am not sure where to start. Any recommendations? [link] [comments]
How to Gain a Computer Science Education from MIT University for FREE Posted: 08 Oct 2020 06:57 AM PDT