CompSci Weekend SuperThread (May 01, 2020) | Computer Science
- CompSci Weekend SuperThread (May 01, 2020)
- How to emulate hand-drawn shapes / Algorithms behind RoughJS
- What are the fundamental differences between AMD and Intel CPUs and why do people make such a big fuss about it?
- An example of how compilers parse a segment of code, this uses the CLite language spec.
- Competitive programming. Where to start?
- The concept of machine learning
- Doubling major with Comp Sci
- Windows or Mac for undergrad
- From CVPR '20: Robust 3D Self-portraits in Seconds
- [D] AI21 Labs Asks: How Much Does It Cost to Train NLP Models?
- best way to write 3? and what would be 4 and 5?
CompSci Weekend SuperThread (May 01, 2020) Posted: 30 Apr 2020 06:04 PM PDT /r/compsci strives to be the best online community for computer scientists. We moderate posts to keep things on topic. This Weekend SuperThread provides a discussion area for posts that might be off-topic normally. Anything Goes: post your questions, ideas, requests for help, musings, or whatever comes to mind as comments in this thread.
Pointers
Caveats
[link] [comments]
How to emulate hand-drawn shapes / Algorithms behind RoughJS Posted: 30 Apr 2020 09:43 AM PDT
What are the fundamental differences between AMD and Intel CPUs and why do people make such a big fuss about it? Posted: 30 Apr 2020 09:12 PM PDT
An example of how compilers parse a segment of code, this uses the CLite language spec. Posted: 01 May 2020 04:21 AM PDT
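The linked post's content isn't reproduced in this digest, and the CLite grammar isn't included here, but a minimal recursive-descent sketch over a simplified expression grammar (an illustration, not the actual CLite spec) shows the general idea of how a parser turns a segment of code into a tree:

```python
import re

# Tokenize a tiny arithmetic assignment, e.g. "x = 1 + 2 * 3".
def tokenize(src):
    return re.findall(r"\d+|[A-Za-z_]\w*|[=+*()]", src)

class Parser:
    """Recursive descent over a toy grammar:
         assign -> ident '=' expr
         expr   -> term ('+' term)*
         term   -> factor ('*' factor)*
         factor -> number | '(' expr ')'
       One method per nonterminal; precedence falls out of the nesting."""
    def __init__(self, tokens):
        self.toks, self.i = tokens, 0

    def peek(self):
        return self.toks[self.i] if self.i < len(self.toks) else None

    def eat(self, expected=None):
        tok = self.toks[self.i]
        assert expected is None or tok == expected, f"expected {expected}, got {tok}"
        self.i += 1
        return tok

    def assign(self):
        name = self.eat()          # identifier
        self.eat("=")
        return ("=", name, self.expr())

    def expr(self):
        node = self.term()
        while self.peek() == "+":  # left-associative addition
            self.eat("+")
            node = ("+", node, self.term())
        return node

    def term(self):
        node = self.factor()
        while self.peek() == "*":  # '*' binds tighter because it nests deeper
            self.eat("*")
            node = ("*", node, self.factor())
        return node

    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            node = self.expr()
            self.eat(")")
            return node
        return int(self.eat())

tree = Parser(tokenize("x = 1 + 2 * 3")).assign()
print(tree)  # ('=', 'x', ('+', 1, ('*', 2, 3)))
```

Note how `2 * 3` ends up nested inside the `+` node: operator precedence is encoded purely by which method calls which, which is the same structural trick a real compiler front end uses at much larger scale.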
Competitive programming. Where to start? Posted: 30 Apr 2020 03:44 PM PDT Hey, I want to start competitive programming as soon as possible, and I'd like to know where to begin. What are the prerequisites, and what things do I need to learn? I am good in Java and C++. And does competitive programming help in getting jobs too? [link] [comments]
The concept of machine learning Posted: 01 May 2020 02:12 AM PDT I am trying to learn to program. I am nowhere near the skill level to make even a basic neural network or machine learning program; I just think I understand the basic concept and want to be sure it's correct, out of personal interest.

So, for a really basic neural network you have an input layer, where the input values of some labeled data are given. The inputs pass their numbers to hidden-layer nodes, which multiply the values by weights, and at this point the values of all the inputs are summed up. The sum can either remain unchanged or have a bias applied. The hidden layer gives the outputs the new values it created, and the program as a whole somehow determines whether the outputs are "correct", or at least how close to correct they are. Then the program modifies some numbers, like the weights certain inputs have on a specific hidden-layer node, and repeats the process until it gets closer to correct.

For a practical example, let's say you are a biologist specializing in lizards and you find an unexplored area with only 2 types of lizard. Your computer scientist friend tagged along and wants to make a program that uses machine learning to guess which of the 2 species local to the area a given random lizard is. Species 1 has 7 horns about 95% of the time and weighs 11-13 ounces about 80% of the time. Species 2 has 3 horns about 95% of the time and weighs 8-10 ounces about 80% of the time. Naturally, not ALL lizards of either species fall strictly within these parameters, but they are pretty accurate. Some lizards of species 1 could be smaller, weighing about the same as a species 2, and some species 2 could grow a bit larger than average and land in species 1's weight class. However, it is pretty rare that a specimen of species 1 does not have 7 horns, or that a specimen of species 2 does not have 3 horns.

So, if this AI is made properly, it would start out with random weights and biases, but over time the weight applied to the <number of horns> input would increase, because it is a stronger indicator of the species, with far less overlap between the two species on that feature. Gradually, the AI "learns" that a specimen's body weight has some impact, but that looking at how many horns it has is the better option and will lead to a far higher number of correct outputs, which it does by modifying the weight that input gets multiplied by in nodes of the hidden layer. So if a given hidden-layer node randomly starts off with a high weight on the number-of-horns input, the program will notice that this node produces a lot of correct answers and will start changing other hidden-layer nodes to be like it, albeit with small variations, treating body weight as being of secondary importance and prioritizing horns. And even if all the nodes start with only an average weight on horn count, the program can still detect that prioritizing horn count leads to more correct outputs, even without having that randomly figured out at the start. (And yes, I know humans, especially a biologist, could learn to differentiate the two species quicker and better than a program, but that's because humans do roughly the same thing, just insanely faster.)

If you could tell me whether I have this concept right, expand on it, or go more in depth on what's going on under the hood, so I know not just what is happening but how it's happening, I would greatly appreciate it. Also, how does this analogy change when there are more than 2 inputs? I am already well aware that highly advanced neural networks can have an extremely high number of inputs, and that more than one hidden layer can sit between input and output, but I want to take baby steps in grasping this concept. [link] [comments]
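The mental model in the post maps fairly directly onto code. Below is a sketch of the lizard example as a single sigmoid "neuron" (a simplification: no hidden layer, just two weighted inputs and a bias) trained by gradient descent. The data generator and all its probabilities are made-up values loosely following the post's description:

```python
import math
import random

random.seed(0)

# Synthetic lizards modeled on the post's two species:
# species 1 -> label 1 (usually 7 horns, 11-13 oz),
# species 2 -> label 0 (usually 3 horns, 8-10 oz).
# All distributions here are invented for illustration.
def make_lizard():
    if random.random() < 0.5:
        horns = 7 if random.random() < 0.95 else random.choice([3, 5, 6])
        weight = random.uniform(11, 13) if random.random() < 0.8 else random.uniform(8, 14)
        return horns, weight, 1
    horns = 3 if random.random() < 0.95 else random.choice([4, 5, 7])
    weight = random.uniform(8, 10) if random.random() < 0.8 else random.uniform(7, 12)
    return horns, weight, 0

data = [make_lizard() for _ in range(500)]

# One sigmoid neuron: two inputs (horns, body weight), two weights, one bias.
w_horns, w_weight, bias = 0.0, 0.0, 0.0
lr = 0.05  # learning rate: how far each mistake nudges the weights

def predict(horns, weight):
    z = w_horns * horns + w_weight * weight + bias
    return 1.0 / (1.0 + math.exp(-z))  # squash to a 0..1 "probability of species 1"

# Stochastic gradient descent: for each lizard, measure the error and nudge
# each weight against its contribution to it. This is the "adjust the weights
# until the outputs get closer to correct" loop the post describes.
for epoch in range(50):
    for horns, weight, label in data:
        err = predict(horns, weight) - label
        w_horns -= lr * err * horns
        w_weight -= lr * err * weight
        bias -= lr * err

correct = sum((predict(h, w) > 0.5) == (y == 1) for h, w, y in data)
print(f"accuracy: {correct / len(data):.2f}, "
      f"learned weight on horns: {w_horns:.2f}, on body weight: {w_weight:.2f}")
```

Running this, the weight on horn count ends up carrying most of the decision, matching the post's intuition: training doesn't "know" horns matter more, it just keeps nudging whichever weights reduce the error, and the less-overlapping feature wins. With more than 2 inputs, nothing changes structurally: there is simply one weight per input feeding the same weighted sum.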
Doubling major with Comp Sci Posted: 01 May 2020 12:07 AM PDT Hi everybody, I'm a finance student adding a second major in comp sci. This means I will take longer to finish school, but it'll definitely be worth it. I got into programming after my MIS course and picked up JavaScript pretty easily. So for this summer, are there any languages I should learn that will help me through school, since I'm starting pretty fresh with this new major? I want to be prepared for the fall semester and I have all summer to learn. Any websites you could recommend? I'd highly appreciate it! [link] [comments]
Windows or Mac for undergrad Posted: 30 Apr 2020 11:05 PM PDT Deciding between purchasing another MacBook Pro or switching to a ThinkPad. Any advice? [link] [comments]
From CVPR '20: Robust 3D Self-portraits in Seconds Posted: 30 Apr 2020 09:10 PM PDT
[D] AI21 Labs Asks: How Much Does It Cost to Train NLP Models? Posted: 30 Apr 2020 12:18 PM PDT AI21 Labs Co-CEO, Stanford University Professor of Computer Science (emeritus), and AI Index initiator Yoav Shoham describes the motivation for the project. "It started with an inquiry we got at the AI Index. I started jotting down a quick answer and realized it deserved a longer one. I also realized we had a lot of the expertise at AI21 Labs. So we spun up a small effort to put this report together, to benefit the community." The team compared three different-sized Google BERT language models on the 15 GB Wikipedia and Book corpora, evaluating both the cost of a single training run and a typical, fully-loaded model cost. The team estimated fully-loaded cost to include hyperparameter tuning and multiple runs for each setting: "We look at a somewhat modest upper bound of two configurations and ten runs per configuration."
Read more: AI21 Labs Asks: How Much Does It Cost to Train NLP Models? The paper The Cost of Training NLP Models: A Concise Overview is on arXiv. [link] [comments]
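The report's "fully-loaded" framing is just a multiplier on the single-run cost: configurations tried times runs per configuration. A toy sketch of that arithmetic, using placeholder dollar figures (not the report's actual numbers):

```python
def fully_loaded_cost(single_run_cost, configurations=2, runs_per_config=10):
    """Fully-loaded training cost: one run's cost, multiplied across
    hyperparameter configurations and repeated runs per configuration.
    The 2 x 10 defaults mirror the report's 'modest upper bound'."""
    return single_run_cost * configurations * runs_per_config

# Hypothetical single-run costs for three model sizes (placeholders only):
for name, single_run in [("small", 2_500), ("base", 25_000), ("large", 250_000)]:
    print(f"{name}: single run ${single_run:,}, "
          f"fully loaded ${fully_loaded_cost(single_run):,}")
```

The point of the exercise: even a "modest" tuning regime scales the headline single-run number by 20x, which is why fully-loaded estimates dwarf the figures usually quoted.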
Best way to write 3? And what would be 4 and 5? Posted: 30 Apr 2020 10:55 PM PDT
You are subscribed to email updates from Computer Science: Theory and Application.