GitHub Copilot Generated Insecure Code In 40% Of Circumstances During Experiment | Computer Science
- GitHub Copilot Generated Insecure Code In 40% Of Circumstances During Experiment
- Why Do People Use CPU Clock Frequency As a Stand-alone Metric…
- Four Pillars of Data Model Governance - How Data Models Provide the Basis for Data Governance
- Trying to understand average case complexity for insertion sort
- Skin Cancer Classification
- Any ideas for High School Computer Science Club?
- Details on Ant Colony Optimization Algorithm?
GitHub Copilot Generated Insecure Code In 40% Of Circumstances During Experiment Posted: 09 Sep 2021 02:23 AM PDT
Why Do People Use CPU Clock Frequency As a Stand-alone Metric… Posted: 08 Sep 2021 05:05 PM PDT Why do people use CPU clock frequency (i.e., 2.5 GHz vs. 3 GHz, etc.) as a stand-alone metric when comparing different processors? Wouldn't you also need the clock cycles per instruction (CPI) when determining which processor is more performant?

For example, say I have a 2 GHz processor that runs a set of instructions in 40 million clock cycles. That means it takes 0.02 seconds to execute this set of instructions. Now say I have a 1 GHz processor that runs the exact same set of instructions in 19 million clock cycles. This processor takes 0.019 seconds, and is thus faster than the 2 GHz CPU for this particular set of instructions. Assume the same holds for every program in the world, so that the 1 GHz CPU always outperforms the 2 GHz one; the 1 GHz CPU is then the better choice in terms of speed.

So why do most people care about clock frequency when comparing CPUs? Most importantly, why do manufacturers care about increasing clock frequency over the years when they could instead focus on executing more instructions per cycle? [link] [comments]
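The arithmetic in the question is just cycles divided by clock rate (equivalently, instruction count times CPI divided by clock rate). A minimal Python sketch reproducing the numbers above (these are the questioner's illustrative figures, not measurements):

```python
# Execution time = clock cycles / clock rate (Hz).
# Equivalently: time = instructions * CPI / clock rate.
def exec_time_seconds(cycles, clock_hz):
    return cycles / clock_hz

t_2ghz = exec_time_seconds(40e6, 2e9)  # 2 GHz CPU, 40 million cycles
t_1ghz = exec_time_seconds(19e6, 1e9)  # 1 GHz CPU, 19 million cycles
print(t_2ghz, t_1ghz)  # 0.02 s vs 0.019 s: the lower-clocked CPU wins here
```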
Four Pillars of Data Model Governance - How Data Models Provide the Basis for Data Governance Posted: 09 Sep 2021 12:54 AM PDT With well-articulated roles and metrics, you can craft a data governance practice that aligns with your company's overall business goals: establishing the processes that guard data throughout its lifecycle, and defining the policies for accessing it: Data Models Give Companies the Good Oil for Data Governance. The approach described in more detail in the guide above could be called the four pillars of data model governance. These pillars will help you gauge how effectively your data models connect data management and data definition.
[link] [comments]
Trying to understand average case complexity for insertion sort Posted: 09 Sep 2021 04:37 AM PDT Does anyone have a good explanation of the average case analysis for insertion sort? I've searched everywhere, and most sources just claim "since the average case is usually the same as the worst case, the average case is O(n^2)", which is not helpful. I've tried to analyse it using the idea of summing a 1/i probability of swaps, to no avail. [link] [comments]
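One standard route to the answer: insertion sort performs exactly one adjacent swap per inversion, and a uniformly random permutation of n distinct elements has n(n-1)/4 inversions in expectation (each of the n(n-1)/2 pairs is inverted with probability 1/2), which gives the Theta(n^2) average case. A minimal Python sketch to check this empirically (the choices of n and the trial count are arbitrary):

```python
import random

# Insertion sort via adjacent swaps; returns the number of swaps performed,
# which equals the number of inversions in the input.
def insertion_sort_swaps(a):
    a = list(a)
    swaps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            swaps += 1
            j -= 1
    return swaps

n, trials = 100, 2000
avg = sum(insertion_sort_swaps(random.sample(range(n), n))
          for _ in range(trials)) / trials
print(avg, n * (n - 1) / 4)  # the empirical average should be close to 2475
```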
Skin Cancer Classification Posted: 08 Sep 2021 08:30 PM PDT Following up on my previous post about medical imaging classification, I've now applied the same ideas to the skin cancer classification task from Harvard that I mentioned (link below), and the results are excellent:

- Original dataset size: 7470 RGB images of various dimensions
- Compressed dataset size: 7470 x 198
- Preprocessing / compression time: 37.9 seconds (in total, for all 7470 images)
- Supervision training time: 55.9 seconds (in total, for all 7470 images)
- Prediction time: 13 seconds on average (run 25 times, in total, for all 7470 images)
- Prediction accuracy: worst case 86% (effectively without supervision); best case 100% (supervised, rejecting all but roughly 30 rows)

All runtimes are from running the code on an iMac, in Octave. I've included a "sliding scale" of rejections, mainly to address criticism from the last post that the algorithm somehow manipulates accuracy. It doesn't: the point is that rather than overfit, you allow for rejections, which simply flag predictions as possibly incorrect. The bottom line is that this software reliably diagnoses over 7000 possible cases of skin cancer in about two minutes, on a home computer.

Code with no explainer here: https://derivativedribble.wordpress.com/2021/09/08/skin-cancer-classification/ Dataset: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T For an explainer, see this: https://www.researchgate.net/publication/346009310_Vectorized_Deep_Learning [link] [comments]
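The post's actual implementation is in the linked Octave code; what follows is only a generic illustration of the reject-option idea it describes, sketched in Python and assuming a model that outputs class probabilities (the threshold and the example probabilities are made up):

```python
import numpy as np

# Classification with a reject option: instead of forcing a label on every
# row, flag low-confidence predictions as possibly incorrect (-1).
def predict_with_rejection(probs, threshold=0.9):
    preds = probs.argmax(axis=1)           # most probable class per row
    confident = probs.max(axis=1) >= threshold
    return np.where(confident, preds, -1)  # -1 marks a rejected prediction

probs = np.array([[0.95, 0.05],   # confident -> class 0
                  [0.60, 0.40],   # uncertain -> rejected
                  [0.08, 0.92]])  # confident -> class 1
print(predict_with_rejection(probs))  # [ 0 -1  1]
```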
Any ideas for High School Computer Science Club? Posted: 08 Sep 2021 05:59 PM PDT I've really wanted to start a computer science club because we don't have one at my school. A couple of my friends are really interested, but we have no idea what we would actually do in it. I like programming a lot, and I've always wanted to inspire other people and share my passion with them. Any code-related ideas or projects to kick-start our new club? Any advice? Thank you! [link] [comments]
Details on Ant Colony Optimization Algorithm? Posted: 08 Sep 2021 08:34 AM PDT Basically, I have a project on it due a few months from now. Can anyone please give me a rough description of it to start me off? I don't really understand it; is it a sorting algorithm? Thanks in advance! Edit: I realize now that I should have mentioned in my original post (though I did mention it in a comment) that I'd already googled it and didn't understand the explanations I found. [link] [comments]
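To start the poster off: Ant Colony Optimization is not a sorting algorithm; it is a probabilistic metaheuristic for combinatorial optimization problems such as the travelling salesman problem. Simulated ants build candidate solutions step by step, choosing each step with probability weighted by pheromone (reinforced on good solutions) and a heuristic such as inverse distance, while pheromone also evaporates so poor paths fade. A minimal Python sketch for TSP (the parameter values and random cities are illustrative; real implementations add refinements such as elitist ants or min/max pheromone bounds):

```python
import math
import random

# Minimal Ant Colony Optimization for the travelling salesman problem.
def aco_tsp(coords, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
            evaporation=0.5, q=1.0):
    n = len(coords)
    dist = [[math.dist(coords[i], coords[j]) or 1e-9 for j in range(n)]
            for i in range(n)]
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                choices = list(unvisited)
                # Move to city j with probability proportional to
                # pheromone^alpha * (1/distance)^beta.
                weights = [pheromone[i][j] ** alpha / dist[i][j] ** beta
                           for j in choices]
                j = random.choices(choices, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length

        # Evaporate, then let each ant deposit pheromone on its tour,
        # inversely proportional to the tour's length.
        for row in pheromone:
            for j in range(n):
                row[j] *= 1.0 - evaporation
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                pheromone[i][j] += q / length
                pheromone[j][i] += q / length

    return best_tour, best_len

# Example: ten random cities in the unit square.
cities = [(random.random(), random.random()) for _ in range(10)]
print(aco_tsp(cities))
```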