• Breaking News

    Wednesday, February 24, 2021

    How does a degree in math compare to one in CS when it comes to doing CS research?

    Posted: 23 Feb 2021 03:17 PM PST

    For the purposes of this post, by "CS research" I mean mostly the theoretical kind.

    Why this question? For the following reasons:

    • Many undergraduate CS courses tend to impart a mixture of practice and theory, with no particular focus on either side
    • Many CS pioneers of the last century were either physicists or mathematicians (John von Neumann, Alan M. Turing, Edsger W. Dijkstra, Claude Shannon, ...). Andrew S. Tanenbaum himself holds a degree in physics and astrophysics
    • Very often, the TAs and PhD researchers at my university have formal training in maths. One was also a physicist
    submitted by /u/al_taken
    [link] [comments]

    [N] Visualize Your Model Errors! Microsoft Toolkit Identifies and Diagnoses ML Failures

    Posted: 23 Feb 2021 09:14 AM PST

    A Microsoft team has introduced Error Analysis, a responsible AI toolkit designed to identify and diagnose errors in machine learning models.

    Here is a quick read: New Microsoft Toolkit Identifies and Diagnoses ML Model Errors

    The toolkit is on the project GitHub. Additional information is available on the Error Analysis website.
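
    The toolkit itself lives at the links above; as a rough illustration of the underlying idea only (not the toolkit's actual API), error analysis amounts to slicing a model's test-set mistakes into cohorts and seeing where the error rate concentrates. A minimal sketch with scikit-learn and pandas, with every name and the cohorting scheme chosen purely for illustration:

    ```python
    # A minimal sketch (not the toolkit's actual API) of the core idea behind
    # error analysis: slice a model's test-set mistakes by feature cohorts to
    # find where the error rate is concentrated. All names here are illustrative.
    import pandas as pd
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer(as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.3, random_state=0
    )

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Mark which test rows the model gets wrong.
    report = X_test.copy()
    report["error"] = model.predict(X_test) != y_test

    # Bucket one feature and compare error rates across the buckets;
    # a real tool would search many such cohorts (or fit a tree over the errors).
    report["radius_bucket"] = pd.qcut(report["mean radius"], q=4)
    print(report.groupby("radius_bucket", observed=True)["error"].mean())
    ```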

    submitted by /u/Yuqing7
    [link] [comments]

    Open source means surrendering your monopoly over commercial exploitation

    Posted: 24 Feb 2021 01:29 AM PST

    Goto Was a Terrible Disaster. Mutable State is the New Goto.

    Posted: 23 Feb 2021 08:53 AM PST

    Python Math Library made in 3 Days as a 14-year-old - libmaths

    Posted: 23 Feb 2021 11:12 PM PST

    Hey r/compsci! Today, I wanted to make a post about a recent project I took on (this is my first "major" project). The project is a mixture of math and computer science, and I thought it was worth sharing here.

    Install libmaths on PyPi or from my GitHub: PyPi | GitHub Repo

    If you have a GitHub account, please star the repository. I'd greatly appreciate it.

    Three days ago, as a 14-year-old high school student, I decided to create my own Python library, libmaths. I've always used libraries, but I never understood how they were made or where they came from. I researched how to design and publish a library and started immediately. I plan on maintaining the library and dealing with any issues or concerns every day.

    One issue I thought mathematicians faced in programming was the inability to quickly draw graphs and models from their code. With some research, I gathered a list of functions I wanted to implement to begin with. I have no calculus experience, but I was determined to add a couple of calculus functions. I needed a lot of help to understand the math, and Google came pretty clutch.

    libmaths is an extremely efficient library allowing the user a smooth experience in graphing and modeling functions. From linear functions all the way to sextic functions and much more, libmaths has it all.
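
    I haven't verified libmaths's actual API, so as a rough sketch only of the kind of one-call graphing described above, here is how the idea looks with plain numpy and matplotlib (the function name and defaults below are made up for illustration):

    ```python
    # Not libmaths's actual API (unverified) -- just a rough numpy/matplotlib
    # sketch of the kind of one-call polynomial plotting the post describes.
    import numpy as np
    import matplotlib.pyplot as plt

    def plot_polynomial(*coeffs, x_min=-10, x_max=10):
        """Plot the polynomial with the given coefficients (highest degree first)."""
        x = np.linspace(x_min, x_max, 500)
        plt.plot(x, np.polyval(coeffs, x))
        plt.axhline(0, color="gray", linewidth=0.5)
        plt.axvline(0, color="gray", linewidth=0.5)
        plt.show()

    # e.g. a quadratic: y = 2x^2 - 3x + 1
    plot_polynomial(2, -3, 1)
    ```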

    In the GitHub repository, examples for every single function are provided as well as the file itself if you would like to play around with the values or change code yourself.

    If there's one thing I learned from this experience, it's that math and computer science put together can be an amazing tool and there's no limit to how much you can learn with the internet.

    To anyone trying to pursue coding, there's plenty of resources on the internet and you should start now!

    One of the many functions available in libmaths!

    submitted by /u/Simp0le
    [link] [comments]

    Please STOP asking Data Scientists about Leetcode questions meant for Software Engineers for job interviews

    Posted: 24 Feb 2021 03:07 AM PST

    What is a data structure called that can process INSERT, DELETE, MEMBER, UNION and FIND?

    Posted: 24 Feb 2021 05:32 AM PST

    In Aho's The Design and Analysis of Computer Algorithms, Section 4.9 says:

    Consider the following sets of instructions.

    • INSERT, DELETE, MEMBER
    • INSERT, DELETE, MIN
    • INSERT, DELETE, UNION, MIN
    • INSERT, DELETE, FIND, CONCATENATE, SPLIT

    We shall call a data structure that can process instructions from set 1 a dictionary, from set 2 a priority queue, from set 3 a mergeable heap, and from 4 a concatenable queue.

    What is a data structure called that can process INSERT, DELETE, MEMBER, UNION and FIND, which appears in Sections 4.6, 4.7 and 4.8 for union-find algorithms?

    Thanks.
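
    For reference, the operations in the question are the ones handled by the union-find (disjoint-set) structures those sections describe; I am not aware of the book giving that exact combination a separate name. A minimal, purely illustrative sketch (DELETE here is deliberately naive):

    ```python
    # A minimal union-find (disjoint-set) sketch supporting the operations in the
    # question; the names and the dict-based representation are illustrative.
    class UnionFind:
        def __init__(self):
            self.parent = {}   # element -> parent element
            self.rank = {}     # element -> tree-depth bound

        def insert(self, x):           # INSERT: make a singleton set
            if x not in self.parent:
                self.parent[x] = x
                self.rank[x] = 0

        def member(self, x):           # MEMBER: is x in any set?
            return x in self.parent

        def find(self, x):             # FIND: canonical representative, with path halving
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def union(self, x, y):         # UNION: merge the two sets, by rank
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            if self.rank[rx] < self.rank[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1

        def delete(self, x):           # DELETE: naive; a proper DELETE needs extra bookkeeping
            # This sketch only removes elements that no other element points to.
            if self.member(x) and all(p != x for e, p in self.parent.items() if e != x):
                del self.parent[x], self.rank[x]
    ```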

    submitted by /u/timlee126
    [link] [comments]

    Object-Oriented Programming is The Biggest Mistake of Computer Science

    Posted: 24 Feb 2021 04:44 AM PST

    SIG February Coding Challenge

    Posted: 23 Feb 2021 08:13 PM PST

    Anybody else take it? Thoughts? How did you solve it?

    submitted by /u/throwaccount98672899
    [link] [comments]

    Life of an HTTP request in a Go server

    Posted: 22 Feb 2021 09:10 PM PST

    What language do you think you are most productive in?

    Posted: 23 Feb 2021 12:04 PM PST

    I work with Java and make some hobby projects in Clojure. When I'm programming in Java I have to write so much code to do simple things that I get bored easily. But when I am working with Clojure, it feels so fast that I think I am kinda 5x faster than with Java. So, what language do you like to program in and feel you can ship features faster with?

    submitted by /u/thiagomiranda3
    [link] [comments]

    Math Course Difficulty in Computer Science

    Posted: 23 Feb 2021 01:50 PM PST

    I was wondering which math courses are "easiest" to "hardest" in this major. What was your experience taking upper-level math? I'm also curious which Computer Science courses you've taken were the least and most challenging, and why. Thanks :)

    submitted by /u/beinqjackie
    [link] [comments]

    Computational Ayahuasca: Simulating DMT on an Artificial Neural Network

    Posted: 22 Feb 2021 02:30 PM PST

    Is the Handbook of Theoretical Computer Science too outdated?

    Posted: 22 Feb 2021 09:55 AM PST

    Hi, I think I am interested in Theory and plan to spend a few months trying to get a broad brush overview of some of the areas so that I can narrow down my interests. I planned to use the Handbook of Theoretical Computer Science (primarily Volume B) as my main reference.

    Is it too outdated to be used in this way? Are there any topics in particular that haven't aged well? And are there any that have?

    Thanks.

    submitted by /u/fleotan
    [link] [comments]

    Free March 4 Talk on Software Requirements with Software Engineering Pioneer Bertrand Meyer

    Posted: 23 Feb 2021 07:49 AM PST

    March 4, join Bertrand Meyer, ACM Fellow and Software System Award recipient, Professor of Software Engineering and Provost at the Schaffhausen Institute of Technology in Switzerland and CTO of Eiffel Software, for the free ACM TechTalk, "The Four PEGS of Requirements Engineering."

    Bad software requirements can jeopardize projects. There is a considerable literature on requirements, but practice is far behind. Can we fix requirements engineering (known in other circles as business analysis) so that it is no longer the weak link in software engineering? Meyer will present ongoing work intended to help industry produce more useful requirements. It includes precise definitions of requirements concepts and a standard plan for requirements specifications. The approach builds on existing knowledge to define a practical basis for requirements engineering and provide projects with precise and helpful guidelines.

    Register to attend the talk live or be notified when the on-demand recording is available.

    submitted by /u/ACMLearning
    [link] [comments]

    I ask no credit or payment, please consider the idea (sfwot)

    Posted: 23 Feb 2021 01:33 PM PST

    I beg you to please observe the following, to consider the proposed solution that constitutes an idea and to pass it along to someone actually competent enough to implement it:

    There exist many relatively free, high-quality educational resources on the internet, BUT there exist very few highly respected accreditation sources anywhere. There is a glut of potential teaching sources and a scarcity of examinations.

    What kind of software could provide sufficient examination opportunities to fully leverage the potential of our incredible modern communication networks?

    I once expressed the proposed solution to some mathematicians, and one response was "there is no scarcity of exam questions", and for whatever reason I did not know how to reply even though they had obviously missed the point entirely.

    Of course anyone can collect a few questions together and call it an exam, but no one would care about such exams, because no employer has time to review the individual exams the applicants took to determine whether or not they constitute a fair test!

    Instead, the employers work on a trust basis, e.g. 'I am happy to assume that the people at MIT/Harvard/Oxford/Cambridge etc. know precisely what they are talking about, therefore I will consider their examination results relevant.'

    The question then becomes: what kind of examination generation software would be considered relevant enough to supersede the current trust system?

    Here are the properties I propose such a piece of software would have to possess:

    a) it must be open source & legible enough for distributed source-checking, else we would find ourselves having to trust either the firm that developed and maintained the software or the so-called experts qualified to adjudicate on its veracity, and would only have superseded the trust system with a marginally better trust system.

    2) it must be restricted for now to what are called truth functional symbolic notational systems, such as [integers] * [addition,subtraction], which I describe as truth functional because every possible combination of corresponding symbols obviously has a corollary practicable thought experiment which we call a proof of either its truth or falsehood. (I chose an over-trivial set of objects * operators for simplicity's sake, but am literally betting my body that the majority of x <= undergraduate science course exams are majority truth functional by this 'definition'.)

    d) it must have a set of relatively legible functions that any contemporary truth functional exam producer can use to translate their exam question & answer pairs into the code we use to communicate with computers (I suspect (legibility * popularity) might be the most important property of the coding language for our purposes and would hazard a guess at Python, but am not 'qualified' to even guess)

    5) it must be able to take such a question & answer pair and do at least 1 of 2 things. 5.1) it !must! be able to generate a practicable thought experiment that 'we' are happy to consider a proof of which kind of truth functional symbolic notation a given question & answer pair is testing the testee on & 5.2) it !may! also need to demonstrate the 'hardness' of the question !but! this may only be an attempt to pander to the constraints that human exam generators and markers operate under... the nuances of this issue are beyond my capabilities to discuss in this notational system at present, but we can at least consider: why can't every question be worth 1 'point' or 'mark'? why are we sometimes required to 'show our working'? must it be so?

    e) as a piece of exam generation software, it must of course be able to take a set of symbols &/or 'theorems' that is sometimes called a curriculum or exam specification, and cheaply generate an exam to test the testee on their mastery of said symbols & theorems.

    eleventyseven) trivially, the software ought also to be able to mark its own tests (a toy sketch of the generate-and-mark idea follows this list). This is obviously easier if the user is not tested on showing their working and if the answer types are restricted to constants rather than formulas, though I readily admit that for some of the contemporary answer types it would require LaTeX for the students to express their answers, which I consider... plausibly undesirable.
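
    A toy sketch of the generate-and-mark idea, restricted to the [integers] * [addition, subtraction] example above; every name and design choice here is illustrative only:

    ```python
    # Toy sketch of exam generation and marking from a tiny "curriculum" of
    # integer operators. Names and the random-question scheme are illustrative.
    import operator
    import random

    CURRICULUM = {"+": operator.add, "-": operator.sub}  # [integers] * [addition, subtraction]

    def generate_exam(num_questions, seed=None):
        rng = random.Random(seed)
        exam = []
        for _ in range(num_questions):
            a, b = rng.randint(-100, 100), rng.randint(-100, 100)
            op = rng.choice(list(CURRICULUM))
            exam.append({"question": f"{a} {op} {b}", "answer": CURRICULUM[op](a, b)})
        return exam

    def mark_exam(exam, submitted_answers):
        """Each question is worth one mark; no 'show your working'."""
        return sum(1 for q, s in zip(exam, submitted_answers) if s == q["answer"])

    exam = generate_exam(5, seed=42)
    print([q["question"] for q in exam])
    print(mark_exam(exam, [q["answer"] for q in exam]))  # a perfect script scores 5/5
    ```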

    NOW

    Here is my prediction for why these properties would engender a smooth revolution toward relatively freer information distribution:

    1) at first only a few curious people might test their own exams on this new metric of exam-toughness. Some of their exams might 'score highly', in which case they would proudly boast of this new metric's high regard for their exams. Some of their exams might not, in which case they would either ignore or repress the new metric OR improve their own exams OR switch over entirely to this new exam generation software, simply because it was PROVABLY hard to pass such exams and some universities don't like to appear to be mere accumulations of trusted thinkers.

    b) through time, eventually the fancy universities might start to get embarrassed by their lack of critical testing of their exams and might start to check their exams against the metric. Surely MIT cannot afford to appear scared of BU's exams!? Can Oxford afford to appear scared to test their math exams against Brookes'!? Can Cambridge even afford to trust Harvard not to employ the new metric, even were it outwardly dismissed? Surely they must adopt the latest technology, however begrudgingly, if they wish to remain competitive!?

    4) Because the software is free to use and slowly being adopted by the elite institutions, there is ample opportunity for business starters to create testing cent(re/er)s, charge say 50ish units per test and make several thousands of units of currency per exam simply by running tests that cost them virtually nothing to create or mark!?

    f) Because they no longer need to spend k * 10^5 units to get a top-rated degree, BILLIONS more people have an incentive to use the free resources to study, and BILLIONS more people both learn and become accredited!!! BILLIONS more people become vastly more numerate and society is saved from informational monopolies once and for all time!!!!!!!!!

    If you think that idea is too dangerous, too free market, too fair, too good, too utilitarian, then I can only suggest that you start eating your hats sooner rather than later, for this idea WILL be implemented sooner or later and whichever societies fail to adopt it will be enslaved by those that do.


    Here I will draw a line and say that I have said enough for now for I can feel myself losing my temper.

    I have some other questions that you might find interesting, but you might not find me at this email address any longer.

    So you will have to keep your eyes open if you think that 'I' might be a thinker whose potential utility is being overlooked.

    I must also warn you that me and my fuzzy type are PHYSICALLY AND MENTALLY INCAPABLE of sitting idly by and watching OUR useful free ideas be monopolized. Don't bite off more than you can chew, you frail hegemonists, and attempt to steal this idea for your own benefit alone.

    Give up while you still can.

    Lastly, you probably want some kind of name to associate with the 'creator/author/body' behind this idea.

    But I refuse to give you one, else how could I prove that I am trying to be more than the narcissistic, arrogant, deluded, addicted, easily tempted, executive-functionally deficient failure I probably am!?

    <3

    TL;DR THE COEFFICIENT OF FRICTION IN THE ACCREDITATION SYSTEM IS TOO DAMN HIGH, BUT I STILL LOVE YOU. FLY, YOU FOOLS!

    submitted by /u/doxmebitchidontcare
    [link] [comments]

    I created an AI to beat Pong!

    Posted: 22 Feb 2021 08:03 PM PST

    Hey everyone! I just graduated with a CS degree and wanted to delve deeper into AI/ML and game dev. I just created Pong in Unity and built an AI for it! I made a video about it here if you want to have a quick laugh and check it out! My channel is CS mixed in with comedy.

    https://www.youtube.com/watch?v=X7zulp2SV6k&t

    submitted by /u/guest92
    [link] [comments]

    [N] BENDR for BCI: UToronto's BERT-Inspired DNN Training Approach Learns From Unlabelled EEG Data

    Posted: 22 Feb 2021 09:23 AM PST

    University of Toronto researchers propose a BERT-inspired training approach as a self-supervised pretraining step to enable deep neural networks to leverage newly and publicly available massive EEG (electroencephalography) datasets for downstream brain-computer-interface (BCI) applications.

    Here is a quick read: BENDR for BCI: UToronto's BERT-Inspired DNN Training Approach Learns From Unlabelled EEG Data

    The paper BENDR: Using Transformers and a Contrastive Self-Supervised Learning Task to Learn From Massive Amounts of EEG Data is on arXiv. The source code and pretrained BENDR models can be found on the project GitHub.
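
    As a loose illustration only (this is not the paper's BENDR code, and every module, shape and name below is made up), a BERT-style self-supervised objective on unlabelled sequences can be sketched as masking local features and training a contextualizer to pick out the original feature at each masked position with an InfoNCE-style contrastive loss:

    ```python
    # Toy sketch (not the paper's BENDR code) of an InfoNCE-style contrastive loss
    # over masked positions of a feature sequence, in the spirit of self-supervised
    # pretraining on unlabelled EEG. All shapes, names and modules are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyContrastivePretrainer(nn.Module):
        def __init__(self, in_channels=20, dim=64):
            super().__init__()
            # "Encoder": strided 1-D convs that downsample the raw multichannel signal.
            self.encoder = nn.Sequential(
                nn.Conv1d(in_channels, dim, kernel_size=8, stride=4), nn.GELU(),
                nn.Conv1d(dim, dim, kernel_size=4, stride=2), nn.GELU(),
            )
            # "Contextualizer": sees the feature sequence with some positions masked out.
            self.mask_token = nn.Parameter(torch.zeros(dim))
            self.contextualizer = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
                num_layers=2,
            )

        def forward(self, x, mask_prob=0.3, temperature=0.1):
            z = self.encoder(x).transpose(1, 2)          # (batch, seq, dim) local features
            mask = torch.rand(z.shape[:2], device=z.device) < mask_prob
            z_masked = torch.where(mask.unsqueeze(-1), self.mask_token, z)
            c = self.contextualizer(z_masked)            # fill in context at masked spots
            # InfoNCE: each masked position's context vector should match its own
            # original feature better than the other positions in the same sequence.
            logits = torch.einsum("bqd,bkd->bqk", F.normalize(c, dim=-1),
                                  F.normalize(z, dim=-1)) / temperature
            targets = torch.arange(z.size(1), device=z.device).expand(z.size(0), -1)
            return F.cross_entropy(logits[mask], targets[mask])

    model = TinyContrastivePretrainer()
    fake_eeg = torch.randn(8, 20, 256)                   # (batch, channels, samples)
    print(model(fake_eeg).item())
    ```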

    submitted by /u/Yuqing7
    [link] [comments]
