• Breaking News

    Tuesday, March 13, 2018

    Confusion between big-O/big-Ω/big-Θ and best-case/average-case/worst-case analyses, and Harvard's CS50 course

    Posted: 12 Mar 2018 11:02 AM PDT

    A friend recently came to me confused about why Wikipedia uses big-O for both the average-case and the worst-case analyses of algorithms. They thought that big-O was used for worst-case analysis and that big-Ω was used for best-case analysis.

    I replied that this was wrong: big-O and big-Ω describe properties of mathematical functions and therefore have no intrinsic relation to best/average/worst-case analyses (though of course they are used in those analyses), which went completely against their intuition. After some investigation it turns out they picked up this notion from Harvard's CS50 course, which in lecture 3 claims (or appears to claim, which is confusing if you're a student) that big-O is used for worst-case analysis and big-Ω for best-case analysis. (link to relevant part of the lecture, link to summary video which reinforces this)

    It turns out that this is a pretty common confusion, and plenty of examples can be found on the internet (1, 2, 3, 4, or just google "big o best case worst case"), but the majority of answers, along with Wikipedia, back up my understanding that big-O describes properties of mathematical functions and is not intrinsically tied to best/worst-case analyses.
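    For a concrete illustration of why the notations are independent of the case being analysed, here is a minimal sketch using insertion sort (a standard textbook example, not one taken from the original post): the best case and the worst case are two different functions of n, and each of them can be described with O, Ω, or Θ.

        def insertion_sort(a):
            """Sort the list a in place.

            Best case (already-sorted input): the inner while loop never
            executes, so the running time is Theta(n).
            Worst case (reverse-sorted input): the inner loop runs about i
            times on iteration i, so the running time is Theta(n^2).

            It is equally correct to say the worst case is O(n^2) and that
            the worst case is Omega(n^2): O/Omega/Theta describe the growth
            of a function, while "best case" / "worst case" merely pick
            which function of the input size is being described.
            """
            for i in range(1, len(a)):
                key = a[i]
                j = i - 1
                while j >= 0 and a[j] > key:  # shift larger elements right
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key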

    What's confusing, then, is that a very well-known course at a very prestigious university seems to teach something that's just straight-up wrong and confusing, which conflicts with what I've been taught and with other resources I've found on the internet. I also haven't found any other discussion of this point in relation to this specific course, and since it's supposed to be an introductory course it seems weird for it to teach a wrong idea. I'm not the one going crazy, am I?

    submitted by /u/MrInanimated
    [link] [comments]

    Verify that someone ran a program without also running the program?

    Posted: 13 Mar 2018 12:21 AM PDT

    Any solution to a problem in NP can be verified quickly. This could be useful if, say, I'm an AI researcher and I need someone in the cloud to solve an NP-complete problem for me. I could send the script for the original NP-complete problem to a bunch of server owners, and the first person to solve it gets paid. I save time because I can quickly verify that their solution is correct.
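    As a concrete (hypothetical) illustration of the "verify quickly" step, here is a minimal sketch of a checker for one NP-complete problem, graph k-coloring: given the graph, k, and a claimed coloring, it accepts or rejects in time linear in the size of the graph, so the requester never has to redo the search. The function name and input format are made up for illustration.

        def verify_k_coloring(num_vertices, edges, k, coloring):
            """Check a claimed solution to graph k-coloring in O(V + E) time.

            edges    -- list of (u, v) pairs with 0 <= u, v < num_vertices
            coloring -- list mapping each vertex to a color in range(k)
            Returns True iff the coloring uses at most k colors and no edge
            joins two vertices of the same color.
            """
            if len(coloring) != num_vertices:
                return False
            if any(c < 0 or c >= k for c in coloring):
                return False
            return all(coloring[u] != coloring[v] for u, v in edges)

        # Example: a triangle needs 3 colors; checking a proposed answer is cheap.
        triangle = [(0, 1), (1, 2), (0, 2)]
        print(verify_k_coloring(3, triangle, 3, [0, 1, 2]))  # True
        print(verify_k_coloring(3, triangle, 2, [0, 1, 0]))  # False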

    But for general computing, I'm not sure there's a way to do this. For example, suppose I just want to run some general code in the cloud: how do I verify that people have actually run the code correctly, without having to run it myself?

    I could do something similar to Ethereum and have the entire network verify it for me, only accepting the result as correct once other people have verified it and it has been on the chain long enough. However, this creates a lot of extra work for the whole network.
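    One pragmatic (if wasteful) strategy in the spirit of the replication idea above is redundant execution with a quorum: send the same job to a few independent workers and accept an output only once enough of them agree. Below is a minimal sketch assuming each worker returns the program's output as bytes; the function name and the quorum parameter are made up for illustration, and this only reduces the duplicated work, it doesn't eliminate it.

        import hashlib
        from collections import Counter

        def accept_by_quorum(results, quorum):
            """Accept an output only if at least `quorum` workers returned it.

            results -- list of byte strings, one per worker run of the same job
            Returns the agreed-upon output, or None if no output reaches quorum.
            Comparing digests rather than raw outputs keeps the bookkeeping
            cheap when outputs are large.
            """
            digests = [hashlib.sha256(r).hexdigest() for r in results]
            digest, votes = Counter(digests).most_common(1)[0]
            if votes >= quorum:
                return results[digests.index(digest)]
            return None

        # Example: three workers, one of which misbehaves.
        outputs = [b"42\n", b"42\n", b"bogus\n"]
        print(accept_by_quorum(outputs, quorum=2))  # b'42\n'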

    I realize this question is a little vague, but are there any strategies to verify that a general program was run correctly without requiring someone else (either me, or an entire network) to also run the program?

    submitted by /u/cglva
    [link] [comments]

    Are there any practical benefits to context-sensitive languages, as opposed to context-free?

    Posted: 12 Mar 2018 04:08 AM PDT

    Coloring a Graph with K colors

    Posted: 12 Mar 2018 12:53 PM PDT
