• Breaking News

    Friday, February 12, 2021


    Uncovering a 24-year-old bug in the Linux Kernel

    Posted: 11 Feb 2021 11:00 AM PST

    Tutorial: How to specify a large formula for a regression model using the R programming language

    Posted: 12 Feb 2021 01:49 AM PST

    Hey, I've created a tutorial on how to specify a large formula for a regression model in the R programming language. The tutorial covers several tips and tricks for making your code more efficient: https://statisticsglobe.com/write-model-formula-with-many-variables-in-r
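    The linked tutorial is in R; as a rough analogue, here is a minimal Python sketch of the same core trick (assembling a large formula string programmatically instead of typing every predictor by hand), using statsmodels' formula API and made-up column names:

        # Rough Python analogue of the R tip: build a big regression formula
        # programmatically. The DataFrame and column names are placeholders.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.DataFrame({
            "y":  [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
            "x1": [0.1, 0.4, 0.3, 0.9, 0.2, 0.8],
            "x2": [1.2, 0.7, 0.5, 0.1, 0.9, 0.3],
            "x3": [0.0, 1.0, 0.0, 1.0, 1.0, 0.0],
        })

        predictors = [col for col in df.columns if col != "y"]
        formula = "y ~ " + " + ".join(predictors)   # "y ~ x1 + x2 + x3"

        model = smf.ols(formula, data=df).fit()
        print(model.params)

    In R itself, the analogous move is building the string with paste() and converting it with as.formula(), which is presumably what the tutorial covers.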

    submitted by /u/JoachimSchork
    [link] [comments]

    What is the granularity of specifying applicative or normal evaluation order in lambda calculus (and functional languages)?

    Posted: 12 Feb 2021 04:11 AM PST

    In lambda calculus and other functional languages, at what granularity can applicative or normal evaluation order be specified?

    • language-wide (for all function calls),
    • per function (for all calls to a given function), or
    • per individual function call?

    I also seem to remember that in some functional language (Haskell?) the evaluation order is a language-wide choice: by default it is either applicative or normal order, and it can be switched by a statement in the program. When it is switched, does the evaluation order change for all functions defined both before and after the switch? Can we switch the evaluation order for all calls to one function without affecting calls to other functions? Can we switch the evaluation order of a single function call without affecting other calls?

    I have the above question because, for the normal-order sequencing combinator Seq, when ((Seq (display "hello")) (display "world")) is evaluated to ((λz.(display "world")) (display "hello")), I am wondering whether ((λz.(display "world")) (display "hello")) is evaluated in applicative or normal order, since I guess both "hello" and "world" are supposed to be printed, but in normal order "hello" isn't printed.
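    For concreteness, here is a minimal Python sketch (my own illustration; Python itself is call-by-value) showing that the choice can in principle be made per call site, by passing a thunk instead of an evaluated value:

        # Python evaluates arguments applicatively (call by value), but a
        # single call site can opt into normal-order-style (call by name)
        # behavior by wrapping an argument in a thunk: a zero-argument lambda.

        def diverge():
            while True:       # stands in for an expression with no normal form
                pass

        def const_by_value(a, b):
            return a          # b was already evaluated at the call site

        def const_by_name(a, b_thunk):
            return a          # b_thunk is never forced, so its body never runs

        # const_by_value(42, diverge())              # applicative order: hangs
        print(const_by_name(42, lambda: diverge()))  # prints 42: a per-call choice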

    Thanks.

    submitted by /u/timlee126
    [link] [comments]

    Best sorting algorithm for gnomes?

    Posted: 11 Feb 2021 10:19 AM PST

    I was recently looking at the wikipedia article for Gnome Sort and it got me thinking. Gnomes sorting garden pots have a slightly different computational model than computers do, in that moving along the line of pots takes time.

    So given the constraints on the gnomes (can only carry one pot at once, walking takes time), Gnome sort seems like it really does take less time (walking steps) than, say, Bubble Sort.

    It looks to me like, for gnomes:

    • Gnome sort is O(n²)
    • Bubble sort is O(n²), with a worse constant
    • Quicksort is worse, at O(n² log n), because partitioning takes O(n²) time

    Can a gnome sort a line of flower pots in fewer than O(n²) walking steps? Merge sort seems promising, and it might matter whether the gnome is allowed to temporarily lay the pots out in 2D space or not.
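    For reference, a minimal sketch of gnome sort under this cost model, counting each one-pot move of the gnome as a walking step (my own illustration):

        def gnome_sort_steps(pots):
            """Gnome sort; each loop iteration moves the gnome one position,
            so `steps` counts walking steps under the question's cost model."""
            pots = list(pots)
            i, steps = 0, 0
            while i < len(pots):
                if i == 0 or pots[i - 1] <= pots[i]:
                    i += 1                                       # walk right
                else:
                    pots[i - 1], pots[i] = pots[i], pots[i - 1]  # swap pots
                    i -= 1                                       # walk left
                steps += 1
            return pots, steps

        print(gnome_sort_steps([3, 1, 4, 1, 5, 9, 2, 6]))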

    submitted by /u/Tai9ch
    [link] [comments]

    (AI) How should this be approached?

    Posted: 12 Feb 2021 03:23 AM PST

    I want to sort thousands of patterns into 9 groups, with each pattern assigned to one of the 9 groups based on similarity.

    Any reference to a library or tool that does this well?

    Note: I do not want to create the groups myself; I need the groups to be auto-generated.
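    One common fit for auto-generated groups is k-means clustering with k = 9. A minimal sketch, assuming each pattern can be encoded as a fixed-length numeric feature vector (the random data below is a placeholder):

        # k-means with k=9: groups are generated automatically by similarity.
        # Assumes each pattern is encoded as a fixed-length numeric vector;
        # the random matrix below stands in for real features.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        patterns = rng.random((1000, 16))        # 1000 patterns, 16 features

        kmeans = KMeans(n_clusters=9, n_init=10, random_state=0)
        labels = kmeans.fit_predict(patterns)    # group id 0..8 per pattern
        print(np.bincount(labels))               # group sizes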

    submitted by /u/rajsharma2020
    [link] [comments]

    [N] ICMI 2020 Best Paper | Gesticulator: A framework for semantically-aware speech-driven gesture generation

    Posted: 11 Feb 2021 06:12 PM PST

    Human communication is, to no small extent, non-verbal. While talking, people spontaneously gesticulate, and this plays a crucial role in conveying information. Think about the hand, arm, and body motions we make when we talk. Our research is on machine learning models for non-verbal behavior generation, such as hand gestures and facial expressions, with a focus on hand gesture generation. We develop machine learning methods that enable virtual agents (such as avatars in a computer game) to communicate non-verbally.

    Here is a quick read: Gesticulator: A framework for semantically-aware speech-driven gesture generation

    The paper Gesticulator: A framework for semantically-aware speech-driven gesture generation is on arXiv.

    submitted by /u/Yuqing7
    [link] [comments]

    How are applications of the normal/applicative order sequencing combinators evaluated?

    Posted: 11 Feb 2021 04:23 PM PST

    From Programming Distributed Computing Systems, there is a section under the chapter for lambda calculus:

    2.7 Sequencing Combinator

    The normal order sequencing combinator is:

    Seq = λx.λy.(λz.y x), 

    where z is chosen so that it does not appear free in y.

    This combinator guarantees that x is evaluated before y, which is important in programs with side-effects. Assuming we had a "display" function sending output to the console, an example is

    ((Seq (display "hello")) (display "world")). 

    The combinator would not work in applicative order (call by value) evaluation because evaluating the display functions before getting them passed to the Seq function would defeat the purpose of the combinator: to sequence execution. In particular, if the arguments are evaluated right to left, execution would not be as expected.

    The applicative-order sequencing combinator can be written as follows:

    ASeq = λx.λy.(y x), 

    where y is a lambda abstraction that wraps the original last expression to evaluate. The same example above would be written as follows:

    ((ASeq (display "hello")) λx.(display "world")), 

    with x fresh, that is, not appearing free in the second expression.

    (1) In Seq = λx.λy.(λz.y x), is y required to be a lambda abstraction, i.e. a function definition? (I guess so, because (λz.y x) is evaluated to (y x)?)

    (2) How is ((Seq (display "hello")) (display "world")) evaluated? Is "hello" supposed to be printed before "world"? Is it evaluated

    • first to ((λy.((λz.y) (display "hello"))) (display "world")), in normal order,
    • then to ((λz.(display "world")) (display "hello")), in normal order,
    • then to (display "world"), in normal order,
    • then to nil?

    So is (display "hello") not evaluated?

    (3) Similarly, how is ((ASeq (display "hello")) λx.(display "world")) evaluated? Is "hello" supposed to be printed before "world"? Is it evaluated

    • first evaluate the argument (display "hello") to nil,
    • then evaluate the application ((ASeq nil) λx.(display "world")) to (λy.(y nil) (display "world")),
    • then evaluate the argument (display "world") to nil,
    • then evaluate the application (λy.(y nil) nil) to (nil nil),
    • then evaluate (nil nil) to nil?
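    For what it's worth, here is a small Python model of the two combinators (my own sketch; Python is call-by-value, so it can only mirror the book's applicative-order case faithfully):

        # Modeling the combinators in Python, which evaluates applicatively
        # (call by value): arguments are always evaluated before the call.

        def display(s):
            print(s)
            return None        # stands in for "nil"

        # Seq = λx.λy.((λz.y) x) — under call by value this delays nothing:
        # both display calls run (and print) before Seq's body is entered.
        Seq = lambda x: lambda y: (lambda z: y)(x)
        Seq(display("hello"))(display("world"))   # prints hello, then world,
                                                  # but only because Python
                                                  # happens to evaluate left
                                                  # to right, not because of Seq

        # ASeq = λx.λy.(y x) — y is a wrapper abstraction, so "world" is
        # printed only when the wrapper is finally applied, after "hello".
        ASeq = lambda x: lambda y: y(x)
        ASeq(display("hello"))(lambda _: display("world"))   # hello, then world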
    submitted by /u/timlee126
    [link] [comments]

    Is eta-reduction by default not performed in lambda calculus?

    Posted: 11 Feb 2021 05:30 PM PST

    In lambda calculus,

    • is a lambda abstraction always a value and therefore not evaluated further, regardless of whether the body of the abstraction can be further evaluated?

    • In other words, is eta-reduction by default not performed? Are only beta-reduction and alpha-conversion performed by default?

    Are the answers to the above questions the same for both applicative order and normal order evaluations?
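    For reference, the eta rule in question is: λx.(f x) →η f, provided x does not appear free in f; it removes a wrapper abstraction without changing what the term computes on any argument.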

    Thanks.

    submitted by /u/timlee126
    [link] [comments]

    [N] Transformers Scale to Long Sequences With Linear Complexity Via Nyström-Based Self-Attention Approximation

    Posted: 11 Feb 2021 06:11 PM PST

    Researchers from the University of Wisconsin-Madison, UC Berkeley, Google Brain and American Family Insurance propose Nyströmformer, an adaptation of the Nyström method that approximates standard self-attention with O(n) complexity.

    Here is a quick read: Transformers Scale to Long Sequences With Linear Complexity Via Nyström-Based Self-Attention Approximation

    The paper Nyströmformer: A Nyström-based Algorithm for Approximating Self-Attention is on arXiv.

    submitted by /u/Yuqing7
    [link] [comments]

    Google Foobar Challenge

    Posted: 11 Feb 2021 09:27 PM PST

    So today I got the Google Foobar challenge thing in my browser. I know it's pretty rare, so I started googling around and seeing what people had to say about the challenge. I am reading accounts of people with 10+ years of experience barely making it through the thing. Now, I am a high school senior lol and I don't think I'd be remotely prepared for something like this. I am fluent in 17 languages and I've made a bunch of CS projects and stuff, but there is a lot about CS that I don't know in the slightest (I am not at all familiar with complex linear algebra and advanced algorithms, which I would probably need to know). I've already signed in and I don't want to waste this opportunity (and I haven't clicked request yet for the first challenge), so my question is: can I wait to do this challenge sometime down the line when I know more about CS? I'm not even sure if Google will be doing this for much longer though... Any suggestions?

    submitted by /u/throwawaymagic2021
    [link] [comments]

    Sites/Blogs for case studies of large and complex projects.

    Posted: 11 Feb 2021 12:06 PM PST

    Thermodynamic costs of Turing Machines: heat function, thermodynamic complexity and fundamental tradeoffs

    Posted: 11 Feb 2021 02:46 PM PST

    What are Google's internal versions of Apache Kafka or Flink, or do they just use those?

    Posted: 11 Feb 2021 12:40 PM PST

    Thanks!

    submitted by /u/ajtyeh
    [link] [comments]

    I’m not sure if this fits here, but I was looking into Computer Science and I was wondering if anyone here can explain what this first answer from a Quora post is implying?

    Posted: 11 Feb 2021 10:03 AM PST

    https://www.quora.com/Should-I-pursue-computer-science-if-I-am-bad-at-math

    From Dima Korolev's answer, regarding point 3: is he implying that you have to be autistic or have been interested in logic puzzles before age 10 to do well in computer science? Seems a bit absurd, but maybe I am not understanding.

    submitted by /u/0J0SE0
    [link] [comments]

    I used artificial intelligence to write an article and here’s what happened.

    Posted: 11 Feb 2021 04:35 AM PST

    Hey, artificial intelligence seems to be all over today's world. These fake, silicon brains have been replacing our 'feeling brains' slowly but surely, taking more and more tasks off our hands. But have you ever wondered about the limits of A.I.? What it can and can't do? That's the point where I decided to run an experiment. The purpose was to find out how good modern machine learning really is. So, I decided to try to get an article written exclusively by a computer.

    Continue reading here

    submitted by /u/Bananas8ThePyjamas
    [link] [comments]
