- It appears that a majority of random hash functions are secure hashes, of the form: a forest of XORs of [half NANDs and half ANDs] of random pairs of lower nodes in that forest
- Computational complexity - Big O notation
- Predicting and Imitating Complex Data Using Information Theory
- NP-complete random number generation
- InstaGAN Excels in Instance-Aware Image-To-Image Translation
- Is the pursuit of distributed computing an entirely economic one?
It appears that a majority of random hash functions are secure hashes, of the form: a forest of XORs of [half NANDs and half ANDs] of random pairs of lower nodes in that forest Posted: 04 Jan 2019 11:28 PM PST In the simplest case, with leaf bit variables a, b, c, d, a node is e = xor(not(and(a,b)), and(c,d)), but XOR becomes much more effective the more inputs are XORed together. After that step there are a, b, c, d, e instead of just a, b, c, d, so more combinations are possible: for example, 10 ANDs and 10 NANDs with little or no overlap of variables between them. UPDATE: the output is more pseudorandom if e = xor(and(a,not(b)), ..., N such ANDs), or better still, e = xor(and(a,not(b)), c, not(d)), which is guaranteed to be random if a, b, c, and d are random (and is still more pseudorandom than any of them individually otherwise). Any AND of two random variables tends toward a 1/4 chance of true and 3/4 of false, which is what I meant about the XOR of many parameters converging toward half/half in the limit. For example, e = xor(and(a,not(b)), c, not(d)) computed in blocks of 64 bits, in some random forest acting as a windowed random number generator over a bitstring (reading a set of int64s and writing the next int64), might (very speculatively) be faster and more pseudorandom than existing methods of random number generation. [link] [comments]
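A minimal Python sketch of the node update from the post, applied bitwise to 64-bit words. The sliding-window generator loop and all names here are illustrative assumptions, not the author's implementation:

```python
MASK64 = (1 << 64) - 1

def node(a: int, b: int, c: int, d: int) -> int:
    """e = xor(and(a, not(b)), c, not(d)), computed bitwise over 64-bit words.
    AND of two random bits is true only 1/4 of the time, but XORing it with
    the uniform inputs c and not(d) restores a half/half output distribution."""
    return ((a & ~b) ^ c ^ ~d) & MASK64

def stream(state: list, n: int) -> list:
    """Speculative windowed generator: read the last four int64s, emit the next."""
    out = []
    for _ in range(n):
        e = node(state[-4], state[-3], state[-2], state[-1])
        state.append(e)
        out.append(e)
    return out

print(stream([0x0123456789ABCDEF, 0xFEDCBA9876543210,
              0xDEADBEEFDEADBEEF, 0x0F0F0F0F0F0F0F0F], 4))
```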
Computational complexity - Big O notation Posted: 05 Jan 2019 02:02 AM PST
Predicting and Imitating Complex Data Using Information Theory Posted: 04 Jan 2019 10:45 AM PST Following up on previous posts, I'm now presenting two prediction algorithms: one with a run time that is polynomial in the number of items in the underlying dataset, and another that is extremely fast, with an average-case run time that appears to be significantly less than O(log(n)), where n is the number of items in the dataset. I believe this second algorithm could turn out to be a powerful and novel tool in AI, since, in addition to accurately predicting data, it is fast enough to take an entire dataset and reproduce it, regardless of the complexity of the original. Another way to think about this: the algorithm is good enough and fast enough that random inputs to it end up generating a new version of the original dataset, allowing it to expand the domain of a function and, in the extreme case, quickly imitate the structure of a three-dimensional object. This is possible because these algorithms don't attempt to interpolate data; any inputs that don't map well to the existing dataset are rejected. We can therefore feed the algorithm random inputs, some of which successfully map to the original dataset, accumulate over time, and eventually form a new dataset similar to the original. Put more formally, not only can this algorithm predict values in the range of a function, it can also reproduce and expand the domain of a function it has analyzed. Code, theory, and examples here: [link] [comments]
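The post's actual algorithm is behind the link; as a rough editorial illustration of the accept/reject idea only, here is a Python sketch where a nearest-neighbor distance threshold stands in (as an assumption) for the post's information-theoretic mapping. Random candidates are kept only if they map well to the existing dataset, so accepted points accumulate into a new dataset resembling the original:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))  # stand-in for the original dataset

# Acceptance radius: the average distance from each point to its nearest neighbor.
d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
radius = np.sqrt(d2.min(axis=1)).mean()

def imitate(n_samples: int) -> np.ndarray:
    """Accumulate random inputs that successfully map to the dataset."""
    accepted = []
    lo, hi = data.min(axis=0), data.max(axis=0)
    while len(accepted) < n_samples:
        x = rng.uniform(lo, hi)                           # random input
        if np.sqrt(((data - x) ** 2).sum(-1)).min() <= radius:
            accepted.append(x)                            # maps well: keep it
    return np.array(accepted)                             # rejected inputs vanish

new_data = imitate(200)  # a new dataset with structure similar to the original
```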
NP-complete random number generation Posted: 04 Jan 2019 10:02 PM PST Start with a single 1 bit in the middle, all others 0. Given a set of small integers, create an array of bits whose size is slightly bigger than the sum of their absolute values (truncating it much smaller is also fine, as long as it's repeatable). Loop over those integers, and XOR the array onto itself offset by each next integer in turn. This is NP-complete by subset-sum. It generates a very nearly equal number of 0s and 1s, especially near the middle, if there is a near-equal number of positive and negative integers. See pictures of a 2D-graphics (soon to be a puzzle game) form of this at https://github.com/benrayfield/hypercubewave. It generates strange nonlinear curves. To generate row 3 (or any nth row) of Pascal's triangle (aka Gaussian blur), choose any random integer as the offset to the right or upward, along with that integer plus one and the negatives of both, so you get 1 2 1. In that prototype I just offset by 1 pixel so I don't have to compute it so far away. [link] [comments]
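A minimal Python sketch of the loop described above, with one assumed detail: bits shifted out of range are dropped rather than wrapped around.

```python
def xor_offsets(offsets):
    size = 2 * sum(abs(k) for k in offsets) + 3   # slightly bigger than the sum
    bits = [0] * size
    bits[size // 2] = 1                           # single 1 bit in the middle
    for k in offsets:
        shifted = [0] * size
        for i, b in enumerate(bits):
            if 0 <= i + k < size:                 # drop bits shifted out of range
                shifted[i + k] = b
        bits = [x ^ y for x, y in zip(bits, shifted)]  # XOR array onto itself, offset by k
    return bits

# Offsetting by 1 pixel twice acts like two 1-pixel blurs: the nonzero window
# is 1 0 1, i.e. the Pascal's-triangle row 1 2 1 reduced mod 2 by the XOR.
print(''.join(map(str, xor_offsets([1, 1]))))  # -> 0001010
```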
InstaGAN Excels in Instance-Aware Image-To-Image Translation Posted: 04 Jan 2019 07:43 AM PST
Is the pursuit of distributed computing an entirely economic one? Posted: 04 Jan 2019 11:13 AM PST In other words, is the entire point of distributed computing, and of distributed-computing research, only to provide cheaper compute power? For example, if one could buy and build a single discrete computer with hardware equivalent to the sum of a distributed system's components, would performing a computation on it be identical to performing it on the distributed system, and perhaps even faster because of the closer proximity of the components? Or is there something more that I am missing? [link] [comments]
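One quantitative angle on the proximity question, as an editorial back-of-envelope rather than anything from the post; the latency figures are rough orders of magnitude:

```python
# Rough latency orders of magnitude, in nanoseconds.
LOCAL_DRAM_NS = 100          # main-memory access within one machine
DATACENTER_RTT_NS = 500_000  # round trip between machines in one datacenter

def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Amdahl's law: the serial fraction caps the speedup from adding workers."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

# Crossing the network costs thousands of times more than a local memory access,
# so a single machine with the summed hardware can indeed be faster whenever the
# workload needs frequent communication between its parts; distribution wins
# mainly on cost, incremental scaling, and fault tolerance.
print(f"network/memory latency ratio: {DATACENTER_RTT_NS / LOCAL_DRAM_NS:.0f}x")
print(f"95% parallel, 100 workers:    {amdahl_speedup(0.95, 100):.1f}x speedup")
```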