Comments for Not so Great Ideas in Theoretical Computer Science https://mittheory.wordpress.com A student blog of MIT CSAIL Theory of Computation Group Wed, 22 Mar 2017 17:39:45 +0000 hourly 1 http://wordpress.com/ Comment on Beyond Locality-Sensitive Hashing by pavan ahire https://mittheory.wordpress.com/2013/11/22/beyond-lsh/comment-page-1/#comment-556 Wed, 22 Mar 2017 17:39:45 +0000 http://mittheory.wordpress.com/?p=11#comment-556 Sir, I would like a numerical example of instance selection using locality-sensitive hashing (LSH-IS), but I cannot figure out how to compute it mathematically. Please help me.
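A numerical example of the plain LSH step can be sketched in a few lines. The snippet below is an illustration of random-hyperplane LSH for cosine similarity only, not the LSH-IS instance-selection method itself; the points `a`, `b`, `c` and the parameter values are made-up example data.

```python
import random

random.seed(0)
DIM = 4        # dimensionality of the example points
NUM_PLANES = 3 # number of random hyperplanes = number of hash bits

# Each hyperplane is a random normal vector; each hash bit records which
# side of that hyperplane a point falls on.
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_PLANES)]

def lsh_signature(point):
    """Return a tuple of 0/1 bits, one per hyperplane."""
    return tuple(1 if sum(p * x for p, x in zip(plane, point)) >= 0 else 0
                 for plane in planes)

# Nearby points tend to get the same signature; distant points usually don't.
a = [1.0, 2.0, 0.5, -1.0]
b = [1.1, 1.9, 0.4, -0.9]   # close to a
c = [-3.0, 0.1, 5.0, 2.0]   # far from a

print(lsh_signature(a), lsh_signature(b), lsh_signature(c))
```

Points whose signatures collide land in the same bucket, so only candidates within a bucket need exact comparison.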

]]>
Comment on Estimating Transitive Closure via Sampling by rgrig https://mittheory.wordpress.com/2016/10/03/estimating-transitive-closure/comment-page-1/#comment-509 Tue, 11 Oct 2016 05:23:36 +0000 http://mittheory.wordpress.com/?p=821#comment-509 That’s a very nice summary of the paper: thanks!

I once asked on CSTheory how to do this. One comment pointed me to the paper you mention. There’s also an answer pointing to some more recent work on how to compute not just cardinalities but also the sets R_v: http://cstheory.stackexchange.com/q/553/236
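The cardinality-estimation idea mentioned above can be sketched with min-label sampling in the spirit of Cohen's framework: assign each vertex a uniform random label, propagate minima along the edges, and use the expected minimum over a reachability set of size n, which is 1/(n+1), to estimate |R_v|. The DAG, topological order, and round count below are all illustrative choices, not the paper's exact algorithm.

```python
import random
from collections import defaultdict

random.seed(1)

# Toy DAG as an adjacency list; vertices 0..4, edges go "forward".
edges = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
topo = [0, 1, 2, 3, 4]  # a topological order of the DAG above

def min_labels():
    """One round: draw uniform labels, then make m[v] the minimum label
    over R_v by scanning vertices in reverse topological order."""
    x = {v: random.random() for v in edges}
    m = dict(x)
    for v in reversed(topo):
        for w in edges[v]:
            m[v] = min(m[v], m[w])
    return m

K = 2000  # number of independent rounds
sums = defaultdict(float)
for _ in range(K):
    m = min_labels()
    for v in edges:
        sums[v] += m[v]

# If |R_v| = n, the min of n uniform labels has expectation 1/(n+1),
# so K / sums[v] - 1 is a natural estimator of |R_v|.
estimates = {v: K / sums[v] - 1 for v in edges}
print(estimates)  # true sizes here: |R_0|=5, |R_1|=3, |R_2|=3, |R_3|=2, |R_4|=1
```

The point of the trick is that one round costs a single linear scan, yet it yields information about every R_v simultaneously.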

]]>
Comment on Estimating Transitive Closure via Sampling by mittheory https://mittheory.wordpress.com/2016/10/03/estimating-transitive-closure/comment-page-1/#comment-506 Tue, 04 Oct 2016 05:29:05 +0000 http://mittheory.wordpress.com/?p=821#comment-506 In reply to Urbanowicz.

Yes, the first problem can be solved for any f. But see problem 2. :)

]]>
Comment on Estimating Transitive Closure via Sampling by Urbanowicz https://mittheory.wordpress.com/2016/10/03/estimating-transitive-closure/comment-page-1/#comment-504 Tue, 04 Oct 2016 04:06:26 +0000 http://mittheory.wordpress.com/?p=821#comment-504 Why do we require $f$ to be special (in terms of randomness) in Problem 1? It seems like it will work for any reasonable $f$.

]]>
Comment on Purifying spoiled randomness with spoiled randomness by Henry Yuen https://mittheory.wordpress.com/2015/08/15/purifying-spoiled-randomness-with-spoiled-randomness/comment-page-1/#comment-503 Mon, 03 Oct 2016 19:18:12 +0000 http://mittheory.wordpress.com/?p=796#comment-503 In reply to npc.

Hi npc,

Sorry for the late response!

Yes, you can think of weak randomness as meaning that, on average, there is less than 1 bit of entropy per bit of data. One example: even though the first three bits of the string are completely uniform, the fourth bit is a complicated function of the first three. However, you have no way of knowing what that complicated function is, nor where this “dependent bit” may be.
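The "dependent bit" example can be made quantitative with a toy calculation. Below, the hidden function is an arbitrary hypothetical choice; the point is just that a 4-bit string whose last bit is determined by the others carries only 3 bits of min-entropy, i.e. 0.75 bits per bit.

```python
import math
from itertools import product

# Hypothetical hidden dependency: the fourth bit is a fixed (but to the
# user unknown) function of the first three.
def hidden(b1, b2, b3):
    return (b1 & b2) ^ b3

# Enumerate the source's support and its probabilities.
probs = {}
for b1, b2, b3 in product([0, 1], repeat=3):
    s = (b1, b2, b3, hidden(b1, b2, b3))
    probs[s] = probs.get(s, 0) + 1 / 8

# Only 8 of the 16 possible 4-bit strings occur, each with probability 1/8:
# 3 bits of min-entropy spread over 4 bits of data.
min_entropy = -math.log2(max(probs.values()))
print(min_entropy, min_entropy / 4)  # prints 3.0 0.75
```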

Re: using a cryptographic hash function. I think something like this is used in practice, but it isn’t a theoretically sound method for extracting randomness: if you’re given a fixed function (say, SHA1 or AES), then there always *exists* a weak random source X such that SHA1(X) or AES(X) is not uniformly random. Whether we as human beings can find such an X is another question, but in principle it always exists.

This work instead focuses on sound theoretical guarantees. You need a function that takes two independent sources of randomness and combines them so that, no matter what dependencies exist within each source, the function squeezes them out. Furthermore, no cryptographic assumptions are needed!
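A classical instance of such a two-source function is the inner product modulo 2 (Chor–Goldreich): given two independent n-bit sources, each with min-entropy somewhat above n/2, their inner product mod 2 is close to a uniform bit, with no cryptographic assumption. The sketch below pairs it with a toy weak source (the hidden dependency is a made-up example); it is an illustration, not the construction from the post.

```python
import random

def inner_product_extractor(x_bits, y_bits):
    """One nearly uniform bit from two independent weak samples."""
    return sum(a * b for a, b in zip(x_bits, y_bits)) % 2

def weak_source(n, rng):
    """Toy weak source: uniform bits except the last, which is a hidden
    function of the others (a hypothetical dependency)."""
    bits = [rng.randint(0, 1) for _ in range(n - 1)]
    bits.append((bits[0] ^ bits[1]) & bits[2])
    return bits

# Two independent sources, modeled here by independently seeded generators.
rng_x, rng_y = random.Random(1), random.Random(2)
ones = sum(inner_product_extractor(weak_source(8, rng_x),
                                   weak_source(8, rng_y))
           for _ in range(10000))
print(ones / 10000)  # should be close to 0.5
```

The independence of the two sources is doing the real work here; feeding the extractor two correlated samples voids the guarantee.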

Do you have to rewrite your RNG code? It depends on what guarantees you’re looking for. If you want a sound mathematical guarantee that your randomness extractor always outputs near-uniform randomness, then the RNG in the C standard library is not going to cut it, nor is a fixed function such as SHA1 or AES. In practice, however, a hash function such as SHA will probably work for most applications. It’s only when you’re writing code that protects, say, a power plant that you might want to look into something a little more secure.
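The "in practice" option mentioned above looks like the sketch below: condition weak input bytes with a cryptographic hash. As discussed, this carries no information-theoretic guarantee (a source tailored against SHA-256 exists in principle), and the weak input string here is a made-up placeholder.

```python
import hashlib

# Hypothetical weak entropy material (timestamps, mouse coordinates, ...).
weak_bytes = b"timestamps:1469664515,1469664516;mouse:103,442,105,440"

# Condition it through SHA-256 to get 32 bytes of seed material. Heuristic
# only: the output is as good as the hash's behavior on this source.
seed = hashlib.sha256(weak_bytes).digest()
print(seed.hex())
```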

]]>
Comment on Purifying spoiled randomness with spoiled randomness by npc https://mittheory.wordpress.com/2015/08/15/purifying-spoiled-randomness-with-spoiled-randomness/comment-page-1/#comment-483 Thu, 28 Jul 2016 00:08:35 +0000 http://mittheory.wordpress.com/?p=796#comment-483 This doesn’t seem as exciting to me as you make it sound, so I’d like to understand what this is a little better, if you’ll indulge me.

It seems to me that a weak source of randomness means you have less than 1 bit of entropy for each bit of data in a “random” string. Consequently, the trick is to “squeeze” out the predictable portion. My understanding is that in practice the most common way to do this is to use a cryptographically secure hash function. Why is this new method better? Is it because it doesn’t rely on the strength of the cryptography? Even if the hash could be reversed, it would merely reveal one (of several, probably) weakly random input strings. How does that help one predict the next set of random bits? Yes, there’s a correlation bit-by-bit with the next input string to be hashed, but the whole idea of hashing it is to eliminate that correlation, right? What am I missing? Should I be preparing to rewrite my RNG code? Feel free to point me to some place that already explains this. I couldn’t find such a thing on my own.

Thanks in advance.

]]>
Comment on Purifying spoiled randomness with spoiled randomness by Martin Stein https://mittheory.wordpress.com/2015/08/15/purifying-spoiled-randomness-with-spoiled-randomness/comment-page-1/#comment-482 Wed, 27 Jul 2016 16:39:02 +0000 http://mittheory.wordpress.com/?p=796#comment-482 Could you point us to some source code implementing this?

]]>
Comment on Faster, I say! The race for the fastest SDD linear system solver – STOC 2014 Recaps (Part 9) by Hao https://mittheory.wordpress.com/2014/07/07/faster-i-say-the-race-for-the-fastest-sdd-linear-system-solver-stoc-2014-recaps-part-9/comment-page-1/#comment-345 Mon, 02 Nov 2015 19:43:12 +0000 http://mittheory.wordpress.com/?p=525#comment-345 Reblogged this on Exponentials and commented:
Looking forward to seeing the results on the simulation solver, and the connections between SDD solvers and the matrix exponential.

]]>
Comment on Spaghetti code crypto – STOC 2014 Recaps (Part 6) by Justin Holmgren https://mittheory.wordpress.com/2014/06/27/spaghetti-code-crypto-stoc-2014-recaps-part-6/comment-page-1/#comment-111 Wed, 02 Jul 2014 18:38:32 +0000 http://mittheory.wordpress.com/?p=484#comment-111 You’re right that the security property they prove is like the one you mention. However, this, together with the ordinary (indistinguishability against chosen-plaintext attack) security of the encryption scheme, implies the “definition” I gave. This is described in Section 2.2 of [2], and is another hybrid argument. The definition I gave was originally proposed as an open question by Canetti, Dwork, Naor, and Ostrovsky at CRYPTO 1997 in a paper entitled “Deniable Encryption”, and in my opinion it is a more natural and better-motivated definition.

]]>
Comment on FOCS 2013 Recaps, Part 2 by Efficient Distribution Estimation 4: Greek Afternoon – STOC 2014 Recaps (Part 7) – Not so Great Ideas in Theoretical Computer Science https://mittheory.wordpress.com/2013/11/15/focs-2013-recaps-part-2/comment-page-1/#comment-108 Mon, 30 Jun 2014 17:51:13 +0000 http://mittheory.wordpress.com/?p=65#comment-108 […] were both named after our good friend Siméon. **For another perspective on the SIIRV result, see this post by my partner in crime, […]

]]>