## Arvind Narayanan's journal


How to sample in constant time [Jul. 30th, 2009|01:24 am]
Arvind Narayanan
 [ Tags | algorithms, puzzle ]

I can't remember the last time I felt like a bigger idiot than I do right now.

I spent like forever writing up my ugly solution to the constant time sampling puzzle. With exercises for the reader, no less. Dear god, the pomposity. Just as I was about to post it, I found this extremely elegant solution that can be explained in 30 seconds (slides 41-48).

Of course, I never thought I'd discovered something new; that would have been downright delusional. On the contrary, I thought this must be folklore knowledge, and therefore perhaps never actually published. I did some Googling before I started writing and even asked a couple of statisticians, but got no answer. I guess I just didn't look hard enough.

On another note, this is also a sad reminder of how much my puzzle solving skills have degenerated since my IMO days.

At any rate, I encourage you to have a hearty laugh at my expense.

Edit. Fixed broken link.

Comments:
From: 2009-07-30 10:17 am (UTC)
My puzzle-solving skills have gone all to shit; I really feel like I should practice more, but it's hard to find the time.

The link to the solution is a 404 BTW.
From: 2009-07-30 07:06 pm (UTC)
fixed, thanks.
From: 2009-07-30 03:48 pm (UTC)

### Strange, that URL shows up on search, but doesn't work

From: 2009-07-30 07:06 pm (UTC)

### Re: Strange, that URL shows up on search, but doesn't work

thanks, updated the link.
From: 2009-07-30 05:32 pm (UTC)
gooodness. You were IMO too? This is becoming a disturbing trend in the friends I've made in grad school.
From: 2009-07-30 10:13 pm (UTC)
heh. hardly surprising, considering that perhaps the majority of IMO kids who are either from the U.S. or from countries that have significant U.S. emigration end up in top-10 U.S. grad programs in math/CS.
From: (Anonymous) 2009-08-04 12:18 am (UTC)

### Very nice!

Thanks for posting both your solution and the other one. I learned something from your solution; often in problems like these it's interesting what sort of "reductions" one makes.

The linked solution is, of course, very very nice. A small question, not that it matters: is it really O(n) pre-processing -- or is it O(n log n)?
From: 2009-08-04 12:37 am (UTC)

### Re: Very nice!

thanks.

you're right -- as presented, the Walker solution has O(N log N) preprocessing. i can think of a variant that achieves O(N) though: at each step, fill *any* column that hasn't been filled to 1/N using *any* column that is strictly bigger than 1/N. no column will be the target of a fill operation more than once (although it may be the source of many fill operations). this ensures that the number of fill operations is <= N.

the only problem with this is that this will increase the average number of partitions per column, whereas the Walker method minimizes that number. however, in both cases the number is O(1) (and in fact <= 2), so it is of no consequence to the asymptotic analysis.

thanks for bringing that up.
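The linear-time fill described above can be put into code. The following is a minimal Python sketch of that idea (essentially the linear-time setup attributed to Kronmal and Peterson elsewhere in this thread; function names and list-based bookkeeping are my own, not from the slides):

```python
import random

def build_alias(probs):
    """Linear-time setup for the Walker/alias method: every column ends up
    split into at most two events, its own and one 'alias' donor."""
    n = len(probs)
    scaled = [p * n for p in probs]          # rescale so the average is 1
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    threshold = [0.0] * n
    alias = [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        threshold[s] = scaled[s]             # column s is filled exactly once
        alias[s] = l                         # the excess comes from column l
        scaled[l] -= 1.0 - scaled[s]         # l may donate again later
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                  # leftovers are 1 up to rounding
        threshold[i] = 1.0
    return threshold, alias

def sample(threshold, alias):
    """Constant worst-case time: pick a column, then one of its two parts."""
    i = random.randrange(len(threshold))
    return i if random.random() < threshold[i] else alias[i]
```

Each column is the target of at most one fill, so the loop runs at most N times; a column may be the source of many fills, which is why `scaled[l]` is pushed back onto one of the two worklists.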
From: 2009-08-04 11:11 pm (UTC)
The Walker method (insert a pun about random walker) is possibly better known as the alias method. It's telling that the way it was initially described (and presented in the link you gave), its setup running time was n log n (it requires sorting the vector of probabilities). It was further improved by Kronmal and Peterson to linear running time.

The algorithm is still very simple and elegant, but we can do a bit better under the original constraints of your puzzle. You initially asked for an algorithm with linear pre-processing and constant expected sampling time. The alias method gives constant worst case running time of the sampling procedure. The following is an even simpler method with trivial pre-processing and constant expected sampling time, where the constant is no more than 3.

Take the probability vector as it was given, x1,...,xn, and line up the events on the line: y0=0, y1=x1, y2=x1+x2, ..., yn=x1+...+xn=1. For a variable uniformly distributed between 0 and 1, the event [yi-1,yi] has probability xi.

Divide the line into bins of length 1/n. The claim is that each bin, in expectation, intersects at most three intervals [yi-1,yi]. Indeed, a bin that intersects more than two intervals fully contains all of them except possibly the leftmost and the rightmost, and across all bins there are at most n fully contained intervals; adding the at most two partial intervals per bin gives at most 3n intersections in total, i.e. at most three per bin on average. This suggests the following trivial procedure: 1) Sample a real from [0,1]. 2) Identify the bin it falls into. 3) Using linear search within this bin, find the containing interval.

The algorithm can be implemented with two arrays of length n, of integers and reals, where the integer array contains the index j such that yj <= i/n < yj+1 (the first event which intersects with the bin) and the reals array contains yj+1-i/n for such j. Both arrays can be initialized in one pass through the array. All in all, I think the above is simpler than even the binary search.
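As a concrete illustration, here is a Python sketch of the scheme above. It is a slight simplification of the two-array layout described: I keep the full prefix sums y rather than the per-bin offsets yj+1-i/n, and the optional `u` argument exists only to make the lookup deterministic; names are illustrative:

```python
import random

def build_bins(probs):
    """One pass builds the prefix sums y[0..n] and, for each bin i, the
    index of the first interval [y[j], y[j+1]] that intersects [i/n, (i+1)/n)."""
    n = len(probs)
    y = [0.0]
    for p in probs:
        y.append(y[-1] + p)
    first = [0] * n
    j = 0
    for i in range(n):
        while y[j + 1] <= i / n:   # skip intervals ending before this bin starts
            j += 1
        first[i] = j
    return y, first

def sample(y, first, u=None):
    """Expected-constant-time sampling: jump to the bin, then scan linearly."""
    n = len(first)
    if u is None:
        u = random.random()
    j = first[min(int(u * n), n - 1)]   # start at the bin's first interval
    while y[j + 1] <= u:                # linear search inside the bin
        j += 1
    return j
```

The preprocessing is a single pass (the inner `while` advances j at most n times in total), and by the averaging argument above the linear search inside a bin takes constant expected time.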

From: 2009-08-05 05:43 am (UTC)
ilya,

see my response to the comment above -- as far as i can see, you can make a tiny change to the walker/alias method which achieves both O(N) preprocessing and constant worst-case sampling time.
From: 2009-08-05 06:17 pm (UTC)
Yep, this is Kronmal and Peterson (rediscovered, using the same notation here).
From: 2009-08-05 05:56 am (UTC)
on another note, usually an algorithm with a constant expected running time is derived by iterating an RP algorithm, which means it has a negligible probability of requiring time (say) omega(log^2 N), so we can call it constant time for all practical purposes. for that reason, i wouldn't do linear search if i were using your method, because that gives you a 1/N probability of requiring omega(N) sampling time.
From: 2009-08-05 06:24 pm (UTC)
I was going for beauty. My method can be implemented in literally five lines of code.
From: (Anonymous) 2009-08-14 02:22 pm (UTC)
This seems to happen all the time...

See
http://jonkatz.wordpress.com/2009/07/29/the-disadvantages-of-blogging-about-research/

-Vinod