that's a really neat solution; my own was to do a binary search (split the bottles into two equally sized groups; pool and test one group; if the test is positive, that's the group with the poison, if negative, it's the other group; repeat) which will also ID the poison bottle out of 100 in 7 rounds, provided you know there's one and only one.
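For what it's worth, the binary chop above fits in a few lines of Python. This is only a sketch: `test_pool` is a stand-in for the real pooled assay, and it assumes exactly one poisoned bottle.

```python
def find_poison_binary(bottles, test_pool):
    """Locate the single poisoned bottle by repeated halving.

    test_pool(group) should return True if a pooled sample drawn
    from `group` tests positive (a stand-in for the real test).
    """
    candidates = list(bottles)
    tests_used = 0
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        tests_used += 1
        if test_pool(half):
            candidates = half  # poison is in the tested half
        else:
            candidates = candidates[len(candidates) // 2:]  # it's in the other half
    return candidates[0], tests_used

bottle, tests = find_poison_binary(range(1, 101), lambda g: 73 in g)
print(bottle)  # prints 73
```

With 100 bottles this finishes in at most ⌈log₂ 100⌉ = 7 tests (occasionally fewer, when the positive half happens to be the smaller one at an odd split).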

Adam's search also doesn't work if there's more than one poisoned bottle. Consider the case where all 7 tests return positive: that result is consistent with as few as seven separately poisoned bottles, one lighting up each test, so the readout no longer pins down which bottles are bad.

Like you Alexander I was thinking binary chop as, when I read the tweet before following it to the article and got the overall point (just find the poison), I was trying to work out how many unopened bottles I could maximise while also identifying where the poison was. 50% chance of 50 bottles, 75% chance of at least 25 bottles, etc. And of course, once the selection with poison is found you can still chop until you find the poison bottle, and then invite friends around to polish off the open bottles before they start to go off :-). I guess you pick your method to suit your desired end point. What I can't think through is whether simply serially testing sets of 15 bottles would be statistically more likely to preserve more unopened bottles. Adam?

I'm afraid pooled testing means we'll have to draw wine from every bottle. It's true that the bigger the pool, the more bottles you get to declare clean in the case where the test is negative, but on the other hand, if the test is positive the pool of bottles you can declare clean is smaller, and there's no reason to think either pool is more or less likely to contain the poison. So splitting them equally is optimal.
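That said, equal splits are optimal for *identifying* the poison in the fewest tests; the question above about serially testing sets of 15 is about a different objective, maximising unopened bottles. A quick back-of-the-envelope simulation (assuming one uniformly placed poison, a perfect test, and that pooling a group opens every bottle in it) suggests the 15-at-a-time scheme actually comes out ahead on that measure:

```python
import random

def binary_unopened(n, poison):
    """Bottles left unopened and known-clean after a full binary search."""
    candidates, opened = list(range(n)), set()
    while len(candidates) > 1:
        half = candidates[:len(candidates) // 2]
        opened.update(half)  # pooling opens every bottle in the tested half
        candidates = half if poison in half else candidates[len(candidates) // 2:]
    unopened = n - len(opened)
    return unopened - (0 if poison in opened else 1)  # poison itself may be unopened

def serial_unopened(n, poison, batch=15):
    """Test batches in order, stopping at the first positive."""
    opened = 0
    for start in range(0, n, batch):
        group = range(start, min(start + batch, n))
        opened += len(group)
        if poison in group:
            return n - opened  # one poison only, so everything after is safe
    return 0

random.seed(0)
trials = [random.randrange(100) for _ in range(20000)]
print(sum(binary_unopened(100, p) for p in trials) / len(trials))  # ~33
print(sum(serial_unopened(100, p) for p in trials) / len(trials))  # ~43
```

So roughly 43 unopened bottles in expectation for the serial batches versus roughly 33 for binary chop, if I've modelled it right; the binary search's advantage is the guaranteed exact identification, not the unopened count.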

The advantage of Adam's solution is that (I think) it will work if there is more than one bad bottle. It's also possible to parallelize the tests in his solution, which would help a lot if the test process is slow. In the binary search, the tests have to happen sequentially so the time taken to run it increases linearly with the number of tests, which isn't bad (this is after all the point of binary search); but in the multi-way encoder the tests can all happen at the same time.
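As I understand the encoding scheme, bottle i contributes a drop to pool j exactly when bit j of i is set, all 7 tests run at once, and the pattern of positives reads out the bad bottle's number directly. A rough sketch (assuming one poisoned bottle, since multiple poisons make the readout ambiguous):

```python
def build_pools(n_bottles, n_tests=7):
    """Pool j gets a sample from every bottle whose index has bit j set."""
    pools = [[] for _ in range(n_tests)]
    for bottle in range(n_bottles):
        for bit in range(n_tests):
            if bottle & (1 << bit):
                pools[bit].append(bottle)
    return pools

def decode(results):
    """results[j] is True if test j was positive; returns the bottle index."""
    return sum(1 << j for j, positive in enumerate(results) if positive)

pools = build_pools(100)
poison = 42
results = [poison in pool for pool in pools]  # all 7 tests run in parallel
print(decode(results))  # prints 42
```

One wrinkle: bottle 0 never appears in any pool, so an all-negative result is the readout that points at it.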

This is really sweet from a performance point of view; as there are more tests to set up, the set-up phase will take longer with more tests, but as the test phase will only ever take as much time as one test takes to run, the total waiting time increases more slowly than the number of tests. At some point we'll hit a constraint in terms of how many tests we can fit on a bench or something but sublinear scaling is pretty damn good.

NB I am not an expert in this domain whatsoever, just a somewhat bored guy at his desk who likes a good thought experiment.

In my head, the ideal goal would be to minimize the total amount of wasted wine and maximize the monetary value of the wine we are able to drink/not open.

- We have 100 bottles of wine, one of which is poisoned.

- We have 7 tests, which can detect positive/negative as a binary from a pooled sample.

- We have no incentive to be efficient about the number of tests we use.

So, why not:

1. Create a list of all the bottles in order of least to most expensive.

2. Divide the wine into 6 groups of 15 and 1 group of 10 (so it fits evenly into 7 tests).

3. Create a pooled sample from the cheapest 15 wines and test them.

4. If no poison is detected, mark the wines as safe and move on to the next cheapest 15 with a new test.

5. Continue this process until you get a positive result, throw away that batch, and call it a day, or...

6. If you have any remaining tests, divide the poisoned batch appropriately, create new pooled samples, and save yourself even more wine.
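Steps 1–5 above can be sketched as follows (leaving out the optional step 6 refinement). `test_pool` is a stand-in for the real assay, and the bottles are assumed to arrive already sorted cheapest-first:

```python
def cheapest_first(bottles, test_pool):
    """bottles: sorted cheapest-first. Returns (cleared, suspect) lists."""
    # 6 groups of 15 plus a final group of 10, cheapest batches first
    groups = [bottles[i * 15:(i + 1) * 15] for i in range(6)] + [bottles[90:]]
    cleared = []
    for group in groups:
        if test_pool(group):  # positive: discard this batch and stop
            return cleared, group
        cleared.extend(group)  # negative: the whole batch is safe
    return cleared, []
```

If the poison turns out to be in, say, the third-cheapest batch, this clears the 30 cheapest bottles, puts 15 under suspicion, and (since we know there's exactly one poison) leaves the 55 most expensive bottles safe and unopened.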

Did I like Adam's article? 1111!