## Sunday, September 2, 2012

### Some Annotations to the Previous Post

1. Joe, at this point I'd advise students to draw a decision tree. Some would draw one with six nodes in the first layer, representing the machines $M_1, M_2, ..., M_6$, and then 36 nodes in the second layer, representing each of the possible outcomes $1, 2, ..., 6$ for each of the machines. On each branch, they would write the probability of getting from one node to the next, and at the end of the diagram they would write down the 36 probabilities of the outcomes, obtained by multiplying the probabilities along the branches leading to each outcome. Others, however, would opt for a much simpler design, summarizing the machines $M_2, M_3, ..., M_6$ as $\overline{M_1}$ and the non-desirable outcomes $\{1, 2, ..., 5\}$ as $\overline{6}$, which leads to the following graph:
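Whichever tree one draws, the 36 leaf probabilities can be checked by enumeration. A minimal sketch in Python; the per-machine outcome distributions used here (machine $M_i$ delivers item $i$ with probability 1/2 and each of the other five items with probability 1/10, machines chosen uniformly) are an illustrative assumption, not taken from the post:

```python
from fractions import Fraction

# Assumed, for illustration only: each machine M_i is chosen with
# probability 1/6; machine M_i delivers item i with probability 1/2
# and each of the other five items with probability 1/10.
def p_machine(i):
    return Fraction(1, 6)

def p_item_given_machine(item, machine):
    return Fraction(1, 2) if item == machine else Fraction(1, 10)

# Full tree: 6 machines x 6 items = 36 leaves; each leaf probability
# is the product of the branch probabilities leading to it.
leaves = {(m, x): p_machine(m) * p_item_given_machine(x, m)
          for m in range(1, 7) for x in range(1, 7)}
assert sum(leaves.values()) == 1  # the 36 leaf probabilities sum to 1

# Probability of the desirable outcome 6, summed over all machines:
p6 = sum(p for (m, x), p in leaves.items() if x == 6)
print(p6)  # → 1/6
```

Under these assumed branch probabilities, averaging over the machines gives back exactly the baseline probability 1/6 for the target, the same answer the collapsed two-branch tree yields with far less bookkeeping.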

## Friday, August 31, 2012

William A. Dembski wrote "a long article […] on conservation of information" at Evolution News and Views (ENV), an outlet of the Discovery Institute. Others have commented on the more sophisticated problems, either at Uncommon Descent or at The Skeptical Zone. Here I just want to correct some simple math which occurs in a toy example used in the article:
> To see how this works, let's consider a toy problem. Imagine that your search space consists of only six items, labeled 1 through 6. Let's say your target is item 6 and that you're going to search this space by rolling a fair die once. If it lands on 6, your search is successful; otherwise, it's unsuccessful. So your probability of success is 1/6. Now let's say you want to increase the probability of success to 1/2. You therefore find a machine that flips a fair coin and delivers item 6 to you if it lands heads and delivers some other item in the search space if it lands tails. What a great machine, you think. It significantly boosts the probability of obtaining item 6 (from 1/6 to 1/2).
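The claimed boost from 1/6 to 1/2 is easy to check by simulation. A short sketch; since the quote leaves open which "other item" is delivered on tails, picking one of items 1–5 uniformly is an assumption made here for illustration:

```python
import random

random.seed(0)

def coin_machine():
    """Dembski's toy machine: flip a fair coin; heads delivers the
    target item 6, tails delivers some other item.  Which other item
    is delivered on tails is left open in the quote, so we pick one
    of 1..5 uniformly -- an assumption for illustration."""
    if random.random() < 0.5:       # heads
        return 6
    return random.randint(1, 5)     # tails: some other item

trials = 100_000
hits = sum(coin_machine() == 6 for _ in range(trials))
print(hits / trials)  # close to 1/2, versus 1/6 for a single die roll
```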

## Friday, May 25, 2012

### Is the average active information a suitable measure of search performance?

(Though the following doesn't include any maths, the reader is expected to be familiar with William Dembski and Robert Marks's paper The Search for a Search, and should have glanced at On a Remark by Robert J. Marks and William A. Dembski.)

One of my problems with William Dembski and Robert Marks's modeling of searches is that I don't see how every assisted search can be described as a probability measure on the space of the feasible searches. Nevertheless, Winston Ewert insisted that

> All assisted search, irrespective of the manner in which they are assisted, can be modeled as a probability distribution biased towards selecting elements in the target.
Marks and Dembski claim that the average active information is a measure of search performance - at least they write in their remark:
> If no information about a search exists, so that the underlying measure is uniform, then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search.
Their erratum does indeed seem to prove the remark in a slightly modified form:
> Given a uniform distribution over targets of cardinality k, and baseline uniform distribution, the average active information will be non-positive
(The proof of this statement in the erratum is correct - at least as far as I can see...)
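The non-positivity in the statement above can be sanity-checked numerically for the singleton-target case ($k = 1$): with a uniform baseline on $n$ elements, the active information of a search distribution $\varphi$ for target $x$ is $\log_2(n\,\varphi(x))$, and its average over a uniformly chosen target is non-positive by Jensen's inequality. The distribution $\varphi$ below is random, purely for illustration:

```python
import math
import random

random.seed(1)

# A random search distribution phi on a space of n = 6 elements,
# used only as an illustration.
n = 6
weights = [random.random() for _ in range(n)]
phi = [w / sum(weights) for w in weights]

# Average active information over uniformly chosen singleton targets:
# (1/n) * sum of log2(n * phi(x)).  By Jensen's inequality this is
# at most log2((1/n) * sum of n * phi(x)) = log2(1) = 0.
avg_active_info = sum(math.log2(n * p) for p in phi) / n
print(avg_active_info)  # <= 0, with equality only for the uniform phi
```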
So, let's play a game: from a deck of cards, one card is chosen at random. If you want to play, you have to pay \$1, and you get \$10 if you guess the card correctly. But you are not alone: there are three other people, A, B and (surprisingly) C, who will announce their guesses first. They use the following search strategies:
- A: he will announce a card according to the uniform distribution
- B: he will always announce ♦2
- C: he has access to a very powerful oracle, which gives him the right card. Unfortunately - due to an old superstition - he is unable to say ♦2, so every time this card appears he will announce another one at random
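The expected winnings of the three players can be worked out exactly. A minimal sketch, assuming a standard 52-card deck:

```python
from fractions import Fraction

# Exact win probabilities in the card game: entry fee $1, payoff $10
# for a correct guess, standard 52-card deck (an assumption here).
DECK = 52
FEE, PAYOFF = 1, 10

p_win = {
    "A": Fraction(1, DECK),        # uniform random guess
    "B": Fraction(1, DECK),        # always announces the 2 of diamonds
    # C knows the card unless it is the 2 of diamonds (1 deal in 52),
    # in which case he announces some other card and loses:
    "C": Fraction(DECK - 1, DECK),
}

for player, p in p_win.items():
    expected_value = p * PAYOFF - FEE
    print(player, p, float(expected_value))
```

A and B both lose about \$0.81 per round on average, while C wins about \$8.81 per round. Yet C's distribution over announcements assigns probability zero to ♦2, so his active information for that particular target is $-\infty$, dragging any average over targets down - which is why averaging active information over targets seems a poor guide to how well a search actually performs.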
##### Conclusion
Dembski and Marks have introduced their concept of Active Information using a great number of examples, but their model of an assisted search works for only one or two of them. The theoretical results presented in The Search for a Search do not seem to work at all. That leaves Active Information as just another performance measure, and it is hard to see how it improves our understanding of the concept of information in connection with search algorithms.