Staking Horse Racing
And Other Thoughts
(Ed: This is one
of the many e-mails we receive every week. We thought this one was most interesting.)
Hi oh intrepid method maker. I’ve
looked at some of your work and I’m convinced it’s on
the level. Your (disk) has been a lot of fun.
I would like to make
a point about staking, and comment on computer use and on the strategy
of systematic betting.
As to qualifications: I’m a hard scientist but not a mathematician.
I’ve been involved in betting since the late seventies, and
was doing well until type 2 diabetes wrecked things for a long while.
I can’t easily make a rigorous mathematical statement about
staking, but I will give a very silly example involving two bettors,
Fearless Fosdike and Minnie Mouse.
Fearless has a staking system where he doubles his bet after each loss,
unless the doubled bet would go over half his bank (which starts at
100 units), in which case he returns to his base bet; Minnie backs the
same horses with the same bank, but simply bets 1% of it, one unit.
Their shared method gets three wins out of ten.
Since we have godlike abilities, we fix things so that the protagonists
have three successive winning bets, each paying five units for one, so
after that they both have 112 units in the bank.
Now they have 7 successive losers. Minnie loses 7 units and is down
to 105. Fearless loses 1, 2, 4, 8, 16 and 32 doubling up, then one
more unit back at base bet, 64 units in all, and has 48 units left.
Now we have another cycle where the same thing happens and Minnie
has gone to 110 units. Fearless wins his 12 back, but the losing run
takes him through 1, 2, 4, 8 and 16; doubling to 32 would now take
more than half his bank, so he sits out the last two losses at base
stakes and ends the cycle with 27 units....
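The two staking plans can be sketched in a few lines of Python (the author would no doubt prefer Lisp). It assumes each win returns five units for one, each cycle runs three wins then seven losses, and the half-bank rule is read against the current bank, with Fearless staying at base stakes after a forced return until his next win; those readings are assumptions, and the exact figures shift with them.

```python
# Sketch of the Fearless-vs-Minnie scenario: Minnie bets a flat unit,
# Fearless doubles after each loss unless the doubled stake would
# exceed half his current bank, in which case he drops back to base.

def run_cycles(cycles=2):
    minnie, fearless = 100.0, 100.0
    stake, forced = 1, False  # Fearless's current stake; forced back to base?
    for _ in range(cycles):
        for win in [True] * 3 + [False] * 7:   # 3 wins out of 10, as above
            minnie += 4 if win else -1         # flat one-unit stake
            fearless += 4 * stake if win else -stake
            if win:
                stake, forced = 1, False       # back to base after a win
            elif not forced and stake * 2 <= fearless / 2:
                stake *= 2                     # double up after a loss
            else:
                stake, forced = 1, True        # doubling would pass half the bank
    return minnie, fearless
```

After one cycle this gives Minnie 105 units and Fearless 48, matching the figures above; further cycles only widen the gap.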
OK, it’s a silly scenario, but it helps show that random outcomes
can play against staking.
One is trading a temporary, very high return against a much increased
probability of losing most or all of the bank.
There are various very ingenious variations, such as having a number
of banks in parallel, but no method can beat the chances of a very
long run of outs, and nothing can easily outperform level stakes.
Or can it?
First off, I would like to talk about random outcomes. What is random?
Most people think of it as “anything can happen”.
What random probably really means is that the reasons for an outcome
are so many, with so many interactions, that it’s not possible
to make a law about the outcome.
That’s why we have statistics, and I’m very suspicious
of them: their proponents mostly display beautifully bell-shaped
distribution curves of results, but they get rather quiet about skewed
distribution curves, which mean that several things are probably pushing
outcomes in a certain direction.
Since they don’t know the reasons for the skewing, they don’t
know that its direction might not quickly reverse!
OK, how do we beat the problem? All I’ve come up with is the
moving average idea. A moving average over, say, four outcomes
would add up the values of the first four outcomes and make an average.
When a fifth outcome comes along, the first outcome is dropped off
and a new average is made from the newer set of four.
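That dropping-and-averaging can be sketched directly; the window length of four is just the example figure used above.

```python
# Simple moving average: average each window of `window` outcomes,
# dropping the oldest outcome as each new one arrives.

def moving_average(outcomes, window=4):
    averages = []
    for i in range(window - 1, len(outcomes)):
        chunk = outcomes[i - window + 1 : i + 1]  # the current window
        averages.append(sum(chunk) / window)
    return averages
```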
Now a moving average doesn’t have to be linear. Let’s
suppose we consider the oldest result to be less valuable; we might
give it half the value of the newest outcome.
This will give us a moving average skewed toward the present. A longer
moving average, over say 10 races, will be a lot smoother than one
over three or four.
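The weighted version might look like this. The letter only fixes the endpoints (oldest counts half the newest), so the linear ramp between them is an assumption for illustration.

```python
# Recency-weighted moving average: weights rise linearly from 0.5 on
# the oldest outcome in the window to 1.0 on the newest.

def weighted_moving_average(outcomes, window=4):
    weights = [0.5 + 0.5 * i / (window - 1) for i in range(window)]
    total = sum(weights)
    averages = []
    for i in range(window - 1, len(outcomes)):
        chunk = outcomes[i - window + 1 : i + 1]
        averages.append(sum(w * x for w, x in zip(weights, chunk)) / total)
    return averages
```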
A moving average could have one value for the average for a whole
day, which would smooth things further.
Now let’s suppose we have a slightly more sophisticated moving
average: we have a number of events to consider every day, and after
observing how the average moves around with the day of the week, we
give the event outcomes a different value according to that day...
We might have seven different moving averages for the various days.
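One way to keep those seven averages is a small structure like the following; the window length and the shape of the recorded outcome (say, profit on the event) are assumptions, not anything the letter specifies.

```python
from collections import defaultdict, deque

# Seven parallel moving averages, one per weekday, each over the
# most recent `window` outcomes recorded for that day.

class DayOfWeekAverages:
    def __init__(self, window=10):
        self.history = defaultdict(lambda: deque(maxlen=window))

    def record(self, weekday, outcome):
        """weekday: 0=Monday .. 6=Sunday; outcome: e.g. profit on the event."""
        self.history[weekday].append(outcome)

    def average(self, weekday):
        h = self.history[weekday]
        return sum(h) / len(h) if h else None  # None until data arrives
```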
Now suppose that the value, weighted or otherwise, of the moving average
is going down. It makes sense, I think, to adjust the value of our
bet downwards, as effectively we are expecting an overall lesser return...
We could apply the moving average to our return rather than to the
events involved, or to a mixture of factors....
If the moving average goes up then we can increase the amount bet....
This assumes that we have a large bank in terms of units, so that on
a bank of 100 units we might, in an extreme case, bet two units
instead of one, or half a unit if things are not going too well.
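A minimal sketch of that adjustment, assuming a base stake of one unit on a 100-unit bank: the letter gives only the extremes (two units at best, half a unit at worst), so the doubling/halving rule in between is an assumption.

```python
# Scale the stake with the direction of the moving average:
# up when the average is rising, down when it is falling,
# capped between half a unit and two units.

def adjusted_stake(previous_avg, current_avg, base_stake=1.0):
    if previous_avg is None or current_avg is None:
        return base_stake                      # no trend yet: bet base
    if current_avg > previous_avg:
        return min(2.0, base_stake * 2)        # things going well: bet up
    if current_avg < previous_avg:
        return max(0.5, base_stake / 2)        # going badly: bet down
    return base_stake
```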
I think that this is moving the random factor in our favour: we still
don’t know what is causing the result fluctuations, but we
have a good idea of what the results are likely to be at the moment
from performance in the recent past, and we can better use our bank
to maximise results when things are going well rather than chase losses
when they aren’t!!
I might call this reverse staking. Hope this makes sense.
I think this approach is more useful where we have a lot of bets for
each day to give a good average and where the bets can all be put
on at one time rather than chasing probability against the clock!
We also need to know the overall performance of our method over a
large number of bets, which of course is another average....
I also have the sneaking suspicion that the methods used should be
ones with very few rules, not dependent on the market price but on
the three immortal factors: form, weight and class.
You have done some good work on the class angle, and again, methods
that minimise that most difficult of factors, as with 2yo, 3yo and
maiden races, are more likely to fit a moving average … imho.
You might work this out on a spreadsheet.
I will stick to Lisp, which is the absolute pinnacle of computer languages... The
first Lisp program which could decide which problem on a particular
day was the most interesting, and which could learn from its mistakes,
with self-awareness, appeared in the 1980s: the Eurisko program. (Ed: More
on Eurisko can be found here.)
It beat the pants off humans in a naval strategy game, where it drew
on its experience in other fields.
I’ve used Lisp to write a program on your use of specific class
values. It’s easy to extend such a program to encompass other
rules, as it’s function oriented.
I remember that I wrote 800-odd lines of program and comment in two
days, and that kind of productivity probably can’t be matched
by any other language, particularly when you consider that a program
in Lisp takes a third of the lines of one in BASIC....
A Lisp program can easily be an expert system, which is roughly a
photograph of one’s thinking about a problem.
Years ago there was one great expert at putting out oil-well fires,
Red Adair. There were a lot of fires and only one Red, so an expert
system was set up with his expertise so that anyone with a problem
could access the system and know the best thing to do next.
On a very superficial reading, your ideas on class might make an excellent
expert system.
Now to strategy.
I’m going to make the assumption that one is betting for profit,
rather than for fun, so that we can work on logical premises rather
than the emotional satisfaction of winning.
If we consider it as a business, there look to be three main factors:
profit and risk, which are related, and the time that a human takes
to make decisions... that time is limited and subject to error, and
error compounds with fatigue.
If we leave out staking and consider level stakes, not compounding,
then our risk is related to the strike rate.
For a very high 40% strike rate we might need a 20-bet bank, and for
a 20% strike rate we might need a 100-bet bank.
Now within those parameters, it looks as though the number of bets
is the deciding factor.
Let’s suppose that our 40% strike rate method has a 50% profit
on turnover and that it has one bet a week, maybe 50 bets per year.
It returns 25 units per year on its level stake which is a bit over
a hundred percent on its bank, which seems about right...
Our other method has only a profit of 10% on turnover but it produces
a thousand bets a year.
So over that year it produces 100 units of profit.
If the units bet are the same for both methods then it’s clear that
the low-profit, low-strike-rate method is making only 100% on its bank …
but it’s providing four times the profit of the more reliable method
at roughly equivalent risk....
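The comparison above is plain arithmetic at level stakes of one unit, using the letter’s own figures: 40% strike, 50% profit on turnover and 50 bets a year on a 20-bet bank, against 20% strike, 10% profit on turnover and 1000 bets a year on a 100-bet bank.

```python
# Yearly profit at level stakes: bets x stake x profit-on-turnover.
# POT is taken in percent to keep the arithmetic exact.

def yearly_profit(bets_per_year, pot_percent, stake=1.0):
    return bets_per_year * stake * pot_percent / 100

high_strike = yearly_profit(50, 50)    # 25 units on a 20-unit bank: 125%
low_strike = yearly_profit(1000, 10)   # 100 units on a 100-unit bank: 100%
```

The low-strike method earns a smaller percentage on its bank but four times the absolute profit, which is the point being made.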
OK the other point is the human effort involved, and that’s
related to the number of rules involved....
The basic thing to do is to eliminate as many as possible
of the horses involved in one simple stage, and concentrate effort
on the rest.
I’ve been looking at the “Magic Maidens” and “Outrater
2011” methods, though so far I’ve only had enough confidence
to put token bets on the Maidens.
In both cases a large number of bets is generated, and between the two
methods almost everything is covered. In both cases it’s possible to
eliminate most horses very rapidly, in the one case on form, in the
other on win rate and past form sum.
This means that it’s possible to cover a large number of races
with minimal effort.
This is not a case of laziness: a human makes mistakes about 2% of the
time, and while a computer has an error rate of around one in a million
million, it takes quite a lot of time to key data into the computer,
usually more time than mental arithmetic.
And you still have the two percent error problem!!
If the number of rules is small then we have less chance of human
error.
In the case of the very high strike rate methods, an unfortunate thing
is that one tends when constructing such methods to make a lot of
rules to cut out less profitable investments.
This has two nasty outcomes: firstly the secondary rules are difficult
to assess properly and need a large database, and secondly they take
a lot of effort and time to remember and apply.
Often such methods tend to fail long term because the rules that were
tailored to past results don’t apply quite so well to future
ones.
The conclusion is that methods with a large number of selections, a
modest profit, a modest strike rate and very simple rules that
eliminate most contenders quickly are superior to methods with a small
number of selections, complex rules, a high strike rate and a high
profit on turnover.
Some of this is counterintuitive, but I hope it makes sense.