Algorithmic trading is a method of executing a large order (too large to fill all at once) using automated pre-programmed trading instructions accounting for variables such as time, price, and volume[1] to send small slices of the order (child orders) out to the market over time. They were developed so that traders do not need to constantly watch a stock and repeatedly send those slices out manually. Popular 'algos' include Percentage of Volume, Pegged, VWAP, TWAP, Implementation Shortfall, Target Close. In the twenty-first century, algorithmic trading has been gaining traction with both retail and institutional traders. Algorithmic trading is not an attempt to make a trading profit. It is simply a way to minimize the cost, market impact and risk in execution of an order.[2][3] It is widely used by investment banks, pension funds, mutual funds, and hedge funds because these institutional traders need to execute large orders in markets that cannot support all of the size at once.
The term is also used to mean automated trading system. These do indeed have the goal of making a profit. Also known as black box trading, these encompass trading strategies that are heavily reliant on complex mathematical formulas and high-speed computer programs.[4][5]
Such systems run strategies including market making, inter-market spreading, arbitrage, or pure speculation such as trend following. Many fall into the category of high-frequency trading (HFT), which are characterized by high turnover and high order-to-trade ratios.[6] As a result, in February 2012, the Commodity Futures Trading Commission (CFTC) formed a special working group that included academics and industry experts to advise the CFTC on how best to define HFT.[7][8] HFT strategies utilize computers that make elaborate decisions to initiate orders based on information that is received electronically, before human traders are capable of processing the information they observe. Algorithmic trading and HFT have resulted in a dramatic change of the market microstructure, particularly in the way liquidity is provided.[9]
Profitability projections by the TABB Group, a financial services industry research firm, for the US equities HFT industry were US$1.3 billion before expenses for 2014,[10] significantly down on the maximum of US$21 billion that the 300 securities firms and hedge funds that then specialized in this type of trading took in profits in 2008,[11] which the authors had then called 'relatively small' and 'surprisingly modest' when compared to the market's overall trading volume. In March 2014, Virtu Financial, a high-frequency trading firm, reported that during five years the firm as a whole was profitable on 1,277 out of 1,278 trading days,[12] losing money just one day, empirically demonstrating the law of large numbers benefit of trading thousands to millions of tiny, low-risk and low-edge trades every trading day.[13]
A third of all European Union and United States stock trades in 2006 were driven by automatic programs, or algorithms.[15] As of 2009, studies suggested HFT firms accounted for 60–73% of all US equity trading volume, with that number falling to approximately 50% in 2012.[16][17] In 2006, at the London Stock Exchange, over 40% of all orders were entered by algorithmic traders, with 60% predicted for 2007. American markets and European markets generally have a higher proportion of algorithmic trades than other markets, and estimates for 2008 range as high as an 80% proportion in some markets. Foreign exchange markets also have active algorithmic trading (about 25% of orders in 2006).[18] Futures markets are considered fairly easy to integrate into algorithmic trading,[19] with about 20% of options volume expected to be computer-generated by 2010.[needs update][20] Bond markets are moving toward more access to algorithmic traders.[21]
Algorithmic trading and HFT have been the subject of much public debate since the U.S. Securities and Exchange Commission and the Commodity Futures Trading Commission said in reports that an algorithmic trade entered by a mutual fund company triggered a wave of selling that led to the 2010 Flash Crash.[22][23][24][25][26][27][28][29] The same reports found HFT strategies may have contributed to subsequent volatility by rapidly pulling liquidity from the market. As a result of these events, the Dow Jones Industrial Average suffered its second largest intraday point swing ever to that date, though prices quickly recovered. (See List of largest daily changes in the Dow Jones Industrial Average.) A July 2011 report by the International Organization of Securities Commissions (IOSCO), an international body of securities regulators, concluded that while 'algorithms and HFT technology have been used by market participants to manage their trading and risk, their usage was also clearly a contributing factor in the flash crash event of May 6, 2010.'[30][31] However, other researchers have reached a different conclusion. One 2010 study found that HFT did not significantly alter trading inventory during the Flash Crash.[32] Some algorithmic trading ahead of index fund rebalancing transfers profits from investors.[33][34][35]
Computerization of the order flow in financial markets began in the early 1970s, with some landmarks being the introduction of the New York Stock Exchange's “designated order turnaround” system (DOT, and later SuperDOT), which routed orders electronically to the proper trading post, where they were executed manually. The 'opening automated reporting system' (OARS) aided the specialist in determining the market clearing opening price.
Program trading is defined by the New York Stock Exchange as an order to buy or sell 15 or more stocks valued at over US$1 million total. In practice this means that all program trades are entered with the aid of a computer. In the 1980s, program trading became widely used in trading between the S&P 500 equity and futures markets.
In stock index arbitrage a trader buys (or sells) a stock index futures contract such as the S&P 500 futures and sells (or buys) a portfolio of up to 500 stocks (can be a much smaller representative subset) at the NYSE matched against the futures trade. The program trade at the NYSE would be pre-programmed into a computer to enter the order automatically into the NYSE’s electronic order routing system at a time when the futures price and the stock index were far enough apart to make a profit.
At about the same time portfolio insurance was designed to create a synthetic put option on a stock portfolio by dynamically trading stock index futures according to a computer model based on the Black–Scholes option pricing model.
Both strategies, often simply lumped together as 'program trading', were blamed by many people (for example by the Brady report) for exacerbating or even starting the 1987 stock market crash. Yet the impact of computer driven trading on stock market crashes is unclear and widely discussed in the academic community.[36]
Financial markets with fully electronic execution and similar electronic communication networks developed in the late 1980s and 1990s. In the U.S., decimalization, which changed the minimum tick size from 1/16 of a dollar (US$0.0625) to US$0.01 per share in 2001,[37] may have encouraged algorithmic trading as it changed the market microstructure by permitting smaller differences between the bid and offer prices, decreasing the market-makers' trading advantage, thus increasing market liquidity.
This increased market liquidity led to institutional traders splitting up orders according to computer algorithms so they could execute orders at a better average price. These average price benchmarks are measured and calculated by computers by applying the time-weighted average price or more usually by the volume-weighted average price.
A further encouragement for the adoption of algorithmic trading in the financial markets came in 2001 when a team of IBM researchers published a paper[39] at the International Joint Conference on Artificial Intelligence where they showed that in experimental laboratory versions of the electronic auctions used in the financial markets, two algorithmic strategies (IBM's own MGD, and Hewlett-Packard's ZIP) could consistently out-perform human traders. MGD was a modified version of the 'GD' algorithm invented by Steven Gjerstad & John Dickhaut in 1996/7;[40] the ZIP algorithm had been invented at HP by Dave Cliff in 1996.[41] In their paper, the IBM team wrote that the financial impact of their results showing MGD and ZIP outperforming human traders '...might be measured in billions of dollars annually'; the IBM paper generated international media coverage.
As more electronic markets opened, other algorithmic trading strategies were introduced. These strategies are more easily implemented by computers, because machines can react more rapidly to temporary mispricing and examine prices from several markets simultaneously. Examples include Chameleon (developed by BNP Paribas), Stealth[42] (developed by Deutsche Bank), Sniper and Guerilla (developed by Credit Suisse[43]), as well as arbitrage, statistical arbitrage, trend following, and mean reversion.
This type of trading is what is driving the new demand for low latency proximity hosting and global exchange connectivity. It is imperative to understand what latency is when putting together a strategy for electronic trading. Latency refers to the delay between the transmission of information from a source and the reception of the information at a destination. Latency is, as a lower bound, determined by the speed of light; this corresponds to about 3.3 milliseconds per 1,000 kilometers of optical fiber. Any signal regenerating or routing equipment introduces greater latency than this lightspeed baseline.
Most retirement savings, such as private pension funds or 401(k) and individual retirement accounts in the US, are invested in mutual funds, the most popular of which are index funds which must periodically 'rebalance' or adjust their portfolio to match the new prices and market capitalization of the underlying securities in the stock or other index that they track.[44][45] Profits are transferred from passive index investors to active investors, some of whom are algorithmic traders specifically exploiting the index rebalance effect. The magnitude of these losses incurred by passive investors has been estimated at 21-28bp per year for the S&P 500 and 38-77bp per year for the Russell 2000.[34] John Montgomery of Bridgeway Capital Management says that the resulting 'poor investor returns' from trading ahead of mutual funds is 'the elephant in the room' that 'shockingly, people are not talking about.'[35]
Pairs trading or pair trading is a long-short, ideally market-neutral strategy enabling traders to profit from transient discrepancies in relative value of close substitutes. Unlike in the case of classic arbitrage, in case of pairs trading, the law of one price cannot guarantee convergence of prices. This is especially true when the strategy is applied to individual stocks – these imperfect substitutes can in fact diverge indefinitely. In theory the long-short nature of the strategy should make it work regardless of the stock market direction. In practice, execution risk, persistent and large divergences, as well as a decline in volatility can make this strategy unprofitable for long periods of time (e.g. 2004-7). It belongs to wider categories of statistical arbitrage, convergence trading, and relative value strategies.[46]
In finance, delta-neutral describes a portfolio of related financial securities, in which the portfolio value remains unchanged due to small changes in the value of the underlying security. Such a portfolio typically contains options and their corresponding underlying securities such that positive and negative delta components offset, resulting in the portfolio's value being relatively insensitive to changes in the value of the underlying security.
In economics and finance, arbitrage (/ˈɑːrbɪtrɑːʒ/) is the practice of taking advantage of a price difference between two or more markets: striking a combination of matching deals that capitalize upon the imbalance, the profit being the difference between the market prices. When used by academics, an arbitrage is a transaction that involves no negative cash flow at any probabilistic or temporal state and a positive cash flow in at least one state; in simple terms, it is the possibility of a risk-free profit at zero cost. Example: One of the most popular arbitrage trading opportunities is played with the S&P futures and the S&P 500 stocks. During most trading days these two will develop disparity in the pricing between the two of them. This happens when the price of the stocks, which are mostly traded on the NYSE and NASDAQ markets, either gets ahead of or falls behind the S&P futures, which are traded in the CME market.
Arbitrage is possible when one of three conditions is met: the same asset does not trade at the same price on all markets (a violation of the law of one price); two assets with identical cash flows do not trade at the same price; or an asset with a known price in the future does not today trade at its future price discounted at the risk-free interest rate (or the asset has significant costs of storage).
Arbitrage is not simply the act of buying a product in one market and selling it in another for a higher price at some later time. The long and short transactions should ideally occur simultaneously to minimize the exposure to market risk, or the risk that prices may change on one market before both transactions are complete. In practical terms, this is generally only possible with securities and financial products which can be traded electronically, and even then, when first leg(s) of the trade is executed, the prices in the other legs may have worsened, locking in a guaranteed loss. Missing one of the legs of the trade (and subsequently having to open it at a worse price) is called 'execution risk' or more specifically 'leg-in and leg-out risk'.[a]
In the simplest example, any good sold in one market should sell for the same price in another. Traders may, for example, find that the price of wheat is lower in agricultural regions than in cities, purchase the good, and transport it to another region to sell at a higher price. This type of price arbitrage is the most common, but this simple example ignores the cost of transport, storage, risk, and other factors. 'True' arbitrage requires that there be no market risk involved. Where securities are traded on more than one exchange, arbitrage occurs by simultaneously buying in one and selling on the other. Such simultaneous execution, if perfect substitutes are involved, minimizes capital requirements, but in practice never creates a 'self-financing' (free) position, as many sources incorrectly assume following the theory. As long as there is some difference in the market value and riskiness of the two legs, capital would have to be put up in order to carry the long-short arbitrage position.
Mean reversion is a mathematical methodology sometimes used for stock investing, but it can be applied to other processes. In general terms the idea is that both a stock's high and low prices are temporary, and that a stock's price tends to have an average price over time. An example of a mean-reverting process is the Ornstein-Uhlenbeck stochastic equation.
Mean reversion involves first identifying the trading range for a stock, and then computing the average price using analytical techniques as it relates to assets, earnings, etc.
When the current market price is less than the average price, the stock is considered attractive for purchase, with the expectation that the price will rise. When the current market price is above the average price, the market price is expected to fall. In other words, deviations from the average price are expected to revert to the average.
The standard deviation of the most recent prices (e.g., the last 20) is often used as a buy or sell indicator.
Stock reporting services (such as Yahoo! Finance, MS Investor, Morningstar, etc.), commonly offer moving averages for periods such as 50 and 100 days. While reporting services provide the averages, identifying the high and low prices for the study period is still necessary.
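For illustration, a minimal sketch of such a signal (the window length, entry threshold, and prices are illustrative choices introduced here, not part of any particular trading system):

```python
import statistics

def mean_reversion_signal(prices, window=20, z_entry=2.0):
    """Return 'buy', 'sell', or 'hold' from a z-score of the latest price.

    prices  -- recent prices, oldest first
    window  -- number of most recent prices used for the average (e.g., the last 20)
    z_entry -- how many standard deviations from the mean trigger a trade
    """
    recent = prices[-window:]
    mean = statistics.mean(recent)
    std = statistics.stdev(recent)
    z = (prices[-1] - mean) / std if std > 0 else 0.0
    if z <= -z_entry:
        return "buy"    # price far below its average: expected to revert upward
    if z >= z_entry:
        return "sell"   # price far above its average: expected to revert downward
    return "hold"

# Example: a price that has dropped well below its recent average
print(mean_reversion_signal([100 + 0.1 * i for i in range(19)] + [95.0]))
```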
Scalping is liquidity provision by non-traditional market makers, whereby traders attempt to earn (or make) the bid-ask spread. This procedure allows for profit for so long as price moves are less than this spread and normally involves establishing and liquidating a position quickly, usually within minutes or less.
A market maker is basically a specialized scalper. A market maker trades many times the volume of an average individual scalper and makes use of more sophisticated trading systems and technology. However, registered market makers are bound by exchange rules stipulating their minimum quote obligations. For instance, NASDAQ requires each market maker to post at least one bid and one ask at some price level, so as to maintain a two-sided market for each stock represented.
Most strategies referred to as algorithmic trading (as well as algorithmic liquidity-seeking) fall into the cost-reduction category. The basic idea is to break down a large order into small orders and place them in the market over time. The choice of algorithm depends on various factors, with the most important being volatility and liquidity of the stock. For example, for a highly liquid stock, matching a certain percentage of the overall orders of stock (called volume inline algorithms) is usually a good strategy, but for a highly illiquid stock, algorithms try to match every order that has a favorable price (called liquidity-seeking algorithms).
The success of these strategies is usually measured by comparing the average price at which the entire order was executed with the average price achieved through a benchmark execution for the same duration. Usually, the volume-weighted average price is used as the benchmark. At times, the execution price is also compared with the price of the instrument at the time of placing the order.
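As a sketch of this comparison (all prices and volumes are illustrative numbers only):

```python
def vwap(trades):
    """Volume-weighted average price of (price, volume) pairs."""
    total_value = sum(price * volume for price, volume in trades)
    total_volume = sum(volume for _, volume in trades)
    return total_value / total_volume

# Market trades during the execution window (price, volume)
market_trades = [(50.10, 1_000), (50.12, 2_500), (50.08, 1_500), (50.15, 3_000)]
# Child-order fills for our large buy order over the same window
our_fills = [(50.11, 400), (50.13, 600), (50.09, 500)]

benchmark = vwap(market_trades)
execution = vwap(our_fills)
slippage_bps = (execution - benchmark) / benchmark * 10_000   # positive = paid more than VWAP
print(f"VWAP benchmark {benchmark:.4f}, execution {execution:.4f}, slippage {slippage_bps:.1f} bps")
```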
A special class of these algorithms attempts to detect algorithmic or iceberg orders on the other side (i.e. if you are trying to buy, the algorithm will try to detect orders for the sell side). These algorithms are called sniffing algorithms. A typical example is 'Stealth.'
Some examples of algorithms are TWAP, VWAP, Implementation shortfall, POV, Display size, Liquidity seeker, and Stealth. Modern algorithms are often optimally constructed via either static or dynamic programming.[47][48][49]
Recently, HFT, which comprises a broad set of buy-side as well as market making sell side traders, has become more prominent and controversial.[50] These algorithms or techniques are commonly given names such as 'Stealth' (developed by the Deutsche Bank), 'Iceberg', 'Dagger', 'Guerrilla', 'Sniper', 'BASOR' (developed by Quod Financial) and 'Sniffer'.[51] Dark pools are alternative trading systems that are private in nature—and thus do not interact with public order flow—and seek instead to provide undisplayed liquidity to large blocks of securities.[52] In dark pools trading takes place anonymously, with most orders hidden or 'iceberged.'[53] Gamers or 'sharks' sniff out large orders by 'pinging' small market orders to buy and sell. When several small orders are filled the sharks may have discovered the presence of a large iceberged order.
'Now it’s an arms race,' said Andrew Lo, director of the Massachusetts Institute of Technology’s Laboratory for Financial Engineering. 'Everyone is building more sophisticated algorithms, and the more competition exists, the smaller the profits.'[54]
Strategies designed to generate alpha are considered market timing strategies. These types of strategies are designed using a methodology that includes backtesting, forward testing and live testing. Market timing algorithms will typically use technical indicators such as moving averages but can also include pattern recognition logic implemented using Finite State Machines.
Backtesting the algorithm is typically the first stage and involves simulating the hypothetical trades through an in-sample data period. Optimization is performed in order to determine the optimal inputs. Steps taken to reduce the chance of over-optimization can include modifying the inputs ±10%, shmooing the inputs in large steps, running Monte Carlo simulations and ensuring slippage and commission are accounted for.[55]
Forward testing the algorithm is the next stage and involves running the algorithm through an out of sample data set to ensure the algorithm performs within backtested expectations.
Live testing is the final stage of development and requires the developer to compare actual live trades with both the backtested and forward tested models. Metrics compared include percent profitable, profit factor, maximum drawdown and average gain per trade.
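A minimal sketch of how such metrics might be computed from a list of per-trade profits and losses (the helper and the sample numbers are illustrative, not tied to any particular platform):

```python
def trade_metrics(trade_pnls):
    """Common evaluation metrics computed from per-trade profits/losses."""
    wins = [p for p in trade_pnls if p > 0]
    losses = [p for p in trade_pnls if p < 0]
    percent_profitable = len(wins) / len(trade_pnls)
    profit_factor = sum(wins) / abs(sum(losses)) if losses else float("inf")
    average_gain = sum(trade_pnls) / len(trade_pnls)

    # Maximum drawdown of the cumulative equity curve
    equity, peak, max_drawdown = 0.0, 0.0, 0.0
    for pnl in trade_pnls:
        equity += pnl
        peak = max(peak, equity)
        max_drawdown = max(max_drawdown, peak - equity)

    return {
        "percent_profitable": percent_profitable,
        "profit_factor": profit_factor,
        "max_drawdown": max_drawdown,
        "average_gain_per_trade": average_gain,
    }

print(trade_metrics([120, -80, 60, -40, 200, -150, 90]))
```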
As noted above, high-frequency trading (HFT) is a form of algorithmic trading characterized by high turnover and high order-to-trade ratios. Although there is no single definition of HFT, among its key attributes are highly sophisticated algorithms, specialized order types, co-location, very short-term investment horizons, and high cancellation rates for orders.[6] In the U.S., high-frequency trading (HFT) firms represent 2% of the approximately 20,000 firms operating today, but account for 73% of all equity trading volume.[citation needed] As of the first quarter in 2009, total assets under management for hedge funds with HFT strategies were US$141 billion, down about 21% from their high.[56] The HFT strategy was first made successful by Renaissance Technologies.[57]
High-frequency funds started to become especially popular in 2007 and 2008.[57] Many HFT firms are market makers and provide liquidity to the market, which has lowered volatility and helped narrow Bid-offer spreads making trading and investing cheaper for other market participants.[56][58][59] HFT has been a subject of intense public focus since the U.S. Securities and Exchange Commission and the Commodity Futures Trading Commission stated that both algorithmic trading and HFT contributed to volatility in the 2010 Flash Crash. Among the major U.S. high frequency trading firms are Chicago Trading, Virtu Financial, Timber Hill, ATD, GETCO, and Citadel LLC.[60]
There are four key categories of HFT strategies: market-making based on order flow, market-making based on tick data information, event arbitrage and statistical arbitrage. All portfolio-allocation decisions are made by computerized quantitative models. The success of computerized strategies is largely driven by their ability to simultaneously process volumes of information, something ordinary human traders cannot do.
Market making involves placing a limit order to sell (or offer) above the current market price or a buy limit order (or bid) below the current price on a regular and continuous basis to capture the bid-ask spread. Automated Trading Desk, which was bought by Citigroup in July 2007, has been an active market maker, accounting for about 6% of total volume on both NASDAQ and the New York Stock Exchange.[61]
Another set of HFT strategies involves classical arbitrage, which might involve several securities, such as covered interest rate parity in the foreign exchange market, which gives a relation between the prices of a domestic bond, a bond denominated in a foreign currency, the spot price of the currency, and the price of a forward contract on the currency. If the market prices are sufficiently different from those implied in the model to cover transaction costs, then four transactions can be made to guarantee a risk-free profit. HFT allows similar arbitrages using models of greater complexity involving many more than four securities. The TABB Group estimates that annual aggregate profits of low-latency arbitrage strategies currently exceed US$21 billion.[16]
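A minimal sketch of the covered-interest-parity check described above (all rates, prices and the cost figure are illustrative; a real implementation would also model borrowing, settlement and execution risk):

```python
def implied_forward(spot, r_domestic, r_foreign, years=1.0):
    """Forward FX rate implied by covered interest rate parity."""
    return spot * (1 + r_domestic * years) / (1 + r_foreign * years)

spot = 1.1000            # domestic units per unit of foreign currency
r_domestic = 0.05        # domestic risk-free rate
r_foreign = 0.02         # foreign risk-free rate
market_forward = 1.1500  # quoted one-year forward
cost = 0.0010            # round-trip transaction costs in price terms

fair = implied_forward(spot, r_domestic, r_foreign)
mispricing = market_forward - fair
if abs(mispricing) > cost:
    side = "sell" if mispricing > 0 else "buy"
    print(f"Forward off fair value by {mispricing:.4f}: {side} the forward and "
          f"hedge with spot and the two bonds for a (model) risk-free profit.")
else:
    print("No arbitrage after transaction costs.")
```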
A wide range of statistical arbitrage strategies have been developed whereby trading decisions are made on the basis of deviations from statistically significant relationships. Like market-making strategies, statistical arbitrage can be applied in all asset classes.
A subset of risk, merger, convertible, or distressed securities arbitrage that counts on a specific event, such as a contract signing, regulatory approval, judicial decision, etc., to change the price or rate relationship of two or more financial instruments and permit the arbitrageur to earn a profit.[62]
Merger arbitrage also called risk arbitrage would be an example of this. Merger arbitrage generally consists of buying the stock of a company that is the target of a takeover while shorting the stock of the acquiring company. Usually the market price of the target company is less than the price offered by the acquiring company. The spread between these two prices depends mainly on the probability and the timing of the takeover being completed as well as the prevailing level of interest rates. The bet in a merger arbitrage is that such a spread will eventually be zero, if and when the takeover is completed. The risk is that the deal 'breaks' and the spread massively widens.
One strategy that some traders have employed, which has been proscribed yet likely continues, is called spoofing. It is the act of placing orders to give the impression of wanting to buy or sell shares, without ever having the intention of letting the order execute, in order to temporarily manipulate the market and buy or sell shares at a more favorable price. This is done by creating limit orders outside the current bid or ask price to change the reported price to other market participants. The trader can subsequently place trades based on the artificial change in price, then cancel the limit orders before they are executed.
Suppose a trader desires to sell shares of a company with a current bid of $20 and a current ask of $20.20. The trader would place a buy order at $20.10, still some distance from the ask so it will not be executed, and the $20.10 bid is reported as the National Best Bid and Offer best bid price. The trader then executes a market order for the sale of the shares they wished to sell. Because the best bid price is the investor's artificial bid, a market maker fills the sale order at $20.10, allowing for a $0.10 higher sale price per share. The trader subsequently cancels the limit order on the purchase they never had the intention of completing.
Quote stuffing is a tactic employed by malicious traders that involves quickly entering and withdrawing large quantities of orders in an attempt to flood the market, thereby gaining an advantage over slower market participants.[63] The rapidly placed and canceled orders cause market data feeds that ordinary investors rely on to delay price quotes while the stuffing is occurring. HFT firms benefit from proprietary, higher-capacity feeds and the most capable, lowest latency infrastructure. Researchers showed high-frequency traders are able to profit by the artificially induced latencies and arbitrage opportunities that result from quote stuffing.[64]
Network-induced latency, a synonym for delay, measured in one-way delay or round-trip time, is normally defined as how much time it takes for a data packet to travel from one point to another.[65] Low latency trading refers to the algorithmic trading systems and network routes used by financial institutions connecting to stock exchanges and electronic communication networks (ECNs) to rapidly execute financial transactions.[66] Most HFT firms depend on low latency execution of their trading strategies. Joel Hasbrouck and Gideon Saar (2013) measure latency based on three components: the time it takes for 1) information to reach the trader, 2) the trader’s algorithms to analyze the information, and 3) the generated action to reach the exchange and get implemented.[67] In a contemporary electronic market (circa 2009), low latency trade processing time was qualified as under 10 milliseconds, and ultra-low latency as under 1 millisecond.[68]
Low-latency traders depend on ultra-low latency networks. They profit by providing information, such as competing bids and offers, to their algorithms microseconds faster than their competitors.[16] The revolutionary advance in speed has led to the need for firms to have a real-time, colocated trading platform to benefit from implementing high-frequency strategies.[16] Strategies are constantly altered to reflect the subtle changes in the market as well as to combat the threat of the strategy being reverse engineered by competitors. This is due to the evolutionary nature of algorithmic trading strategies – they must be able to adapt and trade intelligently, regardless of market conditions, which involves being flexible enough to withstand a vast array of market scenarios. As a result, a significant proportion of net revenue from firms is spent on the R&D of these autonomous trading systems.[16]
Most of the algorithmic strategies are implemented using modern programming languages, although some still implement strategies designed in spreadsheets. Increasingly, the algorithms used by large brokerages and asset managers are written to the FIX Protocol's Algorithmic Trading Definition Language (FIXatdl), which allows firms receiving orders to specify exactly how their electronic orders should be expressed. Orders built using FIXatdl can then be transmitted from traders' systems via the FIX Protocol.[69] Basic models can rely on as little as a linear regression, while more complex game-theoretic and pattern recognition[70] or predictive models can also be used to initiate trading. More complex methods such as Markov chain Monte Carlo have been used to create these models.[citation needed]
Algorithmic trading has been shown to substantially improve market liquidity[71] among other benefits. However, improvements in productivity brought by algorithmic trading have been opposed by human brokers and traders facing stiff competition from computers.
Technological advances in finance, particularly those relating to algorithmic trading, have increased financial speed, connectivity, reach, and complexity while simultaneously reducing its humanity. Computers running software based on complex algorithms have replaced humans in many functions in the financial industry. Finance is essentially becoming an industry where machines and humans share the dominant roles – transforming modern finance into what one scholar has called, “cyborg finance.”[72]
While many experts laud the benefits of innovation in computerized algorithmic trading, other analysts have expressed concern with specific aspects of computerized trading.
'The downside with these systems is their black box-ness,' Mr. Williams said. 'Traders have intuitive senses of how the world works. But with these systems you pour in a bunch of numbers, and something comes out the other end, and it’s not always intuitive or clear why the black box latched onto certain data or relationships.' [54]
'The Financial Services Authority has been keeping a watchful eye on the development of black box trading. In its annual report the regulator remarked on the great benefits of efficiency that new technology is bringing to the market. But it also pointed out that 'greater reliance on sophisticated technology and modelling brings with it a greater risk that systems failure can result in business interruption'.' [73]
UK Treasury minister Lord Myners has warned that companies could become the 'playthings' of speculators because of automatic high-frequency trading. Lord Myners said the process risked destroying the relationship between an investor and a company.[74]
Other issues include the technical problem of latency or the delay in getting quotes to traders,[75] security and the possibility of a complete system breakdown leading to a market crash.[76]
'Goldman spends tens of millions of dollars on this stuff. They have more people working in their technology area than people on the trading desk...The nature of the markets has changed dramatically.' [77]
On August 1, 2012 Knight Capital Group experienced a technology issue in their automated trading system,[78] causing a loss of $440 million.
This issue was related to Knight's installation of trading software and resulted in Knight sending numerous erroneous orders in NYSE-listed securities into the market. This software has been removed from the company's systems. [..] Clients were not negatively affected by the erroneous orders, and the software issue was limited to the routing of certain listed stocks to NYSE. Knight has traded out of its entire erroneous trade position, which has resulted in a realized pre-tax loss of approximately $440 million.
Algorithmic and high-frequency trading were shown to have contributed to volatility during the May 6, 2010 Flash Crash,[22][24] when the Dow Jones Industrial Average plunged about 600 points only to recover those losses within minutes. At the time, it was the second largest point swing, 1,010.14 points, and the biggest one-day point decline, 998.5 points, on an intraday basis in Dow Jones Industrial Average history.[79]
Financial market news is now being formatted by firms such as Need To Know News, Thomson Reuters, Dow Jones, and Bloomberg, to be read and traded on via algorithms.
'Computers are now being used to generate news stories about company earnings results or economic statistics as they are released. And this almost instantaneous information forms a direct feed into other computers which trade on the news.'[80]
The algorithms do not simply trade on simple news stories but also interpret more difficult to understand news. Some firms are also attempting to automatically assign sentiment (deciding if the news is good or bad) to news stories so that automated trading can work directly on the news story.[81]
'Increasingly, people are looking at all forms of news and building their own indicators around it in a semi-structured way,' as they constantly seek out new trading advantages said Rob Passarella, global director of strategy at Dow Jones Enterprise Media Group. His firm provides both a low latency news feed and news analytics for traders. Passarella also pointed to new academic research being conducted on the degree to which frequent Google searches on various stocks can serve as trading indicators, the potential impact of various phrases and words that may appear in Securities and Exchange Commission statements and the latest wave of online communities devoted to stock trading topics.[81]
'Markets are by their very nature conversations, having grown out of coffee houses and taverns,' he said. So the way conversations get created in a digital society will be used to convert news into trades, as well, Passarella said.[81]
'There is a real interest in moving the process of interpreting news from the humans to the machines' says Kirsti Suutari, global business manager of algorithmic trading at Reuters. 'More of our customers are finding ways to use news content to make money.'[80]
An example of the importance of news reporting speed to algorithmic traders was an advertising campaign by Dow Jones (appearances included page W15 of The Wall Street Journal, on March 1, 2008) claiming that their service had beaten other news services by two seconds in reporting an interest rate cut by the Bank of England.
In July 2007, Citigroup, which had already developed its own trading algorithms, paid $680 million for Automated Trading Desk, a 19-year-old firm that trades about 200 million shares a day.[82] Citigroup had previously bought Lava Trading and OnTrade Inc.
In late 2010, the UK Government Office for Science initiated a Foresight project investigating the future of computer trading in the financial markets,[83] led by Dame Clara Furse, ex-CEO of the London Stock Exchange, and in September 2011 the project published its initial findings in the form of a three-chapter working paper available in three languages, along with 16 additional papers that provide supporting evidence.[84] All of these findings are authored or co-authored by leading academics and practitioners, and were subjected to anonymous peer-review. Released in 2012, the Foresight study acknowledged issues related to periodic illiquidity, new forms of manipulation and potential threats to market stability due to errant algorithms or excessive message traffic. However, the report was also criticized for adopting 'standard pro-HFT arguments' and advisory panel members being linked to the HFT industry.[85]
A traditional trading system consists primarily of two blocks – one that receives the market data and another that sends the order request to the exchange. However, an algorithmic trading system can be broken down into three parts:
Exchange(s) provide data to the system, which typically consists of the latest order book, traded volumes, and last traded price (LTP) of the scrip. The server in turn receives the data and simultaneously acts as a store for the historical database. The data is analyzed on the application side, where trading strategies are fed from the user and can be viewed on the GUI. Once an order is generated, it is sent to the order management system (OMS), which in turn transmits it to the exchange.
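A highly simplified sketch of this three-part decomposition (class and method names are purely illustrative, not any vendor's actual API):

```python
class MarketDataHandler:
    """Receives order-book updates from the exchange and stores them for analysis."""
    def __init__(self):
        self.history = []
    def on_tick(self, tick):
        self.history.append(tick)       # doubles as the store for the historical database
        return tick

class Strategy:
    """Application-side logic: turns market data into order requests."""
    def on_tick(self, tick):
        if tick["last_price"] < tick["bid"] * 0.999:     # placeholder trading rule
            return {"side": "buy", "qty": 100, "symbol": tick["symbol"]}
        return None

class OrderManagementSystem:
    """Validates orders and transmits them to the exchange."""
    def send(self, order):
        print("sending to exchange:", order)

md, strat, oms = MarketDataHandler(), Strategy(), OrderManagementSystem()
tick = {"symbol": "XYZ", "bid": 100.0, "ask": 100.2, "last_price": 99.5, "volume": 500}
order = strat.on_tick(md.on_tick(tick))
if order:
    oms.send(order)
```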
Gradually, old-school, high-latency architectures of algorithmic systems are being replaced by newer, state-of-the-art, low-latency network infrastructure. The complex event processing (CEP) engine, which is the heart of decision making in algo-based trading systems, is used for order routing and risk management.
With the emergence of the FIX (Financial Information Exchange) protocol, the connection to different destinations has become easier and the go-to-market time has been reduced when it comes to connecting with a new destination. With the standard protocol in place, integration of third-party vendors for data feeds is no longer cumbersome.
Though its development may have been prompted by decreasing trade sizes caused by decimalization, algorithmic trading has reduced trade sizes further. Jobs once done by human traders are being switched to computers. The speeds of computer connections, measured in milliseconds and even microseconds, have become very important.[86][87]
More fully automated markets such as NASDAQ, Direct Edge and BATS (formerly an acronym for Better Alternative Trading System) in the US, have gained market share from less automated markets such as the NYSE. Economies of scale in electronic trading have contributed to lowering commissions and trade processing fees, and contributed to international mergers and consolidation of financial exchanges.
Competition is developing among exchanges for the fastest processing times for completing trades. For example, in June 2007, the London Stock Exchange launched a new system called TradElect that promises an average 10 millisecond turnaround time from placing an order to final confirmation and can process 3,000 orders per second.[88] Since then, competitive exchanges have continued to reduce latency with turnaround times of 3 milliseconds available. This is of great importance to high-frequency traders, because they have to attempt to pinpoint the consistent and probable performance ranges of given financial instruments. These professionals are often dealing in versions of stock index funds like the E-mini S&Ps, because they seek consistency and risk-mitigation along with top performance. They must filter market data to work into their software programming so that there is the lowest latency and highest liquidity at the time for placing stop-losses and/or taking profits. With high volatility in these markets, this becomes a complex and potentially nerve-wracking endeavor, where a small mistake can lead to a large loss. Absolute frequency data play into the development of the trader's pre-programmed instructions.[89]
In the U.S., spending on computers and software in the financial industry increased to $26.4 billion in 2005.[2][90]
Algorithmic trading has caused a shift in the types of employees working in the financial industry. For example, many physicists have entered the financial industry as quantitative analysts. Some physicists have even begun to do research in economics as part of doctoral research. This interdisciplinary movement is sometimes called econophysics.[91] Some researchers also cite a 'cultural divide' between employees of firms primarily engaged in algorithmic trading and traditional investment managers. Algorithmic trading has encouraged an increased focus on data and has decreased emphasis on sell-side research.[92]
Algorithmic trades require communicating considerably more parameters than traditional market and limit orders. A trader on one end (the 'buy side') must enable their trading system (often called an 'order management system' or 'execution management system') to understand a constantly proliferating flow of new algorithmic order types. The R&D and other costs to construct complex new algorithmic order types, along with the execution infrastructure, and marketing costs to distribute them, are fairly substantial. What was needed was a way that marketers (the 'sell side') could express algo orders electronically such that buy-side traders could just drop the new order types into their system and be ready to trade them without constantly coding custom new order entry screens each time.
FIX Protocol is a trade association that publishes free, open standards in the securities trading area. The FIX language was originally created by Fidelity Investments, and the association's members include virtually all large and many midsized and smaller broker-dealers, money center banks, institutional investors, mutual funds, etc. This institution dominates standard setting in the pretrade and trade areas of security transactions. In 2006–2007 several members got together and published a draft XML standard for expressing algorithmic order types. The standard is called FIX Algorithmic Trading Definition Language (FIXatdl).[93]
Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure.
If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems.[1] In the optimization literature this relationship is called the Bellman equation.
In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time. This is done by defining a sequence of value functions V1, V2, ..., Vn taking y as an argument representing the state of the system at times i from 1 to n. The definition of Vn(y) is the value obtained in state y at the last time n. The values Vi at earlier times i = n − 1, n − 2, ..., 2, 1 can be found by working backwards, using a recursive relationship called the Bellman equation. For i = 2, ..., n, Vi−1 at any state y is calculated from Vi by maximizing a simple function (usually the sum) of the gain from a decision at time i − 1 and the function Vi at the new state of the system if this decision is made. Since Vi has already been calculated for the needed states, the above operation yields Vi−1 for those states. Finally, V1 at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.
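In symbols (writing $F_{i-1}(y, x)$ for the gain from choosing decision $x$ in state $y$ at time $i - 1$, and $T(y, x)$ for the resulting new state; this notation is introduced here only for illustration), the backward recursion reads

$$ V_{i-1}(y) = \max_{x} \bigl\{ F_{i-1}(y, x) + V_i\bigl(T(y, x)\bigr) \bigr\}, \qquad i = n, n-1, \ldots, 2, $$

with $V_n(y)$ equal to the value obtained in state $y$ at the last time $n$.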
In control theory, a typical problem is to find an admissible control $u^{*}$ which causes the system $\dot{x}(t) = g\bigl(x(t), u(t), t\bigr)$ to follow an admissible trajectory $x^{*}$ on a continuous time interval $t_0 \le t \le t_1$ that minimizes a cost function

$$ J = b\bigl(x(t_1), t_1\bigr) + \int_{t_0}^{t_1} f\bigl(x(t), u(t), t\bigr)\, dt . $$

The solution to this problem is an optimal control law or policy $u^{*} = h(x(t), t)$, which produces an optimal trajectory $x^{*}$ and an optimized cost-to-go function $J^{*}$. The latter obeys the fundamental equation of dynamic programming:

$$ -J^{*}_{t} = \min_{u} \Bigl\{ f\bigl(x(t), u(t), t\bigr) + J^{*\mathsf{T}}_{x}\, g\bigl(x(t), u(t), t\bigr) \Bigr\}, $$

a partial differential equation known as the Hamilton–Jacobi–Bellman equation, in which $J^{*}_{x} = \partial J^{*}/\partial x$ and $J^{*}_{t} = \partial J^{*}/\partial t$. One finds the minimizing $u$ in terms of $t$, $x$, and the unknown function $J^{*}_{x}$ and then substitutes the result into the Hamilton–Jacobi–Bellman equation to get the partial differential equation to be solved with boundary condition $J^{*}\bigl(x(t_1), t_1\bigr) = b\bigl(x(t_1), t_1\bigr)$.[2] In practice, this generally requires numerical techniques for some discrete approximation to the exact optimization relationship.

Alternatively, the continuous process can be approximated by a discrete system, which leads to the following recurrence relation, analogous to the Hamilton–Jacobi–Bellman equation:

$$ J^{*}_{k}\bigl(x_{n-k}\bigr) = \min_{u_{n-k}} \Bigl\{ \hat f\bigl(x_{n-k}, u_{n-k}\bigr) + J^{*}_{k-1}\bigl(\hat g(x_{n-k}, u_{n-k})\bigr) \Bigr\} $$

at the $k$-th stage of $n$ equally spaced discrete time intervals, where $\hat f$ and $\hat g$ denote discrete approximations to $f$ and $g$. This functional equation is known as the Bellman equation, which can be solved for an exact solution of the discrete approximation of the optimization equation.[3]
In economics, the objective is generally to maximize (rather than minimize) some dynamic social welfare function. In Ramsey's problem, this function relates amounts of consumption to levels of utility. Loosely speaking, the planner faces the trade-off between contemporaneous consumption and future consumption (via investment in capital stock that is used in production), known as intertemporal choice. Future consumption is discounted at a constant rate $\beta \in (0, 1)$. A discrete approximation to the transition equation of capital is given by

$$ k_{t+1} = \hat g\bigl(k_t, c_t\bigr) = f(k_t) - c_t , $$

where $c$ is consumption, $k$ is capital, and $f$ is a production function satisfying the Inada conditions. An initial capital stock $k_0 > 0$ is assumed.

Let $c_t$ be consumption in period $t$, and assume consumption yields utility $u(c_t) = \ln(c_t)$ as long as the consumer lives. Assume the consumer is impatient, so that he discounts future utility by a factor $b$ each period, where $0 < b < 1$. Let $k_t$ be capital in period $t$. Assume initial capital is a given amount $k_0 > 0$, and suppose that this period's capital and consumption determine next period's capital as $k_{t+1} = A k_t^{a} - c_t$, where $A$ is a positive constant and $0 < a < 1$. Assume capital cannot be negative. Then the consumer's decision problem can be written as follows:

$$ \max \sum_{t=0}^{T} b^{t} \ln(c_t) \quad \text{subject to} \quad k_{t+1} = A k_t^{a} - c_t \ge 0 \ \text{ for all } t = 0, 1, \ldots, T . $$

Written this way, the problem looks complicated, because it involves solving for all the choice variables $c_0, c_1, \ldots, c_T$ and $k_1, k_2, \ldots, k_{T+1}$. (Note that $k_0$ is not a choice variable—the consumer's initial capital is taken as given.)

The dynamic programming approach to solve this problem involves breaking it apart into a sequence of smaller decisions. To do so, we define a sequence of value functions $V_t(k)$, for $t = 0, 1, 2, \ldots, T, T+1$, which represent the value of having any amount of capital $k$ at each time $t$. Note that $V_{T+1}(k) = 0$, that is, there is (by assumption) no utility from having capital after death.

The value of any quantity of capital at any previous time can be calculated by backward induction using the Bellman equation. In this problem, for each $t = 0, 1, 2, \ldots, T$, the Bellman equation is

$$ V_t(k_t) = \max_{c_t,\, k_{t+1}} \bigl\{ \ln(c_t) + b\, V_{t+1}(k_{t+1}) \bigr\} \quad \text{subject to} \quad k_{t+1} = A k_t^{a} - c_t \ge 0 . $$

This problem is much simpler than the one we wrote down before, because it involves only two decision variables, $c_t$ and $k_{t+1}$. Intuitively, instead of choosing his whole lifetime plan at birth, the consumer can take things one step at a time. At time $t$, his current capital $k_t$ is given, and he only needs to choose current consumption $c_t$ and saving $k_{t+1}$.

To actually solve this problem, we work backwards. For simplicity, the current level of capital is denoted as $k$. $V_{T+1}(k)$ is already known, so using the Bellman equation once we can calculate $V_T(k)$, and so on until we get to $V_0(k)$, which is the value of the initial decision problem for the whole lifetime. In other words, once we know $V_{T-j+1}(k)$, we can calculate $V_{T-j}(k)$, which is the maximum of $\ln(c_{T-j}) + b\, V_{T-j+1}(A k^{a} - c_{T-j})$, where $c_{T-j}$ is the choice variable and $A k^{a} - c_{T-j} \ge 0$.

Working backwards, it can be shown that the value function at time $t = T - j$ is

$$ V_{T-j}(k) = a \sum_{i=0}^{j} a^{i} b^{i} \ln k + v_{T-j} , $$

where each $v_{T-j}$ is a constant, and the optimal amount to consume at time $t = T - j$ is

$$ c_{T-j}(k) = \frac{A k^{a}}{\sum_{i=0}^{j} a^{i} b^{i}} , $$

which can be simplified to

$$ c_{T-j}(k) = A k^{a}\, \frac{1 - ab}{1 - (ab)^{\,j+1}} . $$
We see that it is optimal to consume a larger fraction of current wealth as one gets older, finally consuming all remaining wealth in period T, the last period of life.
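A small numerical sketch of this backward induction (the parameter values, the capital grid, and the grid-search maximization are illustrative simplifications, not part of the original derivation):

```python
import math

A, a, b = 1.0, 0.5, 0.9      # technology constant, capital exponent, discount factor (illustrative)
T = 5                        # last period of life
N = 400
grid = [i / 100 for i in range(1, N + 1)]         # grid of capital levels k = 0.01 ... 4.00

def index(k):
    """Index of the grid point nearest to capital level k."""
    return min(N - 1, max(0, round(k * 100) - 1))

V = [[0.0] * N for _ in range(T + 2)]             # V[T+1][.] = 0: no utility from capital after death
policy = [[0.0] * N for _ in range(T + 1)]

for t in range(T, -1, -1):                        # work backwards from t = T to t = 0
    for i, k in enumerate(grid):
        resources = A * k ** a                    # split between consumption c_t and savings k_{t+1}
        best_value, best_c = -math.inf, 0.0
        for frac in range(1, 100):                # candidate consumption choices
            c = resources * frac / 100
            value = math.log(c) + b * V[t + 1][index(resources - c)]
            if value > best_value:
                best_value, best_c = value, c
        V[t][i], policy[t][i] = best_value, best_c

# Optimal consumption out of k = 1 at t = 0, versus the closed form above (j = T)
closed_form = A * (1 - a * b) / (1 - (a * b) ** (T + 1))
print(policy[0][index(1.0)], closed_form)
```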
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems. If a problem can be solved by combining optimal solutions to non-overlapping sub-problems, the strategy is called 'divide and conquer' instead.[1] This is why merge sort and quick sort are not classified as dynamic programming problems.
Optimal substructure means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its sub-problems. Such optimal substructures are usually described by means of recursion. For example, given a graph G=(V,E), the shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly the shortest path, then it can be split into sub-paths p1 from u to w and p2 from w to v such that these, in turn, are indeed the shortest paths between the corresponding vertices (by the simple cut-and-paste argument described in Introduction to Algorithms). Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what the Bellman–Ford algorithm or the Floyd–Warshall algorithm does.
Overlapping sub-problems means that the space of sub-problems must be small, that is, any recursive algorithm solving the problem should solve the same sub-problems over and over, rather than generating new sub-problems. For example, consider the recursive formulation for generating the Fibonacci series: Fi = Fi−1 + Fi−2, with base case F1 = F2 = 1. Then F43 = F42 + F41, and F42 = F41 + F40. Now F41 is being solved in the recursive sub-trees of both F43 as well as F42. Even though the total number of sub-problems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each sub-problem only once.
This can be achieved in either of two ways:[citation needed] a top-down approach, in which the problem is solved recursively and the solutions to sub-problems are memoized (stored) so that each is computed only once, and a bottom-up approach, in which the smallest sub-problems are solved first and their solutions are combined to build up solutions to larger and larger sub-problems.
Some programming languages can automatically memoize the result of a function call with a particular set of arguments, in order to speed up call-by-name evaluation (this mechanism is referred to as call-by-need). Some languages make it possible portably (e.g. Scheme, Common Lisp or Perl). Some languages have automatic memoization built in, such as tabled Prolog and J, which supports memoization with the M. adverb.[4] In any case, this is only possible for a referentially transparent function. Memoization is also encountered as an easily accessible design pattern within term-rewrite based languages such as Wolfram Language.
Dynamic programming is widely used in bioinformatics for tasks such as sequence alignment, protein folding, RNA structure prediction and protein-DNA binding. The first dynamic programming algorithms for protein-DNA binding were developed in the 1970s independently by Charles DeLisi in the US[5] and by Georgii Gurskii and Alexander Zasedatelev in the USSR.[6] Recently these algorithms have become very popular in bioinformatics and computational biology, particularly in the studies of nucleosome positioning and transcription factor binding.
From a dynamic programming point of view, Dijkstra's algorithm for the shortest path problem is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method.[7][8][9]
In fact, Dijkstra's explanation of the logic behind the algorithm,[10] namely
Problem 2. Find the path of minimum total length between two given nodes P and Q.

We use the fact that, if R is a node on the minimal path from P to Q, knowledge of the latter implies the knowledge of the minimal path from P to R.
is a paraphrasing of Bellman's famous Principle of Optimality in the context of the shortest path problem.
Here is a naïve implementation of a function finding the nth member of the Fibonacci sequence, based directly on the mathematical definition:
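```python
def fib(n):
    # naive recursion, straight from the definition F(n) = F(n-1) + F(n-2)
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```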
Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times:
fib(5)
fib(4) + fib(3)
(fib(3) + fib(2)) + (fib(2) + fib(1))
((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
(((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
In particular, fib(2) was calculated three times from scratch. In larger examples, many more values of fib, or subproblems, are recalculated, leading to an exponential time algorithm.
Now, suppose we have a simple map object, m, which maps each value of fib that has already been calculated to its result, and we modify our function to use it and update it. The resulting function requires only O(n) time instead of exponential time (but requires O(n) space):
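```python
m = {0: 0, 1: 1}                         # map from already-computed arguments to results

def fib(n):
    if n not in m:
        m[n] = fib(n - 1) + fib(n - 2)   # compute each value once, then store it
    return m[n]
```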
This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values.
In the bottom-up approach, we calculate the smaller values of fib first, then build larger values from them. This method also uses O(n) time since it contains a loop that repeats n − 1 times, but it only takes constant (O(1)) space, in contrast to the top-down approach which requires O(n) space to store the map.
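For example:

```python
def fib(n):
    if n == 0:
        return 0
    previous, current = 0, 1             # fib(0), fib(1)
    for _ in range(n - 1):               # the loop repeats n - 1 times
        previous, current = current, previous + current
    return current
```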
In both examples, we only calculate fib(2) one time, and then use it to calculate both fib(4) and fib(3), instead of computing it every time either of them is evaluated.
Note that the above method actually takes $\Omega(n^2)$ time for large n, because addition of two integers with $\Omega(n)$ bits each takes $O(n)$ time. (The nth Fibonacci number has $\Theta(n)$ bits.) Also, there is a closed form for the Fibonacci sequence, known as Binet's formula, from which the $n$-th term can be computed in approximately $O(n(\log n)^2)$ time, which is more efficient than the above dynamic programming technique. However, the simple recurrence directly gives the matrix form that leads to an approximately $O(n(\log n)^2)$ algorithm by fast matrix exponentiation.
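A sketch of the matrix-exponentiation approach (exponentiation by squaring applied to the 2 × 2 matrix [[1, 1], [1, 0]], whose nth power is [[F(n+1), F(n)], [F(n), F(n−1)]]):

```python
def mat_mult(X, Y):
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0], X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0], X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

def fib(n):
    """nth Fibonacci number via fast exponentiation of [[1, 1], [1, 0]]."""
    result = [[1, 0], [0, 1]]            # identity matrix
    base = [[1, 1], [1, 0]]
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n //= 2
    return result[0][1]

print(fib(10))   # 55
```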
Consider the problem of assigning values, either zero or one, to the positions of an n × n matrix, with n even, so that each row and each column contains exactly n / 2 zeros and n / 2 ones. We ask how many different assignments there are for a given n. For example, when n = 4, four of the possible solutions are

0 1 0 1     0 0 1 1     1 1 0 0     1 0 0 1
1 0 1 0     0 0 1 1     0 0 1 1     0 1 1 0
0 1 0 1     1 1 0 0     1 1 0 0     0 1 1 0
1 0 1 0     1 1 0 0     0 0 1 1     1 0 0 1
There are at least three possible approaches: brute force, backtracking, and dynamic programming.
Brute force consists of checking all assignments of zeros and ones and counting those that have balanced rows and columns (n / 2 zeros and n / 2 ones). As there are $2^{n^2}$ possible assignments, this strategy is not practical except maybe up to $n = 6$.
Backtracking for this problem consists of choosing some order of the matrix elements and recursively placing ones or zeros, while checking that in every row and column the number of elements that have not been assigned plus the number of ones or zeros are both at least n / 2. While more sophisticated than brute force, this approach will visit every solution once, making it impractical for n larger than six, since the number of solutions is already 116,963,796,250 for n = 8, as we shall see.
Dynamic programming makes it possible to count the number of solutions without visiting them all. Imagine backtracking values for the first row – what information would we require about the remaining rows, in order to be able to accurately count the solutions obtained for each first row value? We consider k × n boards, where 1 ≤ k ≤ n, whose $k$ rows contain $n/2$ zeros and $n/2$ ones. The function f to which memoization is applied maps vectors of n pairs of integers to the number of admissible boards (solutions). There is one pair for each column, and its two components indicate respectively the number of zeros and ones that have yet to be placed in that column. We seek the value of $f\bigl((n/2, n/2), (n/2, n/2), \ldots, (n/2, n/2)\bigr)$ ($n$ arguments or one vector of $n$ elements). The process of subproblem creation involves iterating over every one of $\tbinom{n}{n/2}$ possible assignments for the top row of the board, and going through every column, subtracting one from the appropriate element of the pair for that column, depending on whether the assignment for the top row contained a zero or a one at that position. If any one of the results is negative, then the assignment is invalid and does not contribute to the set of solutions (recursion stops). Otherwise, we have an assignment for the top row of the k × n board and recursively compute the number of solutions to the remaining (k − 1) × n board, adding the numbers of solutions for every admissible assignment of the top row and returning the sum, which is being memoized. The base case is the trivial subproblem, which occurs for a 1 × n board. The number of solutions for this board is either zero or one, depending on whether the vector is a permutation of $n/2$ $(0, 1)$ and $n/2$ $(1, 0)$ pairs or not.
For example, in the first two boards shown above the sequences of vectors would be
The number of solutions is given by sequence A058527 in the OEIS.
Links to the MAPLE implementation of the dynamic programming approach may be found among the external links.
Consider a checkerboard with n × n squares and a cost function c(i, j) which returns a cost associated with square (i, j), i being the row and j being the column. For instance (on a 5 × 5 checkerboard),
|   | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 5 | 6 | 7 | 4 | 7 | 8 |
| 4 | 7 | 6 | 1 | 1 | 4 |
| 3 | 3 | 5 | 7 | 8 | 2 |
| 2 | – | 6 | 7 | 0 | – |
| 1 | – | – | *5* | – | – |
Thus c(1, 3) = 5. Let us say there was a checker that could start at any square on the first rank (i.e., row) and you wanted to know the shortest path (the sum of the minimum costs at each visited rank) to get to the last rank, assuming the checker could move only diagonally left forward, diagonally right forward, or straight forward. That is, a checker on (1,3) can move to (2,2), (2,3) or (2,4).
|   | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 5 |   |   |   |   |   |
| 4 |   |   |   |   |   |
| 3 |   |   |   |   |   |
| 2 |   | x | x | x |   |
| 1 |   |   | o |   |   |
This problem exhibits optimal substructure. That is, the solution to the entire problem relies on solutions to subproblems. Let us define a function q(i, j) as

q(i, j) = the minimum cost to reach square (i, j).

Starting at rank n and descending to rank 1, we compute the value of this function for all the squares at each successive rank. Picking the square that holds the minimum value at each rank gives us the shortest path between rank n and rank 1.

Note that q(i, j) is equal to the minimum cost to get to any of the three squares below it (since those are the only squares that can reach it) plus c(i, j). For instance:
|   | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| 5 |   |   |   |   |   |
| 4 |   |   | A |   |   |
| 3 |   | B | C | D |   |
| 2 |   |   |   |   |   |
| 1 |   |   |   |   |   |
Now, let us define q(i, j) in somewhat more general terms:

q(i, j) = ∞ if j < 1 or j > n
q(i, j) = c(i, j) if i = 1
q(i, j) = min(q(i − 1, j − 1), q(i − 1, j), q(i − 1, j + 1)) + c(i, j) otherwise

The first line of this equation deals with a board modeled as squares indexed on 1 at the lowest bound and n at the highest bound. The second line specifies what happens at the first rank, providing a base case. The third line, the recursion, is the important part. It represents the A, B, C, D terms in the example. From this definition we can derive straightforward recursive code for q(i, j). In the following sketch, n is the size of the board, c(i, j) is the cost function, and min() returns the minimum of a number of values:
Note that this function only computes the path cost, not the actual path; we discuss the actual path below. This, like the Fibonacci-numbers example, is horribly slow because it too exhibits the overlapping sub-problems attribute: it recomputes the same path costs over and over. However, we can compute it much faster in a bottom-up fashion if we store path costs in a two-dimensional array q[i, j] rather than using a function. This avoids recomputation; all the values needed for array q[i, j] are computed ahead of time only once, and the precomputed value for (i, j) is simply looked up whenever needed.
We also need to know the actual shortest path. To do this, we use another array p[i, j], a predecessor array, which records the path to any square s. The predecessor of s is modeled as an offset relative to the index (in q[i, j]) of the precomputed path cost of s. To reconstruct the complete path, we look up the predecessor of s, then the predecessor of that square, and so on recursively, until we reach the starting square. Consider the following code:
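A Python sketch of that computation, filling q and p bottom-up and then walking the predecessor offsets back down to the first rank (the function names, and the use of sentinel columns 0 and n + 1, are choices of this sketch):

```python
import math

def compute_paths(c, n):
    """Fills q[i][j] (minimum cost to reach square (i, j)) and p[i][j] (the column
    offset -1, 0 or +1 of the predecessor square on rank i - 1) bottom-up.
    Squares are indexed 1..n; columns 0 and n + 1 act as infinite-cost sentinels."""
    q = [[math.inf] * (n + 2) for _ in range(n + 1)]
    p = [[0] * (n + 2) for _ in range(n + 1)]
    for j in range(1, n + 1):
        q[1][j] = c(1, j)                                   # base case: first rank
    for i in range(2, n + 1):
        for j in range(1, n + 1):
            best = min((-1, 0, 1), key=lambda d: q[i - 1][j + d])
            q[i][j] = q[i - 1][j + best] + c(i, j)
            p[i][j] = best                                  # remember the predecessor offset
    return q, p

def reconstruct_path(p, n, j):
    """Follows predecessor offsets from square (n, j) back down to the first rank."""
    path = [(n, j)]
    for i in range(n, 1, -1):
        j += p[i][j]
        path.append((i - 1, j))
    return list(reversed(path))
```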
Now the rest is a simple matter of finding the minimum and printing it.
In genetics, sequence alignment is an important application where dynamic programming is essential.[11] Typically, the problem consists of transforming one sequence into another using edit operations that replace, insert, or remove an element. Each operation has an associated cost, and the goal is to find the sequence of edits with the lowest total cost.
The problem can be stated naturally as a recursion: a sequence A is optimally edited into a sequence B by either
- inserting the first character of B, and performing an optimal alignment of A and the tail of B,
- deleting the first character of A, and performing an optimal alignment of the tail of A and B, or
- replacing the first character of A with the first character of B, and performing an optimal alignment of the tails of A and B.
The partial alignments can be tabulated in a matrix, where cell (i,j) contains the cost of the optimal alignment of A[1..i] to B[1..j]. The cost in cell (i,j) can be calculated by adding the cost of the relevant operations to the cost of its neighboring cells, and selecting the optimum.
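As a rough Python sketch of this tabulation (the unit costs and the name edit_cost are assumptions of the sketch, not part of the text):

```python
def edit_cost(a, b, ins=1, dele=1, rep=1):
    """Minimum total cost of editing sequence a into sequence b.
    cell[i][j] = cost of optimally aligning a[:i] with b[:j]."""
    m, n = len(a), len(b)
    cell = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        cell[i][0] = i * dele              # delete all of a[:i]
    for j in range(1, n + 1):
        cell[0][j] = j * ins               # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else rep
            cell[i][j] = min(cell[i - 1][j] + dele,        # delete a[i-1]
                             cell[i][j - 1] + ins,         # insert b[j-1]
                             cell[i - 1][j - 1] + sub)     # replace (or match)
    return cell[m][n]

# edit_cost("kitten", "sitting") == 3
```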
Different variants exist, see Smith–Waterman algorithm and Needleman–Wunsch algorithm.
The Tower of Hanoi or Towers of Hanoi is a mathematical game or puzzle. It consists of three rods, and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape.
The objective of the puzzle is to move the entire stack to another rod, obeying the following rules:
- Only one disk may be moved at a time.
- Each move consists of taking the top disk from one of the rods and sliding it onto another rod, on top of whatever disks may already be there.
- No disk may be placed on top of a smaller disk.
The dynamic programming solution consists of solving the functional equation

S(n, h, t) = S(n − 1, h, not(h, t)) ; S(1, h, t) ; S(n − 1, not(h, t), t)

where n denotes the number of disks to be moved, h denotes the home rod, t denotes the target rod, not(h, t) denotes the third rod (neither h nor t), ';' denotes concatenation, and S(n, h, t) denotes the solution to a problem consisting of n disks that are to be moved from rod h to rod t.
Note that for n=1 the problem is trivial, namely S(1,h,t) = 'move a disk from rod h to rod t' (there is only one disk left).
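A direct Python transcription of S(n, h, t), under the assumption that the rods are labeled 'A', 'B', 'C' (the function name hanoi is illustrative):

```python
def hanoi(n, h, t):
    """Sequence of moves S(n, h, t) from the functional equation above."""
    other = ({'A', 'B', 'C'} - {h, t}).pop()   # not(h, t): the remaining rod
    if n == 1:
        return [f"move a disk from rod {h} to rod {t}"]
    return hanoi(n - 1, h, other) + hanoi(1, h, t) + hanoi(n - 1, other, t)

# len(hanoi(3, 'A', 'C')) == 7, i.e. 2**3 - 1 moves
```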
The number of moves required by this solution is 2^n − 1. If the objective is to maximize the number of moves (without cycling), then the dynamic programming functional equation is slightly more complicated and 3^n − 1 moves are required.[12]
The following is a description of the instance of this famous puzzle involving N=2 eggs and a building with H=36 floors:[13]
To derive a dynamic programming functional equation for this puzzle, let the state of the dynamic programming model be a pair s = (n, k), where n denotes the number of test eggs still available and k denotes the number of (consecutive) floors yet to be tested.
For instance, s = (2,6) indicates that two test eggs are available and 6 (consecutive) floors are yet to be tested. The initial state of the process is s = (N,H) where N denotes the number of test eggs available at the commencement of the experiment. The process terminates either when there are no more test eggs (n = 0) or when k = 0, whichever occurs first. If termination occurs at state s = (0,k) and k > 0, then the test failed.
Now, let W(n, k) denote the minimum number of trials required to identify the value of the critical floor under the worst-case scenario, given that the process is in state s = (n, k). Then it can be shown that[14]

W(n, k) = 1 + min{ max( W(n − 1, x − 1), W(n, k − x) ) : x = 1, 2, ..., k }
with W(n,0) = 0 for all n > 0 and W(1,k) = k for all k. It is easy to solve this equation iteratively by systematically increasing the values of n and k.
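A memoized Python evaluation of this recurrence might look like the following sketch (the boundary cases are those stated above; for the N = 2 eggs, H = 36 floors instance it evaluates to 8 trials):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def W(n, k):
    """Worst-case minimum number of trials in state s = (n, k)."""
    if k == 0:
        return 0                   # W(n, 0) = 0
    if n == 1:
        return k                   # W(1, k) = k: test floor by floor
    return 1 + min(max(W(n - 1, x - 1), W(n, k - x)) for x in range(1, k + 1))

# W(2, 36) == 8
```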
An interactive online facility is available for experimentation with this model as well as with other versions of this puzzle (e.g., when the objective is to minimize the expected value of the number of trials).[14]
Notice that the above solution takes O(n k²) time as a DP solution. This can be improved to O(n k log k) time by binary searching on the optimal x in the above recurrence, since W(n − 1, x − 1) is increasing in x while W(n, k − x) is decreasing in x, so a local minimum of max(W(n − 1, x − 1), W(n, k − x)) is a global minimum. Also, by storing the optimal x for each cell in the DP table and referring to its value for the previous cell, the optimal x for each cell can be found in constant time, improving it to O(n k) time. However, there is an even faster solution that involves a different parametrization of the problem:
Let k be the total number of floors such that the eggs break when dropped from the kth floor (the example above is equivalent to taking k = 36).
Let m be the minimum floor from which the egg must be dropped to be broken.
Let f(t, n) be the maximum number of values of m that are distinguishable using t tries and n eggs.
Then f(t, 0) = f(0, n) = 1 for all t, n ≥ 0.
Let a be the floor from which the first egg is dropped in the optimal strategy.
If the first egg broke, m is from 1 to a and distinguishable using at most t − 1 tries and n − 1 eggs.
If the first egg did not break, m is from a + 1 to k and distinguishable using t − 1 tries and n eggs.
Therefore, f(t, n) = f(t − 1, n − 1) + f(t − 1, n).
Then the problem is equivalent to finding the minimum t such that f(t, n) ≥ k.
To do so, we could compute { f(t, i) : 0 ≤ i ≤ n } in order of increasing t, which would take O(n t) time.
Thus, if we separately handle the case of n = 1, the algorithm would take O(n √k) time.
But the recurrence relation can in fact be solved, giving f(t, n) = C(t, 0) + C(t, 1) + ⋯ + C(t, n), which can be computed in O(n) time using the identity C(t, i + 1) = C(t, i) · (t − i) / (i + 1) for all i ≥ 0.
Since f(t, n) is increasing in t for every fixed n, we can binary search on t to find the minimum t with f(t, n) ≥ k, giving an O(n log k) algorithm.[15]
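A Python sketch of this faster approach, computing f(t, n) from the binomial identity above and binary searching on t (names and the search bounds are illustrative):

```python
def min_trials(n, k):
    """Smallest t with f(t, n) >= k, where f(t, n) = sum_{i=0}^{n} C(t, i)."""
    def f(t, n):
        total, term = 1, 1                       # term = C(t, 0)
        for i in range(n):
            term = term * (t - i) // (i + 1)     # C(t, i+1) = C(t, i) * (t - i) / (i + 1)
            total += term
        return total

    lo, hi = 0, k                                # t = k trials always suffice (floor by floor)
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid, n) >= k:
            hi = mid
        else:
            lo = mid + 1
    return lo

# min_trials(2, 36) == 8
```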
Matrix chain multiplication is a well-known example that demonstrates the utility of dynamic programming. For example, engineering applications often have to multiply a chain of matrices. It is not surprising to find matrices of large dimensions, for example 100×100. Therefore, our task is to multiply matrices A1 × A2 × ⋯ × An. As we know from basic linear algebra, matrix multiplication is not commutative, but it is associative, and we can multiply only two matrices at a time. So, we can multiply this chain of matrices in many different ways, for example:
and so on. There are numerous ways to multiply this chain of matrices. They will all produce the same final result, however they will take more or less time to compute, based on which particular matrices are multiplied. If matrix A has dimensions m×n and matrix B has dimensions n×q, then matrix C=A×B will have dimensions m×q, and will require m*n*q scalar multiplications (using a simplistic matrix multiplication algorithm for purposes of illustration).
For example, let us multiply matrices A, B and C. Let us assume that their dimensions are m×n, n×p, and p×s, respectively. Matrix A×B×C will be of size m×s and can be calculated in the two ways shown below:
1. A×(B×C): first compute B×C (n·p·s scalar multiplications), then multiply A by the result (m·n·s scalar multiplications);
2. (A×B)×C: first compute A×B (m·n·p scalar multiplications), then multiply the result by C (m·p·s scalar multiplications).
Let us assume that m = 10, n = 100, p = 10 and s = 1000. So, the first way to multiply the chain will require 1,000,000 + 1,000,000 calculations. The second way will require only 10,000 + 100,000 calculations. Obviously, the second way is faster, and we should multiply the matrices using that arrangement of parentheses.
Therefore, our conclusion is that the order of parentheses matters, and that our task is to find the optimal order of parentheses.
At this point, we have several choices, one of which is to design a dynamic programming algorithm that will split the problem into overlapping subproblems and calculate the optimal arrangement of parentheses. The dynamic programming solution is presented below.
Let's call m[i,j] the minimum number of scalar multiplications needed to multiply a chain of matrices from matrix i to matrix j (that is, Ai × ⋯ × Aj, where i ≤ j). We split the chain at some matrix k, such that i ≤ k < j, and try to find out which combination produces the minimum m[i,j].
The formula is:

m[i, j] = 0 if i = j
m[i, j] = min over k of ( m[i, k] + m[k + 1, j] + p(i − 1) · p(k) · p(j) ) if i < j

where k ranges from i to j − 1, and where matrix Ai has dimensions p(i − 1) × p(i).
This formula can be coded as shown below, where input parameter 'chain' is the chain of matrices, i.e. A1, A2, ..., An:
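A Python sketch of this tabulation; the chain is assumed to be a list of matrices given as lists of rows, and the names dims, m and s are illustrative:

```python
import math

def matrix_chain_order(chain):
    """chain holds matrices A1..An; A_i has dimensions dims[i-1] x dims[i].
    Returns (m, s) with 1-based indices: m[i][j] = minimum scalar multiplications
    for A_i x ... x A_j, and s[i][j] = split point k achieving that minimum."""
    n = len(chain)
    dims = [len(chain[0])] + [len(A[0]) for A in chain]
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                    # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):                     # try every split point
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s
```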
So far, we have calculated values for all possible m[i, j], the minimum number of calculations to multiply a chain from matrix i to matrix j, and we have recorded the corresponding 'split point' s[i, j]. For example, if we are multiplying chain A1×A2×A3×A4, and it turns out that m[1, 3] = 100 and s[1, 3] = 2, that means that the optimal placement of parentheses for matrices 1 to 3 is (A1×A2)×A3 and multiplying those matrices will require 100 scalar calculations.
This algorithm will produce 'tables' m[, ] and s[, ] that will have entries for all possible values of i and j. The final solution for the entire chain is m[1, n], with corresponding split at s[1, n]. Unraveling the solution will be recursive, starting from the top and continuing until we reach the base case, i.e. multiplication of single matrices.
Therefore, the next step is to actually split the chain, i.e. to place the parentheses where they (optimally) belong. For this purpose we could use the following algorithm:
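One such algorithm, sketched in Python (the name print_optimal_parenthesis is illustrative); calling print_optimal_parenthesis(s, 1, n) prints a fully parenthesized expression for the whole chain:

```python
def print_optimal_parenthesis(s, i, j):
    """Prints the optimal parenthesization of A_i..A_j using the split table s."""
    if i == j:
        print(f"A{i}", end="")
    else:
        print("(", end="")
        print_optimal_parenthesis(s, i, s[i][j])
        print_optimal_parenthesis(s, s[i][j] + 1, j)
        print(")", end="")
```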
Of course, this algorithm is not useful for actual multiplication. This algorithm is just a user-friendly way to see what the result looks like.
To actually multiply the matrices using the proper splits, we need the following algorithm:
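A Python sketch of that step, reusing the split table s computed above (the helper multiply and the 0-indexed chain list are assumptions of this sketch):

```python
def matrix_chain_multiply(chain, s, i, j):
    """Multiplies A_i x ... x A_j (chain is 0-indexed, so A_i is chain[i - 1]),
    splitting at the points recorded in s."""
    if i == j:
        return chain[i - 1]
    left = matrix_chain_multiply(chain, s, i, s[i][j])
    right = matrix_chain_multiply(chain, s, s[i][j] + 1, j)
    return multiply(left, right)

def multiply(X, Y):
    """Plain two-matrix product, included only to keep the sketch self-contained."""
    return [[sum(X[r][k] * Y[k][c] for k in range(len(Y)))
             for c in range(len(Y[0]))] for r in range(len(X))]

# Example usage: m, s = matrix_chain_order(chain); result = matrix_chain_multiply(chain, s, 1, len(chain))
```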
The term dynamic programming was originally used in the 1940s by Richard Bellman to describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he refined this to the modern meaning, referring specifically to nesting smaller decision problems inside larger decisions,[16] and the field was thereafter recognized by the IEEE as a systems analysis and engineering topic. Bellman's contribution is remembered in the name of the Bellman equation, a central result of dynamic programming which restates an optimization problem in recursive form.
Bellman explains the reasoning behind the term dynamic programming in his autobiography, Eye of the Hurricane: An Autobiography (1984, page 159). He explains:
The word dynamic was chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive.[11] The word programming referred to the use of the method to find an optimal program, in the sense of a military schedule for training or logistics. This usage is the same as that in the phrases linear programming and mathematical programming, a synonym for mathematical optimization.[17]
The above explanation of the origin of the term is lacking. As Russell and Norvig write in their book, referring to the above story: 'This cannot be strictly true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953.'[18] There is also a comment in a speech by Harold J. Kushner, where he remembers Bellman. Quoting Kushner as he speaks of Bellman: 'On the other hand, when I asked him the same question, he replied that he was trying to upstage Dantzig's linear programming by adding dynamic. Perhaps both motivations were true.'