📈 Chapter 1: Introduction
High-Frequency Trading (HFT)
- Definition: High-frequency trading involves a high turnover of capital in rapid, computer-driven responses to changing market conditions.
- Characteristics:
- Higher number of trades.
- Lower average gain per trade (fraction of a percent).
- Few, if any, positions carried overnight.
- Example: Jim Simons of Renaissance Technologies Corp. reportedly earned $2.5 billion in 2008 using HFT strategies.
Advantages of HFT
- Risk Mitigation:
- Eliminates overnight risk due to market volatility.
- Full transparency of account holdings.
- Avoids overnight carry costs (interest rates above LIBOR).
- Diversification: Low or no correlation with traditional long-term strategies.
- Shorter Evaluation Periods: Performance can be statistically ascertained within a month.
- Operational Savings:
- Reduced staff headcount through automation.
- Lower incidence of errors due to human hesitation and emotion.
- Societal Benefits:
- Increased market efficiency.
- Added liquidity.
- Innovation in computer technology.
- Stabilization of market systems.
- Liquidity: HFT can be applied to any sufficiently liquid financial instrument. A “liquid instrument” is a financial security with enough buyers and sellers to trade at any time of the trading day.
Analogy
- Financial markets are like a human body, where HFT is analogous to blood circulating, flushing out toxins, healing wounds, and regulating temperature. Low-frequency investment decisions can destabilize the circulatory system.
Geographic Locations
- Major Hubs: New York, Connecticut, London, Singapore, and Chicago.
- Specializations:
- Chicago: Fast trading strategies for futures, options, and commodities.
- New York and Connecticut: U.S. equities.
- London: Currencies.
- Singapore: Asian markets.
Prominent Firms
- Millennium
- DE Shaw
- Worldquant
- Renaissance Technologies
Trading Strategies
- Most high-frequency firms are hedge funds or proprietary investment vehicles.
- Four main classes:
- Automated liquidity provision.
- Market microstructure trading.
- Event trading.
- Deviations arbitrage.
| Strategy | Description | Holding Period |
|---|---|---|
| Automated Liquidity Provision | Quantitative algorithms for optimal pricing and execution of market-making positions. | < 1 minute |
| Market Microstructure Trading | Identifying trading party order flow through reverse engineering of observed quotes. | < 10 minutes |
| Event Trading | Short-term trading on macro events. | < 1 hour |
| Deviations Arbitrage | Statistical arbitrage of deviations from equilibrium: triangle trades, basis trades, and the like. | < 1 day |
Challenges in Developing HFT Systems
- Dealing with large volumes of intra-day data.
- Precision of signals to trigger trades in fractions of a second.
- Speed of execution through computer automation.
- Requires advanced skills in software development.
- Human supervision to ensure the system runs within risk boundaries.
- Computer security challenges (Internet viruses).
- Ongoing maintenance and upgrades, requiring continuing IT expenditures.
Book's Purpose and Audience
- Purpose: To provide a practical guide for building high-frequency systems.
- Target Audience:
- Senior management in investment and broker-dealer functions.
- Institutional investors.
- Quantitative analysts.
- IT staff.
- Academics and business students.
- Individual investors.
- Aspiring high-frequency traders, risk managers, and government regulators.
Book Structure
- Part 1: History and business environment of HFT systems.
- Part 2: Statistical and econometric foundations of common HFT strategies.
- Part 3: Details of modeling HFT strategies.
- Part 4: Steps required to build a quality HFT system.
- Part 5: Running, monitoring, and benchmarking HFT systems.
Key Takeaways
- The best-performing strategies are confidential and seldom publicized.
- The book aims to illustrate how established academic research can be applied to capture market inefficiencies.
⚙️ Chapter 2: Evolution of High-Frequency Trading
Impact of Technology on Financial Markets
- Technological innovation has a persistent impact on financial markets.
- Technology has improved news dissemination, financial analysis, and communication speed.
- New arbitrage opportunities have emerged through technology.
Historical Trading Methods
- Manual Trading:
- Slow, error-prone, and expensive.
- Errors from market movements during quote requests.
- Unreliable due to human communication.
- Electronic Dealing Systems (1980s):
- Aggregated market data.
- Simultaneous information distribution.
- Automated trading capabilities.
- Key Systems:
- Designated Order Turnaround (DOT) by NYSE.
- NASDAQ’s Computer Assisted Execution System.
Systematic Trading in the 1990s
- Reasons for Delay: High computing costs and low throughput of electronic orders.
- Globex (1992): Chicago Mercantile Exchange's first electronic platform.
- ISE (2000): First fully electronic U.S. options exchange.
Increased Trading Volume
- Daily trade volume increased significantly with technological developments.
- Industry Structure: Shift from rigid hierarchical to flat, decentralized networks.
Traditional 20th-Century Financial Network
- Exchanges/Inter-Dealer Networks: Centralized marketplaces.
- Broker-Dealers: Proprietary trading and customer transactions.
- Transacting Clients: Investment banking clients, corporations, medium-sized firms, high-net-worth individuals.
- Investment Institutions: Brokerages providing trading access to smaller institutions and retail clients.
Decentralized Financial Markets
- Competing Exchanges: Increased trading liquidity.
- Electronic Communication Networks (ECNs): Sophisticated algorithms for order transmission and matching.
- Dark Liquidity Pools: Trader identities and orders remain anonymous.
Technical Analysis
- Definition: Identifying recurring patterns in security prices.
- Techniques: Moving averages, MACD, market events.
- Historical Significance: Prospered when trading technology was less advanced.
- Modern Relevance: Limited to small, less liquid securities.
Fundamental Analysis
- Definition: Pricing securities based on future cash flows or expected economic variables.
- Applications:
- Equities: Present values of future cash flows.
- Foreign Exchange: Macroeconomic models.
- Derivatives: Advanced econometric models.
- Commodities: Supply and demand analysis.
Quant Trading
- Definition: Mathematical model-fueled trading methodology.
- Statistical Arbitrage (Stat-Arb): Exploiting market inefficiencies using mathematical models.
- Speed and Technology: Emphasis on fast computers and algorithmic trading.
Algorithmic Trading
- Definition: Systematic execution process to optimize buy-and-sell decisions.
- Function: Order processing, market aggressiveness, order splitting.
- Exogenous Decisions: Decisions about when to buy or sell are pre-determined.
HFT Evolution
- Response to Advances: Developed in the 1990s due to computer technology.
- Fully Automated: Fueled profitability and further technology development.
- Cost Savings: Replaced expensive traders with less expensive algorithms.
- Increased Demand: From buy-side investors.
- Pure Return (Alpha): Added to portfolios with little correlation to traditional buy-and-hold strategies.
Differentiation
- Electronic Trading: Transmitting orders electronically (becoming obsolete).
- Algorithmic Trading: Encompasses order execution and HFT portfolio allocation decisions.
- Systematic Trading: Computer-driven trading positions held for varying durations.
Key Characteristics
- Short position holding times (one day or shorter).
- Fully systematic.
- Algorithmic.
🏢 Chapter 3: Overview of the Business of High-Frequency Trading
Key Characteristics of HFT
- Tick-by-tick data processing: Analyzing every tick of data, separated by milliseconds.
- High capital turnover: Fast reallocation of trading capital.
- Intra-day entry and exit of positions: Closing positions at the end of each trading day to avoid overnight carrying costs.
- Algorithmic trading: Utilizing algorithms for market information processing and trading decisions.
Comparison with Traditional Approaches to Trading
| Characteristic | HFT | Traditional Trading |
|---|---|---|
| Data Processing | Tick-by-tick | Daily or less frequent |
| Decision Making | Quantitative, Algorithmic | Discretionary, Human-Driven |
| Position Time | Intra-day, short | Weeks, months, or years |
| Analysis | Technical, Fundamental, or Quant | Technical or Fundamental |
Market Participants
- Competitors:
- Proprietary trading divisions of investment banks.
- Hedge funds.
- Independent proprietary trading operations.
- Investors:
- Fund of funds.
- Hedge funds.
- Private equity firms.
- Services and Technology Providers:
- Electronic execution brokers (Goldman Sachs, Credit Suisse).
- Electronic communication networks (ECNs).
- Custody and clearing services.
- Software:
- Computerized generation of trading signals.
- Computer-aided analysis (MATLAB, R).
- Internet-wide information-gathering.
- Trading software (MarketFactory).
- Run-time risk management applications.
- Mobile applications.
- Real-time third-party research.
- Legal, Accounting, and Other Professional Services.
Operating Model
- Three main components:
- Econometric models for short-term price forecasting.
- Advanced computer systems.
- Capital applied and monitored within a risk management framework.
- Computerized vs. Traditional Trading Costs:
| Cost Aspect | HFT | Traditional Trading |
|---|---|---|
| Initial Investment | High upfront (model design) | Lower upfront |
| Ongoing Costs | Low support staff | Consistent staffing costs |
| Model Development | High | Low |
Model Development Process
- Ideas from academic research.
- Advanced econometric modeling.
- Back-testing using tick data.
- Market depth analysis.
System Implementation Workflow
- Receive and archive real-time quotes.
- Develop buy and sell signals from econometric models.
- Keep track of open positions, P&L.
- Manage risk of open positions.
- Evaluate performance post-trade.
- Evaluate costs post-trade.
Trading Platform
- Platform-independent systems using FIX language.
Risk Management
- Competent risk management is crucial.
Economics
- Revenue is driven by leverage and the Sharpe ratio.
- Leverage: Borrowing a multiple of the investment equity (e.g., four times) to amplify returns on that equity.
- Sharpe Ratio: Measures average return relative to risk; a high Sharpe ratio lowers the probability of losses.
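As a rough illustration of these two drivers, the sketch below computes the return on equity under leverage and, assuming normally distributed annual returns and ignoring the risk-free rate, the probability of an annual loss implied by a given Sharpe ratio. The function names and the normality assumption are illustrative, not from the book.

```python
from math import erf, sqrt

def levered_annual_return(unlevered_return, leverage, borrow_rate):
    """Return on equity when `leverage` dollars are deployed per dollar of
    equity and the borrowed portion (leverage - 1) costs `borrow_rate`."""
    return leverage * unlevered_return - (leverage - 1.0) * borrow_rate

def prob_of_annual_loss(sharpe):
    """P(annual return < 0) for normally distributed returns, ignoring the
    risk-free rate: Phi(-SR), where Phi is the standard normal CDF."""
    return 0.5 * (1.0 + erf(-sharpe / sqrt(2.0)))

# A 5% unlevered return, levered 5x with 3% borrowing costs:
roe = levered_annual_return(0.05, 5.0, 0.03)   # 0.25 - 0.12 = 0.13
```

Under these assumptions, a strategy with an annualized Sharpe ratio of 2 loses money in a given year with probability below 5%, which is the sense in which high Sharpe ratios reduce the risk of losses.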
Capitalizing an HFT Business
- Equity contributions from founders, private equity, investor capital, or parent company.
- Debt leverage through bank loans or margin lending.
🏦 Chapter 4: Financial Markets Suitable for High-Frequency Trading
Key Requirements
- Ability to quickly move in and out of positions.
- Sufficient market volatility.
- Liquid assets.
- Electronic execution.
Suitable Markets
- Spot Foreign Exchange
- Equities
- Options
- Futures
Liquidity Comparison
- Most Liquid: Foreign exchange.
- Followed By: Recently issued U.S. Treasury securities.
- Then: Equities, options, commodities, and futures.
Market Suitability Analysis
- Available liquidity
- Electronic trading capability
- Regulatory considerations
Fixed-Income Markets
- Interest rate market
- Bond market
- Both use spot, futures, and swap contracts.
Interest Rate Markets
- Spot interest rates (interbank interest rates)
- Quoted by banks to other banks.
- Maturity periods: Overnight, tomorrow next, one week, one month, etc.
- Interest rate futures
- Contracts to buy and sell underlying interest rates in the future.
- Based on the 3-month deposit rate.
- Typically mature in three months.
- Settle quarterly.
- Swaps
- Most still trade OTC, though selected swaps are making inroads into electronic trading.
- Electronic programs for 30-day Fed Funds futures and CBOT interest rate swap futures.
Bond Markets
- Publicly issued debt obligations.
- Issued by federal government, local governments, and publicly held corporations.
- Embed various options.
- Settlement and delivery rules vary by exchange.
Foreign Exchange Markets
- Foreign exchange rate is a swap of interest rates denominated in different currencies.
- Spot, forward, and swap foreign exchange products trade through a decentralized and unregulated mechanism.
- BIS estimates a total foreign exchange market in 2007 of $3 trillion daily.
Equity Markets
- The breadth of equity markets offers efficiencies; 2,764 stocks were listed on the NYSE alone in 2006.
- In addition to stocks, equity markets trade exchange-traded funds (ETFs), warrants, certificates, and structured products.
- Provide full electronic trading functionality for all their offerings.
Commodity Markets
- Spot commodity contracts provide physical delivery of goods, ill-suited for high-frequency trading.
- Electronically traded and liquid commodity futures and options can provide viable and profitable trading strategies.
- Futures of agricultural commodities may have irregular expiry dates.
📊 Chapter 5: Evaluating Performance of High-Frequency Strategies
Basic Return Characteristics
- Return Frequency: Hourly, daily, monthly, quarterly, and annually.
- Average Annual Return: A simple summary of the location of the mean of the return distribution.
- Volatility of Returns: Measures the dispersion of returns around the average return, often computed as the standard deviation of returns.
- Maximum Drawdown: Documents the maximum severity of losses observed in historical data.
Comparative Ratios
| Measure | Description |
|---|---|
| Sharpe Ratio | (Average Return - Risk-Free Rate) / Standard Deviation. Measures risk-adjusted return. |
| Treynor Ratio | (Average Return - Risk-Free Rate) / Beta. Measures excess return per unit of systematic risk. |
| Jensen's Alpha | Measures trading return in excess of the return predicted by CAPM. It is defined as E[r_{i}] - r_{f} - β_{i}(E[r_{M}] - r_{f}). |
| Calmar Ratio | Average Return / Maximum Drawdown. |
| Sterling Ratio | Average Return / Average Drawdown. |
| Burke Ratio | Average Return / Standard Deviation of Drawdowns. |
| Omega | (E[r] - τ) / LPM_{1i}(τ) + 1, where LPM_{1i}(τ), the first lower partial moment, is the average shortfall of returns below the selected benchmark τ. |
| Sortino Ratio | (E[r] - τ) / √(LPM_{2i}(τ)). Measures excess return over downside risk. |
| Kappa 3 | (E[r] - τ) / ∛(LPM_{3i}(τ)). Replaces the standard deviation in the Sharpe ratio with the cube root of the third LPM of the returns below the benchmark. |
| Upside Potential Ratio | Measures the average return above the benchmark. |
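Several of the measures above can be computed directly from a return series. The sketch below (using NumPy; the function names are mine, not the book's) implements the Sharpe ratio, the Sortino ratio via the second lower partial moment, maximum drawdown, and the Calmar ratio:

```python
import numpy as np

def sharpe(returns, rf=0.0):
    """(Average return - risk-free rate) / standard deviation of returns."""
    excess = np.asarray(returns) - rf
    return excess.mean() / excess.std(ddof=1)

def sortino(returns, target=0.0):
    """Excess return over downside risk: (E[r] - target) / sqrt(LPM_2),
    where LPM_2 is the mean squared shortfall below `target`."""
    r = np.asarray(returns)
    downside = np.minimum(r - target, 0.0)
    return (r.mean() - target) / np.sqrt(np.mean(downside ** 2))

def max_drawdown(returns):
    """Worst peak-to-trough decline of the cumulative-return curve."""
    wealth = np.cumprod(1.0 + np.asarray(returns))
    peaks = np.maximum.accumulate(wealth)
    return np.max(1.0 - wealth / peaks)

def calmar(returns):
    """Average return divided by maximum drawdown."""
    return np.mean(returns) / max_drawdown(returns)
```

For example, on the return series `[0.01, -0.02, 0.03, -0.01, 0.02]` the maximum drawdown is 2% (the single -2% period) and the Sortino ratio is 0.6.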
Performance Attribution
- Goal: Identifying factors that contribute to portfolio performance.
- Formula:
  - Returns are expressed as R_{it} = α_{i} + ∑ b_{ik} F_{kt} + u_{it}.
  - b_{ik} measures strategy performance attributed to factor k.
  - α_{i} measures the strategy’s persistent ability to generate abnormal returns.
  - u_{it} measures the strategy’s idiosyncratic return in period t.
- Benefits of Performance Attribution:
  - Accurately captures investment styles.
  - Measures true added value.
  - Allows forecasting of strategy performance.
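The attribution regression R_{it} = α_{i} + ∑ b_{ik} F_{kt} + u_{it} can be estimated by ordinary least squares. Below is a minimal sketch on simulated data; the factor values, coefficients, and seed are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 250, 2                      # 250 daily observations, 2 factors

F = rng.normal(size=(T, K))        # hypothetical factor returns F_kt
true_alpha, true_b = 0.0005, np.array([0.8, -0.3])
r = true_alpha + F @ true_b + 0.01 * rng.normal(size=T)  # simulated strategy returns

X = np.column_stack([np.ones(T), F])          # regressors: intercept + factors
coef, *_ = np.linalg.lstsq(X, r, rcond=None)  # OLS fit
alpha_hat, b_hat = coef[0], coef[1:]          # alpha_i and the factor loadings b_ik
```

A statistically significant `alpha_hat` after controlling for the factors is the "true added value" the section refers to.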
Other Considerations
- Strategy Capacity:
- Impacted by market liquidity and position sizes.
- May depend on manager skills and trading costs.
Length of Evaluation Period
- Minimum Evaluation Period Required for Sharpe Ratio Verification (at 95% confidence):
  - T_min = (1.645² / SR²) * (1 + 0.5 SR²)
  - where SR is the Sharpe ratio per evaluation period and T_min is the number of periods.
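A sketch of the minimum-evaluation-period formula, with SR expressed per evaluation period (the function name is mine):

```python
def min_evaluation_periods(sr):
    """Minimum number of observation periods needed to verify at 95%
    confidence (z = 1.645) that a Sharpe ratio SR is positive."""
    return (1.645 ** 2 / sr ** 2) * (1.0 + 0.5 * sr ** 2)

# A strategy with a monthly Sharpe ratio of 0.5 needs roughly a year of data:
periods = min_evaluation_periods(0.5)   # about 12.2 months
```

Note that the higher the Sharpe ratio, the shorter the evaluation period required, which is why high-frequency strategies can be statistically validated so quickly.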
🧮 Chapter 6: Orders, Traders, and Their Applicability to High-Frequency Trading
📑 Order Types
- Order Price Specifications
  - Market Orders: Orders to buy or sell a security at the best available price when the order is placed.
  - Limit Orders: Orders to buy or sell a security at a specified price or better.
📈 Order Timing Specifications
| Order Type | Description |
|---|---|
| GTC | "Good Till Canceled". Remain active until completely filled. |
| GTD | "Good Till Date". Remain in the order book until completely filled or the specified expiry date. |
| Day Orders | "Good for the Day". Remain in the order book until completely filled or the end of the trading day. |
| GTT | "Good Till Time". Remain in the order book until completely filled or a specified expiry time. |
📐 Order Size Specifications
- Vanilla Orders: Orders in standard contract sizes traded on the exchange, typically in "round lots".
- Odd Lots: Orders smaller than a round lot, filled by a designated odd-lot dealer.
- Mixed Lots: Orders bigger than round lots, yet not in round-lot multiples.
- Fill or Kill (FOK) Orders: Orders to be either filled immediately in full or killed in their entirety.
- Fill and Kill (FAK) Orders: Orders to be filled immediately, in full or in part, with any unfilled quantity killed.
- All or None (AON) Orders: Remain in the order book with their original time priorities until they can be filled in full.
🕵️ Order Disclosure Specifications
- Disclosure rules vary by exchange and ECN.
- Orders can be executed with full transparency or with limited disclosure.
- Iceberg Orders: Orders with only a portion of the order size observable to other market participants.
- Anonymous Orders: Orders placed without disclosing the identity of the trader or the trading institution.
🛑 Stop-Loss and Take-Profit Orders
- Orders that become market or limit orders when a specified stop price is reached or passed.
🗂️ Administrative Orders
- Change Orders: Used to modify a pending limit order.
- Margin Call Close Orders: Initiated by the executing counterparty when a trader's cash is insufficient to cover losses.
- Phone-In Orders: Called in by a customer and charged a transaction-cost premium.
📉 Chapter 7: Market Inefficiency and Profit Opportunities at Different Frequencies
🎯 Predictability and Market Efficiency
- High-frequency trading aims to generate trading signals that result in consistently positive outcomes over a large number of trades.
💰 Gaining Insights on Trading Frequency Statistics
- The profitability of a trading strategy is bound by the chosen trading frequency.
- Trading frequency data helps determine whether the price changes of a particular financial security are random or not.
🧑🏫 Measuring Trading Opportunity
- The gain potential in the high-frequency space is remarkable, but so is the maximum potential loss.
- With well-designed trading strategies, high-frequency trading can produce the highest profitability.
⚙️ Testing for Market Efficiency and Predictability
- Identifying markets with arbitrage opportunities means finding inefficient markets; the arbitrage opportunities themselves are market inefficiencies.
- Efficiency is measured with tests designed to help researchers select the most profitable markets.
Random Walk Hypothesis and Market Efficiency 🚀
The random walk hypothesis suggests that price changes in a market are unpredictable and follow a random pattern.
At any given time, the change in log price is equally likely to be positive and negative.
Mathematically, the change in log price (Δ ln P_t) can be represented as:
Δ ln P_t = ln P_t − ln P_{t−1} = µ + ε_t
Where ε_t is the error term with a mean of 0.
- Drift: The random walk process can have a drift (µ ≠ 0), indicating an average change in prices over time. This could be due to factors like persistent inflation.
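A random walk with drift is straightforward to simulate; in the sketch below the drift, volatility, and seed are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n = 0.0002, 0.01, 100_000   # drift, volatility, number of steps

# ln P_t = ln P_{t-1} + mu + eps_t, with eps_t ~ N(0, sigma^2)
eps = rng.normal(0.0, sigma, n)
log_price = np.cumsum(mu + eps)

# One-period log-price changes recover the drift on average
increments = np.diff(log_price)
```

Averaged over many steps, the increments center on µ, while any single increment is dominated by the unpredictable ε_t term — the essence of the random walk hypothesis.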
Lo and MacKinlay Test 🧪
- Lo and MacKinlay developed a test to check if a price follows a random walk.
- If price changes at a given frequency are random, changes at a lower frequency should also be random.
- The variances of price changes at different frequencies should be deterministically related.
- The reverse is not necessarily true: randomness in lower frequency changes doesn't guarantee randomness in higher frequency changes.
Equations for the Test
Given 2n + 1 log-price observations p_0, p_1, ..., p_{2n}, where p_t = ln P_t:
- Estimated mean: µ̂ = (1/2n) ∑_{t=1}^{2n} (p_t − p_{t−1}) = (1/2n)(p_{2n} − p_0)
- Variance estimator 1 (one-period changes): σ̂_a² = (1/2n) ∑_{t=1}^{2n} (p_t − p_{t−1} − µ̂)²
- Variance estimator 2 (two-period changes): σ̂_b² = (1/2n) ∑_{k=1}^{n} (p_{2k} − p_{2k−2} − 2µ̂)²
- Test statistic: J_r = σ̂_b² / σ̂_a² − 1
- If the time series follows a random walk, the test statistic J_r is asymptotically normally distributed: √(2n) · J_r → N(0, 2).
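A minimal sketch of a variance-ratio statistic of this kind; it ignores Lo and MacKinlay's finite-sample and heteroskedasticity corrections, so treat it as illustrative only:

```python
import numpy as np

def variance_ratio_stat(log_prices):
    """Ratio of the variance of two-period log-price changes to twice the
    variance of one-period changes, minus 1. Near 0 for a random walk;
    negative for mean-reverting series, positive for trending ones."""
    p = np.asarray(log_prices)
    d1 = np.diff(p)                 # one-period changes
    d2 = p[2::2] - p[:-2:2]         # non-overlapping two-period changes
    mu = d1.mean()
    var1 = np.mean((d1 - mu) ** 2)
    var2 = np.mean((d2 - 2.0 * mu) ** 2)
    return var2 / (2.0 * var1) - 1.0
```

For a pure random walk the statistic hovers near zero; for a series whose levels are independent draws (extreme mean reversion) it approaches -0.5.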
Market Efficiency Findings 📊
- Lo and MacKinlay found that market efficiency could not be rejected for weekly and monthly equity data, but daily equity prices were not efficient.
- Six major USD crosses are more efficient than the S&P 500, indicating fewer arbitrage opportunities.
- USD/CAD is the most efficient currency pair among the six major USD crosses.
Efficiency vs. Frequency ⏱️
- The efficiency of spot instruments decreases (arbitrage opportunities increase) with increases in data sampling frequency.
- For example, EUR/USD spot rate inefficiency is higher when measured at 1-hour intervals than at daily intervals.
Autoregression-Based Tests ⚙️
- Trading strategies perform best in the least efficient markets, where arbitrage opportunities exist.
- Perfectly efficient markets instantaneously incorporate all available information, allowing no dependencies from past price movements.
- Market efficiency can be measured by estimating the explanatory power of past prices.
Mech and Hou-Moskowitz Approach
- Market efficiency can be measured as the difference between Adjusted R² coefficients of:
- An unrestricted model attempting to explain returns with lagged variables.
- A restricted model involving no past data.
Unrestricted Model
r_{i,t} = α_{i} + ∑_{j=1}^{J} β_{i,j} r_{i,t−j} + ε_{i,t}
Where r_{i,t} is the return on security i at time t.
Restricted Model
r_{i,t} = α_{i} + ε_{i,t}, restricting all coefficients β_{i,j} to be 0.
Market Inefficiency Calculation
- Inefficiency is measured as the difference Adjusted R²(unrestricted) − Adjusted R²(restricted).
- The closer the difference is to 0, the smaller the influence of past price movements and the higher the market efficiency.
Martingale Hypothesis 🎲
- In an efficient market, prices fluctuate randomly. The expected price of a security given current information is always the current price itself. This relationship is known as a martingale.
Martingale Condition
E[P_{t+1} | I_t] = P_t
Where:
- E[P_{t+1} | I_t] is the expected price at time t+1 given the information set I_t
- P_t is the price at time t
- {P_t} is a stochastic price process
Abnormal Returns
- Abnormal returns (Z_{t+1}) are returns in excess of expected returns given current information: Z_{t+1} = r_{t+1} − E[r_{t+1} | I_t].
Market Efficiency Condition
A market is efficient when the abnormal return Z_{t+1} is a “fair game”: E[Z_{t+1} | I_t] = 0.
Fama's Suggestion
- The efficient market hypothesis is difficult to test because it contains a joint hypothesis:
- Expected values of returns are a function of information.
- Differences of realized returns from their expected values are random.
Froot and Thaler's Test
- Froot and Thaler derived a specification for a test of market efficiency of a foreign exchange rate.
Uncovered Interest Rate Parity
- The expected change in the (log) spot exchange rate is a function of the interest rates in the countries on either side of the exchange rate and the risk premium ξ_t of the exchange rate:
E[s_{t+1} | I_t] − s_t = r_t^d − r_t + ξ_t
Where:
- s_t is the log spot exchange rate at time t
- r_t is the foreign interest rate
- r_t^d is the domestic interest rate
Martingale Hypothesis
S_{t+1} = E[S_{t+1} | I_t] + u_{t+1}
Where:
- S_{t+1} is the realized spot exchange rate at time t+1
- E[u_{t+1} | I_t] = 0
Test for Market Efficiency
S_{t+1} = α + β S_t + ε_{t+1}
Market efficiency implies α = 0 and β = 1, where the {ε_t} series is independent and identically distributed with mean 0.
Forward Rate Form
S_{t+1} = α + β F_t + υ_{t+1}
Market efficiency (forward-rate unbiasedness) implies α = 0 and β = 1.
Where:
- F_t is the forward rate at time t
- E[υ_t] = 0
- Variance of υ_t is σ²
High-Frequency Specification
Differencing the forward-rate form removes the constant and yields:
ΔS_{t+1} = β ΔF_t + Δυ_{t+1}
Where:
- Mean of Δυ is E[Δυ] = 0
- Variance of Δυ is 2σ²
Cointegration-Based Tests 🧲
- Cointegration between two variables implies systematic predictability. If some market factor X predicts spot exchange rate S, then:
S_t = α + β X_t + ε_t
Where ε_t is stationary and E[ε_t] = 0.
Cointegration Test Specification
The error-correction form of the cointegration test is:
ΔS_t = α ε_{t−1} + β ΔX_{t−1} + γ ΔS_{t−1} + η_t
Where:
- ε_{t−1} is the lagged deviation from the long-term equilibrium relationship
- η_t is an independent, identically distributed error term with mean 0
- α measures the speed of the model’s adjustment to its long-term equilibrium
- β and γ measure the short-term impact of lagged changes in X and S.
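An Engle-Granger-style two-step sketch of such a test on simulated cointegrated series; every number below (seed, sample size, long-run coefficients, noise levels) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Simulate a factor X as a random walk and a spot rate S cointegrated with it:
# S_t = 0.5 + 2.0 * X_t + stationary noise
X = np.cumsum(rng.normal(0.0, 1.0, n))
S = 0.5 + 2.0 * X + rng.normal(0.0, 0.5, n)

# Step 1: estimate the long-run relation S_t = a + b * X_t + eps_t
A = np.column_stack([np.ones(n), X])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, S, rcond=None)
eps = S - (a_hat + b_hat * X)       # deviation from long-run equilibrium

# Step 2: error-correction regression
# dS_t = alpha * eps_{t-1} + beta * dX_{t-1} + gamma * dS_{t-1} + eta_t
dS, dX = np.diff(S), np.diff(X)
y = dS[1:]
Z = np.column_stack([eps[1:-1], dX[:-1], dS[:-1]])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
adjustment_speed = coef[0]          # negative: S reverts toward equilibrium
```

A significantly negative `adjustment_speed` is the systematic predictability the section describes: deviations from the long-run relation forecast subsequent moves in S.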
Speculative vs. Arbitraging Efficiencies
- Speculative efficiency hypothesis: The expected rate of return from speculation in the forward market conditioned on available information is zero.
- Arbitraging efficiency hypothesis: The expected return on a portfolio composed of long one unit of currency and short one future contract on that unit of currency is zero (uncovered interest arbitrage).
Conclusion 🔚
- Market efficiency tests illuminate different aspects of a security’s price and return dependency on other variables.
- Taking advantage of market inefficiency requires understanding the different tests that identified the inefficiency.
- A security may be predictable at one frequency and fully random at another.
- Price changes of two or more securities may be random when considered individually, but the price changes of a combination may be predictable, and vice versa.
💾 Autocorrelation of Order Flows
Like market aggressiveness, order flows exhibit autocorrelation. Many articles discuss this, including:
- Biais, Hillion, and Spatt (1995)
- Foucault (1999)
- Parlour (1998)
- Foucault, Kadan, and Kandel (2005)
- Goettler, Parlour, and Rajan (2005, 2007)
- Rosu (2005)
Ellul, Holden, Jain, and Jennings (2007) interpret short-term autocorrelation in high-frequency order flows as waves of competing order flows responding to current market events within cycles of liquidity depletion and replenishment. They confirm strong positive serial correlation in order flow at high frequencies but find negative order flow correlation at lower frequencies on the New York Stock Exchange.
Hollifield, Miller, and Sandas (2004) test the relationship of the limit order fill rate to profitability conditions on a single Swedish stock, finding asymmetries in investor behavior on the two sides of the market on the Stockholm Stock Exchange.
Foucault, Kadan, and Kandel (2005) and Rosu (2005) make predictions about order flow autocorrelations that support the diagonal autocorrelation effect first documented in Biais, Hillion, and Spatt (1995).
🎯 Conclusion: Market Participants
Understanding the type and motivation of each market participant can unlock profitable trading strategies. For example, understanding whether a particular market participant possesses information about impending market movement may result in immediate profitability from either engaging the trader if he is uninformed or following his moves if he has superior information.
📰 Event Arbitrage
With news reported instantly and trades placed on a tick-by-tick basis, high-frequency strategies are now ideally positioned to profit from the impact of announcements on markets. These high-frequency strategies, which trade on the market movements surrounding news announcements, are collectively referred to as event arbitrage.
This section investigates the mechanics of event arbitrage in the following order:
- Overview of the development process
- Generating a price forecast through statistical modeling of
- Directional forecasts
- Point forecasts
- Applying event arbitrage to corporate announcements, industry news, and macroeconomic news
- Documented effects of events on foreign exchange, equities, fixed income, futures, emerging economies, commodities, and REIT markets
👨🏻💻 Developing Event Arbitrage Trading Strategies
Event arbitrage refers to the group of trading strategies that place trades on the basis of the markets’ reaction to events.
The events may be economic or industry-specific occurrences that consistently affect the securities of interest time and time again. For example, unexpected increases in the Fed Funds rates consistently raise the value of the U.S. dollar, simultaneously raising the rate for USD/CAD and lowering the rate for AUD/USD. The announcements of the U.S. Fed Funds decisions, therefore, are events that can be consistently and profitably arbitraged.
The goal of event arbitrage strategies is to identify portfolios that make positive profit over the time window surrounding each event. The time window is typically a time period beginning just before the event and ending shortly afterwards. For events anticipated ex-ante, such as scheduled economic announcements, the portfolio positions may be opened ahead of the announcement or just after the announcement. The portfolio is then fully liquidated shortly after the announcement.
Trading positions can be held anywhere from a few seconds to several hours and can result in consistently profitable outcomes with low volatilities. The speed of response to an event often determines the trade gain; the faster the response, the higher the probability that the strategy will be able to profitably ride the momentum wave to the post-announcement equilibrium price level. As a result, event arbitrage strategies are well suited for high-frequency applications and are most profitably executed in fully automated trading environments.
Developing an event arbitrage trading strategy harnesses research on equilibrium pricing and leverages statistical tools that assess tick-by-tick trading data and events the instant they are released.
Most event arbitrage strategies follow a three-stage development process:
1. For each event type, identify dates and times of past events in historical data.
2. Compute historical price changes at desired frequencies pertaining to securities of interest and surrounding the events identified in Step 1.
3. Estimate expected price responses based on historical price behavior surrounding past events.
The sources of dates and times for specified events that occurred in the past can be collected from various Internet sites. Most announcements recur at the same time of day and make the job of collecting the data much easier. U.S. unemployment announcements, for example, are always released at 8:30 A.M. Eastern time. Some announcements, such as those of the U.S. Federal Open Markets Committee interest rate changes, occur at irregular times during the day and require greater diligence in collecting the data.
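The three development steps above can be sketched as an event-study average; the window length, helper name, and alignment convention below are my choices, not the book's:

```python
import numpy as np

def average_event_response(prices, event_idx, window=10):
    """Average cumulative return of `prices` around each event time,
    from `window` ticks before to `window` ticks after the event."""
    p = np.asarray(prices, dtype=float)
    paths = []
    for t in event_idx:                       # Step 1: known event times
        if t - window < 0 or t + window >= len(p):
            continue                          # skip events too close to the edges
        segment = p[t - window : t + window + 1]
        # Step 2: cumulative return relative to the window start
        paths.append(segment / segment[0] - 1.0)
    # Step 3: expected price response, tick by tick, averaged over events
    return np.mean(paths, axis=0)
```

Fed on a history of prices and a list of past announcement times, the returned profile is the expected response an event-arbitrage strategy would trade against.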
🗓️ What Constitutes an Event?
The events used in event arbitrage strategies can be any releases of news about economic activity, market disruptions, and other events that consistently impact market prices. Classic financial theory tells us that in efficient markets, the price adjusts to new information instantaneously following a news release.
In practice, market participants form expectations about inflation figures well before the formal statistics are announced. Many financial economists are tasked with forecasting inflation figures based on other continuously observed market variables, such as prices on commodity futures and other market securities. When such forecasts become available, market participants trade securities on the basis of the forecasts, impounding their expectations into prices well before the formal announcements occur.
All events do not have the same magnitude. Some events may have positive and negative impacts on prices, and some events may have more severe consequences than others. The magnitude of an event can be measured as a deviation of the realized event figures from the expectations of the event. The price of a particular stock, for example, should adjust to the net present value of its future cash flows following a higher- or lower-than-expected earnings announcement. However, if earnings are in line with investor expectations, the price should not move.
Similarly, in the foreign exchange market, the level of a foreign exchange pair should change in response to an unexpected change—for example, in the level of the consumer price index (CPI) of the domestic country. If, however, the domestic CPI turns out to be in line with market expectations, little change should occur.
The key objective in the estimation of news impact is the determination of what actually constitutes the unexpected change, or news. The earliest macroeconomic event studies, such as those of Frenkel (1981) and Edwards (1982), considered news to be an out-of-sample error based on the one-step-ahead autoregressive forecasts of the macroeconomic variable in question. The thinking went that most economic news develops slowly over time, and the trend observed during the past several months or quarters is the best predictor of the value to be released on the next scheduled news release day. The news, or the unexpected component of the news release, is then the difference between the value released in the announcement and the expectation formed on the basis of autoregressive analysis.
Researchers such as Eichenbaum and Evans (1993) and Grilli and Roubini (1993) have used the autoregressive framework to predict the decisions of central bankers, including those of the U.S. Federal Reserve.
The main rationale behind the autoregressive predictability of the central bankers’ actions is that the central bankers are not at liberty to make drastic changes to economic variables under their control, given that major changes may trigger large-scale market disruptions. Instead, the central bankers adopt and follow a longer-term course of action, gradually adjusting the figures in their control, such as interest rates and money supply, to lead the economy in the intended direction.
The empirical evidence of the impact of news defined in the autoregressive fashion shows that the framework indeed can be used to predict future movements of securities. Yet the impact is best seen in shorter terms—for example, on intra-day data. Almeida, Goodhart, and Payne (1998) documented a significant effect of macroeconomic news announcements on the USD/DEM exchange rate sampled at five-minute intervals. The authors found that news announcements pertaining to the U.S. employment and trade balance were particularly significant predictors of exchange rates, but only within two hours following the announcement. On the other hand, U.S. non-farm payroll and consumer confidence news announcements caused price momentum lasting 12 hours or more following an announcement.
Lately, surprises in macroeconomic announcements have been measured relative to published averages of economists’ forecasts. For example, every week Barron’s and the Wall Street Journal publish consensus forecasts for the coming week’s announcements. The forecasts are developed from a survey of field economists.
📈 Forecasting Methodologies
Directional and point forecasts are the two approaches to estimating the price response to an announcement. A directional forecast predicts whether the price of a particular security will go up or down, whereas a point forecast predicts the level to which the new price will go.
Directional Forecasts
Directional forecasts of the post-event price movement of the security price can be created using the sign test. The sign test answers the following question: does the security under consideration consistently move up or down in response to announcements of a certain kind?
The sign test assumes that in the absence of the event, the price change, or the return, is equally likely to be positive or negative. When an event occurs, however, the return can be persistently positive or negative, depending on the event. The sign test aims to estimate whether a persistently positive or negative sign of the response to a specific event exists and whether the response is statistically significant. If the sign test produces a statistically significant result, an event arbitrage trading strategy is feasible.
MacKinlay (1997) specifies the following test hypotheses for the sign test:
- The null hypothesis, H₀: p ≤ 0.5, states that the event does not cause consistent behavior in the price of interest—that is, the probability p of the price moving consistently in one direction in response to the event is less than or equal to 50 percent.
- The alternative hypothesis, H₁: p > 0.5, states that the event does cause consistent behavior in the price of the security of interest—in other words, the probability p of the price moving consistently in one direction in response to the event is greater than 50 percent.
We next define N to be the total number of events and let N⁺ denote the number of events that were accompanied by a positive return of the security under consideration. The null hypothesis is rejected, and the price of the security is determined to respond consistently to the event with statistical confidence (1 − α), if the asymptotic test statistic J exceeds the one-sided critical value Φ⁻¹(1 − α), where

J = (N⁺/N − 0.5) · √N / 0.5 ∼ N(0, 1).
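The sign test above can be sketched with Python's standard library. The returns fed to the function are hypothetical placeholders for per-announcement post-event returns; the statistic and critical value follow the asymptotic form described in the text.

```python
from statistics import NormalDist

def sign_test(event_returns, alpha=0.05):
    """One-sided sign test: does the security consistently move up after the event?

    event_returns: post-event returns, one per announcement (hypothetical inputs).
    Returns (J, reject): the test statistic and whether H0 (p <= 0.5) is rejected.
    """
    N = len(event_returns)
    n_pos = sum(1 for r in event_returns if r > 0)
    # J = (N+/N - 0.5) * sqrt(N) / 0.5, asymptotically N(0, 1) under H0
    J = (n_pos / N - 0.5) * N ** 0.5 / 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value (~1.645 at 5%)
    return J, J > z_crit
```

Testing for consistently negative responses is symmetric: count negative rather than positive returns.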
Trading USD/CAD on U.S. Inflation Announcements
The latest figures tracking U.S. inflation are released monthly at 8:30 A.M. on prespecified dates. On release, USD/CAD spot and other USD crosses undergo an instantaneous one-time adjustment, at least in theory. By identifying when and how quickly the adjustments happen in practice, we can construct profitable trading strategies that capture changes in price levels following announcements of the latest inflation figures.
Even to a casual market observer, the movement of USD/CAD at the time inflation figures are announced suggests that the price adjustment may not be instantaneous and that profitable trading opportunities may exist surrounding U.S. inflation announcements. When the sign test is applied to intra-day USD/CAD spot data, it indeed shows that profitable trading opportunities are plentiful. These opportunities, however, exist only at high frequencies.
The first step in identification of profitable trading opportunities is to define the time period from the announcement to the end of the trading opportunity, known as the “event window.” We select data sample windows surrounding the recent U.S. inflation announcements in the tick-level data from January 2002 through August 2008. As all U.S. inflation announcements occur at 8:30 A.M. EST, we define 8 A.M. to 9 A.M. as the trading window and download all of the quotes and trades recorded during that time. We partition the data further into 5-minute, 1-minute, 30-second, and 15-second blocks. We then measure the impact of the announcement on the corresponding 5-minute, 1-minute, 30-second, and 15-second returns of USD/CAD spot.
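The partitioning step can be sketched as follows; the tick format (timestamp, price) and the sample values are hypothetical, and interval returns are computed within each block for simplicity.

```python
from datetime import datetime, timedelta

def interval_returns(ticks, window_start, interval_seconds):
    """Bucket (timestamp, price) ticks into fixed-length intervals and compute
    each interval's return as last price / first price - 1."""
    buckets = {}
    for ts, price in ticks:
        idx = int((ts - window_start).total_seconds() // interval_seconds)
        buckets.setdefault(idx, []).append(price)
    return {i: prices[-1] / prices[0] - 1 for i, prices in sorted(buckets.items())}
```

For the study described above, one would call this with the 8:00 A.M. window start and interval lengths of 300, 60, 30, and 15 seconds.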
According to the purchasing power parity (PPP), a spot exchange rate between domestic and foreign currencies is the ratio of the domestic and foreign inflation rates. When the U.S. inflation rate changes, the deviation disturbs the PPP equilibrium and the USD-based exchange rates adjust to new levels. When the U.S. inflation rate rises, USD/CAD is expected to increase instantaneously, and vice versa. To keep matters simple, in this example we will consider the inflation news in the same fashion as it is announced, ignoring the market’s pre-announcement adjustment to expectations of inflation figures.
The sign test then tells us during which time intervals, if any, the market properly and consistently responds to announcements during our “trading window” from 8 to 9 A.M. The sample includes only days when inflation rates were announced. The summary of the results is presented in Table 12.1.
Looking at 5-minute intervals surrounding the U.S. inflation announcements, it appears that USD/CAD reacts persistently only to decreases in the U.S. inflation rate and that reaction is indeed instantaneous. USD/CAD decreases during the 5-minute interval from 8:25 A.M. to 8:30 A.M. in response to announcements of lower inflation with 95 percent statistical confidence. The response may potentially support the instantaneous adjustment hypothesis; after all, the U.S. inflation news is released at 8:30 A.M., at which point the adjustment to drops in inflation appears to be completed. No statistically significant response appears to occur following rises in inflation.
Higher-frequency intervals tell us a different story—the adjustments occur in short-term bursts. At 1-minute intervals, for example, the adjustment to increases in inflation can now be seen to consistently occur from 8:34 to 8:35 A.M. This post-announcement adjustment, therefore, presents a consistent profit-taking opportunity.
Splitting the data into 30-second intervals, we observe that the number of tradable opportunities increases further. For announcements of rising inflation, the price adjustment now occurs in four 30-second post-announcement intervals. For the announcements showing a decrease in inflation, the price adjustment occurs in one 30-second post-announcement time interval.
Examining 15-second intervals, we note an even higher number of time-persistent trading opportunities. For rising inflation announcements, there are five 15-second periods during which USD/CAD consistently increased in response to the inflation announcement between 8:30 and 9:00 A.M., presenting ready tradable opportunities. Six 15-second intervals consistently accompany falling inflation announcements during the same 8:30 to 9:00 A.M. timeframe.
In summary, as we look at shorter time intervals, we detect a larger number of statistically significant currency movements in response to the announcements. The short-term nature of the opportunities makes them conducive to a systematic (i.e., black-box) approach, which, if implemented knowledgeably, reduces risk of execution delays, carrying costs, and expensive errors in human judgment.
Point Forecasts
Whereas directional forecasts provide insight about the direction of price moves, point forecasts estimate the future value of the price in equilibrium following an announcement. Developing point forecasts involves performing event studies on very specific trading data surrounding the event announcements of interest.
Event studies measure the quantitative impact of announcements on the returns surrounding the news event and are usually conducted as follows:
- The announcement dates, times, and “surprise” changes are identified and recorded. To create useful simulations, the database of events and the prices of securities traded before and after the events should be very detailed, with events categorized carefully and quotes and trades captured at high frequencies. The surprise component can be measured in the following ways:
- As the difference between the realized value and the prediction based on autoregressive analysis
- As the difference between the realized value and the analyst forecast consensus obtained from Bloomberg or Thomson Reuters.
- The returns corresponding to the times of interest surrounding the announcements are calculated for the securities under consideration. For example, if the researcher is interested in evaluating the impact of CPI announcements on the 5-minute change in USD/CAD, the 5-minute change in USD/CAD is calculated from 8:30 A.M. to 8:35 A.M. on historical data on past CPI announcement days. (The 8:30 to 8:35 A.M. interval is chosen for the 5-minute effect of CPI announcements, because the U.S. CPI announcements are always released at 8:30 A.M. ET.)
- The impact of the announcements is then estimated in a simple linear regression:

  Rₜ = α + β·Sₜ + εₜ

  where
  - Rₜ is the vector of returns surrounding the announcement for the security of interest, arranged in the order of announcements;
  - Sₜ is the vector of “surprise” changes in the announcements, arranged in the order of announcements;
  - εₜ is the idiosyncratic error pertaining to news announcements;
  - α is the estimated intercept of the regression, which captures changes in returns due to factors other than announcement surprises; and
  - β measures the average impact of the announcement on the security under consideration.
Changes in equity prices are adjusted by changes in the overall market price to account for the impact of broader market influences on equity values. The adjustment is often performed using the market model of Sharpe (1964):

ARₜ = Rₜ − E[Rₜ]

where E[Rₜ] is the expected equity return estimated over historical data using the market model:

E[Rₜ] = α + β·Rₘ,ₜ

with Rₘ,ₜ denoting the contemporaneous return on a broad market index.
The methodology was first developed by Ball and Brown (1968), and the estimation method to this day delivers statistically significant trading opportunities.
Event arbitrage trading strategies may track macroeconomic news announcements, earnings releases, and other recurring changes in the economic information. During a typical trading day, numerous economic announcements are made around the world. The news announcements may be related to a particular company, industry, or country; or, like macroeconomic news, they may have global consequences. Company news usually includes quarterly and annual earnings releases, mergers and acquisitions announcements, new product launch announcements, and the like. Industry news comprises industry regulation in a particular country, the introduction of tariffs, and economic conditions particular to the industry. Macroeconomic news contains interest rate announcements by major central banks, economic indicators determined from government-collected data, and regional gauges of economic performance.
With the development of information technology such as RSS feeds, alerts, press wires, and news aggregation engines such as Google, it is now feasible to capture announcements the instant they are released. A well-developed automated event arbitrage system captures news, categorizes events, and matches events to securities based on historical analysis.
🛍️ Tradable News
Corporate News
Corporate activity such as earnings announcements, both quarterly and annual, significantly impacts the equity prices of the firms releasing the announcements. Unexpectedly positive earnings typically lift equity prices, and unexpectedly negative earnings often depress corporate stock valuations. Earnings announcements are preceded by analyst forecasts. An announcement that is materially different from the analysts’ consensus forecast results in a rapid adjustment of the security price to its new equilibrium level. The unexpected component of the announcement is computed as the difference between the announced value and the mean or median analyst forecast. The unexpected component is the key variable used in estimating the impact of an event on prices.
Theoretically, equities are priced as present values of the future cash flows of the company, discounted at the appropriate interest rate determined by the Capital Asset Pricing Model (CAPM), the arbitrage pricing theory of Ross (1976), or the investor-specific opportunity cost:

Equity Price = Σₜ E[CFₜ] / (1 + rₜ)ᵗ

where E[CFₜ] is the expected cash flow of the company at a future time t, and rₜ is the discount rate appropriate for discounting time-t dividends to the present. Unexpected changes to earnings generate rapid price responses whereby equity prices quickly adjust to new information about earnings. Significant deviations of earnings from forecasted values can cause large market movements and can even result in market disruptions. To prevent large-scale impacts of earnings releases on the overall market, most earnings announcements are made after the markets close.
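The discounted-cash-flow relation can be illustrated with a short sketch; the cash flows and flat discount rate below are hypothetical:

```python
def equity_price(expected_cash_flows, r):
    """Present value of expected future cash flows discounted at a flat rate r."""
    return sum(cf / (1 + r) ** t
               for t, cf in enumerate(expected_cash_flows, start=1))
```

An upward earnings surprise raises expected cash flows and thus the modeled price, which is the mechanism behind the rapid post-announcement adjustment described above.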
Other firm-level news also affects equity prices. The effect of stock splits, for example, has been documented by Fama, Fisher, Jensen, and Roll (1969), who show that the share prices typically increase following a split relative to their equilibrium price levels.
Event arbitrage models incorporate the observation that earnings announcements affect each company differently. The most widely documented firm-level factors for evaluation include the size of the firm’s market capitalization (for details, see Atiase, 1985; Freeman, 1987; and Fan-fah, Mohd, and Nasir, 2008).
Industry News
Industry news consists mostly of legal and regulatory decisions along with announcements of new products. These announcements reverberate throughout the entire sector and tend to move all securities in that market in the same direction.
Unlike macroeconomic news that is collected and disseminated in a systematic fashion, industry news usually emerges in an erratic fashion. Empirical evidence on regulatory decisions suggests that decisions relaxing rules governing activity of a particular industry result in higher equity values, whereas the introduction of rules constricting activity pushes equity values down. The evidence includes the findings of Navissi, Bowman, and Emanuel (1999), who ascertained that announcements of relaxation or elimination of price controls resulted in an upswing in equity values and that the introduction of price controls depressed equity prices. Boscaljon (2005) found that the relaxation of advertising rules by the U.S. Food and Drug Administration was accompanied by rising equity values.
Macroeconomic News
Macroeconomic decisions are made, and macroeconomic observations reported, by government agencies on a predetermined schedule. Interest rates, for example, are reset by economists at central banks, such as the U.S. Federal Reserve or the Bank of England. On the other hand, variables such as consumer price indices (CPIs) are typically not set but are observed and reported by statistics agencies affiliated with the countries’ central banks.
Other macroeconomic indices are developed by research departments of both for-profit and nonprofit private companies. The ICSC Goldman store sales index, for example, is calculated by the International Council of Shopping Centers (ICSC) and is actively supported and promoted by the Goldman Sachs Group. The index tracks weekly sales at sample retailers and serves as an indicator of consumer confidence: the more confident consumers are about the economy and their future earnings potential, the higher their retail spending and the higher the value of the index. Other indices measure different aspects of economic activity ranging from relative prices of McDonald’s hamburgers in different countries to oil supplies to industry-specific employment levels.
Table 12.2 shows an ex-ante schedule of macroeconomic news announcements for Tuesday, March 3, 2009, a typical trading day. European news is most often released in the morning of the European trading session while North American markets are closed. Most macroeconomic announcements of the U.S. and Canadian governments are distributed in the morning of the North American session that coincides with afternoon trading in Europe. Most announcements from the Asia Pacific region, which includes Australia and New Zealand, are released during the morning trading hours in Asia.
Many announcements are accompanied by “consensus forecasts,” which are aggregates of forecasts made by economists of various financial institutions. The consensus figures are usually produced by major media and data companies, such as Bloomberg LP, that poll various economists every week and calculate average industry expectations.
Macroeconomic news arrives from every corner of the world. The impact on currencies, commodities, equities, and fixed-income and derivative instruments is usually estimated using event studies, a technique that measures the persistent impact of news on the prices of securities of interest.
🌍 Application of Event Arbitrage
Foreign Exchange Markets
Market responses to macroeconomic announcements in foreign exchange were studied by Almeida, Goodhart, and Payne (1998); Edison (1996); Andersen, Bollerslev, Diebold, and Vega (2003); and Love and Payne (2008), among many others.
Edison (1996) studied the impact of macroeconomic news on daily changes in USD-based foreign exchange rates and selected fixed-income securities, and found that foreign exchange reacts most significantly to news about real economic activity, such as non-farm payroll employment figures. In particular, Edison (1996) shows that for every surprise increase of 100,000 in non-farm payroll employment, the USD appreciates by 0.2 percent on average. At the same time, the author documents little impact of inflation on foreign exchange rates.
Andersen, Bollerslev, Diebold, and Vega (2003) conducted their analysis on foreign exchange quotes interpolated based on timestamps to create exact 5-minute intervals. The authors show that average exchange rate levels adjust quickly and efficiently to new levels according to the information releases. Volatility, however, takes longer to taper off after the spike surrounding most news announcements. The authors also document that bad news usually has a more pronounced effect than good news.
Andersen, Bollerslev, Diebold, and Vega (2003) use the consensus forecasts compiled by the International Money Market Services (MMS) as the expected value for estimating the surprise component of news announcements. The authors then model the 5-minute changes in the spot foreign exchange rate Rₜ as follows:

Rₜ = β₀ + Σᵢ βᵢ·Rₜ₋ᵢ + Σₖ Σⱼ βₖⱼ·Sₖ,ₜ₋ⱼ + εₜ

where Rₜ₋ᵢ is the i-period lagged value of the 5-minute spot rate, Sₖ,ₜ₋ⱼ is the surprise component of the kth fundamental variable lagged j periods, and εₜ is an error term with time-varying volatility that incorporates intra-day seasonalities.
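A model of this lagged-response form can be estimated by least squares. The sketch below assumes NumPy and uses synthetic data; the function name, lag counts, and coefficient values are illustrative, not from the study.

```python
import numpy as np

def fit_news_response(R, S, r_lags=1, s_lags=0):
    """OLS sketch of R_t = b0 + sum_i b_i R_{t-i} + sum_{k,j} b_kj S_{k,t-j} + e_t.
    R: (T,) return series; S: (T, K) surprise series (zero when no announcement)."""
    T, K = len(R), S.shape[1]
    start = max(r_lags, s_lags, 1)
    cols = [np.ones(T - start)]                       # intercept b0
    cols += [R[start - i : T - i] for i in range(1, r_lags + 1)]   # lagged returns
    for k in range(K):                                # contemporaneous + lagged surprises
        cols += [S[start - j : T - j, k] for j in range(s_lags + 1)]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), R[start:], rcond=None)
    return beta  # [b0, AR coefficients..., surprise coefficients...]
```

Note that this plain OLS sketch ignores the time-varying error volatility the authors model; it recovers the mean-response coefficients only.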
Andersen, Bollerslev, Diebold, and Vega (2003) estimate the impact of the following variables:
- GDP (advance, preliminary, and final figures)
- Non-farm payroll
- Retail sales
- Industrial production
- Capacity utilization
- Personal income
- Consumer credit
- Personal consumption expenditures
- New home sales
- Durable goods orders
- Construction spending
- Factory orders
- Business inventories
- Government budget deficit
- Trade balance
- Producer price index
- Consumer price index
- Consumer confidence index
- Institute for Supply Management (ISM) index (formerly, the National Association of Purchasing Managers [NAPM] index)
- Housing starts
- Index of leading indicators
- Target Fed Funds rate
- Initial unemployment claims
- Money supply (M1, M2, M3)
- Employment
- Manufacturing orders
- Manufacturing output
- Trade balance
- Current account
- Producer prices
- Wholesale price index
- Import prices
- Money stock M3
Andersen, Bollerslev, Diebold, and Vega (2003) considered the following currency pairs: GBP/USD, USD/JPY, DEM/USD, CHF/USD, and EUR/USD from January 3, 1992 through December 30, 1998. The authors document that all currency pairs responded positively, with 99 percent significance, to surprise increases in the following variables: non-farm payroll employment, industrial production, durable goods orders, trade balance, consumer confidence index, and NAPM index. All the currency pairs considered responded negatively to surprise increases in the initial unemployment claims and money stock M3.
Love and Payne (2008) document that macroeconomic news from different countries affects different currency pairs. Love and Payne (2008) studied the impact of the macroeconomic news originating in the United States, the Eurozone, and the UK on the EUR/USD, GBP/USD, and EUR/GBP exchange-rate pairs. The authors find that the U.S. news has the largest effect on the EUR/USD, while GBP/USD is most affected by the news originating in the UK. Love and Payne (2008) also document the specific impact of the type of news from the three regions on their respective currencies.
| Region of News Origination | Increase in Prices or Money | Increase in Output | Increase in Trade Balance |
|---|---|---|---|
| Eurozone (effect on EUR) | Appreciation | Appreciation | – |
| UK (effect on GBP) | Appreciation | Appreciation | Appreciation |
| U.S. (effect on USD) | Depreciation | Appreciation | Appreciation |
Equity Markets
A typical trading day is filled with macroeconomic announcements, both domestic and foreign. How does the macroeconomic news impact equity markets?
According to classical financial theory, changes in equity prices are due to two factors: changes in expected earnings of publicly traded firms, and changes in the discount rates associated with those firms. Expected earnings may be affected by changes in market conditions. For example, increasing consumer confidence and consumer spending are likely to boost retail sales, uplifting earnings prospects for retail outfits. Rising labor costs, on the other hand, may signal tough business conditions and decrease earnings expectations as a result.
The discount rate in classical finance is, at its bare minimum, determined by the level of the risk-free rate and the idiosyncratic riskiness of a particular equity share. The risk-free rate pertinent to U.S. equities is often proxied by the 3-month bill issued by the U.S. Treasury; the risk-free rate significant to equities in another country is taken as the short-term target interest rate announced by that country’s central bank. The lower the risk-free rate, the lower the discount rate of equity earnings and the higher the theoretical prices of equities.
How does macroeconomic news affect equities in practice? Ample empirical evidence shows that equity prices respond strongly to interest rate announcements and, in a less pronounced manner, to other macroeconomic news. Decreases in both long-term and short-term interest rates indeed positively affect monthly stock returns with 90 percent statistical confidence for long-term rates and 99 percent confidence for short-term rates. Cutler, Poterba, and Summers (1989) analyzed monthly NYSE stock returns and found that, specifically, for every 1 percent decrease in the yield on 3-month Treasury bills, monthly equity returns on the NYSE increased by 1.23 percent on average in the 1946–1985 sample.
Stock reaction to non-monetary macroeconomic news is usually mixed. Positive inflation shocks tend to induce lower stock returns independent of other market conditions (see Pearce and Roley, 1983, 1985 for details). Several other macroeconomic variables produce reactions conditional on the contemporary state of the business cycle. Higher-than-expected industrial production figures are good news for the stock market during recessions but bad news during periods of high economic activity, according to McQueen and Roley (1993).
Similarly, unexpected changes in unemployment statistics were found to cause reactions dependent on the state of the economy. For example, Orphanides (1992) finds that returns increase when unemployment rises, but only during economic expansions. During economic contractions, returns drop following news of rising unemployment. Orphanides (1992) attributes the asymmetric response of equities to the overheating hypothesis: when the economy is overheated, increase in unemployment actually presents good news. The findings have been confirmed by Boyd, Hu, and Jagannathan (2005).
The asymmetric response to macroeconomic news is not limited to the U.S. markets. Löflund and Nummelin (1997), for instance, observe the asymmetric response to surprises in industrial production figures in the Finnish equity market; they found that higher-than-expected production growth bolsters stocks in sluggish states of the economy.
Whether or not macroeconomic announcements move stock prices, the announcements are usually surrounded by increases in market volatility. While Schwert (1989) pointed out that stock market volatility is not necessarily related to volatility of other macroeconomic factors, surprises in macroeconomic news have been shown to significantly increase market volatility. Bernanke and Kuttner (2005), for example, show that an unexpected component in the interest rate announcements of the U.S. Federal Open Market Committee (FOMC) increases equity return volatility. Connolly and Stivers (2005) document spikes in the volatility of equities constituting the Dow Jones Industrial Average (DJIA) in response to U.S. macroeconomic news.
Higher volatility implies higher risk, and financial theory tells us that higher risk should be accompanied by higher returns. Indeed, Savor and Wilson (2008) show that equity returns on days with major U.S. macroeconomic news announcements are higher than on days when no major announcements are made. Savor and Wilson (2008) consider news announcements to be “major” if they are announcements of Consumer Price Index (CPI), Producer Price Index (PPI), employment figures, or interest rate decisions of the FOMC.
Veronesi (1999) shows that investors are more sensitive to macroeconomic news during periods of higher uncertainty, which drives asset price volatility. In the European markets, Errunza and Hogan (1998) found that monetary and real macroeconomic news has considerable impact on the volatility of the largest European stock markets.
Different sources of information appear to affect equities at different frequencies. The macroeconomic impact on equity data appears to increase with the increase in frequency of equity data. Chan, Karceski, and Lakonishok (1998), for example, analyzed monthly returns for U.S. and Japanese equities in an arbitrage pricing theory setting and found that idiosyncratic characteristics of individual equities are most predictive of future returns at low frequencies. By using factor-mimicking portfolios, Chan, Karceski, and Lakonishok (1998) show that size, past return, book-to-market ratio, and dividend yield of individual equities are the factors that move in tandem (“covary”) most with returns of corresponding equities. However, Chan, Karceski, and Lakonishok (1998, p. 182) document that “the macroeconomic factors do a poor job in explaining return covariation” at monthly return frequencies. Wasserfallen (1989) finds no impact of macroeconomic news on quarterly equities data.
Flannery and Protopapadakis (2002) found that daily returns on U.S. equities are significantly impacted by several types of macroeconomic news. The authors estimated a GARCH return model with macroeconomic announcements as independent variables and found that the following announcements have significant influence on both equity returns and volatility: consumer price index (CPI), producer price index (PPI), monetary aggregate, balance of trade, employment report, and housing starts figures.
Ajayi and Mehdian (1995) document that foreign stock markets in developed countries typically overreact to the macroeconomic news announcements from the United States. As a result, foreign equity markets tend to be sensitive to the USD-based exchange rates and domestic account balances. Sadeghi (1992), for example, notes that in the Australian markets, equity returns increased in response to increases in the current account deficit, the AUD/USD exchange rate, and the real GDP; equity returns decreased following news of rising domestic inflation or interest rates.
Stocks of companies from different industries have been shown to react differently to macroeconomic announcements. Hardouvelis (1987), for example, pointed out that stocks of financial institutions exhibited higher sensitivity to announcements of monetary adjustments. The extent of market capitalization appears to matter as well. Li and Hu (1998) show that stocks with large market capitalization are more sensitive to macroeconomic surprises than are small-cap stocks.
The size of the surprise component of the macroeconomic news impacts equity prices. Aggarwal and Schirm (1992), for example, document that small surprises, those within one standard deviation of the average, caused larger changes in equities and foreign exchange markets than did large surprises.
Fixed-Income Markets
Jones, Lamont, and Lumsdaine (1998) studied the effect of employment and producer price index data on U.S. Treasury bonds. The authors find that while the volatility of the bond prices increased markedly on the days of the announcements, the volatility did not persist beyond the announcement day, indicating that the announcement information is incorporated promptly into prices.
Hardouvelis (1987) and Edison (1996) note that employment figures, producer price index (PPI), and consumer price index (CPI) move bond prices. Krueger (1996) documents that a decline in U.S. unemployment causes higher yields in bills and bonds issued by the U.S. Treasury.
High-frequency studies of the bond market responses to macroeconomic announcements include those by Ederington and Lee (1993); Fleming and Remolona (1997, 1999); and Balduzzi, Elton, and Green (2001).

🧪 Regression Testing
⚙️ Use Case Testing
Use case testing refers to the process of testing the system according to the system performance guidelines defined during the design stage of the system development. A dedicated tester follows the steps of using the system and documents any discrepancies between the observed behavior and the behavior that is supposed to occur.
- Ensures the system operates within its parameters.
💾 Data Set Testing
Data set testing refers to testing the validity of the data, whether historical data used in a back test or real-time data obtained from a streaming data provider.
- The objective of data testing is to ascertain that the system minimizes undesirable influences and distortions in the data and to ensure that run-time analysis and trading signal generation work smoothly.
- Built on the premise that all data received for a particular security should fall into a statistical distribution that is consistent through time.
- The data should also exhibit consistent distributional properties when sampled at different frequencies.
- Example: 1-minute data for USD/CAD should be consistent with the historical 1-minute data distribution for USD/CAD observed over the past year.
- Distributions are allowed to change with time, but the observed changes should not be drastic unless they are caused by a large-scale market disruption.
- A popular procedure for testing data is based on testing for consistency of autocorrelations:
- A data set is sampled at a given frequency.
- Autocorrelations are estimated for a moving window of 30 to 1,000 observations.
- The obtained autocorrelations are then mapped into a distribution; outliers are identified, and their origin is examined.
- The distributional properties can be analyzed further to answer the following questions:
- Have the properties of the distribution changed during the past month, quarter, or year?
- Are these changes due to the version of the code or to the addition or removal of programs on the production box?
- The testing should be repeated at different sampling frequencies to ensure that no systemic deviations occur.
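The moving-window autocorrelation check described above can be sketched as follows. This is a minimal illustration, not a prescription from the text: the lag-1 choice, the 30-observation window, and the z-score outlier rule are all assumptions.

```python
import statistics

def lag1_autocorrs(returns, window=30):
    """Lag-1 autocorrelation of `returns` over each moving window."""
    acs = []
    for i in range(len(returns) - window + 1):
        w = returns[i:i + window]
        mean = statistics.fmean(w)
        num = sum((w[t] - mean) * (w[t - 1] - mean) for t in range(1, window))
        den = sum((x - mean) ** 2 for x in w)
        acs.append(num / den if den else 0.0)
    return acs

def flag_outliers(acs, z=4.0):
    """Window indices whose autocorrelation lies more than z standard
    deviations from the mean autocorrelation: candidates for data review."""
    mu = statistics.fmean(acs)
    sd = statistics.pstdev(acs)
    return [] if sd == 0 else [i for i, a in enumerate(acs) if abs(a - mu) > z * sd]
```

Flagged windows would then be examined for their origin (bad ticks, feed outages, genuine market disruptions) before the data is accepted for back-testing.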
🧩 Unit Testing
Unit testing verifies that each individual software component of the system works properly.
- A unit is a testable part of an application.
- The definition of a unit can range from the code for the lowest function or method to the functionality of a medium-level component.
- Example: A latency measurement component of the post-trade analysis engine.
- Testing code in small blocks from the ground up ensures that any errors are caught early in the integration process, avoiding expensive system disruptions at later stages.
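As an illustration, a unit test for a hypothetical latency measurement function might look like the sketch below. The function name and its behavior are invented for this example; they are not taken from the text.

```python
def round_trip_latency_ms(sent_ns, acked_ns):
    """Round-trip latency in milliseconds between order send and acknowledgement.
    (Hypothetical component of a post-trade analysis engine.)"""
    if acked_ns < sent_ns:
        raise ValueError("acknowledgement precedes send")
    return (acked_ns - sent_ns) / 1_000_000

# Unit tests exercise the component in isolation, from the ground up.
def test_latency_is_computed_in_milliseconds():
    assert round_trip_latency_ms(1_000_000, 3_500_000) == 2.5

def test_reversed_timestamps_are_rejected():
    try:
        round_trip_latency_ms(10, 5)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Each such test covers one small block of behavior, so a defect surfaces at the unit level rather than during integration.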
🤝 Integration Testing
Integration testing is a test of the interoperability of code components; the test is administered to increasingly larger aggregates of code as the system is being built up from modular pieces to its completed state.
- Follows unit testing.
- Ensures that any code defects are caught and fixed early.
⚙️ System Testing
System testing is a post-integration test of the system as a whole.
- Incorporates several testing processes:
- Graphical user interface (GUI) software testing: Ensures that the human interface of the system enables the user to perform their tasks.
- Typically ensures that all the buttons and displays that appear on screen are connected with the proper functionality according to the specifications developed during the design phase of the development process.
- Usability and performance testing: Similar in nature to GUI testing but is not limited to graphical user interfaces and may include such concerns as the speed of a particular functionality.
- Example: How long does the system take to process a "system shutdown" request? Is the timing acceptable from a risk management perspective?
- Stress testing: Attempts to document and quantify the impact of extreme hypothetical scenarios on the system's performance.
- Example: How does the system react if the price of a particular security drops 10 percent within a very short time? What if an act of God occurs that shuts down the exchange, leaving the system holding its positions?
- Security testing: Designed to identify possible security breaches and to either provide a software solution for overcoming the breaches or create a breach-detection mechanism and a contingency plan in the event a breach occurs.
- High-frequency trading systems can be vulnerable to security threats coming from the Internet, where unscrupulous users may attempt to hijack account numbers, passwords, and other confidential information in an attempt to steal trading capital.
- Intra-organizational threats should not be underestimated; employees with malicious intent or disgruntled workers having improper access to the trading system can wreak considerable and costly havoc.
- Scalability testing: Refers to testing the capacity of the system.
- How many securities can the system profitably process at the same time without incurring significant performance impact?
- Every incremental security added to the system requires an allocation of computer power and Internet bandwidth.
- A large number of securities processed simultaneously on the same machine may considerably slow down system performance, distorting quotes, trading signals, and the P&L as a result.
- A determination of the maximum permissible number of securities will be based on the characteristics of each trading platform, including available computing power.
- Reliability testing: Determines the probable rate of failure of the system.
- What are the conditions under which the system fails? How often can we expect these conditions to occur?
- The failure conditions may include unexpected system crashes, shutdowns due to insufficient memory space, and anything else that leads the system to stop operating.
- The failure rate for any well-designed high-frequency trading system should not exceed 0.01 percent (i.e., the system should be guaranteed to remain operational 99.99 percent of the time).
- Recovery testing: Refers to verification that in an adverse event, whether an act of God or a system crash, the documented recovery process ensures that the system's integrity is restored and it is operational within a prespecified time.
- Ensures that data integrity is maintained through unexpected terminations of the system.
- Scenarios:
- When the application is running and the computer system is suddenly restarted, the application should have valid data upon restart.
- The application should continue operating normally if the network cable is unexpectedly unplugged and then plugged back in.
🎯 Determining Risk Management Goals
- The primary objective of risk management is to limit potential losses.
- Competent and thorough risk management in a high-frequency setting is especially important, given that large-scale losses can mount quickly at the slightest shift in behavior of trading strategies.
- The losses may be due to a wide range of events, such as unforeseen trading model shortcomings, market disruptions, acts of God, compliance breaches, and similar adverse conditions.
- To effectively manage risk, an organization first needs to create clear and effective processes for measuring risk.
- The risk management goals, therefore, should set concrete risk measurement methodologies and quantitative benchmarks for risk tolerance associated with different trading strategies as well as with the organization as a whole.
- Expressing the maximum allowable risk in numbers is difficult, and obtaining organization-wide agreement on the subject is even more challenging, but the process pays off over time through quick and efficient daily decisions and the resulting low risk.
- A thorough goal-setting exercise should achieve senior management consensus with respect to the following questions:
- What are the sources of risk the organization faces?
- What is the extent of risk the organization is willing to undertake? What risk/reward ratios should the organization target? What is the minimum acceptable risk/reward ratio?
- What procedures should be followed if the acceptable risk thresholds are breached?
- The sources of risk should include the risk of trading losses, as well as credit and counterparty risk, liquidity risk, operational risk, and legal risk.
- Market risk is the risk induced by price movements of market securities.
- Credit and counterparty risk addresses the ability and intent of trading counterparties to uphold their obligations.
- Liquidity risk measures the ability of the trading operation to quickly unwind positions.
- Operational risk enumerates possible financial losses embedded in daily trading operations.
- Legal risk refers to all types of contract frustration.
- A successful risk management practice identifies risks pertaining to each of these risk categories.
- Every introductory finance textbook notes that higher returns, on average, are obtained with higher risk.
- Yet, while riskier returns are on average higher across the entire investing population, some operations with risky exposures obtain high gains, and others suffer severe losses.
- A successful risk management process should establish the risk budget that the operation is willing to take in the event that the operation ends up on the losing side of the equation.
- The risks should be quantified as worst-case scenario losses tolerable per day, week, month, and year and should include operational costs, such as overhead and personnel costs.
- Examples of the worst-case losses to be tolerated may be 10 percent of organizational equity per month or a hard dollar amount, for example $150 million per fiscal year.
- Once senior management has agreed to the goals of risk management, it becomes necessary to translate the goals into risk processes and organizational structures.
- Processes include development of a standardized approach for review of individual trading strategies and the trading portfolio as a whole.
- Structures include a risk committee that meets regularly, reviews trading performance, and discusses the firm's potential exposure to risks from new products and market developments.
- The procedures for dealing with breaches of established risk management parameters should clearly document step-by-step actions.
- Corporate officers should be appointed as designated risk supervisors responsible for following risk procedures.
- The procedures should be written for dealing with risk breaches not if, but when, they occur.
- Documented step-by-step action guidelines are critical; academic research has shown that the behavior of investment managers becomes even riskier when they are incurring losses.
- Previously agreed-on risk management procedures eliminate organizational conflicts in times of crisis, when unified and speedy action is most necessary.
📏 Measuring Risk
- While all risk is quantifiable, the methodology for measuring risk depends on the type of risk under consideration.
- The Basel Committee on Banking Supervision identifies the following types of risk affecting financial securities:
- Market risk: induced by price movements of market securities
- Credit and counterparty risk: addresses the ability and intent of trading counterparties to uphold their obligations
- Liquidity risk: the ability of the trading operation to quickly unwind positions
- Operational risk: the risk of financial losses embedded in daily trading operations
- Legal risk: the risk of litigation expenses
- All current risk measurement approaches fall into four categories:
- Statistical models
- Scalar models
- Scenario analysis
- Causal modeling
- Statistical models generate predictions about worst-case future conditions based on past information.
- The Value-at-Risk (VaR) methodology is the most common statistical risk measurement tool.
- Statistical models are the preferred methodology of risk estimation whenever statistical modeling is feasible.
- Scalar models establish the maximum foreseeable loss levels as percentages of business parameters, such as revenues, operating costs, and the like.
- The parameters can be computed as averages over several days, weeks, months, or even years of a particular business variable, depending on the time frame most suitable for each parameter.
- Scalar models are frequently used to estimate operational risk.
- Scenario analysis determines the base, best, and worst cases for the key risk indicators (KRIs).
- The values of the KRIs for each scenario are determined as hard dollar quantities and are used to quantify all types of risk.
- Scenario analysis is often referred to as a "stress test."
- Causal modeling involves identification of causes and effects of potential losses.
- A dynamic simulation model incorporating relevant causal drivers is developed based on expert opinions.
- The simulation model can then be used to measure and manage credit and counterparty risk, as well as operational and legal risks.
📉 Measuring Market Risk
Market risk refers to the probability of and the expected value of a decrease in market value due to market movements. It is the risk of loss of capital due to an adverse price movement in any securities.
- Many securities can be affected by changes in prices of other, seemingly unrelated, securities.
- To accurately estimate the risk of a given trading system, it is necessary to have a reasonably complete idea of the returns generated by the trading system.
- The returns are normally described in terms of distributions.
- The preferred distributions of returns are obtained from running the system on live capital.
- A back-test obtained from running the model over at least two years of tick data can also be used as a sample distribution of trade returns.
- However, the back-test distribution alone may be misleading because it may fail to account for all the extreme returns and hidden costs that occur when the system is trading live.
- Once the return distributions have been obtained, the risk metrics are most often estimated using statistical models, and VaR in particular.
Value-at-Risk (VaR)
- The concept of Value-at-Risk (VaR) has emerged as the dominant metric in market risk estimation.
- The VaR framework spans two principal measures:
- VaR itself
- The expected shortfall (ES)
- VaR: The value of the loss should a negative scenario with the specified probability occur.
- The probability of the scenario is determined as a percentile of the distribution of historical scenarios, which can be strategy or portfolio returns.
- For example, if the scenarios are returns from a particular strategy and all the returns are arranged by their realized value in ascending order from the worst to the best, then the 95 percent VaR corresponds to the cutoff return at the lowest fifth percentile.
- Expected shortfall (ES): The average worst-case scenario among all scenarios at or below the prespecified threshold.
- For example, a 95 percent ES is the average return among all returns at the 5 percent or lower percentile.
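The percentile definitions above translate directly into a computation over a return sample. A minimal sketch (the cutoff convention for very small samples is an assumption):

```python
def historical_var_es(returns, confidence=0.95):
    """Historical VaR and expected shortfall (ES) from a return sample.

    Returns are sorted from worst to best; VaR is the cutoff return at the
    (1 - confidence) percentile, and ES is the average of all returns at or
    below that cutoff.
    """
    if not returns:
        raise ValueError("empty return sample")
    worst_first = sorted(returns)
    k = max(1, int(len(worst_first) * (1 - confidence)))
    tail = worst_first[:k]
    return tail[-1], sum(tail) / len(tail)
```

For 100 observed returns at 95 percent confidence, the tail holds the 5 worst returns: VaR is the best of those 5, and ES is their average.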
-
An analytical approximation to true VaR can be found by parameterizing the sample distribution.
- The parametric VaR assumes that the observations are distributed in a normal fashion.
- Specifically, the parametric VaR assumes that the 5 percent in the left tail of the observations fall at of the distribution, where and represent the mean and standard deviation of the observations, respectively.
- The 95 percent parametric VaR is then computed as , while the 95 percent parametric ES is computed as the average of all distribution values from –∞ to .
- Similarly, the 99 percent parametric VaR is computed as , while the 99 percent parametric ES is computed as the average of all distribution values from –∞ to .
- The parametric VaR is an approximation of the true VaR; the applicability of the parametric VaR depends on how close the sample distribution resembles the normal distribution.
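A minimal sketch of the parametric calculation, using the standard normal left-tail quantiles 1.645 (95 percent) and 2.326 (99 percent):

```python
import statistics

# standard normal left-tail quantiles
Z_LEFT_TAIL = {0.95: 1.645, 0.99: 2.326}

def parametric_var(returns, confidence=0.95):
    """Parametric (normal) VaR: mu - z * sigma of the observed returns."""
    mu = statistics.fmean(returns)
    sigma = statistics.pstdev(returns)
    return mu - Z_LEFT_TAIL[confidence] * sigma
```

The quality of the approximation degrades as the sample departs from normality, which is the motivation for the extreme value methods discussed next.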
Extreme Value Theory (EVT)
- While the VaR and ES metrics summarize the location and the average of many worst-case scenarios, neither measure indicates the absolute worst scenario that can destroy entire trading operations, banks, and markets.
- Most financial return distributions have fat tails, meaning that the most extreme events lie beyond normal distribution bounds and can be truly catastrophic.
- VaR has been described as "relatively useless as a risk-management tool and potentially catastrophic when its use creates a false sense of security among senior managers and watchdogs" and even as "a fraud."
- Despite all the criticism, VaR and ES have been mainstays of corporate risk management for years, as they provide convenient reporting numbers.
- To alleviate the shortcomings of VaR, many quantitative outfits began to parameterize the extreme tail distribution to develop fuller pictures of extreme losses.
- Once the tail is parameterized based on the available data, the worst-case extreme events can be determined analytically from distributional functions, even though no extreme events of comparable severity were ever observed in the sample data.
- The parameterization of the tails is performed using extreme value theory (EVT), an umbrella term spanning a range of tail modeling functions.
-
All fat-tailed distributions belong to the family of Pareto distributions.
-
A Pareto distribution family is described as follows:
$G(x) = \begin{cases} 0 & x ≤ 0 \\ exp(-x^{-α}) & x > 0, α > 0 \end{cases}$- where the tail index is the parameter that needs to be estimated from the return data.
- For raw security returns, the tail index varies from financial security to financial security.
- Even for raw returns of the same financial security, the tail index can vary from one quoting institution to another, especially for really high-frequency estimations.
-
-
When the tail index is determined, we can estimate the magnitude and probability of all the extreme events that may occur, given the extreme events that did occur in the sample.
- Sample return observations obtained from either a back test or live results are arranged in ascending order.
- The tail index value is estimated on the bottom 5 percentile of the sample return distribution.
- Using the distribution function obtained with the tail index, the probabilities of observing the extreme events are estimated.
-
-
The tail index approach allows us to deduce the unobserved return distributions from the sample distributions of observed returns.
- Although the tail index approach is useful, it has its limitations.
- The tail index approach "fills in" the data for the observed returns with theoretical observations.
- If the sample tail distribution is sparse (and it usually is), the tail index distribution function may not be representative of the actual extreme returns.
- In such cases, a procedure known as "parametric bootstrapping" may be applicable.
- Parametric bootstrap simulates observations based on the properties of the sample distribution. The technique "fills in" unobserved returns based on observed sample returns.
- Although the tail index approach is useful, it has its limitations.
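One common estimator for the tail index from the bottom tail of a return sample is the Hill estimator; the text does not name a specific estimator, so this is a sketch under the assumption that losses follow a Pareto-type tail, with the 5 percent tail fraction taken from the procedure above:

```python
import math

def hill_tail_index(returns, tail_fraction=0.05):
    """Hill estimator of the tail index on the worst `tail_fraction` of returns.

    Negative returns are converted to positive loss magnitudes; the estimator
    uses the k largest losses relative to the (k+1)-th largest loss.
    """
    losses = sorted((-r for r in returns if r < 0), reverse=True)
    k = max(2, int(len(returns) * tail_fraction))
    if len(losses) <= k:
        raise ValueError("not enough tail observations")
    threshold = losses[k]
    return k / sum(math.log(losses[i] / threshold) for i in range(k))
```

A smaller estimated index indicates a fatter tail and hence more severe potential extremes.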
Parametric Bootstrap
- The parametric bootstrap process works as follows:
- The sample distribution of observed returns delivered by the manager is decomposed into three components using a basic market model:
- The manager's skill, or alpha
- The manager's return due to the manager's portfolio correlation with the benchmark
- The manager's idiosyncratic error
- The decomposition is performed using the standard market model regression:
$R_{i,t} = \alpha_i + \beta_{i,x}R_{x,t} + \varepsilon_t$
- where $R_{i,t}$ is the manager's raw return in period t, $R_{x,t}$ is the raw return on the chosen benchmark in period t, $\alpha_i$ is the measure of the manager's money management skill or alpha, and $\beta_{i,x}$ is a measure of the dependency of the manager's raw returns on the benchmark returns.
- Once the parameters $\alpha_i$ and $\beta_{i,x}$ are estimated, three pools of data are generated: one for $\hat{\alpha}_i$ (constant for a given manager, benchmark, and return sample), one for $R_{x,t}$, and one for $\varepsilon_t$.
- Next, the data is resampled as follows:
- a. A value $\varepsilon_t^S$ is drawn at random from the pool of idiosyncratic errors.
- b. Similarly, a value $R_{x,t}^S$ is drawn at random from the pool of benchmark returns.
- c. A new sample value is created as follows:
$\hat{R}_{i,t}^S = \hat{\alpha}_i + \hat{\beta}_{i,x}R_{x,t}^S + \varepsilon_t^S$
- d. The sampled variables $R_{x,t}^S$ and $\varepsilon_t^S$ are returned to their pools (not eliminated from the sample).
- The resampling process outlined in steps a–d is then repeated a large number of times, enough to gain a better perspective on the distribution of the tails.
- As a rule of thumb, the resampling process should be repeated at least as many times as there were observations in the original sample.
- It is not uncommon for the bootstrap process to be repeated thousands of times.
- The resampled values can differ from the observed sample distribution, thus expanding the sample data set with extra observations conforming to the properties of the original sample.
- The new distribution values obtained through the parametric bootstrap are then treated like other sample values and are incorporated into the tail index, VaR, and other risk management calculations.
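The steps above can be sketched as follows. This is a minimal illustration: the OLS fit and the resampling loop follow the market-model decomposition and steps a–d, and all function and variable names are illustrative.

```python
import random
import statistics

def parametric_bootstrap(manager_rets, benchmark_rets, n_draws, seed=0):
    """Resample manager returns from the market-model decomposition
    R_i = alpha + beta * R_x + eps, drawing benchmark returns and
    residuals with replacement from their respective pools."""
    # OLS estimates of alpha and beta
    mx = statistics.fmean(benchmark_rets)
    mi = statistics.fmean(manager_rets)
    beta = (sum((x - mx) * (y - mi) for x, y in zip(benchmark_rets, manager_rets))
            / sum((x - mx) ** 2 for x in benchmark_rets))
    alpha = mi - beta * mx
    residuals = [y - alpha - beta * x for x, y in zip(benchmark_rets, manager_rets)]
    rng = random.Random(seed)
    # steps a-d: draw a residual and a benchmark return, recombine, replace
    return [alpha + beta * rng.choice(benchmark_rets) + rng.choice(residuals)
            for _ in range(n_draws)]
```

The resulting simulated returns can be pooled with the observed sample before recomputing the tail index, VaR, and ES.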
- The parametric bootstrap relies on the assumption that the raw returns' dependence on a benchmark, as well as the manager's alpha, remains constant through time.
- This does not have to be the case.
- Managers with dynamic strategies spanning different asset classes are likely to have time-varying dependencies on several benchmarks.
- Despite this shortcoming, the parametric bootstrap allows risk managers to glean a fuller notion of the true distribution of returns given the distribution of returns observed in the sample.
- To incorporate portfolio managers' benchmarks into the VaR framework, one can analyze the "tracking error" of the manager's return in excess of the benchmark.
- Tracking error is the contemporaneous difference between the manager's return and the return on the manager's benchmark index:
$TE_t = \ln(R_{i,t}) - \ln(R_{X,t})$
- where $R_{i,t}$ is the manager's return at time t and $R_{X,t}$ is the return on the manager's benchmark, also at time t.
- The VaR parameters are then estimated on the tracking error observations.
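A minimal sketch combining the tracking-error definition with a historical VaR cutoff; gross returns (1 + r) are assumed as inputs so the logarithms are defined:

```python
import math

def tracking_error_var(manager_gross, benchmark_gross, confidence=0.95):
    """Historical VaR of the tracking error TE_t = ln(R_i,t) - ln(R_X,t).

    Inputs are gross returns (1 + r); the VaR is the cutoff tracking error
    at the (1 - confidence) percentile of the TE observations.
    """
    te = sorted(math.log(ri) - math.log(rx)
                for ri, rx in zip(manager_gross, benchmark_gross))
    k = max(1, int(len(te) * (1 - confidence)))
    return te[k - 1]
```

A strongly negative value indicates periods in which the manager underperformed the benchmark badly, which is the risk this measure is meant to capture.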
- In addition to VaR, statistical models may include Monte Carlo simulation–based methods to estimate future market values of capital at risk.
- Monte Carlo simulations are often used in determining derivatives exposure.
- Scenario analyses and causal models can be used to estimate market risk as well.
- These auxiliary types of market risk estimation, however, rely heavily on qualitative assessment and can, as a result, be misleading in comparison with VaR estimates, which are based on realized historical performance.
🏦 Measuring Credit and Counterparty Risk
Credit and counterparty risk reflects the probability of financial loss should one party in the trading equation not live up to its obligations.
- An example of losses due to a counterparty failure is a situation in which a fund's money is custodied with a broker-dealer, and the broker-dealer goes bankrupt.
- Credit risk is manifest in decisions to extend lines of credit or margins.
- Credit risk determines the likelihood that creditors will default on their margin calls, should they encounter any.
- In structured products, credit risk measures the likelihood and the impact of a default by the product underwriter, called the reference entity.
- The measurement of credit and counterparty risk has traditionally been delegated to dedicated third-party agencies that use statistical analysis overlaid with scenario and causal modeling.
- As credit and counterparty data become increasingly available, it may make good sense for firms to statistically rate their counterparties internally.
- Entities with publicly traded debt are the easiest counterparties to rank.
- The lower the creditworthiness of the entity, the lower the market price of the senior debt issued by the entity and the higher the yield the entity has to pay to attract investors.
- The spread, or the difference between the yield on the debt of the entity under consideration and the yield on government debt with comparable maturity, is a solid indicator of the creditworthiness of the counterparty.
- The higher the spread, the lower the creditworthiness of the counterparty.
- Because yields and spreads are inversely related to the prices of bonds, the creditworthiness of a counterparty can also be measured on the basis of the firm's relative bond price: the lower the bond price, the higher the yield and the lower the creditworthiness.
- Market prices of corporate debt provide objective information about the issuer's creditworthiness.
- The prices are determined by numerous market participants analyzing the firms' strategies and financial prospects and arriving at their respective valuations.
- A diversification of counterparties is the best way to protect the operation from credit and counterparty risk.
- The creditworthiness of private entities with unobservable market values of obligations can be approximated by that of a public firm with matching factors.
- The matching factors should include the industry, geographic location, annual earnings of the firms to proxy for the firms' sizes, and various accounting ratios, such as the quick ratio to assess short-term solvency.
- Once a close match with publicly traded debt is found for the private entity under evaluation, the spread on the senior debt of the publicly traded firm is used in place of that for the evaluated entity.
In addition to the relative creditworthiness score, the firms may need to obtain a VaR-like number to measure credit and counterparty risk.
-
This number is obtained as an average of exposure to each counterparty weighted by the counterparty's relative probability of default:
$CCExposure = \sum_{i=1}^N Exposure_i \cdot PD_i$-
where CCExposure is the total credit and counterparty exposure of the organization, N is the total number of counterparties of the organization, is the dollar exposure of the ith counterparty, and is the probability of default of the ith counterparty:
$PD_i = \frac{100 - (Creditworthiness \ Rank)}{100}\%$
-
-
The total credit and counterparty exposure is then normalized by the capital of the firm and added to the aggregate VaR number.
-
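The weighted-exposure sum translates directly into code. A sketch, under the assumption that the creditworthiness rank is on a 0–100 scale and that PD is taken as the fraction (100 − rank) / 100:

```python
def credit_counterparty_exposure(exposures, ranks):
    """CCExposure = sum_i Exposure_i * PD_i, with PD_i = (100 - rank_i) / 100.

    `exposures` are dollar exposures per counterparty; `ranks` are
    creditworthiness ranks on a 0-100 scale (100 = most creditworthy).
    """
    if len(exposures) != len(ranks):
        raise ValueError("one rank per counterparty required")
    return sum(e * (100 - r) / 100 for e, r in zip(exposures, ranks))
```

The resulting dollar figure can then be normalized by the firm's capital before being added to the aggregate VaR number.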
💧 Measuring Liquidity Risk
Liquidity risk measures the firm's potential inability to unwind or hedge positions in a timely manner at current market prices. It can result in minor price slippages due to the delay in trade execution and can cause collapses of market systems in its extreme.
- The inability to close out positions is normally due to low levels of market liquidity relative to the position size.
- The lower the market liquidity available for a specific instrument, the higher the liquidity risk associated with that instrument.
- Levels of liquidity vary from instrument to instrument and depend on the number of market participants willing to transact in the instrument under consideration.
- Trading liquidity risk vs. balance sheet liquidity risk:
- Trading liquidity risk is the firm's potential inability to unwind or hedge positions in a timely manner at current market prices.
- Balance sheet liquidity risk is the inability to finance a shortfall in the balance sheet through either liquidation or borrowing.
- To properly assess the liquidity risk exposure of a portfolio, it is necessary to take into account all potential portfolio liquidation costs, including the opportunity costs associated with any delays in execution.
- While liquidation costs are stable and easy to estimate during periods with little volatility, they can vary wildly during high-volatility regimes.
Liquidity-adjusted VaR measure:
$VaR_L = VaR + Liquidity \ Adjustment = VaR - (µ_S + z_α σ_S)$- where VaR is the market risk value-at-risk, is the mean expected bid-ask spread, is the standard deviation of the bid-ask spread, and is the confidence coefficient corresponding to the desired α–percent of the VaR estimation.
- Both and can be estimated either from raw spread data or from the Roll model.
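The spread-based adjustment can be sketched as follows, computing the spread statistics from observed bid-ask spreads and using the standard normal quantile for the chosen confidence level:

```python
import statistics

# standard normal quantiles for the confidence coefficient z_alpha
Z = {0.95: 1.645, 0.99: 2.326}

def liquidity_adjusted_var(var, spreads, confidence=0.95):
    """VaR_L = VaR - (mu_S + z_alpha * sigma_S), where mu_S and sigma_S are
    the mean and standard deviation of the observed bid-ask spreads."""
    mu_s = statistics.fmean(spreads)
    sigma_s = statistics.pstdev(spreads)
    return var - (mu_s + Z[confidence] * sigma_s)
```

Since spreads are nonnegative, the adjustment always makes the liquidity-adjusted VaR more conservative than the market-risk VaR alone.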
- Using Kyle's λ measure, the VaR liquidity adjustment can be similarly computed through estimation of the mean and standard deviation of the trade volume:
$VaR_L = VaR + Liquidity \ Adjustment = VaR - (\hat{\alpha} + \hat{\lambda}(\mu_{NVOL} + z_\alpha \sigma_{NVOL}))$
- where $\hat{\alpha}$ and $\hat{\lambda}$ are estimated using an OLS regression following Kyle:
$\Delta P_t = \alpha + \lambda \cdot NVOL_t + \varepsilon_t$
- where $\Delta P_t$ is the change in market price due to the market impact of orders, and $NVOL_t$ is the difference between the buy and sell market depths in period t.
Using the Amihud illiquidity measure, the adjustment can be applied as follows:
$VaR_L = VaR + Liquidity \ Adjustment = VaR - (µ_γ + z_α σ_γ)$-
where and are the mean and standard deviation of the Amihud illiquidity measure γ,
- , is the number of trades executed during time period t, is the relative price change following trade d during trade period t, and is the trade quantity executed within trade d.
-
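A per-period Amihud computation might be sketched as follows, assuming per-trade relative price changes and trade quantities are available for the period:

```python
def amihud_illiquidity(price_changes, volumes):
    """Amihud illiquidity for one period: the average of
    |relative price change| / trade quantity over the period's trades."""
    if not volumes or len(price_changes) != len(volumes):
        raise ValueError("need one price change per trade")
    return sum(abs(dp) / v for dp, v in zip(price_changes, volumes)) / len(volumes)
```

The mean and standard deviation of this measure across periods give the $\mu_\gamma$ and $\sigma_\gamma$ used in the liquidity adjustment above.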
⚙️ Measuring Operational Risk
Operational risk is the risk of financial losses resulting from one or more of the following situations: inadequate or failed internal controls, policies, or procedures, failure to comply with government regulations, systems failures, fraud, human error, external catastrophes.
- Operational risk can affect the firm in many ways.
- A risk of fraud can taint the reputation of the firm and thereby becomes a "reputation risk."
- Systems failures may result in disrupted trading activity and lost opportunities for capital allocation.
The Basel Committee for Bank Supervision has issued the following examples of different types of operational risk:
- Internal fraud
- External fraud
- Employment practices and workplace safety
- Clients, products, and business practice
- Damage to physical assets
- Business disruption and systems failures
- Execution, delivery, and process management
- Few statistical frameworks have been developed for the measurement of operational risk; the risk is typically estimated using a combination of scalar and scenario analyses.
- Quantification of operational risk begins with the development of hypothetical scenarios of what can go wrong in the operation.
- Each scenario is then quantified in terms of the dollar impact it would produce on the operation in the base, best, and worst cases.
- To align the results of scenario analysis with the VaR results obtained for other types of risk, the estimated worst-case dollar impact on operations is normalized by the capitalization of the trading operation and added to the market VaR estimates.
⚖️ Measuring Legal Risk
Legal risk measures the risk of breach of contractual obligations. It addresses all kinds of potential contract frustration, including contract formation, seniority of contractual agreements, and the like.
- The estimation of legal risk is conducted by a legal expert affiliated with the firm, primarily using a causal framework.
- The causal analysis identifies the key risk indicators embedded in the current legal contracts of the firm and then works to quantify possible outcomes caused by changes in the key risk indicators.
- As with other types of risk, the output of legal risk analysis is a VaR number: for a 95 percent VaR estimate, a legal loss that has the potential to occur with just a 5 percent probability.
🚧 Managing Risk
- Once market risk has been estimated, a market risk management framework can be established to minimize the adverse impact of the market risk on the trading operation.
- Most risk management systems work in the following two ways:
- Stop losses: stop current transaction(s) to prevent further losses
- Hedging: hedge risk exposure with complementary financial instruments
🛑 Stop Losses
A stop loss is a threshold price of a given security, which, if crossed by the market price, triggers liquidation of the current position.
- In credit and counterparty risk, a stop loss is a level of counterparty creditworthiness below which the trading operation makes a conscious decision to stop dealing with the deteriorating counterparty.
- In liquidity risk, the stop loss is the minimum level of liquidity that warrants open positions in a given security.
- In operational risk, the stop loss is a set of conditions according to which a particular operational aspect is reviewed and, if necessary, terminated.
- Example: Compromised Internet security may mandate a complete shutdown of trading operations until the issue is resolved.
- In legal risk, a stop loss can be a settlement reached when legal expenses otherwise incurred are on track to exceed the predetermined stop-loss level.
Simple Stop Loss
- In market risk management, a simple stop loss defines a fixed level of the threshold price.
- Example: If USD/CAD was bought at 1.2000 at 12:00 P.M. EST with a simple stop loss set at 50 bps, the position will be liquidated whenever the level of USD/CAD drops below 1.1950, provided the position is not closed sooner for some other reason.
Trailing Stop
-
A trailing stop, on the other hand, takes into account the movement of the security's market price from the time the trading position was opened.
- The trailing stop "trails" the security's market price.
-
Unlike the simple stop that defines a fixed price level at which to trigger a stop loss, the trailing stop defines a fixed stop-loss differential relative to the maximum gain attained in the position.
-
Example: Suppose USD/CAD was bought at 12:00 P.M. EST at 1.2000 with a trailing stop loss set at 50 bps.
- If by 12:15 P.M. EST the market price for USD/CAD rose to 1.2067, but by 1:30 P.M. EST the market price dropped down to 1.1940, the trailing stop loss would be triggered 50 bps below the market price corresponding to the highest local maximum of the gain function.
- In this example, the local maximum of gain appeared at 1.2067, when the position gain was 1.2067 − 1.2000 = 0.0067.
- The corresponding trailing stop loss would be hit as soon as the market price for USD/CAD dipped below 1.2067 − 50 bps = 1.2017, resulting in a realized profit of 17 bps, a big improvement over performance with a simple stop loss.
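The two stop-loss variants can be sketched in Python. This is a minimal illustration for a long position, using the text's USD/CAD numbers on a hypothetical discrete price path and ignoring the bid-ask spread:

```python
def simple_stop_exit(prices, entry, stop_bps):
    """Return the first price at which a simple (fixed) stop loss triggers, else None."""
    threshold = entry - stop_bps / 10_000   # e.g., 1.2000 - 0.0050 = 1.1950
    for p in prices:
        if p < threshold:
            return p
    return None

def trailing_stop_exit(prices, entry, stop_bps):
    """Return the first price at which a trailing stop triggers, else None.
    The stop trails the running maximum of the price since entry."""
    peak = entry
    for p in prices:
        peak = max(peak, p)                 # local maximum of the gain so far
        if p < peak - stop_bps / 10_000:
            return p
    return None

# Hypothetical path for long USD/CAD from 1.2000 with 50 bps stops.
path = [1.2030, 1.2067, 1.2010, 1.1940]
print(simple_stop_exit(path, 1.2000, 50))    # simple stop waits for a drop below 1.1950
print(trailing_stop_exit(path, 1.2000, 50))  # trailing stop exits once price falls below 1.2017
```

On this path the trailing stop exits near the 1.2017 level described in the text, locking in a gain, while the simple stop rides the position all the way down to 1.1940.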
-
If the stop-loss threshold is too narrow, the position may be closed due to short-term variations in prices or even due to variation in the bid-ask spread.
-
If the stop-loss threshold is too wide, the position may be closed too late, resulting in severe drawdowns.
-
Many trading practitioners calibrate the stop-loss thresholds to the intrinsic volatility of the traded security.
- Example: If a position is opened during high-volatility conditions, with the price bouncing wildly, a trader will set wide stop losses; conversely, positions opened during low-volatility conditions call for narrow stop thresholds.
-
The actual determination of the stop-loss threshold based on market volatility of the traded security is typically calibrated with the following two factors in mind:
- Average gain of the trading system without stop losses in place.
- Average loss of the trading system without stop losses in place.
-
The probabilities of a particular position turning out positive also play a role in determining the optimal stop-loss threshold, but their role is a much smaller one than that of the averages.
-
The main reason for the relative insignificance of these probabilities is that, per the Gambler's Ruin Problem, the probability of a gain must always exceed the probability of a loss on any given trade; otherwise, the system faces the certainty of bankruptcy.
📈 Chapter 1: Introduction
High-Frequency Trading (HFT)
- Definition: High-frequency trading involves a high turnover of capital in rapid, computer-driven responses to changing market conditions.
- Characteristics:
- Higher number of trades.
- Lower average gain per trade (fraction of a percent).
- Few, if any, positions carried overnight.
- Example: Jim Simons of Renaissance Technologies Corp. reportedly earned $2.5 billion in 2008 using HFT strategies.
Advantages of HFT
- Risk Mitigation:
- Eliminates overnight risk due to market volatility.
- Full transparency of account holdings.
- Avoids overnight carry costs (interest rates above LIBOR).
- Diversification: Low or no correlation with traditional long-term strategies.
- Shorter Evaluation Periods: Performance can be statistically ascertained within a month.
- Operational Savings:
- Reduced staff headcount through automation.
- Lower incidence of errors due to human hesitation and emotion.
- Societal Benefits:
- Increased market efficiency.
- Added liquidity.
- Innovation in computer technology.
- Stabilization of market systems.
- Liquidity: HFT can be applied to any sufficiently liquid financial instrument. A “liquid instrument” can be a financial security that has enough buyers and sellers to trade at any time of the trading day.
Analogy
- Financial markets are like a human body, where HFT is analogous to blood circulating, flushing out toxins, healing wounds, and regulating temperature. Low-frequency investment decisions can destabilize the circulatory system.
Geographic Locations
- Major Hubs: New York, Connecticut, London, Singapore, and Chicago.
- Specializations:
- Chicago: Fast trading strategies for futures, options, and commodities.
- New York and Connecticut: U.S. equities.
- London: Currencies.
- Singapore: Asian markets.
Prominent Firms
- Millennium
- DE Shaw
- Worldquant
- Renaissance Technologies
Trading Strategies
- Most high-frequency firms are hedge funds or proprietary investment vehicles.
- Four main classes:
- Automated liquidity provision.
- Market microstructure trading.
- Event trading.
- Deviations arbitrage.
| Strategy | Description | Holding Period |
|---|---|---|
| Automated Liquidity Provision | Quantitative algorithms for optimal pricing and execution of market-making positions. | < 1 minute |
| Market Microstructure Trading | Identifying trading party order flow through reverse engineering of observed quotes. | < 10 minutes |
| Event Trading | Short-term trading on macro events. | < 1 hour |
| Deviations Arbitrage | Statistical arbitrage of deviations from equilibrium: triangle trades, basis trades, and the like. | < 1 day |
Challenges in Developing HFT Systems
- Dealing with large volumes of intra-day data.
- Precision of signals to trigger trades in fractions of a second.
- Speed of execution through computer automation.
- Requires advanced skills in software development.
- Human supervision to ensure the system runs within risk boundaries.
- Computer security challenges (Internet viruses).
- Ongoing maintenance and upgrades to keep up with IT expenditures.
Book's Purpose and Audience
- Purpose: To provide a practical guide for building high-frequency systems.
- Target Audience:
- Senior management in investment and broker-dealer functions.
- Institutional investors.
- Quantitative analysts.
- IT staff.
- Academics and business students.
- Individual investors.
- Aspiring high-frequency traders, risk managers, and government regulators.
Book Structure
- Part 1: History and business environment of HFT systems.
- Part 2: Statistical and econometric foundations of common HFT strategies.
- Part 3: Details of modeling HFT strategies.
- Part 4: Steps required to build a quality HFT system.
- Part 5: Running, monitoring, and benchmarking HFT systems.
Key Takeaways
- The best-performing strategies are confidential and seldom publicized.
- The book aims to illustrate how established academic research can be applied to capture market inefficiencies.
⚙️ Chapter 2: Evolution of High-Frequency Trading
Impact of Technology on Financial Markets
- Technological innovation has a persistent impact on financial markets.
- Technology has improved news dissemination, financial analysis, and communication speed.
- New arbitrage opportunities have emerged through technology.
Historical Trading Methods
- Manual Trading:
- Slow, error-prone, and expensive.
- Errors from market movements during quote requests.
- Unreliable due to human communication.
- Electronic Dealing Systems (1980s):
- Aggregated market data.
- Simultaneous information distribution.
- Automated trading capabilities.
- Key Systems:
- Designated Order Turnaround (DOT) by NYSE.
- NASDAQ’s Computer Assisted Execution System.
Systematic Trading in the 1990s
- Factors for Delay: High computing costs and low throughput of electronic orders.
- Globex (1992): Chicago Mercantile Exchange's first electronic platform.
- ISE (2000): First fully electronic U.S. options exchange.
Increased Trading Volume
- Daily trade volume increased significantly with technological developments.
- Industry Structure: Shift from rigid hierarchical to flat, decentralized networks.
Traditional 20th-Century Financial Network
- Exchanges/Inter-Dealer Networks: Centralized marketplaces.
- Broker-Dealers: Proprietary trading and customer transactions.
- Transacting Clients: Investment banking clients, corporations, medium-sized firms, high-net-worth individuals.
- Investment Institutions: Brokerages providing trading access to smaller institutions and retail clients.
Decentralized Financial Markets
- Competing Exchanges: Increased trading liquidity.
- Electronic Communication Networks (ECNs): Sophisticated algorithms for order transmission and matching.
- Dark Liquidity Pools: Trader identities and orders remain anonymous.
Technical Analysis
- Definition: Identifying recurring patterns in security prices.
- Techniques: Moving averages, MACD, market events.
- Historical Significance: Prospered when trading technology was less advanced.
- Modern Relevance: Limited to small, less liquid securities.
Fundamental Analysis
- Definition: Pricing securities based on future cash flows or expected economic variables.
- Applications:
- Equities: Present values of future cash flows.
- Foreign Exchange: Macroeconomic models.
- Derivatives: Advanced econometric models.
- Commodities: Supply and demand analysis.
Quant Trading
- Definition: Mathematical model-fueled trading methodology.
- Statistical Arbitrage (Stat-Arb): Exploiting market inefficiencies using mathematical models.
- Speed and Technology: Emphasis on fast computers and algorithmic trading.
Algorithmic Trading
- Definition: Systematic execution process to optimize buy-and-sell decisions.
- Function: Order processing, market aggressiveness, order splitting.
- Exogenous Decisions: Decisions about when to buy or sell are pre-determined.
HFT Evolution
- Response to Advances: Developed in the 1990s due to computer technology.
- Fully Automated: Fueled profitability and further technology development.
- Cost Savings: Replaced expensive traders with less expensive algorithms.
- Increased Demand: From buy-side investors.
- Pure Return (Alpha): Added to portfolios with little correlation to traditional buy-and-hold strategies.
Differentiation
- Electronic Trading: Transmitting orders electronically (becoming obsolete).
- Algorithmic Trading: Encompasses order execution and HFT portfolio allocation decisions.
- Systematic Trading: Computer-driven trading positions held for varying durations.
Key Characteristics
- Short position holding times (one day or shorter).
- Fully systematic.
- Algorithmic.
🏢 Chapter 3: Overview of the Business of High-Frequency Trading
Key Characteristics of HFT
- Tick-by-tick data processing: Analyzing every tick of data, separated by milliseconds.
- High capital turnover: Fast reallocation of trading capital.
- Intra-day entry and exit of positions: Closing positions at the end of each trading day to avoid overnight carrying costs.
- Algorithmic trading: Utilizing algorithms for market information processing and trading decisions.
Comparison with Traditional Approaches to Trading
| Characteristic | HFT | Traditional Trading |
|---|---|---|
| Data Processing | Tick-by-tick | Daily or less frequent |
| Decision Making | Quantitative, Algorithmic | Discretionary, Human-Driven |
| Position Time | Intra-day, short | Weeks, months, or years |
| Analysis | Technical, Fundamental, or Quant | Technical or Fundamental |
Market Participants
- Competitors:
- Proprietary trading divisions of investment banks.
- Hedge funds.
- Independent proprietary trading operations.
- Investors:
- Fund of funds.
- Hedge funds.
- Private equity firms.
- Services and Technology Providers:
- Electronic execution brokers (Goldman Sachs, Credit Suisse).
- Electronic communication networks (ECNs).
- Custody and clearing services.
- Software:
- Computerized generation of trading signals.
- Computer-aided analysis (MATLAB, R).
- Internet-wide information-gathering.
- Trading software (MarketFactory).
- Run-time risk management applications.
- Mobile applications.
- Real-time third-party research.
- Legal, Accounting, and Other Professional Services.
Operating Model
- Three main components:
- Econometric models for short-term price forecasting.
- Advanced computer systems.
- Capital applied and monitored within a risk management framework.
- Computerized vs. Traditional Trading Costs:
| Cost Aspect | HFT | Traditional Trading |
|---|---|---|
| Initial Investment | High upfront (model design) | Lower upfront |
| Ongoing Costs | Low support staff | Consistent staffing costs |
| Model Development | High | Low |
Model Development Process
- Ideas from academic research.
- Advanced econometric modeling.
- Back-testing using tick data.
- Market depth analysis.
System Implementation Workflow
- Receive and archive real-time quotes.
- Develop buy and sell signals from econometric models.
- Keep track of open positions, P&L.
- Manage risk of open positions.
- Evaluate performance post-trade.
- Evaluate costs post-trade.
Trading Platform
- Platform-independent systems using FIX language.
Risk Management
- Competent risk management is crucial.
Economics
- Revenue is driven by leverage and Sharpe ratio.
- Leverage: Borrowing four times the investment equity.
- Sharpe Ratio: Measures returns compared to risk; high Sharpe ratios reduce the risk of losses.
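A minimal sketch of how leverage scales the return on equity (the 5x exposure and 1 percent financing cost below are hypothetical illustration figures, not from the text):

```python
def levered_return(asset_return, leverage, borrow_rate):
    """Return on equity when total exposure of `leverage` x equity is financed
    by borrowing (leverage - 1) x equity at `borrow_rate`."""
    return leverage * asset_return - (leverage - 1) * borrow_rate

# Hypothetical: 3% strategy return, 5x total exposure (borrowing four times
# equity, as in the text), 1% financing cost.
print(levered_return(0.03, 5, 0.01))   # about 0.11, i.e., 11% on equity
```

Note that leverage magnifies losses by the same factor, which is why a high Sharpe ratio (low return dispersion) is what makes aggressive leverage tolerable.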
Capitalizing an HFT Business
- Equity contributions from founders, private equity, investor capital, or parent company.
- Debt leverage through bank loans or margin lending.
🏦 Chapter 4: Financial Markets Suitable for High-Frequency Trading
Key Requirements
- Ability to quickly move in and out of positions.
- Sufficient market volatility.
- Liquid assets.
- Electronic execution.
Suitable Markets
- Spot Foreign Exchange
- Equities
- Options
- Futures
Liquidity Comparison
- Most Liquid: Foreign exchange.
- Followed By: Recently issued U.S. Treasury securities.
- Then: Equities, options, commodities, and futures.
Market Suitability Analysis
- Available liquidity
- Electronic trading capability
- Regulatory considerations
Fixed-Income Markets
- Interest rate market
- Bond market
- Both use spot, futures, and swap contracts.
Interest Rate Markets
- Spot interest rates (interbank interest rates)
- Quoted by banks to other banks.
- Maturity periods: Overnight, tomorrow next, one week, one month, etc.
- Interest rate futures
- Contracts to buy and sell underlying interest rates in the future.
- Based on the 3-month deposit rate.
- Typically mature in three months.
- Settle quarterly.
- Swaps
- Most still trade OTC, though selected swaps are making inroads into electronic trading.
- Electronic programs for 30-day Fed Funds futures and CBOT interest rate swap futures.
Bond Markets
- Publicly issued debt obligations.
- Issued by federal government, local governments, and publicly held corporations.
- Embed various options.
- Settlement and delivery rules vary by exchange.
Foreign Exchange Markets
- Foreign exchange rate is a swap of interest rates denominated in different currencies.
- Spot, forward, and swap foreign exchange products trade through a decentralized and unregulated mechanism.
- The BIS estimated total daily foreign exchange market turnover in 2007 at about $3 trillion.
Equity Markets
- The breadth of equity markets offers efficiencies; 2,764 stocks were listed on the NYSE alone in 2006.
- In addition to stocks, equity markets trade exchange-traded funds (ETFs), warrants, certificates, and structured products.
- Provide full electronic trading functionality for all their offerings.
Commodity Markets
- Spot commodity contracts provide physical delivery of goods, ill-suited for high-frequency trading.
- Electronically traded and liquid commodity futures and options can provide viable and profitable trading strategies.
- Futures of agricultural commodities may have irregular expiry dates.
📊 Chapter 5: Evaluating Performance of High-Frequency Strategies
Basic Return Characteristics
- Return Frequency: Hourly, daily, monthly, quarterly, and annually.
- Average Annual Return: A simple summary of the location of the mean of the return distribution.
- Volatility of Returns: Measures the dispersion of returns around the average return, often computed as the standard deviation of returns.
- Maximum Drawdown: Documents the maximum severity of losses observed in historical data.
Comparative Ratios
| Measure | Description |
|---|---|
| Sharpe Ratio | (Average Return - Risk-Free Rate) / Standard Deviation. Measures risk-adjusted return. |
| Treynor Ratio | (Average Return - Risk-Free Rate) / Beta. Measures excess return per unit of systematic risk. |
| Jensen's Alpha | Measures trading return in excess of the return predicted by CAPM. It is defined as E[r_{i}] - r_{f} - β_{i}(r_{M} - r_{f}). |
| Calmar Ratio | Average Return / Maximum Drawdown. |
| Sterling Ratio | Average Return / Average Drawdown. |
| Burke Ratio | Average Return / Standard Deviation of Drawdowns. |
| Omega | (E[r] − τ) / LPM_{1i}(τ) + 1, where LPM_{1i}(τ) is the first lower partial moment: the average shortfall of returns below the selected benchmark τ. |
| Sortino Ratio | ((E[r] - \tau) / \sqrt{LPM_{2i}(\tau)}). Measures excess return over downside risk. |
| Kappa 3 | ((E[r] - \tau) / \sqrt[3]{LPM_{3i}(\tau)}). Replaces the standard deviation in the Sharpe ratio with the third LPM of the returns, the skewness of the returns below the benchmark. |
| Upside Potential Ratio | Measures the average return above the benchmark. |
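A few of these measures can be computed directly from a return series. A minimal sketch using only the standard library (per-period figures with no annualization; the sample returns are made up, and the risk-free rate and benchmark τ default to zero):

```python
import math

def sharpe(returns, r_f=0.0):
    """(mean return - risk-free rate) / standard deviation of returns."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return (mean - r_f) / math.sqrt(var)

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative return path."""
    equity, peak, mdd = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        mdd = max(mdd, (peak - equity) / peak)
    return mdd

def sortino(returns, tau=0.0):
    """Excess return over the square root of the second lower partial moment."""
    mean = sum(returns) / len(returns)
    lpm2 = sum(min(r - tau, 0.0) ** 2 for r in returns) / len(returns)
    return (mean - tau) / math.sqrt(lpm2)

rets = [0.01, -0.005, 0.007, -0.002, 0.012]   # hypothetical period returns
print(round(sharpe(rets), 3), round(max_drawdown(rets), 4), round(sortino(rets), 3))
```

The Calmar ratio then follows as average return divided by `max_drawdown`, and the remaining ratios in the table differ mainly in which dispersion measure appears in the denominator.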
Performance Attribution
-
Goal: Identifying factors that contribute to portfolio performance.
-
Formula:
- Returns are expressed as R_{it} = α_{i} + ∑ b_{ik} F_{kt} + u_{it}.
- b_{k} measures strategy performance attributed to factor k.
- α_{i} measures the strategy’s persistent ability to generate abnormal returns.
- u_{it} measures the strategy’s idiosyncratic return in period t.
-
Benefits of Performance Attribution:
- Accurately captures investment styles.
- Measures true added value.
- Allows forecasting of strategy performance.
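The attribution regression above can be estimated by ordinary least squares. A minimal sketch using numpy (the two factor series and all numbers are made-up illustration data):

```python
import numpy as np

def attribute(returns, factors):
    """OLS of strategy returns on factor returns.
    returns: (T,) array; factors: (T, K) array.
    Returns (alpha, betas) from R_t = alpha + sum_k b_k * F_kt + u_t."""
    T = len(returns)
    X = np.column_stack([np.ones(T), factors])     # prepend intercept column
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return coef[0], coef[1:]

rng = np.random.default_rng(0)
F = rng.normal(size=(500, 2))                      # two hypothetical factors
R = 0.001 + F @ np.array([0.5, -0.2]) + 0.01 * rng.normal(size=500)
alpha, betas = attribute(R, F)
print(alpha, betas)                                # close to 0.001 and [0.5, -0.2]
```

The intercept estimate is the strategy's alpha; a persistently positive alpha after controlling for the factor exposures is the "true added value" the text refers to.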
Other Considerations
- Strategy Capacity:
- Impacted by market liquidity and position sizes.
- May depend on manager skills and trading costs.
Length of Evaluation Period
- Minimum Evaluation Periods Required for Sharpe Ratio Verification:
- Formula:
- T_{min} = (1.645^2 / SR^2) * (1 + 0.5SR^2)
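The minimum-evaluation-period formula can be evaluated directly; a minimal sketch, where z = 1.645 corresponds to one-sided 95 percent confidence:

```python
import math

def min_periods(sharpe_ratio, z=1.645):
    """T_min = (z^2 / SR^2) * (1 + 0.5 * SR^2): the minimum number of
    observation periods needed to distinguish Sharpe ratio SR from zero
    at the confidence level implied by z (1.645 for one-sided 95%)."""
    sr2 = sharpe_ratio ** 2
    return math.ceil((z ** 2 / sr2) * (1.0 + 0.5 * sr2))

# A strategy with a per-period Sharpe ratio of 0.5 needs about 13 periods;
# higher Sharpe ratios can be verified over much shorter windows.
print(min_periods(0.5), min_periods(1.0), min_periods(2.0))
```

This is why high-frequency strategies, with their high per-period Sharpe ratios, can be statistically evaluated within a month while traditional strategies require years of data.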
🧮 Chapter 6: Orders, Traders, and Their Applicability to High-Frequency Trading
📑 Order Types
- Order Price Specifications
-
Market Orders
Orders to buy or sell a security at the best available price when the order is placed.
-
Limit Orders
Orders to buy or sell a security at a particular price.
-
📈 Order Timing Specifications
| Order Type | Description |
|---|---|
| GTC | "Good Till Canceled". Remain active until completely filled. |
| GTD | "Good Till Date". Remain in the order book until completely filled or the specified expiry date. |
| Day Orders | "Good for the Day". Remain in the order book until completely filled or the end of the trading day. |
| GTT | "Good Till Time". Remain in the order book until completely filled or a specified expiry time. |
📐 Order Size Specifications
-
Vanilla Orders
Orders in standard contract sizes traded on the exchange, typically in "round lots".
-
Odd Lots
Smaller orders that are filled by a designated odd lot dealer.
-
Mixed Lots
Orders bigger than round lots, yet not in round-lot multiples.
-
Fill or Kill (FOK) Orders
Orders to be either filled immediately in full or killed in their entirety.
-
Fill and Kill (FAK) Orders
Orders to be filled immediately in full or in part, with any unfilled quantity killed.
-
All or None (AON) Orders
Remain in the order book with their original time priorities until they can be filled in full.
🕵️ Order Disclosure Specifications
-
Varies by exchange and ECN.
-
Orders can be executed with full transparency or with limited disclosure.
-
Iceberg Orders
Orders with only a portion of the order size observable to other market participants.
-
Anonymous Orders
Orders placed without disclosing the identity of the trader or the trading institution.
🛑 Stop-Loss and Take-Profit Orders
- Orders that become market or limit orders when a specified stop price is reached or passed.
🗂️ Administrative Orders
-
Change Orders
Used to change a pending limit order.
-
Margin Call Close Orders
Initiated by the executing counterparty when a trader's cash is insufficient to cover losses.
-
Phone-In Orders
Called in by a customer and are charged a transaction cost premium.
📉 Chapter 7: Market Inefficiency and Profit Opportunities at Different Frequencies
🎯 Predictability and Market Efficiency
- High-frequency trading aims to generate trading signals that result in consistently positive outcomes over a large number of trades.
💰 Gaining Insights on Trading Frequency Statistics
- The profitability of a trading strategy is bound by the chosen trading frequency.
- The trading frequency data helps in analyzing a particular financial security to determine whether price changes are random or not.
🧑🏫 Measuring Trading Opportunity
- While the gain potential in the high-frequency space is remarkable, so is the maximum potential loss.
- With well-designed trading strategies, high-frequency trading can produce the highest profitability.
⚙️ Testing for Market Efficiency and Predictability
- Identifying markets with arbitrage opportunities amounts to finding inefficient markets; the arbitrage opportunities themselves are market inefficiencies.
- Efficiency is measured with statistical tests designed to help researchers select the most profitable markets.
Random Walk Hypothesis and Market Efficiency 🚀
The random walk hypothesis suggests that price changes in a market are unpredictable and follow a random pattern.
At any given time, the change in log price is equally likely to be positive and negative.
Mathematically, the change in log price (Δln P_t) can be represented as:
Δln P_t = μ + ε_t
Where ε_t is the error term with a mean of 0.
- Drift: The random walk process can have a drift (µ), indicating an average change in prices over time. This could be due to factors like persistent inflation.
Lo and MacKinlay Test 🧪
- Lo and MacKinlay developed a test to check if a price follows a random walk.
- If price changes at a given frequency are random, changes at a lower frequency should also be random.
- The variances of price changes at different frequencies should be deterministically related.
- The reverse is not necessarily true: randomness in lower frequency changes doesn't guarantee randomness in higher frequency changes.
Equations for the Test
Given 2n + 1 observations of log prices p_0, p_1, …, p_2n:
- Estimated mean: μ̂ = (1/2n)(p_2n − p_0)
- Variance estimator 1 (one-period changes): σ̂_a² = (1/2n) Σ_{k=1}^{2n} (p_k − p_{k−1} − μ̂)²
- Variance estimator 2 (two-period changes): σ̂_b² = (1/2n) Σ_{k=1}^{n} (p_2k − p_{2k−2} − 2μ̂)²
- Test statistic: J_r = σ̂_b² / σ̂_a² − 1, with √(2n) J_r asymptotically distributed N(0, 2) under the random walk hypothesis.
- If the time series follows a random walk, the test statistic J_r will have a normal distribution.
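A simplified version of the test for q = 2 can be sketched as follows (asymptotic form, illustration only; the simulated walk is made-up data):

```python
import random

def variance_ratio_stat(log_prices):
    """Simplified Lo-MacKinlay variance ratio statistic for q = 2.
    Compares the variance of two-period log price changes with the variance
    of one-period changes; values near 0 are consistent with a random walk."""
    n2 = len(log_prices) - 1                      # number of one-period increments
    mu = (log_prices[-1] - log_prices[0]) / n2    # estimated per-period drift
    var_a = sum((log_prices[k] - log_prices[k - 1] - mu) ** 2
                for k in range(1, n2 + 1)) / n2
    var_b = sum((log_prices[k] - log_prices[k - 2] - 2 * mu) ** 2
                for k in range(2, n2 + 1, 2)) / n2
    return var_b / var_a - 1.0                    # the J_r statistic

# Simulated driftless random walk in log prices: statistic should be near 0.
random.seed(1)
walk = [0.0]
for _ in range(5000):
    walk.append(walk[-1] + random.gauss(0.0, 0.01))
print(variance_ratio_stat(walk))
```

A strongly mean-reverting series drives the statistic toward −1, while trending (positively autocorrelated) prices push it above 0; either departure from 0 signals a potential inefficiency.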
Market Efficiency Findings 📊
- Lo and MacKinlay found that market efficiency could not be rejected for weekly and monthly equity data, but daily equity prices were not efficient.
- Six major USD crosses are more efficient than the S&P 500, indicating fewer arbitrage opportunities.
- USD/CAD is the most efficient currency pair among the six major USD crosses.
Efficiency vs. Frequency ⏱️
- The efficiency of spot instruments decreases (arbitrage opportunities increase) with increases in data sampling frequency.
- For example, the inefficiency of the EUR/USD spot rate is higher when measured at 1-hour intervals than at daily intervals.
Autoregression-Based Tests ⚙️
- Trading strategies perform best in the least efficient markets, where arbitrage opportunities exist.
- Perfectly efficient markets instantaneously incorporate all available information, allowing no dependencies from past price movements.
- Market efficiency can be measured by estimating the explanatory power of past prices.
Mech and Hou-Moskowitz Approach
- Market efficiency can be measured as the difference between Adjusted R² coefficients of:
- An unrestricted model attempting to explain returns with lagged variables.
- A restricted model involving no past data.
Unrestricted Model
r_{i,t} = α_i + Σ_{j=1}^{J} β_{i,j} r_{i,t−j} + ε_{i,t}
Where r_{i,t} is the return on security i at time t and the lagged returns serve as the explanatory variables.
Restricted Model
r_{i,t} = α_i + ε_{i,t}, which restricts all coefficients β_{i,j} to be 0.
Market Inefficiency Calculation
- The closer the difference is to 0, the smaller the influence of past price movements and the higher the market efficiency.
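A rough sketch of this measure using numpy, under the simplifying assumptions (mine, not the text's) that the lagged variables are the security's own past returns and that the restricted, intercept-only model has an adjusted R² of zero:

```python
import numpy as np

def inefficiency(returns, n_lags=4):
    """Adjusted R^2 of regressing returns on their own lags, minus the
    adjusted R^2 of the intercept-only (restricted) model, taken as 0.
    Values near 0 suggest an efficient (unpredictable) market."""
    r = np.asarray(returns, dtype=float)
    y = r[n_lags:]
    X = np.column_stack([np.ones(len(y))] +
                        [r[n_lags - j:-j] for j in range(1, n_lags + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, k = len(y), n_lags
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)   # adjusted R^2

rng = np.random.default_rng(42)
iid = rng.normal(size=2000)            # unpredictable (efficient) returns
ar = np.empty(2000)                    # strongly autocorrelated returns
ar[0] = 0.0
for t in range(1, 2000):
    ar[t] = 0.6 * ar[t - 1] + rng.normal()
print(inefficiency(iid), inefficiency(ar))   # near 0 vs. clearly positive
```
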
Martingale Hypothesis 🎲
- In an efficient market, prices fluctuate randomly. The expected price of a security given current information is always the current price itself. This relationship is known as a martingale.
Martingale Condition
E[P_{t+1} | I_t] = P_t
Where:
- E[P_{t+1} | I_t] is the expected price at time t+1 given the information set I_t
- P_t is the price at time t
- {P} is a stochastic price process
Abnormal Returns
- Abnormal returns (Z_{t+1}) are returns in excess of expected returns given current information: Z_{t+1} = r_{t+1} − E[r_{t+1} | I_t].
Market Efficiency Condition
A market is efficient when the abnormal return Z_{t+1} is a "fair game", that is, E[Z_{t+1} | I_t] = 0.
Fama's Suggestion
- The efficient market hypothesis is difficult to test because it contains a joint hypothesis:
- Expected values of returns are a function of information.
- Differences of realized returns from their expected values are random.
Froot and Thaler's Test
- Froot and Thaler derived a specification for a test of market efficiency of a foreign exchange rate.
Uncovered Interest Rate Parity
- The price of a foreign exchange rate is a function of the interest rates in the countries on either side of the exchange rate and a risk premium ξ_t of the exchange rate.
Where:
- r_t is the foreign interest rate
- r_t^d is the domestic interest rate
Martingale Hypothesis
S_{t+1} = E[S_{t+1} | I_t] + u_{t+1}
Where:
- S_{t+1} is the realized spot exchange rate at time t+1
- E[u_{t+1} | I_t] = 0
Test for Market Efficiency
Where {ε_t} series is independent and identically distributed with mean 0.
Forward Rate Form
Where:
- F_t is the forward rate
- E[υ_t] = 0
- Variance of υ_t is σ²
High-Frequency Specification
Where:
- Mean of Δυ is E[Δυ] = 0
- Variance of Δυ is 2σ²
Cointegration-Based Tests 🧲
- Cointegration between two variables implies systematic predictability. If some market factor X predicts the spot exchange rate S, the two are linked by a long-run relation of the form S_t = μ + θX_t + ε_t:
Where ε_t is stationary and E[ε_t] = 0.
Cointegration Test Specification
Where:
- η_t is an independent, identically distributed error term with mean 0
- α measures the speed of the model’s adjustment to its long-term equilibrium
- β and γ measure short-term impact of lagged changes in X and S.
Speculative vs. Arbitraging Efficiencies
- Speculative efficiency hypothesis: The expected rate of return from speculation in the forward market conditioned on available information is zero.
- Arbitraging efficiency hypothesis: The expected return on a portfolio composed of long one unit of currency and short one future contract on that unit of currency is zero (uncovered interest arbitrage).
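As a related illustration of the no-arbitrage logic, a covered-interest-parity-style fair forward rate can be computed as follows (hypothetical rates, continuous compounding assumed; this is a sketch, not the book's specification):

```python
import math

def fair_forward(spot, r_dom, r_for, T):
    """Arbitrage-free forward rate under covered interest parity with
    continuous compounding: F = S * exp((r_dom - r_for) * T)."""
    return spot * math.exp((r_dom - r_for) * T)

# Hypothetical: spot 1.2000, domestic rate 3%, foreign rate 1%, 3-month horizon.
F = fair_forward(1.2000, 0.03, 0.01, 0.25)
print(round(F, 4))
```

When the quoted forward deviates from this fair value, the long-currency/short-futures portfolio in the arbitraging-efficiency hypothesis earns a nonzero expected return, signaling an inefficiency.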
Conclusion 🔚
- Market efficiency tests illuminate different aspects of a security’s price and return dependency on other variables.
- Taking advantage of market inefficiency requires understanding the different tests that identified the inefficiency.
- A security may be predictable at one frequency and fully random at another.
- Price changes of two or more securities may be random when considered individually, but the price changes of a combination may be predictable, and vice versa.
💾 Autocorrelation of Order Flows
Like market aggressiveness, order flows exhibit autocorrelation. Many articles discuss this, including:
- Biais, Hillion, and Spatt (1995)
- Foucault (1999)
- Parlour (1998)
- Foucault, Kadan, and Kandel (2005)
- Goettler, Parlour, and Rajan (2005, 2007)
- Rosu (2005)
Ellul, Holden, Jain, and Jennings (2007) interpret short-term autocorrelation in high-frequency order flows as waves of competing order flows responding to current market events within the process of liquidity depletion and replenishment. They confirm strong positive serial correlation in order flow at high frequencies but find negative order flow correlation at lower frequencies on the New York Stock Exchange.
Hollifield, Miller, and Sandas (2004) test how the limit order fill rate varies with profitability conditions for a single Swedish stock. They find asymmetries in investor behavior on the two sides of the market of the Stockholm Stock Exchange.
Foucault, Kadan, and Kandel (2005) and Rosu (2005) make predictions about order flow autocorrelations that support the diagonal autocorrelation effect first documented in Biais, Hillion, and Spatt (1995).
🎯 Conclusion: Market Participants
Understanding the type and motivation of each market participant can unlock profitable trading strategies. For example, understanding whether a particular market participant possesses information about impending market movement may result in immediate profitability from either engaging the trader if he is uninformed or following his moves if he has superior information.
📰 Event Arbitrage
With news reported instantly and trades placed on a tick-by-tick basis, high-frequency strategies are now ideally positioned to profit from the impact of announcements on markets. These high-frequency strategies, which trade on the market movements surrounding news announcements, are collectively referred to as event arbitrage.
This section investigates the mechanics of event arbitrage in the following order:
- Overview of the development process
- Generating a price forecast through statistical modeling of:
- Directional forecasts
- Point forecasts
- Applying event arbitrage to corporate announcements, industry news, and macroeconomic news
- Documented effects of events on foreign exchange, equities, fixed income, futures, emerging economies, commodities, and REIT markets
👨🏻💻 Developing Event Arbitrage Trading Strategies
Event arbitrage refers to the group of trading strategies that place trades on the basis of the markets’ reaction to events.
The events may be economic or industry-specific occurrences that consistently affect the securities of interest time and time again. For example, unexpected increases in the Fed Funds rates consistently raise the value of the U.S. dollar, simultaneously raising the rate for USD/CAD and lowering the rate for AUD/USD. The announcements of the U.S. Fed Funds decisions, therefore, are events that can be consistently and profitably arbitraged.
The goal of event arbitrage strategies is to identify portfolios that make positive profit over the time window surrounding each event. The time window is typically a time period beginning just before the event and ending shortly afterwards. For events anticipated ex-ante, such as scheduled economic announcements, the portfolio positions may be opened ahead of the announcement or just after the announcement. The portfolio is then fully liquidated shortly after the announcement.
Trading positions can be held anywhere from a few seconds to several hours and can result in consistently profitable outcomes with low volatilities. The speed of response to an event often determines the trade gain; the faster the response, the higher the probability that the strategy will be able to profitably ride the momentum wave to the post-announcement equilibrium price level. As a result, event arbitrage strategies are well suited for high-frequency applications and are most profitably executed in fully automated trading environments.
Developing an event arbitrage trading strategy harnesses research on equilibrium pricing and leverages statistical tools that assess tick-by-tick trading data and events the instant they are released.
Most event arbitrage strategies follow a three-stage development process:
- For each event type, identify dates and times of past events in historical data.
- Compute historical price changes at desired frequencies pertaining to securities of interest and surrounding the events identified in Step 1.
- Estimate expected price responses based on historical price behavior surrounding past events.
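The three steps above can be sketched as a simple backtest scaffold. This is a minimal sketch, assuming tick data have already been aggregated into a pandas price series; the event dates, window lengths, and price data below are hypothetical:

```python
import numpy as np
import pandas as pd

def event_window_returns(prices, event_times, pre="1min", post="5min"):
    """Step 2: price change from just before each event to shortly after it."""
    rets = []
    for t in event_times:
        p0 = prices.asof(t - pd.Timedelta(pre))   # last price before the window opens
        p1 = prices.asof(t + pd.Timedelta(post))  # last price at the window close
        if not (np.isnan(p0) or np.isnan(p1)):
            rets.append(p1 / p0 - 1.0)
    return pd.Series(rets, index=event_times[: len(rets)])

# Step 1: hypothetical dates and times of past 8:30 A.M. announcements
events = pd.to_datetime(["2008-01-04 08:30", "2008-02-01 08:30", "2008-03-07 08:30"])

# Hypothetical minute-by-minute price series spanning the events
idx = pd.date_range("2008-01-04 08:00", "2008-03-07 09:00", freq="1min")
rng = np.random.default_rng(0)
prices = pd.Series(1.0 + np.cumsum(rng.normal(0, 1e-5, len(idx))), index=idx)

# Step 3: the mean historical response estimates the expected price reaction
rets = event_window_returns(prices, events)
expected_response = rets.mean()
```

The mean response across past events serves as the expected price reaction used by the live strategy.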
The sources of dates and times for specified events that occurred in the past can be collected from various Internet sites. Most announcements recur at the same time of day and make the job of collecting the data much easier. U.S. unemployment announcements, for example, are always released at 8:30 A.M. Eastern time. Some announcements, such as those of the U.S. Federal Open Markets Committee interest rate changes, occur at irregular times during the day and require greater diligence in collecting the data.
🗓️ What Constitutes an Event?
The events used in event arbitrage strategies can be any releases of news about economic activity, market disruptions, and other events that consistently impact market prices. Classic financial theory tells us that in efficient markets, the price adjusts to new information instantaneously following a news release.
In practice, market participants form expectations about inflation figures well before the formal statistics are announced. Many financial economists are tasked with forecasting inflation figures based on other continuously observed market variables, such as prices on commodity futures and other market securities. When such forecasts become available, market participants trade securities on the basis of the forecasts, impounding their expectations into prices well before the formal announcements occur.
Not all events have the same magnitude. Some events have positive impacts on prices and some negative, and some events have more severe consequences than others. The magnitude of an event can be measured as the deviation of the realized event figures from the expectations of the event. The price of a particular stock, for example, should adjust to the net present value of its future cash flows following a higher- or lower-than-expected earnings announcement. However, if earnings are in line with investor expectations, the price should not move.
Similarly, in the foreign exchange market, the level of a foreign exchange pair should change in response to an unexpected change—for example, in the level of the consumer price index (CPI) of the domestic country. If, however, the domestic CPI turns out to be in line with market expectations, little change should occur.
The key objective in the estimation of news impact is the determination of what actually constitutes the unexpected change, or news. The earliest macroeconomic event studies, such as those of Frenkel (1981) and Edwards (1982), considered news to be an out-of-sample error based on the one-step-ahead autoregressive forecasts of the macroeconomic variable in question. The thinking went that most economic news develops slowly over time, and the trend observed during the past several months or quarters is the best predictor of the value to be released on the next scheduled news release day. The news, or the unexpected component of the news release, is then the difference between the value released in the announcement and the expectation formed on the basis of autoregressive analysis.
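Under the autoregressive definition of news, the surprise is the released value minus a one-step-ahead forecast built from past releases. A minimal sketch using an AR(1) model fit by ordinary least squares (the series of past releases below is hypothetical):

```python
import numpy as np

def ar1_surprise(history, released):
    """Surprise = released value minus the one-step-ahead AR(1) forecast."""
    y, x = history[1:], history[:-1]
    # OLS fit of y_t = a + b * y_{t-1}; polyfit returns (slope, intercept)
    b, a = np.polyfit(x, y, 1)
    forecast = a + b * history[-1]
    return released - forecast

# Hypothetical series of past monthly CPI releases (percent)
cpi_history = np.array([2.0, 2.1, 2.2, 2.2, 2.3, 2.4])
surprise = ar1_surprise(cpi_history, released=2.9)  # positive: inflation above trend
```

A positive surprise means the announced figure exceeded what the recent trend predicted; the sign and size of this quantity drive the trade.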
Researchers such as Eichenbaum and Evans (1993) and Grilli and Roubini (1993) have used the autoregressive framework to predict the decisions of central bankers, including those of the U.S. Federal Reserve.
The main rationale behind the autoregressive predictability of the central bankers’ actions is that the central bankers are not at liberty to make drastic changes to economic variables under their control, given that major changes may trigger large-scale market disruptions. Instead, the central bankers adopt and follow a longer-term course of action, gradually adjusting the figures in their control, such as interest rates and money supply, to lead the economy in the intended direction.
The empirical evidence of the impact of news defined in the autoregressive fashion shows that the framework indeed can be used to predict future movements of securities. Yet the impact is best seen in shorter terms—for example, on intra-day data. Almeida, Goodhart, and Payne (1998) documented a significant effect of macroeconomic news announcements on the USD/DEM exchange rate sampled at five-minute intervals. The authors found that news announcements pertaining to the U.S. employment and trade balance were particularly significant predictors of exchange rates, but only within two hours following the announcement. On the other hand, U.S. non-farm payroll and consumer confidence news announcements caused price momentum lasting 12 hours or more following an announcement.
Lately, surprises in macroeconomic announcements have been measured relative to published averages of economists’ forecasts. For example, every week Barron’s and the Wall Street Journal publish consensus forecasts for the coming week’s announcements. The forecasts are developed from a survey of field economists.
📈 Forecasting Methodologies
Directional and point forecasts are the two approaches to estimating the price response to an announcement. A directional forecast predicts whether the price of a particular security will go up or down, whereas a point forecast predicts the level to which the new price will go.
Directional Forecasts
Directional forecasts of the post-event price movement of the security price can be created using the sign test. The sign test answers the following question: does the security under consideration consistently move up or down in response to announcements of a certain kind?
The sign test assumes that in the absence of the event, the price change, or the return, is equally likely to be positive or negative. When an event occurs, however, the return can be persistently positive or negative, depending on the event. The sign test aims to estimate whether a persistently positive or negative sign of the response to a specific event exists and whether the response is statistically significant. If the sign test produces a statistically significant result, an event arbitrage trading strategy is feasible.
MacKinlay (1997) specifies the following test hypotheses for the sign test:
- The null hypothesis, $H_0$, states that the event does not cause consistent behavior in the price of interest: the probability p of the price moving consistently in one direction in response to the event is less than or equal to 50 percent.
- The alternative hypothesis, $H_1$, states that the event does cause consistent behavior in the price of the security of interest: the probability p of the price moving consistently in one direction in response to the event is greater than 50 percent.
We next define $N$ to be the total number of events and let $N^+$ denote the number of events that were accompanied by a positive return of the security under our consideration. The null hypothesis is rejected, and the price of the security is determined to respond consistently to the event with statistical confidence $(1-\alpha)$, if the asymptotic test statistic $J$ exceeds the standard normal critical value $\Phi^{-1}(1-\alpha)$, where

$$J = \left[\frac{N^+}{N} - 0.5\right]\frac{\sqrt{N}}{0.5} \sim N(0,1).$$
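The sign test can be sketched in a few lines: compute MacKinlay's asymptotic statistic and compare it with the one-sided normal critical value (the post-event returns below are hypothetical):

```python
import math

def sign_test(returns, confidence=0.95):
    """Test whether post-event returns are positive more often than chance."""
    n = len(returns)
    n_pos = sum(1 for r in returns if r > 0)
    # J = (N+/N - 0.5) * sqrt(N) / 0.5, asymptotically standard normal
    j = (n_pos / n - 0.5) * math.sqrt(n) / 0.5
    # One-sided critical values: 1.645 for 95%, 2.326 for 99% confidence
    critical = {0.95: 1.645, 0.99: 2.326}[confidence]
    return j, j > critical

# 20 hypothetical post-announcement returns, 17 of them positive
rets = [0.001] * 17 + [-0.001] * 3
j, significant = sign_test(rets)
```

With 17 of 20 positive responses the statistic clearly exceeds 1.645, so a persistently positive reaction to this event type would be declared with 95 percent confidence, suggesting a feasible event arbitrage trade.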
Trading USD/CAD on U.S. Inflation Announcements
The latest figures tracking U.S. inflation are released monthly at 8:30 A.M. on prespecified dates. On release, USD/CAD spot and other USD crosses undergo an instantaneous one-time adjustment, at least in theory. By identifying when and how quickly the adjustments happen in practice, we can construct profitable trading strategies that capture changes in price levels following announcements of the latest inflation figures.
Even to a casual market observer, the movement of USD/CAD at the time inflation figures are announced suggests that the price adjustment may not be instantaneous and that profitable trading opportunities may exist surrounding U.S. inflation announcements. When the sign test is applied to intra-day USD/CAD spot data, it indeed shows that profitable trading opportunities are plentiful. These opportunities, however, exist only at high frequencies.
The first step in identification of profitable trading opportunities is to define the time period from the announcement to the end of the trading opportunity, known as the “event window.” We select data sample windows surrounding the recent U.S. inflation announcements in the tick-level data from January 2002 through August 2008. As all U.S. inflation announcements occur at 8:30 A.M. EST, we define 8 A.M. to 9 A.M. as the trading window and download all of the quotes and trades recorded during that time. We partition the data further into 5-minute, 1-minute, 30-second, and 15-second blocks. We then measure the impact of the announcement on the corresponding 5-minute, 1-minute, 30-second, and 15-second returns of USD/CAD spot.
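The partitioning step can be sketched with pandas by resampling the trading-window quotes at each frequency and taking per-interval returns (the USD/CAD quote series below is hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical USD/CAD quotes inside the 8:00-9:00 A.M. trading window
idx = pd.date_range("2008-03-07 08:00", "2008-03-07 09:00", freq="1s")
rng = np.random.default_rng(1)
quotes = pd.Series(1.0 + np.cumsum(rng.normal(0, 1e-5, len(idx))), index=idx)

# Per-interval returns at each frequency used in the study
interval_returns = {
    freq: quotes.resample(freq).last().pct_change().dropna()
    for freq in ["5min", "1min", "30s", "15s"]
}
# Each return is labeled by its bin's left edge, so the 8:30-8:35 return
# appears at interval_returns["5min"]["2008-03-07 08:30"]
```

Applying the sign test to each column of such interval returns, announcement day by announcement day, identifies which intervals react consistently.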
According to the purchasing power parity (PPP), a spot exchange rate between domestic and foreign currencies is the ratio of the domestic and foreign inflation rates. When the U.S. inflation rate changes, the deviation disturbs the PPP equilibrium and the USD-based exchange rates adjust to new levels. When the U.S. inflation rate rises, USD/CAD is expected to increase instantaneously, and vice versa. To keep matters simple, in this example we will consider the inflation news in the same fashion as it is announced, ignoring the market’s pre-announcement adjustment to expectations of inflation figures.
The sign test then tells us during which time intervals, if any, the market properly and consistently responds to announcements during our “trading window” from 8 to 9 A.M. The sample includes only days when inflation rates were announced. The summary of the results is presented in Table 12.1.
Looking at 5-minute intervals surrounding the U.S. inflation announcements, it appears that USD/CAD reacts persistently only to decreases in the U.S. inflation rate and that reaction is indeed instantaneous. USD/CAD decreases during the 5-minute interval from 8:25 A.M. to 8:30 A.M. in response to announcements of lower inflation with 95 percent statistical confidence. The response may potentially support the instantaneous adjustment hypothesis; after all, the U.S. inflation news is released at 8:30 A.M., at which point the adjustment to drops in inflation appears to be completed. No statistically significant response appears to occur following rises in inflation.
Higher-frequency intervals tell us a different story—the adjustments occur in short-term bursts. At 1-minute intervals, for example, the adjustment to increases in inflation can now be seen to consistently occur from 8:34 to 8:35 A.M. This post-announcement adjustment, therefore, presents a consistent profit-taking opportunity.
Splitting the data into 30-second intervals, we observe that the number of tradable opportunities increases further. For announcements of rising inflation, the price adjustment now occurs in four 30-second post-announcement intervals. For the announcements showing a decrease in inflation, the price adjustment occurs in one 30-second post-announcement time interval.
Examining 15-second intervals, we note an even higher number of time-persistent trading opportunities. For rising inflation announcements, there are five 15-second periods during which USD/CAD consistently increased in response to the inflation announcement between 8:30 and 9:00 A.M., presenting ready tradable opportunities. Six 15-second intervals consistently accompany falling inflation announcements during the same 8:30 to 9:00 A.M. timeframe.
In summary, as we look at shorter time intervals, we detect a larger number of statistically significant currency movements in response to the announcements. The short-term nature of the opportunities makes them conducive to a systematic (i.e., black-box) approach, which, if implemented knowledgeably, reduces risk of execution delays, carrying costs, and expensive errors in human judgment.
Point Forecasts
Whereas directional forecasts provide insight about direction of trends, point forecasts estimate the future value of price in equilibrium following an announcement. Development of point forecasts involves performing event studies on very specific trading data surrounding event announcements of interest.
Event studies measure the quantitative impact of announcements on the returns surrounding the news event and are usually conducted as follows:
- The announcement dates, times, and “surprise” changes are identified and recorded. To create useful simulations, the database of events and the prices of securities traded before and after the events should be very detailed, with events categorized carefully and quotes and trades captured at high frequencies. The surprise component can be measured in the following ways:
- As the difference between the realized value and the prediction based on autoregressive analysis
- As the difference between the realized value and the analyst forecast consensus obtained from Bloomberg or Thomson Reuters.
- The returns corresponding to the times of interest surrounding the announcements are calculated for the securities under consideration. For example, if the researcher is interested in evaluating the impact of CPI announcements on the 5-minute change in USD/CAD, the 5-minute change in USD/CAD is calculated from 8:30 A.M. to 8:35 A.M. on historical data on past CPI announcement days. (The 8:30 to 8:35 A.M. interval is chosen for the 5-minute effect of CPI announcements, because the U.S. CPI announcements are always released at 8:30 A.M. ET.)
- The impact of the announcements is then estimated in a simple linear regression:

$$R_t = \alpha + \beta \Delta X_t + \varepsilon_t$$

where
- $R_t$ is the vector of returns surrounding the announcement for the security of interest, arranged in the order of announcements;
- $\Delta X_t$ is the vector of “surprise” changes in the announcements, arranged in the order of announcements;
- $\varepsilon_t$ is the idiosyncratic error pertaining to news announcements;
- $\alpha$ is the estimated intercept of the regression, which captures changes in returns due to factors other than announcement surprises;
- $\beta$ measures the average impact of the announcement on the security under consideration.
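The regression step can be sketched as an ordinary least-squares fit of event-window returns on announcement surprises; the fitted coefficients then yield a point forecast for the next announcement (all figures below are hypothetical):

```python
import numpy as np

# Hypothetical surprises (announced minus consensus) and 5-minute returns,
# one pair per past announcement
surprises = np.array([0.2, -0.1, 0.3, 0.0, -0.2, 0.1])
returns = np.array([0.0011, -0.0004, 0.0016, 0.0001, -0.0009, 0.0006])

# OLS: returns = alpha + beta * surprises + error
X = np.column_stack([np.ones_like(surprises), surprises])
(alpha, beta), *_ = np.linalg.lstsq(X, returns, rcond=None)

# beta is the average price impact per unit of surprise; the fitted model
# gives a point forecast of the post-announcement return
next_surprise = 0.25
point_forecast = alpha + beta * next_surprise
```

Unlike the sign test, which only gives direction, this regression estimates how far the price should move per unit of surprise.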
Changes in equity prices are adjusted by changes in the overall market price to account for the impact of broader market influences on equity values. The adjustment is often performed using the market model of Sharpe (1964):

$$AR_{it} = R_{it} - E[R_{it}]$$

where $E[R_{it}]$ is the expected equity return estimated over historical data using the market model:

$$E[R_{it}] = \hat{\alpha}_i + \hat{\beta}_i R_{mt}$$
The methodology was first developed by Ball and Brown (1968), and the estimation method to this day delivers statistically significant trading opportunities.
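The market-model adjustment can be sketched as follows: estimate alpha and beta on historical data, then subtract the expected return from the realized event-day return (all return figures below are hypothetical):

```python
import numpy as np

def abnormal_return(stock_rets, market_rets, event_stock_ret, event_market_ret):
    """Abnormal return = realized return minus the market-model expectation."""
    # Estimate the market model R_i = alpha + beta * R_m on historical data
    beta, alpha = np.polyfit(market_rets, stock_rets, 1)
    expected = alpha + beta * event_market_ret
    return event_stock_ret - expected

# Hypothetical daily returns for the estimation window, then one event day
hist_market = np.array([0.001, -0.002, 0.003, 0.0, -0.001])
hist_stock = np.array([0.0015, -0.0025, 0.0042, 0.0002, -0.0011])
ar = abnormal_return(hist_stock, hist_market,
                     event_stock_ret=0.02, event_market_ret=0.001)
```

The residual `ar` isolates the event's own effect from whatever the broad market did that day.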
Event arbitrage trading strategies may track macroeconomic news announcements, earnings releases, and other recurring changes in the economic information. During a typical trading day, numerous economic announcements are made around the world. The news announcements may be related to a particular company, industry, or country; or, like macroeconomic news, they may have global consequences. Company news usually includes quarterly and annual earnings releases, mergers and acquisitions announcements, new product launch announcements, and the like. Industry news comprises industry regulation in a particular country, the introduction of tariffs, and economic conditions particular to the industry. Macroeconomic news contains interest rate announcements by major central banks, economic indicators determined from government-collected data, and regional gauges of economic performance.
With the development of information technology such as RSS feeds, alerts, press wires, and news aggregation engines such as Google, it is now feasible to capture announcements the instant they are released. A well-developed automated event arbitrage system captures news, categorizes events, and matches events to securities based on historical analysis.
🛍️ Tradable News
Corporate News
Corporate activity such as earnings announcements, both quarterly and annual, significantly impacts equity prices of the firms releasing the announcements. Unexpectedly positive earnings typically lift equity prices, and unexpectedly negative earnings often depress corporate stock valuation. Earnings announcements are preceded by analyst forecasts. The announcement that is materially different from the economists’ consensus forecast results in a rapid adjustment of the security price to its new equilibrium level. The unexpected component of the announcements is computed as the difference between the announced value and the mean or median economists’ forecast. The unexpected component is the key variable used in estimation of the impact of an event on prices.
Theoretically, equities are priced as present values of future cash flows of the company, discounted at the appropriate interest rate determined by Capital Asset Pricing Model (CAPM), the arbitrage pricing theory of Ross (1976), or the investor-specific opportunity cost:
Equity Price $= \sum_{t=1}^{\infty} \frac{E[CF_t]}{(1+r_t)^t}$

where $E[CF_t]$ are the expected cash flows of the company at a future time $t$, and $r_t$ is the discount rate found appropriate for discounting time-$t$ dividends to present. Unexpected changes to earnings generate rapid price responses whereby equity prices quickly adjust to new information about earnings. Significant deviations of earnings from forecasted values can cause large market movements and can even result in market disruptions. To prevent large-scale impacts of earnings releases on the overall market, most earnings announcements are made after the markets close.
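The discounted-cash-flow pricing above can be sketched in a few lines; an earnings surprise that raises expected cash flows reprices the equity immediately (the cash flows and discount rate below are hypothetical, with a finite horizon standing in for the infinite sum):

```python
def equity_price(expected_cash_flows, discount_rate):
    """Present value of expected future cash flows, discounted at a flat rate."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(expected_cash_flows, start=1))

base = equity_price([5.0] * 10, discount_rate=0.08)
# An unexpectedly strong earnings report lifts expected cash flows by 10 percent...
revised = equity_price([5.5] * 10, discount_rate=0.08)
# ...and, with the discount rate unchanged, the price rises by the same 10 percent
```

Because the present value is linear in cash flows, a proportional earnings surprise maps one-for-one into the theoretical price jump the event trader tries to capture.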
Other firm-level news also affects equity prices. The effect of stock splits, for example, has been documented by Fama, Fisher, Jensen, and Roll (1969), who show that the share prices typically increase following a split relative to their equilibrium price levels.
Event arbitrage models incorporate the observation that earnings announcements affect each company differently. The most widely documented firm-level factors for evaluation include the size of the firm market capitalization (for details, see Atiase, 1985; Freeman, 1987; and Fan-fah, Mohd, and Nasir, 2008).
Industry News
Industry news consists mostly of legal and regulatory decisions along with announcements of new products. These announcements reverberate throughout the entire sector and tend to move all securities in that market in the same direction.
Unlike macroeconomic news that is collected and disseminated in a systematic fashion, industry news usually emerges in an erratic fashion. Empirical evidence on regulatory decisions suggests that decisions relaxing rules governing activity of a particular industry result in higher equity values, whereas the introduction of rules constricting activity pushes equity values down. The evidence includes the findings of Navissi, Bowman, and Emanuel (1999), who ascertained that announcements of relaxation or elimination of price controls resulted in an upswing in equity values and that the introduction of price controls depressed equity prices. Boscaljon (2005) found that the relaxation of advertising rules by the U.S. Food and Drug Administration was accompanied by rising equity values.
Macroeconomic News
Macroeconomic decisions and some observations are made by government agencies on a predetermined schedule. Interest rates, for example, are reset by economists at the central banks, such as the U.S. Federal Reserve or the Bank of England. On the other hand, variables such as consumer price indices (CPIs) are typically not set but are observed and reported by statistics agencies affiliated with the countries’ central banks.
Other macroeconomic indices are developed by research departments of both for-profit and nonprofit private companies. The ICSC Goldman store sales index, for example, is calculated by the International Council of Shopping Centers (ICSC) and is actively supported and promoted by the Goldman Sachs Group. The index tracks weekly sales at sample retailers and serves as an indicator of consumer confidence: the more confident consumers are about the economy and their future earnings potential, the higher their retail spending and the higher the value of the index. Other indices measure different aspects of economic activity ranging from relative prices of McDonalds’ hamburgers in different countries to oil supplies to industry-specific employment levels.
Table 12.2 shows an ex-ante schedule of macroeconomic news announcements for Tuesday, March 3, 2009, a typical trading day. European news is most often released in the morning of the European trading session while North American markets are closed. Most macroeconomic announcements of the U.S. and Canadian governments are distributed in the morning of the North American session that coincides with afternoon trading in Europe. Most announcements from the Asia Pacific region, which includes Australia and New Zealand, are released during the morning trading hours in Asia.
Many announcements are accompanied by “consensus forecasts,” which are aggregates of forecasts made by economists of various financial institutions. The consensus figures are usually produced by major media and data companies, such as Bloomberg LP, that poll various economists every week and calculate average industry expectations.
Macroeconomic news arrives from every corner of the world. The impact on currencies, commodities, equities, and fixed-income and derivative instruments is usually estimated using event studies, a technique that measures the persistent impact of news on the prices of securities of interest.
🌍 Application of Event Arbitrage
Foreign Exchange Markets
Market responses to macroeconomic announcements in foreign exchange were studied by Almeida, Goodhart, and Payne (1998); Edison (1996); Andersen, Bollerslev, Diebold, and Vega (2003); and Love and Payne (2008), among many others.
Edison (1996) studied the impact of macroeconomic news on daily changes in USD-based foreign exchange rates and selected fixed-income securities, and found that foreign exchange reacts most significantly to news about real economic activity, such as non-farm payroll employment figures. In particular, Edison (1996) shows that for every surprise increase of 100,000 in non-farm payroll employment, the USD appreciates by 0.2 percent on average. At the same time, the author documents little impact of inflation on foreign exchange rates.
Andersen, Bollerslev, Diebold, and Vega (2003) conducted their analysis on foreign exchange quotes interpolated based on timestamps to create exact 5-minute intervals. The authors show that average exchange rate levels adjust quickly and efficiently to new levels according to the information releases. Volatility, however, takes longer to taper off after the spike surrounding most news announcements. The authors also document that bad news usually has a more pronounced effect than good news.
Andersen, Bollerslev, Diebold, and Vega (2003) use the consensus forecasts compiled by the International Money Market Services (MMS) as the expected value for estimation of the surprise component of news announcements. The authors then model the 5-minute changes in the spot foreign exchange rate $R_t$ as follows:

$$R_t = \beta_0 + \sum_{i=1}^{I}\beta_i R_{t-i} + \sum_{k=1}^{K}\sum_{j=0}^{J}\beta_{kj} S_{k,t-j} + \varepsilon_t$$

where $R_{t-i}$ is the $i$-period lagged value of the 5-minute spot rate, $S_{k,t-j}$ is the surprise component of the $k$th fundamental variable lagged $j$ periods, and $\varepsilon_t$ has time-varying volatility that incorporates intra-day seasonalities.
Andersen, Bollerslev, Diebold, and Vega (2003) estimate the impact of the following variables:
- GDP (advance, preliminary, and final figures)
- Non-farm payroll
- Retail sales
- Industrial production
- Capacity utilization
- Personal income
- Consumer credit
- Personal consumption expenditures
- New home sales
- Durable goods orders
- Construction spending
- Factory orders
- Business inventories
- Government budget deficit
- Trade balance
- Producer price index
- Consumer price index
- Consumer confidence index
- Institute for Supply Management (ISM) index (formerly, the National Association of Purchasing Managers [NAPM] index)
- Housing starts
- Index of leading indicators
- Target Fed Funds rate
- Initial unemployment claims
- Money supply (M1, M2, M3)
- Employment
- Manufacturing orders
- Manufacturing output
- Trade balance
- Current account
- Producer prices
- Wholesale price index
- Import prices
- Money stock M3
Andersen, Bollerslev, Diebold, and Vega (2003) considered the following currency pairs: GBP/USD, USD/JPY, DEM/USD, CHF/USD, and EUR/USD from January 3, 1992 through December 30, 1998. The authors document that all currency pairs responded positively, with 99 percent significance, to surprise increases in the following variables: non-farm payroll employment, industrial production, durable goods orders, trade balance, consumer confidence index, and NAPM index. All the currency pairs considered responded negatively to surprise increases in the initial unemployment claims and money stock M3.
Love and Payne (2008) document that macroeconomic news from different countries affects different currency pairs. Love and Payne (2008) studied the impact of the macroeconomic news originating in the United States, the Eurozone, and the UK on the EUR/USD, GBP/USD, and EUR/GBP exchange-rate pairs. The authors find that the U.S. news has the largest effect on the EUR/USD, while GBP/USD is most affected by the news originating in the UK. Love and Payne (2008) also document the specific impact of the type of news from the three regions on their respective currencies.
| Region of News Origination | Increase in Prices or Money | Increase of Output | Increase in Trade Balance |
|---|---|---|---|
| Eurozone (effect on EUR) | Appreciation | Appreciation | - |
| UK (effect on GBP) | Appreciation | Appreciation | Appreciation |
| U.S. (effect on USD) | Depreciation | Appreciation | Appreciation |
Equity Markets
A typical trading day is filled with macroeconomic announcements, both domestic and foreign. How does the macroeconomic news impact equity markets?
According to classical financial theory, changes in equity prices are due to two factors: changes in expected earnings of publicly traded firms, and changes in the discount rates associated with those firms. Expected earnings may be affected by changes in market conditions. For example, increasing consumer confidence and consumer spending are likely to boost retail sales, uplifting earnings prospects for retail outfits. Rising labor costs, on the other hand, may signal tough business conditions and decrease earnings expectations as a result.
The discount rate in classical finance is, at its bare minimum, determined by the level of the risk-free rate and the idiosyncratic riskiness of a particular equity share. The risk-free rate pertinent to U.S. equities is often proxied by the 3-month bill issued by the U.S. Treasury; the risk-free rate significant to equities in another country is taken as the short-term target interest rate announced by that country’s central bank. The lower the risk-free rate, the lower the discount rate of equity earnings and the higher the theoretical prices of equities.
How does macroeconomic news affect equities in practice? Ample empirical evidence shows that equity prices respond strongly to interest rate announcements and, in a less pronounced manner, to other macroeconomic news. Decreases in both long-term and short-term interest rates indeed positively affect monthly stock returns with 90 percent statistical confidence for long-term rates and 99 percent confidence for short-term rates. Cutler, Poterba, and Summers (1989) analyzed monthly NYSE stock returns and found that, specifically, for every 1 percent decrease in the yield on 3-month Treasury bills, monthly equity returns on the NYSE increased by 1.23 percent on average in the 1946–1985 sample.
Stock reaction to non-monetary macroeconomic news is usually mixed. Positive inflation shocks tend to induce lower stock returns independent of other market conditions (see Pearce and Roley, 1983, 1985 for details). Several other macroeconomic variables produce reactions conditional on the contemporary state of the business cycle. Higher-than-expected industrial production figures are good news for the stock market during recessions but bad news during periods of high economic activity, according to McQueen and Roley (1993).
Similarly, unexpected changes in unemployment statistics were found to cause reactions dependent on the state of the economy. For example, Orphanides (1992) finds that returns increase when unemployment rises, but only during economic expansions. During economic contractions, returns drop following news of rising unemployment. Orphanides (1992) attributes the asymmetric response of equities to the overheating hypothesis: when the economy is overheated, increase in unemployment actually presents good news. The findings have been confirmed by Boyd, Hu, and Jagannathan (2005).
The asymmetric response to macroeconomic news is not limited to the U.S. markets. Löflund and Nummelin (1997), for instance, observe the asymmetric response to surprises in industrial production figures in the Finnish equity market; they found that higher-than-expected production growth bolsters stocks in sluggish states of the economy.
Whether or not macroeconomic announcements move stock prices, the announcements are usually surrounded by increases in market volatility. While Schwert (1989) pointed out that stock market volatility is not necessarily related to volatility of other macroeconomic factors, surprises in macroeconomic news have been shown to significantly increase market volatility. Bernanke and Kuttner (2005), for example, show that an unexpected component in the interest rate announcements of the U.S. Federal Open Market Committee (FOMC) increases equity return volatility. Connolly and Stivers (2005) document spikes in the volatility of equities constituting the Dow Jones Industrial Average (DJIA) in response to U.S. macroeconomic news.
Higher volatility implies higher risk, and financial theory tells us that higher risk should be accompanied by higher returns. Indeed, Savor and Wilson (2008) show that equity returns on days with major U.S. macroeconomic news announcements are higher than on days when no major announcements are made. Savor and Wilson (2008) consider news announcements to be “major” if they are announcements of Consumer Price Index (CPI), Producer Price Index (PPI), employment figures, or interest rate decisions of the FOMC.
Veronesi (1999) shows that investors are more sensitive to macroeconomic news during periods of higher uncertainty, which drives asset price volatility. In the European markets, Errunza and Hogan (1998) found that monetary and real macroeconomic news has considerable impact on the volatility of the largest European stock markets.
Different sources of information appear to affect equities at different frequencies. The macroeconomic impact on equity data appears to increase with the increase in frequency of equity data. Chan, Karceski, and Lakonishok (1998), for example, analyzed monthly returns for U.S. and Japanese equities in an arbitrage pricing theory setting and found that idiosyncratic characteristics of individual equities are most predictive of future returns at low frequencies. By using factor-mimicking portfolios, Chan, Karceski, and Lakonishok (1998) show that size, past return, book-to-market ratio, and dividend yield of individual equities are the factors that move in tandem (“covary”) most with returns of corresponding equities. However, Chan, Karceski, and Lakonishok (1998, p. 182) document that “the macroeconomic factors do a poor job in explaining return covariation” at monthly return frequencies. Wasserfallen (1989) finds no impact of macroeconomic news on quarterly equities data.
Flannery and Protopapadakis (2002) found that daily returns on U.S. equities are significantly impacted by several types of macroeconomic news. The authors estimated a GARCH model of returns with news announcements as independent variables and found that the following macroeconomic announcements have significant influence on both equity returns and volatility: consumer price index (CPI), producer price index (PPI), monetary aggregate, balance of trade, employment report, and housing starts figures.
Ajayi and Mehdian (1995) document that foreign stock markets in developed countries typically overreact to the macroeconomic news announcements from the United States. As a result, foreign equity markets tend to be sensitive to the USD-based exchange rates and domestic account balances. Sadeghi (1992), for example, notes that in the Australian markets, equity returns increased in response to increases in the current account deficit, the AUD/USD exchange rate, and the real GDP; equity returns decreased following news of rising domestic inflation or interest rates.
Stocks of companies from different industries have been shown to react differently to macroeconomic announcements. Hardouvelis (1987), for example, pointed out that stocks of financial institutions exhibited higher sensitivity to announcements of monetary adjustments. The extent of market capitalization appears to matter as well. Li and Hu (1998) show that stocks with large market capitalization are more sensitive to macroeconomic surprises than are small-cap stocks.
The size of the surprise component of the macroeconomic news impacts equity prices. Aggarwal and Schirm (1992), for example, document that small surprises, those within one standard deviation of the average, caused larger changes in equities and foreign exchange markets than did large surprises.
Fixed-Income Markets
Jones, Lamont, and Lumsdaine (1998) studied the effect of employment and producer price index data on U.S. Treasury bonds. The authors find that while the volatility of the bond prices increased markedly on the days of the announcements, the volatility did not persist beyond the announcement day, indicating that the announcement information is incorporated promptly into prices.
Hardouvelis (1987) and Edison (1996) note that employment figures, producer price index (PPI), and consumer price index (CPI) move bond prices. Krueger (1996) documents that a decline in U.S. unemployment causes higher yields in bills and bonds issued by the U.S. Treasury.
High-frequency studies of bond market responses to macroeconomic announcements include those by Ederington and Lee (1993); Fleming and Remolona (1997, 1999); and Balduzzi, Elton, and Green (2001).
🧪 Regression Testing
⚙️ Use Case Testing
Use case testing refers to the process of testing the system according to the system performance guidelines defined during the design stage of the system development. A dedicated tester follows the steps of using the system and documents any discrepancies between the observed behavior and the behavior that is supposed to occur.
- Ensures the system operates within its parameters.
💾 Data Set Testing
Data set testing refers to testing the validity of the data, whether historical data used in a back test or real-time data obtained from a streaming data provider.
- The objective of data testing is to ascertain that the system minimizes undesirable influences and distortions in the data and to ensure that run-time analysis and trading signal generation work smoothly.
- Built on the premise that all data received for a particular security should fall into a statistical distribution that is consistent through time.
- The data should also exhibit consistent distributional properties when sampled at different frequencies.
- Example: 1-minute data for USD/CAD should be consistent with the distribution of historical 1-minute USD/CAD data observed over the past year.
- Distributions are allowed to change with time, but the observed changes should not be drastic unless they are caused by a large-scale market disruption.
- A popular procedure for testing data is based on testing for consistency of autocorrelations:
- A data set is sampled at a given frequency.
- Autocorrelations are estimated for a moving window of 30 to 1,000 observations.
- The obtained autocorrelations are then mapped into a distribution; outliers are identified, and their origin is examined.
- The distributional properties can be analyzed further to answer the following questions: Have the properties of the distribution changed during the past month, quarter, or year? Are these changes due to the version of the code or to the addition or removal of programs on the production box?
- The testing should be repeated at different sampling frequencies to ensure that no systematic deviations occur.
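The autocorrelation-consistency procedure above can be sketched in a few lines. This is a minimal illustration, not the book's implementation; the window length, lag, and 3-sigma outlier threshold are assumptions chosen for the example.

```python
import numpy as np

def rolling_autocorr(returns, window=100, lag=1):
    """Lag-`lag` autocorrelation estimated over a moving window of observations."""
    acs = []
    for start in range(len(returns) - window + 1):
        w = returns[start:start + window]
        acs.append(np.corrcoef(w[:-lag], w[lag:])[0, 1])
    return np.array(acs)

def flag_outliers(autocorrs, n_std=3.0):
    """Map the autocorrelations into a distribution and flag outlying windows."""
    mu, sigma = autocorrs.mean(), autocorrs.std()
    return np.where(np.abs(autocorrs - mu) > n_std * sigma)[0]

# Simulated 1-minute returns with an injected trending segment (a "bad" stretch)
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 1e-4, 2_000)
returns[1500:1600] = np.linspace(0.0, 5e-3, 100)

acs = rolling_autocorr(returns)
suspect_windows = flag_outliers(acs)   # window indices whose origin should be examined
```

In production the same check would then be repeated at several sampling frequencies, per the procedure above.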
🧩 Unit Testing
Unit testing verifies that each individual software component of the system works properly.
- A unit is a testable part of an application.
- The definition of a unit can range from the code for the lowest-level function or method to the functionality of a medium-level component.
- Example: a latency measurement component of the post-trade analysis engine.
- Testing code in small blocks from the ground up ensures that any errors are caught early in the integration process, avoiding expensive system disruptions at later stages.
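For instance, a unit test for the latency measurement example might look as follows; the `one_way_latency_ms` helper and its behavior are hypothetical, invented purely to illustrate the unit-testing idea.

```python
import unittest

def one_way_latency_ms(sent_ns, received_ns):
    """Hypothetical unit under test: one-way latency in ms from nanosecond timestamps."""
    if received_ns < sent_ns:
        raise ValueError("receive timestamp precedes send timestamp")
    return (received_ns - sent_ns) / 1_000_000

class TestLatencyMeasurement(unittest.TestCase):
    def test_normal_latency(self):
        self.assertAlmostEqual(one_way_latency_ms(0, 2_500_000), 2.5)

    def test_zero_latency(self):
        self.assertEqual(one_way_latency_ms(100, 100), 0.0)

    def test_clock_skew_rejected(self):
        with self.assertRaises(ValueError):
            one_way_latency_ms(200, 100)

# Run the unit in isolation, before it is integrated with other components
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLatencyMeasurement)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```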
🤝 Integration Testing
Integration testing is a test of the interoperability of code components; the test is administered to increasingly larger aggregates of code as the system is being built up from modular pieces to its completed state.
- Follows unit testing.
- Ensures that any code defects are caught and fixed early.
⚙️ System Testing
System testing is a post-integration test of the system as a whole.
- Incorporates several testing processes:
- Graphical user interface (GUI) software testing: ensures that the human interface of the system enables the user to perform their tasks.
- Typically verifies that all the buttons and displays that appear on screen are connected to the proper functionality according to the specifications developed during the design phase of the development process.
- Usability and performance testing: similar in nature to GUI testing but not limited to graphical user interfaces; may include concerns such as the speed of a particular functionality.
- Example: How long does the system take to process a "system shutdown" request? Is the timing acceptable from a risk management perspective?
- Stress testing: attempts to document and quantify the impact of extreme hypothetical scenarios on the system's performance.
- Example: How does the system react if the price of a particular security drops 10 percent within a very short time? What if an act of God shuts down the exchange, leaving the system holding its positions?
- Security testing: designed to identify possible security breaches and to either provide a software solution for overcoming the breaches or create a breach-detection mechanism and a contingency plan in the event a breach occurs.
- High-frequency trading systems can be vulnerable to security threats coming from the Internet, where unscrupulous users may attempt to hijack account numbers, passwords, and other confidential information in an attempt to steal trading capital.
- Intra-organizational threats should not be underestimated; employees with malicious intent or disgruntled workers with improper access to the trading system can wreak considerable and costly havoc.
- Scalability testing: tests the capacity of the system.
- How many securities can the system profitably process at the same time without significant performance impact?
- Every incremental security added to the system requires an allocation of computing power and Internet bandwidth.
- A large number of securities processed simultaneously on the same machine may considerably slow down system performance, distorting quotes, trading signals, and the P&L as a result.
- A determination of the maximum permissible number of securities will be based on the characteristics of each trading platform, including available computing power.
- Reliability testing: determines the probable rate of failure of the system.
- What are the conditions under which the system fails? How often can we expect these conditions to occur?
- Failure conditions may include unexpected system crashes, shutdowns due to insufficient memory, and anything else that leads the system to stop operating.
- The failure rate for any well-designed high-frequency trading system should not exceed 0.01 percent (i.e., the system should remain operational 99.99 percent of the time).
- Recovery testing: verifies that in an adverse event, whether an act of God or a system crash, the documented recovery process restores the system's integrity and returns it to operation within a prespecified time.
- Ensures that data integrity is maintained through unexpected terminations of the system.
- Scenarios: When the application is running and the computer system is suddenly restarted, the application should have valid data upon restart. The application should continue operating normally if the network cable is unexpectedly unplugged and then plugged back in.
🎯 Determining Risk Management Goals
- The primary objective of risk management is to limit potential losses.
- Competent and thorough risk management is especially important in a high-frequency setting, given that large-scale losses can mount quickly at the slightest shift in the behavior of trading strategies.
- The losses may be due to a wide range of events, such as unforeseen trading model shortcomings, market disruptions, acts of God, compliance breaches, and similar adverse conditions.
- To manage risk effectively, an organization first needs to create clear and effective processes for measuring it.
- The risk management goals, therefore, should set concrete risk measurement methodologies and quantitative benchmarks for risk tolerance associated with different trading strategies as well as with the organization as a whole.
- Expressing the maximum allowable risk in numbers is difficult, and obtaining organization-wide agreement on the subject is even more challenging, but the process pays off over time through quick and efficient daily decisions and the resulting low risk.
- A thorough goal-setting exercise should achieve senior management consensus on the following questions: What are the sources of risk the organization faces? What is the extent of risk the organization is willing to undertake? What risk/reward ratios should the organization target, and what is the minimum acceptable risk/reward ratio? What procedures should be followed if the acceptable risk thresholds are breached?
- The sources of risk should include the risk of trading losses as well as credit and counterparty risk, liquidity risk, operational risk, and legal risk.
- Market risk is the risk induced by price movements of market securities.
- Credit and counterparty risk addresses the ability and intent of trading counterparties to uphold their obligations.
- Liquidity risk measures the ability of the trading operation to quickly unwind positions.
- Operational risk enumerates possible financial losses embedded in daily trading operations.
- Legal risk refers to all types of contract frustration.
- A successful risk management practice identifies risks pertaining to each of these categories.
- Every introductory finance textbook notes that higher returns, on average, are obtained with higher risk. Yet, while riskier returns are on average higher across the entire investing population, some operations with risky exposures obtain high gains while others suffer severe losses.
- A successful risk management process should establish the risk budget the operation is willing to spend in the event that it ends up on the losing side of the equation.
- The risks should be quantified as worst-case losses tolerable per day, week, month, and year and should include operational costs, such as overhead and personnel costs.
- Examples of worst-case losses to be tolerated may be 10 percent of organizational equity per month or a hard dollar amount—for example, $150 million per fiscal year.
- Once senior management has agreed on the goals of risk management, the goals must be translated into risk processes and organizational structures.
- Processes include development of a standardized approach for reviewing individual trading strategies and the trading portfolio as a whole.
- Structures include a risk committee that meets regularly, reviews trading performance, and discusses the firm's potential exposure to risks from new products and market developments.
- The procedures for dealing with breaches of established risk management parameters should clearly document step-by-step actions.
- Corporate officers should be appointed as designated risk supervisors responsible for following risk procedures.
- The procedures should be written to deal with risk breaches not as an "if" but as a "when."
- Documented step-by-step action guidelines are critical; academic research has shown that the behavior of investment managers becomes even riskier when they are incurring losses.
- Previously agreed-on risk management procedures eliminate organizational conflicts in times of crisis, when unified and speedy action is most necessary.
📏 Measuring Risk
- While all risk is quantifiable, the methodology for measuring risk depends on the type of risk under consideration.
- The Basel Committee on Banking Supervision identifies the following types of risk affecting financial securities:
- Market risk—induced by price movements of market securities
- Credit and counterparty risk—addresses the ability and intent of trading counterparties to uphold their obligations
- Liquidity risk—the ability of the trading operation to quickly unwind positions
- Operational risk—the risk of financial losses embedded in daily trading operations
- Legal risk—the risk of litigation expenses
- All current risk measurement approaches fall into four categories: statistical models, scalar models, scenario analysis, and causal modeling.
- Statistical models generate predictions about worst-case future conditions based on past information.
- The value-at-risk (VaR) methodology is the most common statistical risk measurement tool.
- Statistical models are the preferred methodology of risk estimation whenever statistical modeling is feasible.
- Scalar models establish maximum foreseeable loss levels as percentages of business parameters, such as revenues, operating costs, and the like.
- The parameters can be computed as averages over several days, weeks, months, or even years of a particular business variable, depending on the time frame most suitable for each parameter.
- Scalar models are frequently used to estimate operational risk.
- Scenario analysis determines the base, best, and worst cases for the key risk indicators (KRIs).
- The values of the KRIs for each scenario are determined as hard dollar quantities and are used to quantify all types of risk.
- Scenario analysis is often referred to as a "stress test."
- Causal modeling involves identification of the causes and effects of potential losses.
- A dynamic simulation model incorporating the relevant causal drivers is developed based on expert opinions.
- The simulation model can then be used to measure and manage credit and counterparty risk, as well as operational and legal risks.
📉 Measuring Market Risk
Market risk refers to the probability of and the expected value of a decrease in market value due to market movements. It is the risk of loss of capital due to an adverse price movement in any securities.
- Many securities can be affected by changes in the prices of other, seemingly unrelated, securities.
- To accurately estimate the risk of a given trading system, it is necessary to have a reasonably complete picture of the returns the system generates.
- The returns are normally described in terms of distributions.
- The preferred distributions of returns are obtained from running the system on live capital.
- A back test obtained from running the model over at least two years of tick data can also be used as a sample distribution of trade returns.
- However, the back-test distribution alone may be misleading because it may fail to account for all the extreme returns and hidden costs that occur when the system is trading live.
- Once the return distributions have been obtained, the risk metrics are most often estimated using statistical models, and VaR in particular.
Value-at-Risk (VaR)
- The concept of value-at-risk (VaR) has emerged as the dominant metric in market risk estimation.
- The VaR framework spans two principal measures: VaR itself and the expected shortfall (ES).
- VaR: the value of the loss in case a negative scenario with the specified probability occurs.
- The probability of the scenario is determined as a percentile of the distribution of historical scenarios, which can be strategy or portfolio returns.
- For example, if the scenarios are returns from a particular strategy and all the returns are arranged by their realized value in ascending order from the worst to the best, then the 95 percent VaR corresponds to the cutoff return at the lowest fifth percentile.
- Expected shortfall (ES): the average worst-case scenario among all scenarios at or below the prespecified threshold.
- For example, the 95 percent ES is the average return among all returns at the 5 percent or lower percentile.
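These percentile definitions translate directly into a historical (nonparametric) VaR and ES computation; here is a minimal sketch with simulated daily returns standing in for real strategy results:

```python
import numpy as np

def historical_var_es(returns, confidence=0.95):
    """Historical VaR and ES: sort returns worst-to-best and read off the tail."""
    r = np.sort(np.asarray(returns))                # ascending: worst returns first
    k = int(np.floor(len(r) * (1.0 - confidence)))  # size of the tail, e.g. lowest 5%
    var = r[k]                                      # cutoff return at the percentile
    es = r[:k].mean() if k > 0 else r[0]            # average of the tail returns
    return var, es

rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0005, 0.01, 1_000)     # simulated strategy returns
var95, es95 = historical_var_es(daily_returns, confidence=0.95)
```

By construction the ES is at least as severe as the VaR, since it averages everything at or below the cutoff.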
- An analytical approximation to the true VaR can be found by parameterizing the sample distribution.
- The parametric VaR assumes that the observations are normally distributed.
- Specifically, the parametric VaR assumes that the 5 percent of observations in the left tail fall at or below $µ - 1.645σ$ of the distribution, where $µ$ and $σ$ represent the mean and standard deviation of the observations, respectively.
- The 95 percent parametric VaR is then computed as $µ - 1.645σ$, while the 95 percent parametric ES is computed as the average of all distribution values from $-∞$ to $µ - 1.645σ$.
- Similarly, the 99 percent parametric VaR is computed as $µ - 2.326σ$, while the 99 percent parametric ES is computed as the average of all distribution values from $-∞$ to $µ - 2.326σ$.
- The parametric VaR is an approximation of the true VaR; its applicability depends on how closely the sample distribution resembles the normal distribution.
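Under the normality assumption, the parametric VaR and ES follow from the sample mean and standard deviation alone. For a normal distribution the tail average has the closed form $µ - σ·φ(z)/(1-c)$, which this sketch uses:

```python
import math
from statistics import NormalDist

def parametric_var_es(mu, sigma, confidence=0.95):
    """Parametric (normal) VaR and ES from the mean and std of observed returns."""
    alpha = 1.0 - confidence                 # left-tail probability, e.g. 0.05
    z = NormalDist().inv_cdf(alpha)          # about -1.645 at 95%, -2.326 at 99%
    var = mu + z * sigma                     # e.g. mu - 1.645*sigma for the 95% VaR
    phi_z = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    es = mu - sigma * phi_z / alpha          # average of the tail beyond the VaR cutoff
    return var, es

var95, es95 = parametric_var_es(mu=0.0005, sigma=0.01, confidence=0.95)
```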
Extreme Value Theory (EVT)
- While the VaR and ES metrics summarize the location and the average of many worst-case scenarios, neither measure indicates the absolute worst scenario that can destroy entire trading operations, banks, and markets.
- Most financial return distributions have fat tails, meaning that the most extreme events lie beyond normal distribution bounds and can be truly catastrophic.
- VaR has been described as "relatively useless as a risk-management tool and potentially catastrophic when its use creates a false sense of security among senior managers and watchdogs" and even as "a fraud."
- Despite all the criticism, VaR and ES have been mainstays of corporate risk management for years, where they provide convenient reporting numbers.
- To alleviate the shortcomings of VaR, many quantitative outfits began to parameterize the extreme tail distribution to develop fuller pictures of extreme losses.
- Once the tail is parameterized based on the available data, worst-case extreme events can be determined analytically from distributional functions, even though no extreme events of comparable severity were ever observed in the sample data.
- The parameterization of the tails is performed using extreme value theory (EVT), an umbrella term spanning a range of tail modeling functions.
- All fat-tailed distributions belong to the family of Pareto distributions, described as follows:
$G(x) = \begin{cases} 0 & x ≤ 0 \\ \exp(-x^{-α}) & x > 0, α > 0 \end{cases}$
- The tail index $α$ is the parameter that needs to be estimated from the return data.
- For raw security returns, the tail index varies from financial security to financial security.
- Even for raw returns of the same financial security, the tail index can vary from one quoting institution to another, especially for very high-frequency estimations.
- When the tail index is determined, we can estimate the magnitude and probability of all the extreme events that may occur, given the extreme events that did occur in the sample:
- Sample return observations, obtained from either a back test or live results, are arranged in ascending order.
- The tail index value is estimated on the bottom fifth percentile of the sample return distribution.
- Using the distribution function obtained with the tail index, the probabilities of observing the extreme events are estimated.
- The tail index approach thus allows us to deduce unobserved return distributions from the sample distributions of observed returns.
- Although the tail index approach is useful, it has its limitations.
- The approach "fills in" the data for the observed returns with theoretical observations.
- If the sample tail distribution is sparse (and it usually is), the tail index distribution function may not be representative of the actual extreme returns.
- In such cases, a procedure known as "parametric bootstrapping" may be applicable.
- The parametric bootstrap simulates observations based on the properties of the sample distribution; the technique "fills in" unobserved returns based on observed sample returns.
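One standard way to estimate a tail index from the loss tail of a sample is the Hill estimator. The text does not prescribe a specific estimator, so this sketch is one common choice, applied to the bottom 5 percent of simulated returns whose losses follow a Pareto law with a known tail index of 3:

```python
import numpy as np

def hill_tail_index(returns, tail_fraction=0.05):
    """Hill estimator of the tail index, applied to the loss (left) tail of returns."""
    losses = np.sort(-np.asarray(returns))[::-1]   # largest losses first
    k = max(int(len(losses) * tail_fraction), 2)   # size of the tail used, e.g. 5%
    top, threshold = losses[:k], losses[k]         # top-k losses and the cutoff loss
    inv_alpha = np.mean(np.log(top / threshold))   # mean log-excess over the cutoff
    return 1.0 / inv_alpha

rng = np.random.default_rng(2)
# Losses drawn from a classical Pareto distribution with true tail index alpha = 3
losses = rng.pareto(3.0, 20_000) + 1.0
alpha_hat = hill_tail_index(-losses, tail_fraction=0.05)
```

With 20,000 draws the estimate recovers a tail index close to the true value of 3, and the fitted tail can then be used to price the probability of losses larger than any seen in the sample.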
Parametric Bootstrap
- The parametric bootstrap process works as follows:
- The sample distribution of observed returns delivered by the manager is decomposed into three components using a basic market model:
- The manager's skill, or alpha
- The manager's return due to the manager's portfolio correlation with the benchmark
- The manager's idiosyncratic error
- The decomposition is performed using the standard market model regression:
$R_{i,t} = α_i + β_{i,x}R_{x,t} + ε_t$
where $R_{i,t}$ is the manager's raw return in period t, $R_{x,t}$ is the raw return on the chosen benchmark in period t, $α_i$ is the measure of the manager's money management skill or alpha, and $β_{i,x}$ is a measure of the dependency of the manager's raw returns on the benchmark returns.
- Once the parameters $α_i$ and $β_{i,x}$ are estimated, three pools of data are generated: one for $α̂_i$ (constant for a given manager, benchmark, and return sample), one for $R_{x,t}$, and one for $ε_t$.
- Next, the data is resampled as follows:
- a. A value $ε_t^S$ is drawn at random from the pool of idiosyncratic errors.
- b. Similarly, a value $R_{x,t}^S$ is drawn at random from the pool of benchmark returns.
- c. A new sample value is created as follows: $R̂_{i,t}^S = α̂_i + β̂_{i,x}R_{x,t}^S + ε_t^S$
- d. The sampled variables $R_{x,t}^S$ and $ε_t^S$ are returned to their pools (not eliminated from the sample).
- The resampling process outlined in steps a–d is then repeated a number of times deemed sufficient to gain a better perspective on the distribution of tails.
- As a rule of thumb, the resampling process should be repeated at least as many times as there were observations in the original sample; it is not uncommon for the bootstrap process to be repeated thousands of times.
- The resampled values can differ from the observed sample distribution, thus expanding the sample data set with extra observations conforming to the properties of the original sample.
- The new distribution values obtained through the parametric bootstrap are then treated like other sample values and are incorporated into the tail index, VaR, and other risk management calculations.
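The steps above can be sketched as follows, with simulated manager and benchmark returns standing in for real data; the decomposition is plain OLS via `np.polyfit`:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated benchmark and a manager with alpha = 0.0002 and beta = 0.8
benchmark = rng.normal(0.0004, 0.01, 500)
manager = 0.0002 + 0.8 * benchmark + rng.normal(0.0, 0.004, 500)

# Decompose via the market model regression R_i = alpha + beta * R_x + eps
beta_hat, alpha_hat = np.polyfit(benchmark, manager, 1)
residuals = manager - (alpha_hat + beta_hat * benchmark)

# Steps a-d: draw with replacement from the benchmark and residual pools
n_draws = 10_000                                    # thousands of repetitions
rx_s = rng.choice(benchmark, size=n_draws, replace=True)
eps_s = rng.choice(residuals, size=n_draws, replace=True)
bootstrapped = alpha_hat + beta_hat * rx_s + eps_s  # new sample values R^S

# The expanded sample feeds the tail-index, VaR, and other risk calculations
var95 = np.percentile(bootstrapped, 5)
```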
- The parametric bootstrap relies on the assumption that the raw returns' dependence on the benchmark, as well as the manager's alpha, remains constant through time.
- This does not have to be the case: managers with dynamic strategies spanning different asset classes are likely to have time-varying dependencies on several benchmarks.
- Despite this shortcoming, the parametric bootstrap allows risk managers to glean a fuller notion of the true distribution of returns given the distribution observed in the sample.
- To incorporate portfolio managers' benchmarks into the VaR framework, analyzing the "tracking error" of the manager's return in excess of his benchmark has been proposed. Tracking error is the contemporaneous difference between the manager's return and the return on the manager's benchmark index:
$TE_t = \ln(R_{i,t}) - \ln(R_{X,t})$
where $R_{i,t}$ is the manager's return at time t and $R_{X,t}$ is the return on the manager's benchmark, also at time t.
- The VaR parameters are then estimated on the tracking error observations.
- In addition to VaR, statistical models may include Monte Carlo simulation–based methods to estimate future market values of capital at risk.
- Monte Carlo simulations are often used in determining derivatives exposure.
- Scenario analyses and causal models can be used to estimate market risk as well.
- These auxiliary types of market risk estimation, however, rely heavily on qualitative assessment and can, as a result, be misleading in comparison with VaR estimates, which are based on realized historical performance.
🏦 Measuring Credit and Counterparty Risk
Credit and counterparty risk reflects the probability of financial loss should one party in the trading equation not live up to its obligations.
- An example of losses due to a counterparty failure is a situation in which a fund's money is custodied with a broker-dealer and the broker-dealer goes bankrupt.
- Credit risk is manifest in decisions to extend lines of credit or margins.
- Credit risk determines the likelihood that creditors will default on their margin calls, should they encounter any.
- In structured products, credit risk measures the likelihood and the impact of default of the product underwriter, called the reference entity.
- The measurement of credit and counterparty risk has traditionally been delegated to dedicated third-party agencies that use statistical analysis overlaid with scenario and causal modeling.
- As credit and counterparty data become increasingly available, it may make good sense for firms to rate their counterparties statistically in-house.
- Entities with publicly traded debt are the easiest counterparties to rank:
- The lower the creditworthiness of the entity, the lower the market price of the senior debt issued by the entity and the higher the yield the entity has to pay to attract investors.
- The spread, or the difference between the yield on the debt of the entity under consideration and the yield on government debt of comparable maturity, is a solid indicator of the counterparty's creditworthiness: the higher the spread, the lower the creditworthiness.
- Because yields and spreads are inversely related to bond prices, the creditworthiness of a counterparty can also be measured from the firm's relative bond price: the lower the bond price, the higher the yield and the lower the creditworthiness.
- Market prices of corporate debt provide objective information about the issuer's creditworthiness.
- The prices are determined by numerous market participants analyzing the firms' strategies and financial prospects and arriving at their respective valuations.
- Diversification of counterparties is the best way to protect the operation from credit and counterparty risk.
- The creditworthiness of private entities with unobservable market values of obligations can be approximated by that of a public firm with matching factors.
- The matching factors should include the industry, geographic location, annual earnings (to proxy for firm size), and various accounting ratios, such as the quick ratio to assess short-term solvency.
- Once a close match with publicly traded debt is found for the private entity under evaluation, the spread on the senior debt of the publicly traded firm is used in place of that for the evaluated entity.
- In addition to the relative creditworthiness score, firms may need to obtain a VaR-like number to measure credit and counterparty risk.
- This number is obtained as an average of the exposure to each counterparty weighted by the counterparty's relative probability of default:
$CCExposure = \sum_{i=1}^N Exposure_i \cdot PD_i$
where CCExposure is the total credit and counterparty exposure of the organization, N is the total number of counterparties of the organization, $Exposure_i$ is the dollar exposure to the ith counterparty, and $PD_i$ is the probability of default of the ith counterparty:
$PD_i = \frac{100 - (Creditworthiness \ Rank)}{100}\%$
- The total credit and counterparty exposure is then normalized by the capital of the firm and added to the aggregate VaR number.
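The weighted-exposure formula reduces to a short loop; the counterparty book and firm capital below are hypothetical numbers used only to illustrate the arithmetic.

```python
def credit_counterparty_exposure(counterparties):
    """Sum of dollar exposures weighted by each counterparty's default probability.

    Each entry is (dollar_exposure, creditworthiness_rank on a 0-100 scale),
    with PD_i = (100 - rank) / 100.
    """
    total = 0.0
    for exposure, rank in counterparties:
        pd_i = (100.0 - rank) / 100.0
        total += exposure * pd_i
    return total

# Hypothetical counterparty book: (exposure in dollars, creditworthiness rank)
book = [
    (10_000_000, 95),   # prime broker, rank 95 -> PD of 5%
    (4_000_000, 80),    # smaller dealer, rank 80 -> PD of 20%
]
cc_exposure = credit_counterparty_exposure(book)
# Normalized by (hypothetical) firm capital before joining the aggregate VaR number
cc_exposure_pct = cc_exposure / 50_000_000
```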
💧 Measuring Liquidity Risk
Liquidity risk measures the firm's potential inability to unwind or hedge positions in a timely manner at current market prices. It can result in minor price slippages due to delays in trade execution and, at its extreme, can cause collapses of market systems.
- The inability to close out positions is normally due to low levels of market liquidity relative to the position size.
- The lower the market liquidity available for a specific instrument, the higher the liquidity risk associated with that instrument.
- Levels of liquidity vary from instrument to instrument and depend on the number of market participants willing to transact in the instrument under consideration.
- Trading liquidity risk vs. balance sheet liquidity risk:
- Trading liquidity risk is the firm's potential inability to unwind or hedge positions in a timely manner at current market prices.
- Balance sheet liquidity risk is the inability to finance a balance sheet shortfall through either liquidation or borrowing.
- To properly assess the liquidity risk exposure of a portfolio, it is necessary to take into account all potential portfolio liquidation costs, including the opportunity costs associated with any delays in execution.
- While liquidation costs are stable and easy to estimate during periods of low volatility, they can vary wildly during high-volatility regimes.
- Liquidity-adjusted VaR measure:
$VaR_L = VaR + Liquidity \ Adjustment = VaR - (µ_S + z_α σ_S)$
where VaR is the market risk value-at-risk, $µ_S$ is the mean expected bid-ask spread, $σ_S$ is the standard deviation of the bid-ask spread, and $z_α$ is the confidence coefficient corresponding to the desired α percent of the VaR estimation.
- Both $µ_S$ and $σ_S$ can be estimated either from raw spread data or from the Roll model.
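The spread-based adjustment can be computed directly from quote data. In this sketch the bid-ask spreads are simulated, and $z_α$ is taken as the positive normal quantile (about 1.645 at 95 percent), so a wider or more volatile spread deepens the loss estimate:

```python
import numpy as np
from statistics import NormalDist

def liquidity_adjusted_var(var, spreads, confidence=0.95):
    """VaR_L = VaR - (mu_S + z_alpha * sigma_S), the spread-based adjustment."""
    mu_s = float(np.mean(spreads))          # mean expected bid-ask spread
    sigma_s = float(np.std(spreads))        # standard deviation of the spread
    z = NormalDist().inv_cdf(confidence)    # about +1.645 at 95%
    return var - (mu_s + z * sigma_s)

rng = np.random.default_rng(4)
spreads = np.abs(rng.normal(2e-4, 5e-5, 1_000))   # simulated relative spreads
var_l = liquidity_adjusted_var(var=-0.0160, spreads=spreads)
```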
- Using Kyle's λ measure, the VaR liquidity adjustment can be similarly computed through estimation of the mean and standard deviation of the net trade volume:
$VaR_L = VaR + Liquidity \ Adjustment = VaR - (α̂ + λ̂(µ_{NVOL} + z_α σ_{NVOL}))$
where $α̂$ and $λ̂$ are estimated using OLS regression following Kyle:
$ΔP_t = α + λ \cdot NVOL_t + ε_t$
where $ΔP_t$ is the change in market price due to the market impact of orders, and $NVOL_t$ is the difference between the buy and sell market depths in period t.
- Using the Amihud illiquidity measure, the adjustment can be applied as follows:
$VaR_L = VaR + Liquidity \ Adjustment = VaR - (µ_γ + z_α σ_γ)$
where $µ_γ$ and $σ_γ$ are the mean and standard deviation of the Amihud illiquidity measure γ:
$γ_t = \frac{1}{D_t}\sum_{d=1}^{D_t}\frac{|r_{t,d}|}{v_{t,d}}$
where $D_t$ is the number of trades executed during time period t, $r_{t,d}$ is the relative price change following trade d during trade period t, and $v_{t,d}$ is the trade quantity executed within trade d.
⚙️ Measuring Operational Risk
Operational risk is the risk of financial losses resulting from one or more of the following: inadequate or failed internal controls, policies, or procedures; failure to comply with government regulations; systems failures; fraud; human error; or external catastrophes.
-
Operational risk can affect the firm in many ways.
- A risk of fraud can taint the reputation of the firm and will therefore become a "reputation risk."
- Systems failures may result in disrupted trading activity and lost opportunity costs for capital allocation.
-
The Basel Committee for Bank Supervision has issued the following examples of different types of operational risk:
- Internal fraud
- External fraud
- Employment practices and workplace safety
- Clients, products, and business practice
- Damage to physical assets
- Business disruption and systems failures
- Execution, delivery, and process management
-
Few statistical frameworks have been developed for measurement of operational risk; the risk is estimated using a combination of scalar and scenario analyses.
-
Quantification of operational risk begins with the development of hypothetical scenarios of what can go wrong in the operation.
- Each scenario is then quantified in terms of the dollar impact the scenario will produce on the operation in the base, best, and worst cases.
- To align the results of scenario analysis with the VaR results obtained from estimates of other types of risk, the estimated worst-case dollar impact on operations is then normalized by the capitalization of the trading operation and added to the market VaR estimates.
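The scenario-quantification step can be sketched as follows (the scenario structure, dollar figures, and function name are hypothetical illustrations, not the author's framework):

```python
def operational_var_addon(scenarios, capitalization):
    """Sum the worst-case dollar impacts across hypothetical operational-risk
    scenarios and normalize by the capitalization of the trading operation,
    so the result can be added to market VaR estimates.

    scenarios: {name: (base, best, worst)} dollar impacts.
    """
    worst_total = sum(worst for _base, _best, worst in scenarios.values())
    return worst_total / capitalization
```

Usage: with, say, a $5M worst-case systems failure and a $10M worst-case fraud scenario against $100M of capital, the add-on would be 0.15, a number on the same relative scale as the market VaR.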
⚖️ Measuring Legal Risk
Legal risk measures the risk of breach of contractual obligations. It addresses all kinds of potential contract frustration, including contract formation, seniority of contractual agreements, and the like.
-
The estimation of legal risk is conducted by a legal expert affiliated with the firm, primarily using a causal framework.
- The causal analysis identifies the key risk indicators embedded in the current legal contracts of the firm and then works to quantify possible outcomes caused by changes in the key risk indicators.
-
As with other types of risk, the output of legal risk analysis is a VaR number: for a 95 percent VaR estimate, the legal loss that would be exceeded with just a 5 percent probability.
🚧 Managing Risk
-
Once market risk has been estimated, a market risk management framework can be established to minimize the adverse impact of the market risk on the trading operation.
-
Most risk management systems work in the following two ways:
- Stop losses—stop current transaction(s) to prevent further losses
- Hedging—hedge risk exposure with complementary financial instruments
🛑 Stop Losses
A stop loss is a threshold price of a given security, which, if crossed by the market price, triggers liquidation of the current position.
-
In credit and counterparty risk, a stop loss is a level of counterparty creditworthiness below which the trading operation makes a conscious decision to stop dealing with the deteriorating counterparty.
-
In liquidity risk, the stop loss is the minimum level of liquidity that warrants keeping positions open in a given security.
-
In operations risk, the stop loss is a set of conditions according to which a particular operational aspect is reviewed and terminated, if necessary.
- Example: Compromised Internet security may mandate a complete shutdown of trading operations until the issue is resolved.
-
In legal risk, a stop loss can take the form of a settlement, entered when the legal expenses that would otherwise be incurred are on track to exceed the predetermined stop-loss level.
Simple Stop Loss
- In market risk management, a simple stop loss defines a fixed level of the threshold price.
- Example: If USD/CAD was bought at 1.2000 at 12:00 P.M. EST with a simple stop loss of 50 bps, the position will be liquidated whenever USD/CAD drops below 1.1950, provided the position is not closed sooner for some other reason.
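A minimal sketch of the simple stop-loss check for a long position, using the quote convention of the USD/CAD example (1 bp = 0.0001 in quote terms; function name is illustrative):

```python
def simple_stop_triggered(entry_price, current_price, stop_bps=50):
    """True when the market price falls below the fixed threshold:
    entry price minus the stop differential (long position)."""
    return current_price < entry_price - stop_bps * 0.0001
```

For a 1.2000 entry with a 50 bp stop, the threshold sits at 1.1950 regardless of how far the position first moves into profit.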
Trailing Stop
-
A trailing stop, on the other hand, takes into account the movement of the security's market price from the time the trading position was opened.
- The trailing stop "trails" the security's market price.
-
Unlike the simple stop that defines a fixed price level at which to trigger a stop loss, the trailing stop defines a fixed stop-loss differential relative to the maximum gain attained in the position.
-
Example: Suppose USD/CAD was bought at 12:00 P.M. EST at 1.2000 with a trailing stop loss of 50 bps.
- If by 12:15 P.M. EST the market price for USD/CAD rose to 1.2067, but by 1:30 P.M. EST the market price dropped to 1.1940, the trailing stop loss would be triggered 50 bps below the market price corresponding to the highest local maximum of the gain function.
- In this example, the local maximum of gain appeared at 1.2067, when the position gain was 1.2067 − 1.2000 = 0.0067.
- The corresponding trailing stop loss would be hit as soon as the market price for USD/CAD dipped below 1.2067 − 50 bps = 1.2017, resulting in a realized profit of 17 bps, a big improvement over performance with a simple stop loss.
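The trailing-stop logic can be sketched as a small stateful class (illustrative names; long positions only, same 1 bp = 0.0001 convention):

```python
class TrailingStop:
    """Trailing stop for a long position: trigger when the price drops
    a fixed differential below the running maximum since entry."""

    def __init__(self, entry_price, stop_bps=50):
        self.high = entry_price            # highest price seen so far
        self.diff = stop_bps * 0.0001      # fixed stop differential

    def update(self, price):
        """Feed a new market price; return True if the stop is hit."""
        self.high = max(self.high, price)  # the stop "trails" the price upward
        return price < self.high - self.diff
```

Replaying the USD/CAD example: after the rise to 1.2067 the stop trails up to 1.2017, so the drop toward 1.1940 triggers liquidation at the 1.2017 threshold.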
-
If the stop-loss threshold is too narrow, the position may be closed due to short-term variations in prices or even due to variation in the bid-ask spread.
-
If the stop-loss threshold is too wide, the position may be closed too late, resulting in severe drawdowns.
-
Many trading practitioners calibrate the stop-loss thresholds to the intrinsic volatility of the traded security.
- Example: For a position opened during high-volatility conditions, with the price bouncing wildly, a trader will set wide stop losses; for positions opened during low-volatility conditions, narrow stop thresholds are appropriate.
-
The actual determination of the stop-loss threshold based on market volatility of the traded security is typically calibrated with the following two factors in mind:
- Average gain of the trading system without stop losses in place
- Average loss of the trading system without stop losses in place
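One possible sketch of volatility-based calibration: scale the stop differential by the standard deviation of recent price changes, so positions opened in high-volatility conditions get wider stops. The multiplier k and the use of raw price changes are assumptions for illustration, not the author's prescription:

```python
from statistics import stdev

def volatility_scaled_stop(recent_prices, k=2.0):
    """Hypothetical calibration: stop differential = k times the standard
    deviation of recent one-period price changes."""
    changes = [b - a for a, b in zip(recent_prices, recent_prices[1:])]
    return k * stdev(changes)
```

In practice k would itself be tuned against the system's average gain and average loss without stops, per the two factors above.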
-
The probabilities of a particular position turning out positive also play a role in determining the optimal stop-loss threshold, but their role is a much smaller one than that of the averages.
-
The main reason the probabilities of gains and losses carry so little weight is that, per the Gambler's Ruin Problem, the probability of a gain must always exceed the probability of a loss on any given trade anyway; otherwise, the system faces the certainty of eventual bankruptcy.