Monday, May 22, 2023

ECONOMIC ANALYSIS

All engineers must make tradeoffs in the four domains shown in the figure. Good engineers follow a disciplined and structured approach when developing a product or system. Costing hardware, software, and integration requires an understanding of many MPTs and much terminology, but few engineers have received this formal training. Once technical characteristics have been ascertained from the requirements, selecting the right MPTs is critical to accurately determining costs early in the development cycle and to estimating realistic life-cycle costs (LCC).



The most common engineering cost analysis techniques are those taught in engineering economy courses, and they include the time value of money. The ability to conduct relevant economic analysis is key to technologically innovative solutions grounded in business reality.
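
As a minimal illustration of the time value of money, the following Python sketch (with assumed, hypothetical values) shows the equivalence between a present sum and a future sum at a given interest rate.

# Minimal sketch: time value of money with hypothetical values.
present_value = 10_000.0   # amount held today
rate = 0.07                # assumed annual interest rate
years = 5

# A present sum grows to an equivalent future sum at the given rate
future_value = present_value * (1 + rate) ** years
print(f"${present_value:,.2f} today is equivalent to ${future_value:,.2f} in {years} years at {rate:.0%}")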


Simulation-based costing (SBC) has become the methodology of choice for estimating cost and schedule for large systems. Although complex and data intensive, SBC has tremendous utility in assessing risk and variability.


Because of complexity and technology, the costing of complex systems has become a tremendous challenge. Most cost estimators understand how to cost hardware and, to a lesser extent, software. However, we are still developing tools and processes for costing the integration and interfaces of complex systems. As engineering design and development scale to larger and more complex systems, systems of systems (SoS), and enterprises, our ability to determine costs becomes less reliable.

Sunday, January 8, 2023

Methods to Demonstrate Depreciation

Depreciation is simply the systematic allocation of the cost of a capital expenditure item over its useful life. In reality, depreciation is nothing more than an accounting charge that reduces the overall value of an asset, due to its depletion, for income tax purposes. Depreciation can be calculated with many methods. To demonstrate depreciation, we shall present both the straight-line and the modified accelerated cost recovery system (MACRS) methods. MACRS is the most common method for tax depreciation because, as the name implies, it allows for accelerated depreciation.


Straight-line depreciation is simply

Dn = (I - S) / n

where:


Dn = depreciation allowance in year n

I = cost base

S = salvage value

n = useful life of asset in years


Note that the cost base includes both the actual cost and the cost to put the asset into operation. The book value of the asset at the end of year n can be expressed as

Bn = I - n × Dn



MACRS is used both for tax purposes and for internal accounting. It allows for the recovery of more costs early in the life of the investment. Depending on the type of asset, the Internal Revenue Service (IRS) allows for different cost recovery periods. A yearly depreciation rate is then multiplied by the cost base to determine the annual depreciation amount. Example 1 shows how MACRS allows for earlier depreciation of assets when compared to the straight-line method. With the exception of this example, the details of the MACRS depreciation rates are not presented because those rates are subject to change by the IRS.


EXAMPLE 1


Your small consulting company is evaluating a circuit board testing machine. The device costs $35,000, and the maker estimates that it will have a salvage value of $6,000 after 5 years of use. Determine the annual depreciation using both the straight-line method and the MACRS method.


SOLUTION


a. Straight-line:

Dn = (35,000 - 6,000) / 5 = $5,800 per year for each of the 5 years

b. MACRS: Under IRS guidelines (IRS, 2009), a circuit board testing machine would be classed as “high-technology equipment,” which has a 5-year recovery period. Thus, the equipment could be depreciated according to this schedule:

Year 1: 20.00% of $35,000 = $7,000
Year 2: 32.00% of $35,000 = $11,200
Year 3: 19.20% of $35,000 = $6,720
Year 4: 11.52% of $35,000 = $4,032
Year 5: 11.52% of $35,000 = $4,032
Year 6: 5.76% of $35,000 = $2,016

Note that a 5-year recovery period is depreciated over 6 years. This is based on the assumption that the equipment will be sold during the sixth year. Also note that if a piece of capital equipment is sold before it can be depreciated fully, one-half the normal amount can be depreciated during that year.
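
The calculations in Example 1 can be reproduced with a short script. The following is a minimal Python sketch, assuming the standard IRS 5-year MACRS percentages (half-year convention) and the cost figures from the example.

# Minimal sketch: straight-line vs. 5-year MACRS depreciation for Example 1.
cost_base = 35_000.0   # I, including the cost to put the asset into operation
salvage = 6_000.0      # S
life_years = 5         # n

# Straight-line: constant annual allowance Dn = (I - S) / n
straight_line = (cost_base - salvage) / life_years
print(f"Straight-line allowance, years 1-{life_years}: ${straight_line:,.2f} per year")

# Standard 5-year MACRS rates (half-year convention); MACRS ignores salvage value
macrs_rates = [0.20, 0.32, 0.192, 0.1152, 0.1152, 0.0576]
for year, rate in enumerate(macrs_rates, start=1):
    print(f"MACRS year {year}: rate {rate:6.2%}, allowance ${cost_base * rate:,.2f}")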

Internally developed software is amortized on a straight-line basis over 5 years (or over a shorter period if you can show that a shorter period is appropriate). Software obtained as part of a business acquisition can be amortized over 15 years, and purchased software is generally amortizable over 3 years. Depreciation can therefore play a role in choosing whether to develop in house or to procure commercially, because software can be amortized over different time horizons depending on whether it is an off-the-shelf item or is internally developed.
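
As a small illustration of how the amortization horizon affects the annual charge, the sketch below (a hypothetical software cost, assuming straight-line amortization over the periods quoted above) compares the three cases.

# Minimal sketch: annual straight-line amortization of the same (hypothetical)
# software cost under the three recovery periods quoted above.
software_cost = 300_000.0

recovery_periods = {
    "Purchased off-the-shelf (3 years)": 3,
    "Internally developed (5 years)": 5,
    "Obtained in a business acquisition (15 years)": 15,
}

for label, years in recovery_periods.items():
    print(f"{label}: ${software_cost / years:,.2f} per year")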

Thursday, September 1, 2022

CAPITAL BUDGETING DECISION

The ability to plan and economically evaluate major projects is part of most large government agencies and engineering firms. Most agencies have guidelines and regulations for rationally evaluating proposed projects with regard to their economic feasibility. Almost all guidelines and regulations require that the benefits of the proposed project exceed its cost. Because corporate investment in projects has serious consequences for the financial viability of a corporation, private-sector projects are often easier to evaluate than their public-sector counterparts. Corporate policies require rational and deliberate analysis of capital budgeting decisions before projects are approved. Corporate investment is different from government investment in major capital projects because corporate entities must also consider the source of funding. In many instances, the ability to finance a project (rather than the identification of the most economical alternative) determines its feasibility. In some cases, capital projects may be financed through corporate bonds or other vehicles. Invariably, when the project nears realization, financing will depend on borrowed money. The method of financing must be considered in the corporate capital budgeting decision because this factor may determine the viability of the project.


It is important to understand that the economic evaluation of alternatives and the evaluation of alternative financing for a project are key to the viability of that project. Economic evaluation involves developing the cash flows representing the benefits and costs associated with the acquisition and/or operation of the system. The cash flow over the life cycle is often referred to as the economic cash flow. Economic analysis of a program or project should include the financing plan—the cash flow representing the incomes and expenses for funding the project.


1. Basic Concepts in Capital Budgeting


The object of the economic evaluation is to select the most cost-effective alternative that will satisfy the stakeholder requirements. It may be to approve or reject a single project or a family of projects (i.e., a program). Thus, prior to performing the economic analysis, one must identify the alternatives. To analyze the investment under consideration, you need to collect stakeholder requirements and then establish a life cycle. For some agencies and types of projects, regulations govern the life cycle. For instance, most federal and state flood control projects have a life cycle of 50 years. A co-generation plant (a plant that produces steam for a customer and sells excess energy, perhaps as electricity, to another organization) may have a life cycle of 20 years. The life cycle should bear some relationship to the life of the product.


To determine whether a project is feasible, you must compare its rate of return to a minimum attractive rate of return (MARR), the required rate of return for the project. The MARR should be greater than the rate of return one could obtain, at roughly the same risk, in another venture. A project is considered feasible if its rate of return exceeds the MARR. Obviously, MARR has different implications for public- and private-sector projects.


In economic evaluations, project alternatives are analyzed with respect to their cash flow profiles over the n periods in the life cycle. This type of analysis is usually presented in spreadsheet format and will be referred to here as a cash flow schedule. The interest period is traditionally a year, but it need not be.


Once money is invested in a project, those funds are no longer available for other investments. Opportunity cost is the return that could have been realized by investing in the next best alternative, if one is defined, or in another opportunity that becomes known after the decision is made. In general, the MARR reflects the opportunity cost of capital, the market interest rates for borrowing and lending, and the risks associated with the investment in the capital project. For public agencies, policies or regulations may specify the MARR.
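
As a concrete illustration of these concepts, the sketch below (a hypothetical cash flow schedule and an assumed MARR, not taken from the text) lays out a simple cash flow profile and tests feasibility by computing the net present value at the MARR.

# Minimal sketch: a cash flow schedule evaluated against an assumed MARR.
marr = 0.10  # minimum attractive rate of return (assumed)

# Cash flow schedule by period: year 0 is the investment, years 1-5 are net benefits
cash_flows = [-100_000, 25_000, 30_000, 30_000, 35_000, 35_000]

# Net present value of the schedule discounted at the MARR
npv = sum(cf / (1 + marr) ** t for t, cf in enumerate(cash_flows))

print(f"NPV at a MARR of {marr:.0%}: ${npv:,.2f}")
print("Feasible: rate of return exceeds the MARR" if npv > 0 else "Not feasible at this MARR")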


2. Benefit and Cost Development


For commercial, for-profit products, the benefit or revenue of a capital project is relatively easy to determine. The basic revenue is what you can sell the product for, the rental income it can produce, the depreciation or other assets available for tax credits, the rental costs avoided by relocating staff, and so on. For the co-generation plant, it is the steam revenue from the primary user, the revenue from the electricity sold to the grid, and so on. For an office building, it is the rent avoided as well as any rental income produced; both may include tax credits.


For public capital projects such as roads, dams, bridges, and mental health facilities, the benefits may be harder to ascertain. For a flood control project, the expected benefits are the property losses avoided, lives not being disrupted, public facilities staying open, and so on. Such social benefits are more difficult to quantify.


The costs, on the other hand, are more easily quantified. Certainly all the capital costs (labor, equipment, materials, supplies, financing, etc.), in addition to the costs to operate and maintain the facility, can be calculated. Finally, the most complicated cost element may be the amount and type of financing involved. The amount will have a great impact on the cash flow analysis, and depending on the discount rate, the method of debt repayment may make the difference between a viable and a nonviable project.
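
To illustrate how the method of debt repayment can affect viability, the sketch below (hypothetical loan terms and discount rate, not from the text) compares the present value of two repayment schedules for the same principal: equal annual amortized payments versus interest-only payments with the principal repaid at the end.

# Minimal sketch: present value of financing costs under two repayment methods.
principal = 1_000_000.0
loan_rate = 0.06       # interest rate on the borrowed money (assumed)
discount_rate = 0.08   # discount rate used in the cash flow analysis (assumed)
years = 10

# Option 1: fully amortized loan with equal annual payments
annual_payment = principal * loan_rate / (1 - (1 + loan_rate) ** -years)
pv_amortized = sum(annual_payment / (1 + discount_rate) ** t for t in range(1, years + 1))

# Option 2: interest-only payments, with the principal repaid in the final year
payments = [principal * loan_rate] * years
payments[-1] += principal
pv_interest_only = sum(p / (1 + discount_rate) ** t for t, p in enumerate(payments, start=1))

print(f"PV of amortized payments:     ${pv_amortized:,.2f}")
print(f"PV of interest-only schedule: ${pv_interest_only:,.2f}")

With a discount rate above the loan rate, deferring principal repayment lowers the present value of the financing costs; with a discount rate below the loan rate, the opposite holds.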


Tuesday, April 19, 2022

THE NATURE OF MODEL ERROR IN FINANCIAL MODELLING

Models are, by nature, simplifications of (and approximations to) the real world. Errors can be introduced at each stage (as presented in the figure below):

Specification error. This is the difference between the behaviour of the real-world situation and that captured within the specification or intentions of the model (sometimes this individual part is referred to as “model risk” or “model error”). Although one may often be able to provide a reasonable intuitive assessment of the nature of some such errors, it is extremely challenging to provide a robust quantification, simply because the nature of the real world is not fully known. (By definition, the ability to precisely define and calculate model error would only arise if such error were fully understood, in which case, it could essentially be captured in a revised model, with error then having been eliminated.) Further, whilst one may be aware of some simplifications that the model contains compared to the real-life situation, there are almost certainly possible behaviours of the real-life situation that are not known about. In a sense, one must essentially “hope” that the model is a sufficiently accurate representation for the purposes at hand. Of course, a good intuition, repeated empirical observations and large data sets can increase the likelihood that a conceptual model is correct (and improve one’s confidence in it), but ultimately there will be some residual uncertainty (“black swans” or “unknown unknowns”, for example).


Implementation error. This is the difference between the specified model (as conceived or intended) and the model as implemented. Such errors could result from mistakes (calculation errors) or from subtler issues, such as the use of a discrete time axis in Excel (when events in fact materialise in continuous time), or of a finite time axis (instead of an unlimited one). Errors also arise frequently where a model calculates correctly in the base case, but not in other cases (due to mistakes, or to overlooking key aspects of the behaviour of the situation).
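
As one simple illustration of the discrete-versus-continuous point (a Python sketch with assumed values, not from the text), the following compares annual compounding on a discrete time axis with its continuous-time counterpart.

# Minimal sketch: the gap introduced by modelling a continuous process on a
# discrete (annual) time axis, using assumed values.
import math

principal = 1_000.0
rate = 0.08    # assumed annual rate
years = 10

discrete = principal * (1 + rate) ** years        # discrete, end-of-year compounding
continuous = principal * math.exp(rate * years)   # continuous-time equivalent

print(f"Discrete (annual) compounding: {discrete:,.2f}")
print(f"Continuous compounding:        {continuous:,.2f}")
print(f"Difference:                    {continuous - discrete:,.2f}")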


Decision error. This is the idea that a decision that is made based on the results of a model could be inappropriate. It captures the (lack of) effectiveness of the decision-making process, including a lack of understanding of a model and its limitations. Note that a poor outcome following a decision does not necessarily imply that the decision was poor, nor does a good outcome imply that the decision was the correct choice. Some types of model error relate to multiple process stages (rather than a single one), including where insufficient attention is given to scenarios, risk and uncertainties.

Tuesday, November 2, 2021

BENEFITS OF USING FINANCIAL MODELS IN DECISION SUPPORT

Providing Numerical Information

A model calculates the possible values of variables that are considered important in the context of the decision at hand. Of course, this information is often of paramount importance, especially when committing resources, budgeting and so on.

Nevertheless, the calculation of the numerical values of key variables is not the only reason to build models; the modelling process often has an important exploratory and insight-generating aspect (see later in this section). In fact, many insights can often be generated early in the overall process, whereas numerical values tend to be of most use later on.

Capturing Influencing Factors and Relationships

The process of building a model should force a consideration of which factors influence the situation, including which are most important. Whilst such reflections may be of an  intuitive or qualitative nature (at the early stages), much insight can be gained through the use of a quantitative process. The quantification of the relationships requires one to consider the nature of the relationships in a very precise way (e.g. whether a change in one would impact another and by how much, whether such a change is linear or non-linear, whether other variables are also affected, or whether there are (partially) common causal factors between variables, and so on).

Generating Insight and Forming Hypotheses

The modelling process should highlight areas where one’s knowledge is incomplete, what further actions could be taken to improve this, as well as what data is needed. This can be valuable in its own right. In fact, a model is effectively an explicit record of the assumptions and of the (hypothesised) relationships between items (which may change as further knowledge is developed). The process therefore provides a structured approach to develop a better understanding. It often uncovers many assumptions that are being made implicitly (and which may be imprecisely understood or incorrect), as well as identifying the assumptions that are required and appropriate. As such, both the qualitative and the quantitative aspects of the process should provide new insights and identify issues for further exploration.

The overlooking or underestimation of these exploratory aspects is one of the main inefficiencies in many modelling processes, which are often delegated to junior staff who are competent in “doing the numbers”, but who may lack the experience, project exposure, authority or credibility to identify and report many of the key insights, especially those that may challenge current assumptions. Thus, many possible insights are either lost or are simply never generated in the first place. Where a model produces results that are not readily explained intuitively, there are two generic cases:

◾ It is over-simplified, highly inaccurate or wrong in some important way. For example, key variables may have been left out, dependencies not correctly captured, or the assumptions used for the values of variables may be wrong or poorly estimated.

◾ It is essentially correct, but provides results which are not intuitive. In such situations, the modelling process can be used to adapt, explore and generate new insights, so that ultimately both the intuition and the model’s outputs become aligned. This can be a value-added process, particularly if it highlights areas where one’s initial intuition may be lacking.

In this context, the following well-known quotes come to mind:

◾ “Plans are worthless, but planning is everything” (Eisenhower).

◾ “All models are wrong, but some are useful” (Box).

◾ “Perfection is the enemy of the good” (Voltaire).

Decision Levers, Scenarios, Uncertainties, Optimisation, Risk Mitigation and Project Design

When conducted rigorously, the modelling process distinguishes factors which are controllable from those which are not. It may also highlight that some items are partially  controllable, but require further actions that may not (currently) be reflected in the planning nor in the model (e.g. the introduction of risk mitigation actions). Ultimately, controllable items correspond to potential decisions that should be taken in an optimal way, and non-controllable items are those which are risky or subject to uncertainty.

The use of sensitivity, scenario and risk techniques can also provide insight into the extent of possible exposure if a decision were to proceed as planned, lead to modifications to the project or decision design, and allow one to find an optimal decision or project structure.

Improving Working Processes, Enhanced Communications and Precise Data Requirements 

A model provides a structured framework to take information from subject matter specialists or experts. It can help to define precisely the information requirements, which improves the effectiveness of the research and collection process to obtain such information. The overall process and results should also help to improve communications, due to the insights and transparency generated, as well as creating a clear structure for common working and co-ordination.

Sunday, January 10, 2021

Financial Modelling: The Backward Thinking and Forward Calculation Processes

The modelling process is essentially two-directional (see the figure below):

◾ A "backward thought process", in which one considers a variable of interest (the model output) and defines its underlying, or causal, factors. This is a qualitative  process, corresponding to reading Figure from left to right. For example, cash flow may be represented as being determined from revenue and cost, each of which may be determined by their own causal factors (e.g. revenue is determined by price and volume). As a qualitative process, at this stage, the precise the nature of the relationships may not yet be made clear: only that the relationships exist.
◾ A “forward-calculation process”, in which one starts with the assumed values of the final set of causal factors (the “model inputs”) and builds the required calculations to determine the values of the intermediate variables and the final outputs. This is a numerical process, corresponding to reading the figure from right to left. It involves defining the nature of the relationships sufficiently precisely that they can be implemented in quantitative formulae. That is, inputs are used to calculate the intermediate variables, which are used to calculate the outputs. For example, revenue would be calculated (from an assumed price and volume), and cost (based on fixed and variable costs and volume), with the cash flow as the final output; a minimal sketch of such a forward calculation follows this list.
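
The following is a minimal sketch of such a forward calculation (in Python rather than Excel, with assumed illustrative input values): the model inputs are used to calculate the intermediate variables, which in turn determine the output.

# Minimal sketch of the forward-calculation process: inputs -> intermediates -> output.
# All input values are assumed for illustration.

# Model inputs
price = 10.0           # selling price per unit
volume = 1_000         # units sold
fixed_cost = 2_000.0   # fixed cost per period
variable_cost = 6.0    # variable cost per unit

# Intermediate variables calculated from the inputs
revenue = price * volume
cost = fixed_cost + variable_cost * volume

# Final model output
cash_flow = revenue - cost

print(f"Revenue:   {revenue:,.2f}")
print(f"Cost:      {cost:,.2f}")
print(f"Cash flow: {cash_flow:,.2f}")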

Note that the process is likely to contain several iterations: items that may initially be numerical inputs may be chosen to be replaced by calculations (which are determined from new numerical inputs), thus creating a model with more input variables and detail. For example, rather than being a single figure, volume could be split by product group. In principle, one may continue the process indefinitely (i.e. repeatedly replacing hard-coded numerical inputs with intermediate calculations). Of course, the potential process of creating more and more detail must stop at some point:

◾ For the simple reason of practicality.
◾ To ensure accuracy. Although the creation of more detail might be expected to produce a more accurate model, this is not always the case: a detailed model requires more information to calibrate correctly (for example, to estimate the values of all the inputs). Further, capturing the relationships between these inputs becomes progressively more complex as more detail is added.

It may be of interest to note that this framework is slightly simplified (albeit covering the large majority of cases in typical Excel contexts):

◾ In some applications (notably sequential optimisation of a time series, and decision trees), the calculations are required to be conducted both forward and backward, as the optimal behaviour at an earlier time depends on considering all the future consequences of each potential decision.
◾ In econometrics, some equations may be of an equilibrium nature, i.e. they contain the same variable(s) on both sides of an equation(s). In such cases, the logic flow is not directional, and will potentially give rise to circular references in the implemented models. 

Saturday, January 9, 2021

The Stages of Financial Modelling

The modelling process can be considered as consisting of several stages, as shown in the figure below.

The key characteristics of each stage include:

Specification: This involves describing the real-life situation, either qualitatively or as a set of equations. In any case, at this stage one should also consider the overall objectives and decision-making needs, and capture the core elements of  the behaviour of the real-world situation. One should also address issues relating to the desired scope of model validity, the level of accuracy required and the trade-offs that are acceptable to avoid excessive complexity whilst providing an adequate basis for decision support.
Implementation: This is the process to translate the specification into numerical values, by conducting calculations based on assumed input values. For the purposes of this text, the calculations are assumed to be in Excel, perhaps also using additional compatible functionality (such as VBA  macros, Excel add-ins, optimisation algorithms, links to external databases and so on).
Decision support: A model should appropriately support the decision. However, as a simplification of the real-life situation, a model by itself is almost never sufficient. A key challenge in building and using models to greatest effect is to ensure that the process and outputs provide a value-added decision-support guide (not least by providing insight, reducing biases or correcting invalid assumptions that may be inherent in less-rigorous decision processes), whilst recognising the limitations of the model and the modelling process.

Note that in many practical cases, no explicit specification step is conducted; rather, knowledge of a situation is used to build an Excel workbook directly. Since Excel does not calculate incorrectly, such a model can never truly be “(externally) validated”: the model specification is the model itself (i.e. as captured within the formulae used in Excel). Although such “self-validation” is in principle a significant weakness of these pragmatic approaches, the use of a highly formalised specification stage is often not practical (especially if one is working under tight deadlines, or one believes that the situation is generally well-understood). Some of the techniques discussed in this text (such as sensitivity-driven model design and the following of other best practices) are particularly important to support robust modelling processes, even where little or no documented specification has taken place or is practically possible.