Estudios Gerenciales
Print version ISSN 0123-5923
estud.gerenc. vol.32 no.140 Cali July/Sept. 2016
https://doi.org/10.1016/j.estger.2016.06.004
ARTICLE
An analysis on operational risk in international banking: A Bayesian approach (2007–2011)
Un análisis del riesgo operacional en la banca internacional: un enfoque bayesiano (2007-2011)
Uma análise do risco operacional no sistema bancário internacional: uma abordagem bayesiana (2007-2011)
José Francisco Martínez-Sáncheza, María Teresa V. Martínez-Palaciosa, Francisco Venegas-Martínezb
a Profesor-Investigador, Escuela Superior de Apan, Universidad Autónoma del Estado de Hidalgo, Apan, Mexico
b Profesor-Investigador, Escuela Superior de Economía, Instituto Politécnico Nacional, México D.F., Mexico. Corresponding author at: Cerro del Vigía 15, Col. Campestre Churubusco, Del. Coyoacán, 04200 México D.F., Mexico. fvenegas1111@yahoo.com.mx
Received January 16, 2016
Accepted June 27, 2016
Available online 13 September 2016
Abstract
This study aims to develop a Bayesian methodology to identify, quantify and measure operational risk in several business lines of commercial banking. To do this, a Bayesian network (BN) model is designed with prior and posterior distributions to estimate frequency and severity. With the posterior distributions, inference on the maximum expected loss, for a period of 20 days, is carried out by using the Monte Carlo simulation method. The business lines analyzed are marketing and sales, retail banking and private banking, which together accounted for 88.5% of the losses in 2011. Data for the period 2007–2011 was obtained from the Operational Riskdata eXchange Association (ORX), and information from qualified experts was used to complete missing records or to improve data of poor quality.
Keywords: Operational risk; Bayesian analysis; Monte Carlo simulation.
JEL classification: D81. C11. C15.
Resumen
Esta investigación tiene como propósito desarrollar una metodología bayesiana para identificar, cuantificar y medir el riesgo operacional en distintas líneas de negocio de la banca comercial. Para ello se diseña un modelo de red bayesiana con distribuciones a priori y a posteriori para estimar la frecuencia y la severidad. Con las distribuciones a posteriori se realiza inferencia sobre la máxima pérdida esperada, para un período de 20 días, utilizando el método de simulación Monte Carlo. Las líneas de negocio analizadas son comercialización y ventas, banca minorista y banca privada, que en conjunto representaron el 88,5% de las pérdidas en 2011. Los datos fueron obtenidos de la Operational Riskdata eXchange Association (ORX) para el período 2007-2011, y la información externa fue proporcionada por expertos calificados para completar los registros faltantes o mejorar los datos de mala calidad.
Palabras clave Riesgo operacional; Análisis bayesiano; Simulación Monte Carlo.
Códigos JEL: D81. C11. C15.
Resumo
Esta pesquisa tem como objetivo desenvolver uma metodologia Bayesiana para identificar, quantificar e medir o risco operacional em diversas linhas de negócio da banca comercial. Para isso, é projetado um modelo de Rede Bayesiana (RB) com distribuições a priori e a posteriori para estimar a frequência e a severidade. Com as distribuições a posteriori é realizada uma inferência sobre a perda máxima esperada por um período de 20 dias, usando o método de simulação de Monte Carlo. As linhas de negócio analisadas são marketing e vendas, banca de retalho e banca privada, que juntas representaram 88,5% das perdas em 2011. Os dados foram obtidos a partir da Operational Riskdata eXchange Association (ORX) para o período 2007-2011, e a informação externa foi fornecida por peritos qualificados para completar os registros ausentes ou melhorar os dados de má qualidade.
Palavras-chave: Risco operacional; Análise bayesiana; Simulação de Monte Carlo.
Classificações JEL: D81. C11. C15.
Introduction
While in 2004 regulators focused on market, credit and liquidity risk, in 2011 attention was mainly placed on the high-profile loss events affecting several major financial institutions, which renewed interest in operational risk management and corporate governance. For global markets, the significance of loss events (measured, in some cases, in billions of dollars) showed that the lack of appropriate operational risk management may affect even major financial institutions.
The current challenge is how to manage operational risk proactively in a business environment characterized by sustained volatility. Financial organizations need advanced tools, models, techniques and methodologies that combine internal data with external data from across the industry. For example, organizations in the banking and insurance sectors can derive critical insights from self-assessment and scenario modeling by combining internal data with external data on loss events occurring across the industry. External loss event data not only provides insights from the experiences of industry peers, but also allows a more effective identification of potential risk exposure. To increase effectiveness in analyzing potential risk exposure, predictive indexes and indicators combining internal and external data may be developed for more effective operational risk management. These predictions lead to a more accurate evaluation of potential future losses.
The Bayesian approach may be an appropriate alternative for operational risk analysis when initial and/or complementary information from qualified consultants is available. By construction, Bayesian models incorporate initial or complementary information about the parameter values of a sampling distribution through a prior probability distribution, which includes subjective information provided by expert opinions, analyst judgments or specialist beliefs. Subsequently, a posterior distribution is estimated to carry out inference on the parameter values. This paper develops a Bayesian network (BN) model to examine the relationships among operational risk (OR) events in the three lines of business with the greatest losses in the international banking sector. The proposed BN model is calibrated with observed data from events that occurred in these lines of business and/or with information obtained from experts or from external sources.1 In this case, experts mainly complete missing records or improve data of poor quality. The analysis period for this research is from 2007 to 2011 on the basis of a twenty-day frequency. This period starts one year before the financial crisis generated by subprime mortgages.
OR usually accounts for a small part of the total annual losses of commercial banks; however, when an extreme operational risk event occurs, it can cause significant losses. For this reason, major changes in the worldwide banking industry are aimed at having better policies and recommendations concerning operational risk. It is noteworthy that various statistical techniques exist in the literature to identify and quantify OR, which rely on the underlying assumption of independence between risk events; see, for example: Degen, Embrechts, and Lambrigger (2007), Moscadelli (2004), and Embrechts, Furrer, and Kaufmann (2003). However, as shown in Aquaro et al. (2009), Supatgiat, Kenyon, and Heusler (2006), Carrillo-Menéndez and Suárez-González (2015), Carrillo-Menéndez, Marhuenda-Menéndez, and Suárez-González (2007), Cruz (2002), Cruz, Peters, and Shevchenko (2002), Neil, Marquez, and Fenton (2004) and Alexander (2002), there is a causal relationship between OR factors.
Despite the research of Reimer and Neu (2003, 2002), Kartik and Reimer (2007), Aquaro et al. (2009), Neil et al. (2004) and Alexander (2002), which apply the BN scheme to OR management, there is no complete guide on how to classify, identify and quantify OR events, and on how to calculate economic capital consistently.2 This work aims to close these gaps: first, by establishing OR event information structures so that OR events can be quantified; and then by relaxing the assumption of independence between events in order to model the causal relationships among OR events more realistically.
The possibility of using conditional distributions (discrete or continuous), calibrating the model with both objective and subjective information sources, and establishing causal relationships among risk factors is precisely what distinguishes our research from classical statistical models. Under this framework, this paper aims to calculate, at several confidence levels, the maximum expected loss over a period of 20 days for the group of international banks associated with ORX in the studied lines of business of commercial banks, which has to be considered to properly manage operational risk in ORX.
This paper is organized as follows. Section 2 presents the typology to be used for OR management in accordance with the Operational Riskdata eXchange Association (ORX). Section 3 briefly reviews the main methods, models and tools for measuring OR. Section 4 discusses the theoretical framework needed for the development of this research, emphasizing the advantages and benefits of using BNs. Section 5 provides two BNs, one for frequency and the other for severity. In order to quantify the OR at each node of the network, we fit prior distributions by using the @Risk software. Once the prior probabilities of both networks are estimated, we proceed to calculate posterior probabilities and, subsequently, we use the junction tree algorithm to eradicate the cycles that arise when directionality is eliminated (see Appendix). Section 6 combines prior and posterior distributions to compute the loss distribution by using Monte Carlo simulation. Here, the maximum expected loss arising from operational risk events over a period of 20 days is calculated. Finally, we present conclusions and acknowledge limitations.
2. Operational risk events in the international banking sector
This section describes, in some detail, the operational risk events related to the international banking sector according to the Operational Riskdata eXchange Association (ORX).
- External fraud
We now describe the operational risk events related to external fraud according to ORX:
- a) Fraud and theft: losses due to a fraudulent act, misappropriation of property, or circumvention of the law by a third party, without the assistance of bank staff.
- b) Security systems: this applies to all events related to unauthorized access to electronic data files.
- Internal frauds
The operational risk events related to internal fraud are described below:
- a) Fraud and theft: losses due to fraudulent acts, improper appropriations of goods, or evasion of regulation or company policy, that involves the participation of internal staff.
- b) Unauthorized activities: losses caused by intentional unreported and unauthorized operations, or intentionally unregistered positions.
- c) Security systems: this category applies to all events involving unauthorized access to electronic data files for personal profit, carried out with the assistance of an employee's access.
- Malicious damage
Losses caused by acts of malice or hatred; in other words, malicious damage.
- a) Deliberate damage: this is concerned with acts of vandalism, excluding events in security systems.
- b) Terrorism: ill-intentioned damage caused by terrorist acts, excluding events related to security systems.
- c) Security systems (external): these events include security events with deliberate damage in external systems made by a third party without the assistance of internal staff (e.g., the spread of software viruses).
- d) Security systems (internal): this includes deliberate events in the security of internal systems with the participation of internal staff (e.g., the spread of software viruses).
- Labor practices and workplace safety
Labor practices and workplace safety events are losses derived from actions not in agreement with labor, health or safety regulation, including the payment of claims for bodily injury or losses from discriminatory events. Mandatory insurance programs for workers and regulation on safety in the workplace are included in this category.
- Customers, products and business practice
These events consider losses arising from an unintentional or negligent breach of a professional obligation to specific clients or from the design of a product, including fiduciary and suitability requirements.
- Disasters and accidents
Disasters and accidents reflect losses resulting from damage to physical assets caused by natural disasters or other events such as traffic accidents.
- Technology and infrastructure failure
Losses caused by failures in systems or management.
- a) Failures in technology and infrastructure, such as hardware, software and telecommunications malfunctioning.
- b) Failures in management processes.
3. Operational risk measurement in the international banking sector
Operational risk usually accounts for a small part of the total annual losses of international banks; however, when an unexpected extreme event occurs, it may cause significant losses. For this reason, major changes in the worldwide banking industry are aimed at obtaining better policies and/or recommendations concerning operational risk management. Financial globalization and local regulation also lead us to rethink and reorganize the operational risk associated with international banking, including banks that are too big to fail. In this sense, suitable operational management in the international banking sector may avoid possible bankruptcy and contagion and, therefore, systemic risk. The available approaches to deal with this issue vary from simple methods to highly complex ones with very sophisticated statistical models. We now briefly describe some of the existing methods in the literature for measuring OR; see, for example, Heinrich (2006) and Basel II (2001a, 2001b). This section also emphasizes the advantages and benefits of using BNs.
- The ''top-down'' single indicator methods. These methods were chosen by the Basel Committee as a first approach to operational risk measurement. A single indicator of the institution, such as total income, volatility of income, or total expenditure, can be considered as the functional variable to manage the risk.
- The ''bottom-up'' models including expert judgment. The basis for an expert analysis is a set of scenarios. In this case, experts mainly complete missing records or improve poor-quality data on the identified risks and their probabilities of occurrence in alternative scenarios.
- Internal measurement. The Basel Committee proposes the internal measurement approach as a more advanced method for calculating the regulatory capital.
- The classical statistical approach. This framework is similar to what is used in the quantification methods for market risk, and more recently the credit risk. However, contrary to what happens with market risk, it is difficult to find a widely accepted statistical method.
- Causal models. As an alternative to the classical statistical framework, causal models assume dependence in the occurrence of OR events. Under this approach, each event represents a random variable (discrete or continuous) with a conditional distribution function. In case the events have no historical records or the data has poor quality, the opinion or judgment of experts is required to determine the conditional probabilities of occurrence. The tool for modeling this causality is precisely the BN, which is based on Bayes' theorem and the network topology. A minimal illustration is sketched below.
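To make the idea concrete, the following minimal Python sketch encodes one risk event as a discrete node whose distribution is conditional on a parent risk factor. Node names and probabilities are illustrative assumptions, not the paper's data:

```python
# A minimal sketch (not the authors' implementation) of a discrete BN node
# whose distribution is conditional on a parent risk factor.
import numpy as np

rng = np.random.default_rng(seed=42)

# P(system_failure | external_event) as a conditional probability table:
# keyed by the parent's state; each row gives P(ok), P(failure).
cpt = {
    "no_event": np.array([0.95, 0.05]),
    "event":    np.array([0.60, 0.40]),
}

def sample_child(parent_state: str) -> str:
    """Draw the child's state given the observed parent state."""
    return rng.choice(["ok", "failure"], p=cpt[parent_state])

print(sample_child("event"))
```

In a full network, each node carries one such table conditioned on all of its parents, and expert judgment can fill the rows for which no historical records exist.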
4. Theoretical framework for Bayesian network
In this section the theory supporting the development of the proposed BN is presented. It begins with a discussion of the conditional value at risk (CVaR) as a coherent risk measure in the sense of Artzner, Delbaen, Eber, and Heath (1999). The CVaR will be used to compute the expected loss. Afterward, the main concepts of the BN approach are introduced.
According to Panjer (2006), the CVaR or Expected Shortfall (ES) is an alternative measure to Value at Risk (VaR) that quantifies the losses found in the tails of the distribution. Specifically, let X be the random variable representing the losses; the CVaR of X at the p×100% confidence level, denoted by CVaRp(X), represents the expected loss given that the total losses exceed the 100×p% quantile of the distribution of X. Thus, CVaRp(X) can be written as:

$$\mathrm{CVaR}_p(X) = \mathrm{E}\left[X \mid X > x_p\right], \qquad x_p = F^{-1}(p),$$

where F(x) is the cumulative distribution function of X. Hence, CVaRp(X) can be seen as the average of all the values of VaRu(X) for confidence levels u ≥ p:

$$\mathrm{CVaR}_p(X) = \frac{1}{1-p}\int_p^1 \mathrm{VaR}_u(X)\,\mathrm{d}u.$$

Finally, notice that CVaRp(X) can be rewritten as:

$$\mathrm{CVaR}_p(X) = x_p + e(x_p),$$

where e(xp) = E[X − xp | X > xp] is the average excess of loss function.3
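The identity CVaRp(X) = E[X | X > xp] lends itself to a direct empirical check. The following sketch, assuming illustrative Weibull-distributed losses (the parameters are placeholders, not the paper's fit), estimates VaR and CVaR from simulated data:

```python
# Empirical VaR and CVaR from simulated losses: CVaR is the mean of the
# losses that exceed the p-quantile (the VaR). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
losses = rng.weibull(1.22, size=100_000) * 42_592  # hypothetical severity draws
p = 0.95

x_p = np.quantile(losses, p)            # the 100*p% quantile (VaR)
cvar = losses[losses > x_p].mean()      # E[X | X > x_p]
print(f"VaR_{p:.2f} = {x_p:,.0f}   CVaR_{p:.2f} = {cvar:,.0f}")
```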
• The Bayesian framework
In statistical analysis there are two main paradigms, the frequentist and the Bayesian. The main difference between them is the definition of probability. The frequentist states that the probability of an event is the limit of its relative frequency in the long run, while Bayesians argue that probability is subjective. The subjective probability (degree of belief) is based on knowledge and experience and is represented through a prior distribution. The subjective beliefs are updated by adding new information to the sampling distribution through Bayes' theorem, obtaining a posterior distribution, which is used to make inferences on the parameters of the sampling model. Thus, a Bayesian decision maker learns and revises his or her beliefs based on newly available information.4 Formally, Bayes' theorem states that

$$P(\theta \mid y) = \frac{L(\theta \mid y)\,\pi(\theta)}{\int L(\theta \mid y)\,\pi(\theta)\,\mathrm{d}\theta} \;\propto\; L(\theta \mid y)\,\pi(\theta),$$

where θ is a vector of unknown parameters to be estimated, y is a vector of recorded observations, π(θ) is the prior distribution, L(θ|y) is the likelihood function for θ, and P(θ|y) is the posterior distribution of θ. Two main questions arise: how to translate prior information into an analytical form, π(θ), and how to assess the sensitivity of the posterior with respect to the prior selection.5
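As a concrete, hedged illustration of the update from π(θ) to P(θ|y), the sketch below uses a conjugate Gamma prior on a Poisson event rate, a natural pairing for event counts; the prior parameters and counts are hypothetical, not the paper's data:

```python
# Conjugate Bayesian update: Gamma(a0, b0) prior on a Poisson rate theta,
# updated with observed event counts y. Posterior is Gamma(a0+sum(y), b0+n).
# All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

y = np.array([3, 5, 2, 4, 6])        # hypothetical event counts per period
a0, b0 = 2.0, 1.0                    # prior pi(theta) = Gamma(a0, rate=b0)

a_n, b_n = a0 + y.sum(), b0 + len(y) # conjugate posterior parameters
posterior = stats.gamma(a=a_n, scale=1.0 / b_n)

print("posterior mean rate:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```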
A BN is a graph representing a domain of decision variables, their quantitative and qualitative relations, and their probabilities. A BN may also include utility functions that represent the preferences of the decision maker. An important feature of a BN is its graphical form, which allows a visual representation of complicated probabilistic reasoning. Another relevant aspect is the combination of qualitative and quantitative parts in a BN, which allows the incorporation of subjective elements such as expert opinion. Perhaps the most important feature of a BN is that it is a direct representation of the real world and not a way of thinking. In a BN, each node is associated with a set of probability tables. The nodes stand for the relevant variables, which can be discrete or continuous.6 A causal network, according to Pearl (2000), is a BN with the additional property that the ''parent'' nodes are the direct causes.7
A BN is used primarily for inference by calculating conditional probabilities given the information available at each time for each node (beliefs). There are two classes of algorithms for the inference process: the first generates an exact solution, and the second produces an approximate solution with a high probability of being close to the exact solution. Among the exact inference algorithms we have, for example: polytree, clique tree, junction tree, variable elimination algorithms, and Pearl's method.
The use of approximate solutions is motivated by the exponential growth of the processing time required to obtain exact solutions. According to Guo and Hsu (2002), such algorithms can be grouped into: stochastic simulation methods, model simplification methods, search-based methods, and loopy propagation methods. The best known is stochastic simulation, which is, in turn, divided into sampling algorithms and Markov chain Monte Carlo (MCMC) methods. A minimal sampling example follows.
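The sketch below estimates a posterior by likelihood weighting, one of the sampling algorithms mentioned above, on a toy two-node network. The structure and probabilities are illustrative assumptions, not the paper's network:

```python
# Likelihood weighting on a toy two-node BN: estimate P(fraud | loss=high)
# by sampling the unobserved node and weighting by the evidence likelihood.
# Exact answer: 0.1*0.7 / (0.1*0.7 + 0.9*0.05) ≈ 0.609.
import numpy as np

rng = np.random.default_rng(1)
P_FRAUD = 0.10
P_HIGH_LOSS = {True: 0.70, False: 0.05}   # P(loss=high | fraud)

weights = {True: 0.0, False: 0.0}
for _ in range(50_000):
    fraud = rng.random() < P_FRAUD        # sample the unobserved node
    weights[fraud] += P_HIGH_LOSS[fraud]  # weight by evidence likelihood

posterior = weights[True] / (weights[True] + weights[False])
print(f"P(fraud | loss=high) ≈ {posterior:.3f}")
```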
5. Building a Bayesian network for the international banking sector
In what follows, we will be concerned with building the BN for the international banking sector. The first step is to define the problem domain, where the purpose of the BN is specified. Subsequently, the important variables and nodes are defined. Then, the interrelationships between nodes and variables are represented graphically. The resulting model must be validated by experts in the field. In case of disagreement among them, we return to one of the above steps until consensus is reached. The last three steps are: incorporating expert opinion (referred to as the quantification of the network), creating plausible scenarios with the network (network applications), and finally network maintenance.
The main problems that a risk manager faces when using a BN are: how to implement a Bayesian network, how to model its structure, how to quantify the network, how to use subjective data (from experts) and/or objective data (statistics), which tools should be used for best results, and how to validate the model. The answers to these questions will be addressed in the development of our proposal. Moreover, one of the objectives of this paper is to develop a guide for implementing a BN to manage operational risk in the international banks associated with ORX. We also seek to generate a consistent measurement of the minimum capital requirements for managing OR.
We will be concerned with the analysis of operational risk events occurring in the following lines of business of international banks belonging to the Operational Riskdata eXchange Association: marketing and sales, retail banking and private banking. Once the risk factors linked to each business line are identified, the nodes that will be part of the Bayesian network have to be defined. They are random variables that can be discrete or continuous and have associated probability distributions. One of the purposes of this research is to compute the maximum expected loss over 20-day periods for the transnational banks belonging to ORX. The frequency of the available data is every twenty days, ranging from 2007 through 2011.
• Building and quantifying the model
The nodes are connected with directed arcs (arrows) to form a structure that shows the dependence or causal relationship between them. The BN is divided into two networks, one for modeling the frequency and the other for the severity. Once the results are obtained separately, they are aggregated through Monte Carlo simulation to estimate the loss distribution. Usually, the severity network requires a significant amount of probability distributions. In what follows, the characteristics and states of each node of the networks for severity and frequency are described in Tables 1 and 2, respectively.
In the Bayesian approach, the parameters of a sampling model are treated as random variables. Prior knowledge about the possible values of the parameters is modeled by a specific prior distribution. Thus, when the initial information is vague or of little importance, a uniform, possibly improper, distribution will let the data speak for themselves. The information and tools for the design and construction of the BN constitute the main input for the Bayesian analysis; therefore, it is necessary that sources of reliable information be kept consistent with best practices and international standards on the quality of information systems, such as ISO/IEC 73: 2000 and ISO 72: 2006.
• Statistical analysis of the Bayesian network for frequency
In this section, each node of the network for frequency is defined. For nodes in which historical data is available, we fit the corresponding probability distribution to the data, while for nodes with prior information useful to complete missing records or improve data of poor quality, the Bayesian approach is used. Regarding the node labeled ''In_Fraud_Labor_Pr'' (internal fraud and labor practices), the prior distribution that best fits the available information is shown in Fig. 1.
With respect to the node labeled ''Disaster_ICT'', the associated risks are in database management, online transactions, batch processes, and external disasters, among others. We are concerned with determining the probabilities that information systems fail or that uncontrollable external events affect the operation of automated processes. In this case, the prior distribution that best fits the available information is shown in Fig. 2.
With regard to the probabilities of the node labeled ''Pract_Business'' (Business Practices), these are associated with events related to actions and activities in the banking sector that generate losses from malpractice and that directly impact the functioning of the bank. In this case, the distribution that best fits the data reported to the ORX is shown in Fig. 3.
External frauds are exogenous operational risk events over which there is no control, but there is a record of their frequency and severity. In this case, the probabilities of occurrence are estimated by fitting a negative binomial distribution, as shown in Fig. 4.
The proper functioning of banking institutions depends on the performance of their processes. The maturity of these systems is associated with quality management at the process and product levels. The distribution of the node labeled ''Process Management'' is shown in Fig. 5.
Finally, for the target node ''Frequency'', a negative binomial distribution is fitted with success probability p = 0.012224 and the number of successes set to 20. This assumption is consistent with financial practice and with studies of operational risk, where the number of failures is usually assumed to follow a Poisson or negative binomial distribution. A sampling sketch of this node is given below.
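The sketch uses scipy's parameterization of the negative binomial with the parameters reported above (n = 20 successes, p = 0.012224); the @Risk fit itself is not reproduced here:

```python
# The fitted frequency node as a scipy distribution. Mean events per 20 days
# is n*(1-p)/p ≈ 1616, consistent with the posterior ranges reported later.
from scipy import stats

freq = stats.nbinom(n=20, p=0.012224)
print("mean events per 20 days:", freq.mean())
print("P(more than 2000 events):", freq.sf(2000))
```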
• Statistical analysis of the severity network
In this section, each node of the severity network is analyzed. For each node with available historical information, the distribution that best fits the data is determined. For the node ''Disaster_ICT'', an exponential density best fits the losses caused by failures of uncontrollable systems and external events. The distribution for disasters and ICT failures is shown in Fig. 6.
In order to determine the goodness of fit, Akaike's criterion is used. Moreover, a comparison of theoretical and sample quantiles is shown in Fig. 7.
In what follows, a proper fit is seen for most of the data and information; thus, the null hypothesis that the sampling distribution is described by a Weibull distribution is not rejected. This severity network node constitutes a prior distribution. Also, for the ''People'' node, the density that best fits the available information is an extreme value Weibull density, shown in Fig. 8.
As before, we carry out a test for goodness of fit, and a comparative analysis of quantiles for the theoretical sampling distribution is shown in Fig. 9.
The distribution that best fits the available information for losses caused by events related to the administrative, technical and service processes performed in the various lines of business of the international banking sector is an extreme value Weibull distribution, as shown in Fig. 10. A comparative analysis of quantiles for the theoretical sampling distribution is shown in Fig. 11.
Finally, the target node ''Severity'' represents the losses associated with the nodes ''People'', ''Disaster_ICT'' and ''Processes''. To estimate the parameters of the distribution of severity, a Weibull distribution is fitted to the severity data. The parameters found are α = 1.22 and β = 42,592, representing the shape and scale, respectively; a sketch in code follows. In the next section, the posterior probabilities are computed.
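The sketch below encodes the fitted severity distribution with scipy's weibull_min, used here as a stand-in for the extreme value Weibull named above, and shows that refitting simulated losses recovers the reported parameters; the sample size and seed are arbitrary choices:

```python
# The fitted severity node: Weibull with shape 1.22 and scale 42,592 (EUR),
# as reported above. The authors used @Risk; scipy is a stand-in here.
import numpy as np
from scipy import stats

sev = stats.weibull_min(c=1.22, scale=42_592)
print("mean severity (EUR):", sev.mean())

# Refitting simulated losses recovers the parameters (location fixed at 0):
sample = sev.rvs(size=20_000, random_state=0)
c_hat, loc, scale_hat = stats.weibull_min.fit(sample, floc=0)
print("fitted shape, scale:", c_hat, scale_hat)
```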
• The posterior distributions
After analyzing each of the networks for frequency and severity, and assigning the corresponding probability distribution functions, the posterior probabilities are now generated. To do this, inference techniques for Bayesian networks are applied; in particular, we use the junction tree algorithm (Guo & Hsu, 2002). The posterior probabilities for the nodes of the frequency network having at least one parent are shown in Fig. 12.
The results for the node ''Process management'' show that, over a period of 20 days, there is an approximate 2% chance of between 150 and 300 failure events related to process management; a 27% chance of between 300 and 450 events; a probability of 0.47 of between 450 and 600 events; and a 20% chance of between 600 and 750 events associated with the administration of banking processes. The calculated probabilities are conditional on the presence of events related to internal fraud and labor practices.
Regarding the node ''External fraud'', the occurrence of between 425 and 550 external frauds over a period of 20 days has an approximate probability of 0.17; between 550 and 675 events, a probability of 0.3; between 675 and 800 external frauds, a probability of 0.27; and for more than 800 frauds, the probability is about 0.2. All these probabilities are conditional on the existence of events related to disasters, failures in ICT, and labor practices.
Finally, the probability distribution of the node ''Frequency'' shows, over a period of 20 days, an approximate 15% chance of up to 1250 failures; a probability of 25% of between 1250 and 1500 failures; a probability of 0.26 of between 1500 and 1750 failures; an approximate 19% chance of between 1750 and 2000 events; a probability of 0.09 of between 2000 and 2250 failures; and approximately a 5% chance that more than 2250 failures occur. These probabilities are conditional on risk factors such as external fraud, process efficiency and people reliability.
Finally, it is important to point out that, for determining the probabilities of each node in the frequency network, the negative binomial distribution plays an important role, since there is significant empirical evidence that the frequency of operational risk events fits this distribution adequately. In the case of the severity network, the posterior distribution is shown in Fig. 13.
The losses caused by human errors are, on average, 12,263 Euros per 20-day period. Losses from catastrophic events such as demonstrations, floods, and ICT failures, among others, average 870 Euros. Process failures produce an average loss of 27,204 Euros every 20 days. The probability distribution of the node ''Severity'' shows a probability of 0.33 of a loss between 0 and 20,000 Euros; a probability of 0.2 between 20,000 and 40,000 Euros; a 10% chance between 60,000 and 80,000 Euros; a 6% chance between 80,000 and 100,000 Euros; and approximately a 6% chance that the loss is greater than 100,000 Euros in a period of 20 days.
6. Value at operational risk
Having carried out the Bayesian inference process to obtain posterior distributions for the frequency of OR events and the severity of losses in the previous section, we now proceed to integrate both distributions through Monte Carlo8 simulation by using the ''Compound'' function of @Risk. To achieve this goal, we generate the distribution function of potential losses by using a negative binomial distribution for frequency and an extreme value Weibull distribution for severity.9 It is worth mentioning that the Monte Carlo simulation method has the disadvantage of requiring high processing capacity and, of course, relies on a random number generator. For the calculation of the OpVaR, the simulated expected losses are arranged in descending order and the corresponding percentiles are calculated in Table 3. Accordingly, if we calculate the OpVaR at a confidence level of 95%, we obtain a maximum expected loss of €88.4 million over a period of 20 days for the group of international banks associated with ORX.10 A hedged sketch of this compounding step follows.
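The sketch re-implements the compounding step in Python rather than @Risk: frequency is drawn from the negative binomial and severity from the Weibull fitted earlier (weibull_min as a stand-in for the extreme value Weibull), and the 95% quantile of the aggregate loss approximates the OpVaR. The trial count and seed are arbitrary choices, not the authors' settings:

```python
# Compound-loss Monte Carlo: for each trial, draw an event count from the
# negative binomial, draw that many Weibull severities, and sum them.
# With these parameters the 95% quantile lands near the reported figure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)
TRIALS = 10_000

freq = stats.nbinom(n=20, p=0.012224)          # events per 20-day period
sev = stats.weibull_min(c=1.22, scale=42_592)  # loss per event (EUR)

counts = freq.rvs(size=TRIALS, random_state=rng)
losses = np.array([sev.rvs(size=k, random_state=rng).sum() for k in counts])

print(f"OpVaR 95%: EUR {np.quantile(losses, 0.95):,.0f} per 20 days")
```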
7. Conclusions
Transnational banks generate large amounts of information from their interaction with customers, with the industry and with internal processes. However, the interaction with the individuals involved in the processes and systems also requires attention, and this is considered by the Operational Riskdata eXchange Association, which has stated several standards for the registration and measurement of operational risk.
This paper has provided the theoretical elements and practical guidance necessary to identify, quantify and manage OR in the international banking sector under the Bayesian approach. This research uses elements closer to reality, such as specific probability distributions (discrete or continuous) for each risk factor, additional data and information for updating the model, and causal relationships among risk factors. It was shown that the BN framework is a viable option for managing OR in an environment of uncertainty with scarce information or information of questionable quality. The capital requirement is calculated by combining statistical data with the opinions and judgments of experts, as well as external information, which is more consistent with reality. BNs as a tool for managing OR in the lines of business of the international banking sector have several advantages over other models:
- The BN is able to incorporate the four essential elements of Advanced Measurement Approach (AMA): internal data, external data, scenario analysis and factors reflecting the business environment and the control system in a simple model.
- The BN can be built into a ''multi-level'' model, which can display various levels of dependency between the various risk factors.
- The BN, extended to a decision network, can provide a cost-benefit analysis of risk factors, where the optimal controls are determined within a scenario analysis.
- The BN is a direct representation of the real world, not a way of thinking, as neural networks are. Arrows or arcs in the network stand for actual causal connections.
It is important to point out that the CVaR used in the Bayesian approach is coherent in the sense of Artzner et al. (1999), and it also summarizes the complex causal relationships between the different risk factors that result in operational risk events. In short, because reality is much more complex than independent and identically distributed events, the Bayesian approach is an alternative for modeling a complex and dynamic reality.
Finally, among the main empirical results, it is worth mentioning that the OpVaR at a confidence level of 95% yields a maximum expected loss over a period of 20 days of €88.4 million for the group of international banks associated with ORX, a significant amount to be considered in managing operational risk in ORX for the studied lines of business of commercial banks.
Conflict of interests
The authors declare no conflict of interest.
NOTES
1 The experts referred to are banking officials who have experience and knowledge of the operation and management of the bank's business lines.
2 Usually, to measure the maximum expected loss (or economic capital) from OR, the Conditional Value at Risk (CVaR) is used.
3 For a complete analysis of the non-coherence of VaR, see Venegas-Martínez (2006).
4 For a review of issues associated with Bayes' theorem see Zellner (1971).
5 These questions are a very important topic in Bayesian inference; see, in this regard, Ferguson (1973).
6 The following definitions will be needed for the subsequent development of this research: Definition 1, a Bayesian network is a directed acyclic graph (DAG); Definition 2, a graph is defined as a set of nodes connected by arcs; Definition 3, if between each pair of nodes there is a precedence relationship represented by arcs, then the graph is directed; Definition 4, a path is a series of contiguous nodes connected by directed arcs; and Definition 5, a cycle is a path that starts and ends at the same node.
7 See Jensen (1996) for a review of the BN theory.
8 The simulation results are available via e-mail request marzan67@gmail.com.
9 An alternative statistical method is the copula approach, although a closed-form solution cannot always be found.
10 For a complete list of banks associated see http://www.orx.org/orx-members.
References
Alexander, C. (2002). Operational risk measurement: Advanced approaches. UK: ISMA Centre, University of Reading. Retrieved from: http://www.dofin.ase.ro/Lectures/Alexander%20Carol/Oprisk Bucharest.pdf
Aquaro, V., Bardoscia, M., Belloti, R., Consiglio, A., De Carlo, F., & Ferri, G. (2009). A Bayesian networks approach to operational risk. Physica A: Statistical Mechanics and its Applications, 389(8), 1721–1728.
Artzner, P., Delbaen, F., Eber, J., & Heath, D. (1999). Coherent measures of risk. Mathematical Finance, 9(3), 203–228.
Basel II. (2001a). Operational risk. Consultative document. Retrieved from: https://www.bis.org/publ/bcbsca07.pdf
Basel II. (2001b). Working paper on the regulatory treatment of operational risk. Retrieved from: http://www.bis.org/publ/bcbswp8.pdf
Carrillo-Menéndez, S., & Suárez-González, A. (2015). Challenges in operational risk modelling. In International Model Risk Management Conference. Retrieved from: http://imrmcon.org/wp-content/uploads/2015/06/CS-5BModel-Risk-Within-Operational-Risk1.pdf
Carrillo-Menéndez, S., Marhuenda-Menéndez, M., & Suárez-González, A. (2007). El entorno AMA (Advanced Model Approach). Los datos y su tratamiento. In Ana Fernández-Laviada (Ed.), La gestión del riesgo operacional: de la teoría a su aplicación. Santander: Universidad de Cantabria.
Cowell, R. G., Dawid, A. P., Lauritzen, S. L., & Spiegelhalter, D. J. (1999). Probabilistic networks and expert systems. Berlin: Springer-Verlag.
Cruz, M. G. (2002). Modeling, measuring and hedging operational risk. Wiley Finance Series. West Sussex, UK: John Wiley & Sons.
Cruz, M. G., Peters, G. W., & Shevchenko, P. V. (2002). Fundamental aspects of operational risk and insurance analytics: A handbook of operational risk. New York: John Wiley & Sons.
Degen, M., Embrechts, P., & Lambrigger, D. (2007). The quantitative modeling of operational risk: Between g-and-h and EVT. Zurich: Department of Mathematics, ETH Zurich. Retrieved from: https://people.math.ethz.ch/~embrecht/ftp/g-and-hMay07.pdf
Embrechts, P., Furrer, H., & Kaufmann, O. (2003). Quantifying regulatory capital for operational risk. Derivatives Use, Trading and Regulation, 9(3), 217–233.
Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. Annals of Statistics, 2(4), 615–629.
Guo, H., & Hsu, W. (2002). A survey of algorithms for real-time Bayesian network inference. In Joint Workshop on Real Time Decision Support and Diagnosis Systems.
Heinrich, G. (2006). Riesgo operacional, pagos, sistemas de pago y aplicación de Basilea II en América Latina: evolución más reciente. Boletín del CEMLA. Retrieved from: https://www.bis.org/repofficepubl/heinrich_riesgo_operacional_a.pdf
Jensen, F. V. (1996). An introduction to Bayesian networks (1st ed.). London: UCL Press.
Kartik, A., & Reimer, K. (2007). Phase transitions in operational risk. Physical Review E, 75(1, part 2), 1–14.
Lauritzen, S. L., & Spiegelhalter, D. J. (1988). Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society, Series B, 50(2), 157–224.
Moscadelli, M. (2004). The modelling of operational risk: Experience with the analysis of the data collected by the Basel Committee. Bank of Italy, Working Paper No. 157. Retrieved from: https://www.bancaditalia.it/pubblicazioni/temidiscussione/2004/2004-0517/tema_517.pdf?language id=1
Neil, M., Marquez, D., & Fenton, N. (2004). Bayesian networks to model expected and unexpected operational losses. Risk Analysis, 25(4), 675–689.
Panjer, H. (2006). Operational risk. Modeling analytics (1st ed.). Hoboken, NJ: Wiley-Interscience.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference (1st ed.). San Francisco, CA: Morgan Kaufmann.
Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge: Cambridge University Press.
Reimer, K., & Neu, P. (2002). Adequate capital and stress testing for operational risks. Physical Review E, 75(1), 016111.
Reimer, K., & Neu, P. (2003). Functional correlation approach to operational risk in banking organizations. Physica A: Statistical Mechanics and its Applications, 322(1), 650–666.
Supatgiat, C., Kenyon, C., & Heusler, L. (2006). Cause-to-effect operational risk quantification and management. Risk Management, 8(1), 1–42.
Venegas-Martínez, F. (2006). Riesgos financieros y económicos. Productos derivados y decisiones económicas bajo incertidumbre (1st ed.). México D.F.: International Thomson Editors.
Zellner, A. (1971). An introduction to Bayesian inference in econometrics. New York: Wiley.
Appendix
Appendix. An exact algorithm for Bayesian inference. Among the exact inference algorithms, we have: Pearl's (1988) polytree; Lauritzen and Spiegelhalter's (1988) clique tree; and Cowell, Dawid, Lauritzen, and Spiegelhalter's (1999) junction tree. Pearl's method is one of the earliest and most widely used. The propagation of beliefs according to Pearl (1988) follows the process below. Let e be the set of values for all observed variables. For any variable X, e can be divided into two subsets: eX−, representing all the observed variables descending from X, and eX+, corresponding to all other observed variables. The impact of the observed variables on the beliefs about X can be represented by the following two values:

$$\lambda(X) = P(e_X^- \mid X), \tag{A1}$$

$$\pi(X) = P(X \mid e_X^+). \tag{A2}$$
That is, λ(X) and π(X) are vectors whose elements are associated with the values x of X:

$$\lambda(x) = P(e_X^- \mid X = x), \tag{A3}$$

$$\pi(x) = P(X = x \mid e_X^+). \tag{A4}$$
The posterior distribution is obtained by using (A1) and (A2); thus,

$$P(x \mid e) = \alpha\,\lambda(x)\,\pi(x), \tag{A5}$$

where α = 1/P(e) is a normalizing constant. In order to infer new beliefs, Eq. (A5) is used. The values of λ(X) and π(X) are calculated as follows. Let Y1, Y2, ..., Ym be the children of X. When X is observed to take the value x0, the elements of the vector λ(X) are assigned as follows:

$$\lambda(x) = \begin{cases} 1, & x = x_0,\\ 0, & x \neq x_0. \end{cases}$$
In the case in which X has no observed value, we have $e_X^- = \{e_{Y_1}^-, e_{Y_2}^-, \ldots, e_{Y_m}^-\}$. Hence, by using (A1), λ(x) expands as:

$$\lambda(x) = P(e_{Y_1}^-, e_{Y_2}^-, \ldots, e_{Y_m}^- \mid x).$$

By using the fact that $e_{Y_1}^-, e_{Y_2}^-, \ldots, e_{Y_m}^-$ are conditionally independent given X, and defining the message sent from the child Yj to X as

$$\lambda_{Y_j}(x) = \sum_{y} P(y \mid x)\,\lambda(y),$$

the last expression shows that, in calculating the value of λ(X), the λ values and conditional probabilities of all children of X are required. Therefore, the vector λ(X) is calculated as:

$$\lambda(x) = \prod_{j=1}^{m} \lambda_{Y_j}(x).$$
For the calculation of π(X), the values of the parent Y of X are used. Indeed, by using (A2), it follows that

$$\pi(x) = P(x \mid e_X^+) = \sum_{y} P(x \mid y)\,\pi_X(y),$$

where $\pi_X(y)$ is the message sent from the parent Y to X. This shows that, when calculating π(X), the π values of the parents of X and their conditional probabilities are necessary. There might be some difficulties in dealing with Pearl's inference method because of the cycles generated when directionality is eliminated. Cowell et al.'s (1999) junction tree algorithm may overcome this situation. It first converts a directed graph into a tree whose nodes are cliques, and then propagates the values of λ and π through the tree. The summarized procedure is as follows (a schematic sketch of the first step is given after the list):
- ''Moralize'' the BN.
- Triangulate the moralized graph.
- Let the cliques of the triangulated graph be the nodes of a tree, which is the desired junction-tree.
- Propagate λ and π values throughout the junction-tree to make inference. Propagation will produce posterior probabilities.
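The following sketch carries out the ''moralize'' step by hand: it marries the parents of each node and then drops edge directions. The toy DAG uses node names loosely based on the frequency network; it is an illustrative assumption, not the authors' code or their exact network structure:

```python
# Moralization of a toy DAG: connect ("marry") co-parents of each node,
# then drop edge directions. Graph structure is illustrative only.
from itertools import combinations

# toy DAG given as node -> list of parents
parents = {
    "Frequency": ["External_fraud", "Process_management"],
    "External_fraud": [],
    "Process_management": ["In_Fraud_Labor_Pr"],
    "In_Fraud_Labor_Pr": [],
}

undirected = set()
for child, pa in parents.items():
    for p in pa:                       # drop directionality
        undirected.add(frozenset((p, child)))
    for u, v in combinations(pa, 2):   # "marry" co-parents
        undirected.add(frozenset((u, v)))

for edge in sorted(tuple(sorted(e)) for e in undirected):
    print(edge)
```

Triangulating this moral graph and taking its cliques as nodes then yields the junction tree on which the λ and π values are propagated.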