Patrick McDermott, vice-president, enterprise operational risk oversight, Freddie Mac
Patrick O'Brien, vice-president of OpenPages product management, IBM
Kenneth Swenson, risk management specialist, supervision and regulation, Federal Reserve Bank of Chicago
Questions covered include:
A good place to start is the state of play with the advanced measurement approach (AMA) at those institutions that are using it. Which calculation methods do they tend to use, and why?
Are there any trends in the way that banks are revising or overhauling their operational risk capital models?
What has been the impact of the financial crisis on AMA modelling?
Are we going to see any convergence, either in the number and type of scenarios used or in the way they are applied, as well as in the goals institutions are trying to achieve by using them?
Since the crisis, we have seen a lot of regulatory emphasis on higher capital levels. How will this affect banks that adopted the AMA chiefly because it allowed them to hold lower operational risk capital, a reduction that may no longer be permitted?
If institutions are drawing on sources of external loss data, what precautions and caveats do they need to keep in mind to ensure the data remains applicable to them?
If this is such a useful tool, do regulators or the industry need to be doing more to encourage institutions to share their loss data, to share these actual case histories with each other?
Granting that the four data elements of the AMA are useful for managing operational risk, is there actually a risk management benefit in quantifying them in capital terms? Some banks feel that the capital model adds little value.
Is there a sense that business environment and internal control factors (BEICFs) are the weak link in operational risk capital modelling? Is that the area where most caution needs to be exercised, because you are not drawing on industry-wide data sources or on generally acknowledged calculation methods, as you are with scenarios?