SUMMARY. Introduction. I. A Matter of Definition: the Notion of Automated Investment Advice. II. Benefits and Risks. III. Full or Partial Automation: Two Different Business Models. IV. The Current Regulatory Framework. V. Investor Protection in the EU Legislation. VI. Key Challenges of Private Enforcement. Conclusions. References.
Introduction
If one considers the dynamism of financial markets, it is no surprise that the most recent developments in digital technologies have had a significant impact on the formulation of financial recommendations. In this ecosystem of financial innovations using digital technologies, generally referred to as 'FinTech'1, automated financial advice plays a pivotal role.
Even though there is no common definition of this phenomenon, automated investment advice (often called robo-advice) is generally understood as an investment service provided online with the aid of algorithmic models and through digital platforms. The novelty of the platforms increasingly appearing on the market2 consists in the algorithmic analysis models and processes FinTech operators use, in particular in the different forms of artificial intelligence3, machine learning or sometimes deep learning that are even able to exploit aggregated information flows deriving from Big Data4.
Assets under management in this sector have grown at an accelerating pace over the last ten years and continue to do so. Worldwide, they are expected to grow at an annual rate of 16.58% between 2022 and 2025, resulting in a projected total of €2,293,783m by 20255. The estimated annual growth rate in the EU Member States is similar6. Even though the volume of assets under management is still only a small fraction of the global markets, it has reached a significant market share and is increasing faster than the overall market7.
Yet, European Authorities have not specifically addressed the phenomenon of automation in financial advice, and they consider it only in soft law measures8. The Proposal for a "Regulation laying down harmonized rules on artificial intelligence" (Artificial Intelligence Act, COM/2021/206 final) only considers financial advice when creating a supervisory system common to all Member States9. Although the potential danger of these procedures is clearly recognized, mandatory provisions have not yet been adopted. So far, the EU has focused on the governance of algorithms, using guidelines, recommendations, or best practices. In this way, it specifically aims to avoid hindering progress: the demands of FinTech companies, which insist on a technology-neutral approach, claiming it is necessary to guarantee room for innovation, are being put before all other interests involved, such as the protection of investors and clients and possibly even the financial stability of the market as a whole10.
In the absence of a new and specific regulatory framework at EU level, this article sets out to investigate how robo-advice fits into the existing European legislation on financial intermediaries and investor protection11. Digitalization in fact poses specific challenges for the application of financial regulation. This analysis therefore aims to evaluate whether the current non-intervention approach provides a sufficient response for the protection of all interests involved.
I. A Matter of Definition: the Notion of Automated Investment Advice
Robo-advice is indeed a complex phenomenon. It encompasses different forms of expressing opinions and subsequent recommendations on financial matters, carried out in whole or in part through models based on algorithms for the processing of the collected data (the so-called automated financial tools) and for the elaboration of personalized services for investors.
The international debate often adopts a broad notion12, based on the recipient's perception, regardless of the characteristics and precise legal framework of the services provided. In the European legislation, however, a precise definition of investment advice can be found. It includes only those 'personal' and 'determined' recommendations that fall within the scope of Article 9 EU Delegated Regulation No. 565/2017, implementing Article 4(1) no. 4 of Directive 2014/65/EU (Markets in Financial Instruments Directive, the so-called MiFID II)13.
Accordingly, robo-advice is considered investment advice only when it uses software based on algorithmic models whose output is not issued to the general public: indeed, generic recommendations on financial products are merely ancillary services14. To qualify, the service has to provide personal recommendations on the basis of input from the clients, who are then called upon to make the final investment decision. Indeed, in order to be personal, the recommendation has to take into account (or purport to take into account) the specific qualities of the recipients as actual or potential investors and be presented as suited to their profile, regardless of whether it actually is. This means that the regulatory response depends on the degree of personalization (whether actual or purported) of the advice provided, because only then does the MiFID II framework come into play. If, on the other hand, the recommendation provided by the automated system is standardized, even though it may vaguely refer to the user's profile, it will be necessary to resort to other forms of investor protection, since those envisaged for financial advice in the strict sense do not apply15.
Once these criteria are met, it is irrelevant whether the investment firms add disclaimers to their website seeking to limit the role and responsibility of the advisor. What matters is only the nature and content of the communications taking place and whether they give rise to the clients' reasonable expectation of obtaining customized investment advice. Even if the algorithm and the model used are not technically capable of processing the clients' information, the service should still be considered personal if the advisor created the expectation that the data provided are taken into account.
The advisor may, of course, also offer to monitor purchased assets with consequent warnings and updates (so-called rebalancing services), but always within the scope of the original investment decision. Even in this case, the decision to rebalance the portfolio following changes in the market should be taken by the client.
The American terms "robo-advice" or "robo-advisory" are commonly used as synonyms for automated financial advice even in European legal systems. However, the phenomenon of financial advice in Europe fails to include all cases that fall under this notion in the United States16. Even though overseas there is no legal definition of these terms yet17, robo-advisors operate there as digital wealth managers, covering more than what is considered investment advice in Europe, even if this were to be understood in a broad sense. In fact, in the American legal system there is no distinction between advice and execution of investment orders on behalf of the client. Accordingly, robo-advice may consist of planning, of various forms of brokerage, of asset allocation and even standardized portfolio management18, thus encompassing an executive phase in addition to the more strictly advisory phase.
It is true that in a context of high automation boundaries tend to blur, in the perception of the client, in a sequence of clicks from page to page; nevertheless, it is fundamental to maintain a conceptual and legal distinction between these passages19, also in consideration of the different duties of conduct operators have to comply with depending on the services they actually offer20.
II. Benefits and Risks
Digitalization of financial advice entails as a main advantage a reduction of transaction costs by means of an easy access to information and to computerized systems21. This enables firms to apply lower investment fees for retail customers. Robo-advisors can process a large amount of information in a very short time and thus, having evaluated and monitored a broad spectrum of financial instruments, they make quick and rational decisions, leaving no room for human errors or emotional components that could interfere with the final result. Models can also be updated, adapted to new situations, or supplied with new data more quickly than is necessary for the training of human advisors. Finally, they can simultaneously work with a large number of clients, significantly reducing management costs and charges22.
Thanks to the increased competition, a process of 'democratization' in the wealth management sector would be achieved23. The financial advice gap that currently affects the securities market would be closed by reducing costs and cognitive biases while at the same time widening supply, so as to gradually increase the accessibility of financial services24. The simplification and the reduced intermediation would guarantee, according to the unconditional supporters of these developments, a more efficient allocation of resources than the incumbents are able to offer.
However, these developments pose certain dangers that cannot be overlooked25. In particular, the use of advanced technologies is likely to give rise to misunderstandings on the part of the clients. In these contractual relationships with limited or no human interaction, investors may not always be fully aware of the service offered26. The use of (allegedly) unbiased and objective algorithms can generate illusions of control and overconfidence, so that retail investors might be led to make inappropriate and harmful decisions due to a lack of information on how automated tools or underlying models work27.
Aside from the various problematic aspects with regard to privacy and cyber security issues28, no one can guarantee that the system will be able to recognize the inaccuracy of the data entered by the client, nor that it will detect misunderstandings, either in the data collection phase or in the phase of interpretation of the received investment suggestions. Lastly, advice given by automated systems tends to be standardized and uniform in structure. Especially if the robo-advice market continues to grow at its current rate, this might create imbalances and concerns regarding market stability29.
Possible defects of the software itself should also be considered: these instruments are not programmed to deal with unexpected market dysfunctions30 and they are not always able to react coherently to anomalous situations as a diligent operator would. For example, models used for financial services were structurally unable to cope with market stress situations such as the one triggered by the referendum on Brexit31 or by the Covid-19 pandemic32, and this has caused significant losses to investors, who had not been adequately informed about the reactions that the system would adopt when facing unexpected phenomena.
More generally, and above all because there is no universally accepted practice for coding, the detection of possible errors in the design of automated tools and of any operating defect may not be easy, even for experts. Even if we can agree that the algorithm, if correctly designed, is immune from cognitive prejudices, it is hard to be sure that it is correctly programmed, that customers are well profiled, that possible alternatives are presented in an unbiased way33, etc. Hence the importance of assessing the concrete forms of programming, operation and updating of the algorithm, in order to identify the duties of conduct in the construction, use and management of automated financial tools. Indeed, in this regard, we can speak of 'algo-governance'34.
These issues have come up in various situations. For example, the media have brought to our attention a lawsuit filed against a broker for losses caused by unsuitable investments managed by strongly recommended automated advisory systems35. The case presents several interesting aspects, not only as regards those who are to be held liable36, but especially for the identification of what were in fact the duties of conduct incumbent on the intermediary that used a 'supercomputer' and presented it as highly reliable, if not infallible.
Other issues relate to possible conflicts of interest in firm-client relationships. Firms may set the algorithms according to certain objectives for the distribution or sale of financial products. The algorithm may also provide incentives to sell products for which the operator obtains particularly high commissions. This was argued by investors in a class action, subsequently dismissed by the United States District Court of the District of Illinois37: investors claimed to have been harmed since the financial advisory platform they had relied on had been designed in such a way as to direct clients to products for which its owner earned high commissions, so that the robo-advisor encouraged them to buy overpriced and risky investment funds instead of suggesting suitable options.
Many questions arise as to the possible lack of independence and impartiality of the platforms depending on their design: precisely because such flaws derive from the way the platforms are structured and programmed, they may influence the decision-making process in a pervasive and systematic manner, exerting a distorting effect on the output38. These risks are not in themselves different from those that arise in traditional financial advice, but they manifest themselves in the decision-making process in a different way, so that a change might be needed in the legislative approach to the subject.
III. Full or Partial Automation: Two Different Business Models
As regards the forms of inclusion of algorithmic elements, several models are employed in the robo-advice industry: a survey of the market shows different degrees of human interaction embedded in the digital service39.
In the broad spectrum of possible configurations, some peculiar traits allow the identification of two different models of digital investment advice. On the one hand, firms may offer an exclusively automated decision-making process, or one with only marginal human interaction. On the other hand, robo-advisors can be 'hybrid', which means that the automation of some phases (such as the collection of data, the assessment of information or its processing) is coupled with the activity of human financial advisors.
The former are evidently the most innovative: the service is provided entirely by digital platforms without human intermediation (so-called client-facing tools). Every stage of the service - from data collection to profiling, analysis, and the drawing up of investment plans or recommendations - is carried out digitally. The programs collect the necessary information by means of standardized questionnaires40 and then combine it with the information already available to them or freely accessible.
There is no reason to doubt that advanced algorithms are able to replicate a cognitive process resulting in a personal recommendation according to Article 4(1) no. 4 MiFID II even without human intervention. Indeed, these unprecedented forms of financial advice are implicitly recognized by the European legislator. According to Article 54(1)(2) EU Delegated Regulation No. 565/201741, implementing Article 25(2) MiFID II, the firm's responsibility in undertaking the suitability assessment when providing investment advice or portfolio management services shall not change or be reduced by the use of automated, semi-automated or electronic systems.
In these cases, there is a contractual relationship between clients and the service providers owning the platform, even though the latter do not interfere with the service, which is entirely left to automated execution. At the heart of the operational phase there is an algorithm replacing the professional counterpart42. The distinctive element is the model, developed by a programmer starting from certain financial assumptions, which, once the relevant data have been entered, is able to give advice without human intervention43. In this way, a depersonalization is achieved in situations that so far had always been characterized by strong fiduciary relationships44.
Hybrid variants of financial advice are less disruptive, as they include some level of human interaction45. These services all share the fact that the digital platforms are complementary to the human activity and are understood as tools to improve the quality of a service that is in any case provided by a person with specific qualifications. In fact, human advisors are still considered to be able to exploit the potential of technology in a safer and more competent way than retail investors46. These clients generally do not have the skills and the knowledge to fully understand how models work, they may be unaware of the risks they are about to take and may misrepresent the advice they receive.
In Europe the latter forms of financial advice are more widespread than the former47. This is consistent with the common perception that they provide a more complete service, in which the algorithmic element represents a complementary factor to the professional advice of a human being, to whom clients can turn for clarifications and help. Indeed, several surveys have shown that it is still very important for customers to maintain contact with qualified advisors, and that their presence continues to have a significant reassuring effect when investment decisions are made48. The human aspect is maintained, but it is grafted onto the robotic one to guarantee a complete service. The technological element may be integrated into a complex professional structure (such as a credit institution or an agency), so that a single firm manages all phases of the evaluation; or there may even be forms of outsourcing, whereby independent operators offer their own software to advisors (so-called robo-for-advisor), who use it to assist them in the process. Here, however, the software is simply an internal tool and does not become part of the relationship with the client, so that the service can be considered traditional investment advice and none of the challenges of digitalization arise49.
IV. The Current Regulatory Framework
Digitalization can significantly change the reference market in financial advice, so that the question arises as to whether automation creates a new and radically innovative service, for which a renewal of the legal framework is needed50, or if it is simply a new form of a service that is substantially unchanged, only provided in an original way.
European authorities have so far followed the latter solution. The Commission's recently published proposal for a regulation on artificial intelligence51 does not focus on financial services, even though it acknowledges that they can make use of AI systems (see Recital 80) and it intends to create a harmonized supervisory system for market surveillance (Article 64(4)). Conversely, in the European financial legislation there is no specific consideration of automation in financial advice. In the relevant sources of regulation and regulatory policies the European legislator follows the principle of technology neutrality, so that the same set of rules applies to all financial services, regardless of the type of technology used. Following this lead, the guidelines on suitability issued by the competent European supervisory authority, ESMA, emphasize the need not to introduce additional requirements for robo-advisors52.
This solution avoids exempting new market players from compliance with the rules applicable to incumbents so that competition is not altered. Moreover, the established rules apply to all services using new technologies without the need for a specific regulatory intervention. As a result, a technology neutral approach is considered necessary to guarantee innovation and to reduce the risk that players might circumvent regulation53. These benefits, however, come with the disadvantage that the rules are adapted on a case-by-case basis, and they may not work appropriately in the Fintech context, leading to possible legal uncertainties.
It is well known that the main reference point in the European legislation on financial intermediaries and investor protection is the Markets in Financial Instruments Directive (MiFID II). This set of rules, as well as the European Commission's FinTech Action Plan54, makes technology neutrality one of its guiding principles55. Accordingly, ESMA's guidelines on the suitability of financial advice in the context of the MiFID II regulatory framework56 set the same requirements for the suitability assessment to be conducted under Article 25(2) MiFID II and Articles 54 ff. EU Delegated Regulation No. 565/2017, regardless of any automation in financial advice. No distinction is drawn between traditional methods of financial advice, 'pure' or 'hybrid' robo-advice, 'in-house' or outsourced, nor are there any attempts to fine-tune the current rules so as to take into account the procedural peculiarities of automation57.
It is however fundamental to adapt this framework to the context of complete or partial automation, identifying the diligence, correctness and expertise required of those who use advanced technology in financial services. The novelties of the automated process must be taken into due consideration, particularly when the recipients of the automated services are consumers, who may lack the knowledge to understand the complexity of the service, but also when they are professionals who use the results obtained by a digitalized process to make investment suggestions. Even in the absence of a hard-law approach, a level playing field and legal certainty must still be ensured for operators, while at the same time guaranteeing investor protection and market security.
V. Investor Protection in the EU Legislation
Investment firms providing automated advice carry out services according to Article 4 MiFID II. This means, first and foremost, that firms offering robo-advice require authorization from the Member States' competent authorities and need to be registered according to Article 5 MiFID II58. To this end, they are required to have sufficient initial capital having regard to the nature of the investment service in question, in accordance with Article 15 MiFID II and Article 9 Regulation (EU) 2019/2033 (Investment Firms Regulation, IFR). They are then subject to ongoing national supervision according to Articles 21 and 22 MiFID II.
Turning to the rules governing the investment services themselves, the European legislation on investment advice is characterized by fundamental rules defining duties of conduct, as is customary in capital markets law. One of the defining features of the legislation in the field of financial services is the link between organizational rules and rules of conduct, such as product governance and the rules on conflicts of interest. Organization is indeed to be considered when judging the ability of the company to correctly carry out the financial advice, with repercussions in terms of non-performance and liability.
According to the already discussed principle of neutrality, the general criteria listed by MiFID II and its implementing hard law measures do not specifically consider automation in financial advice. This, however, does not exempt operators from adapting standards and rules, which in turn represent an implementation and specification of 'incomplete' general clauses (first and foremost the fundamental principle of good faith and the duty of professional diligence). This regulatory technique is by its nature open-ended. It is therefore essential to assess the breadth of the standards of conduct in order to establish duties that go beyond what is explicitly provided.
In this operation, due account must be taken of the fact that in automated financial advice there is a human element of great importance that sometimes goes unnoticed, namely the creation of the program or platform itself, as well as its implementation and updating. The role still played by humans should indeed not be neglected59. At the heart of automated financial tools is a model that is able to translate data inputs into investment recommendations; but these models, designed to operate without human intervention, are created by individuals, with all possible biases, errors of judgment, conflicts of interest and lapses of professionalism60. Consequently, the independence, transparency, correctness, and adequacy of the automated phases cannot be assumed a priori, but are mandatory under the MiFID II framework.
A. Assessment of Suitability
The cornerstone of financial advisors' regulation is the suitability rule laid down in Article 25(2) MiFID II and implemented by Articles 54 ff. EU Delegated Regulation No. 565/2017. Accordingly, advice should be based on sufficient and adequate information on the clients, so as to ensure the appropriateness of the recommended transactions with respect to the experience, the financial situation, the objectives and the risk tolerance of the investors. It is indeed well known that the know-your-customer rule, i.e., the gathering of information about the client, is part of the mandatory activity of assessing suitability61. As regards the suitability requirements, only the ESMA guidelines take into consideration the provision of investment advice through an automated or semi-automated system, so that they are relevant in outlining the regulatory framework for robo-advice. However, they remain soft law measures with no binding force.
In the case of automated advice, the duty to assess the suitability of investment transactions entails the duty to provide an appropriate e-questionnaire to collect investors' data. Here firms should inform their clients clearly and simply about the suitability assessment, explaining the degree of human involvement and clarifying whether they have access to client information other than the questionnaire62. The questionnaire should, of course, be compliant with MiFID II63, expressed clearly, understandable even to inexperienced investors, and it should ensure that all necessary data is collected and that it is sufficient for the algorithm's evaluation64. In itself, the assessment of suitability does not differ from traditional financial advice: indeed, the absence of human interaction is not an obstacle to the correct acquisition of data. However, when there is no or limited professional intermediation, problems arise in the interaction between the clients and the software interface, because there might be misunderstandings or inconsistencies that the automated system is not able to identify. The methods of gathering information and the way it is assessed should therefore be subject to verification65. In addition to the online questionnaire, operators could resort to algorithmic profiling (which might even be carried out using Big Data analytics), automated analysis of correlated data and the application of these correlations to a specific investor66.
As for possible inconsistencies, the information provided by customers may legitimately be relied upon, unless it is manifestly outdated, inaccurate or incomplete (Article 55(3), EU Delegated Regulation No. 565/2017). However, in the transition from the collection of data to their processing, appropriate measures are needed to deal with inconsistent answers or with the lack of response to essential questions (Article 54(7)(d) EU Delegated Regulation No. 565/2017). For example, the system may be prevented from sending the data if discrepancies or unusual answers have not been clarified or if certain responses have been left incomplete. This is an indispensable step to ensure the reliability and thoroughness of the information collected directly from the clients67. Advisors should always abstain from expressing a recommendation if they have reason to believe that, notwithstanding the fulfilment of their obligations, investors are not capable of understanding the risks that the operation entails68.
The mandatory assessment of suitability also means that advisors have the duty to provide recommendations that are actually appropriate for the characteristics of individual investors. This should prevent them from recommending only a limited number of financial instruments identified in advance, and from merely allocating pre-packaged portfolios to clients without taking into account the personal information gathered through the questionnaires.
B. Conflicts of Interest
A further fundamental duty of conduct is to guarantee independence from all political and economic powers, as is widely recognized in the European regulation69. The independence of judgement is a key element, and it must concern both the company as a whole and those who carry out the recommendation, whether they are individuals or algorithms, as well as those vested with administrative or supervisory functions70. Indeed, conflicts of interest are one of the main problems when considering robo-advisors, because they might adversely affect the interests of the clients71.
In this regard, MiFID II focuses on conflicts of interest caused by inducements and distinguishes between independent and non-independent investment advice. In both cases conflicts of interest should be avoided72, but advisors aiming to offer independent investment advice have to comply with additional rules, especially relating to third-party fees, which are strictly limited by Article 24(7) MiFID II73. This is however hardly the case for robo-advice74, where conflicts of interest seem to manifest themselves in different ways than in traditional investment advice.
Specific risks arise in fact when the advisors have structural links to entities that also provide financial services, such as banks or other intermediaries offering order execution services or other investment activities that are part of the same corporate group. Similarly, they might have entered into contractual agreements regarding marketing or distribution with other market players. Here the advisor has a concrete interest in the fact that clients invest in financial instruments issued by these entities to whom they are affiliated or linked, and they might recommend these products to the detriment of their clients.
Conflicts of interest, as argued in the class action brought in Illinois by clients of robo-advisors75, can also occur in highly automated contexts. There they are even more problematic, since they are embedded in the model itself and therefore have a broad and systematic impact, with significant consequences. Hence the fundamental importance of ensuring independence, above all on an organizational level, in the algorithm design phase: programmers too, like intermediaries, must ensure that the recommendation as a whole is not influenced by actual or potential conflicts of interest and that the advice provided is in the best interest of the client.
As in traditional financial advice, it is essential to identify situations that can have a negative impact on the objectivity and impartiality of the recommendations, preventing them or managing them by means of internal policies (Article 23(1) MiFID II). Firms should use algorithms that are free of conflicts of interest that might damage the clients, designed without undue influence from external powers and free of biases, also regarding the suggested investments.
These measures may not be enough, so that the investors' and the market's interests may still be affected. In this case, and as a last resort, specific disclosure is required76. Transparency is indeed still fundamental for financial services77 and it is closely linked both to independence78, and to the necessary diligence in providing the service79.
C. Disclosure
The rules provided for traditional advisory services bring about a two-way information flow. They are applicable also when some or all phases of the process are automated, but the content of the information to be provided and its presentation should be adapted to the different ways in which robo-advisory services can be carried out. Even in the absence of human interaction, clients should be allowed to properly understand the service they are being provided with, the potential risks involved, as well as their rights and obligations. Above all, operators should clarify the boundaries of the service, stating whether or not it integrates an investment service subject to MiFID II controls, so as to prevent misunderstandings.
All this is summed up in a fundamental standard - specified by numerous rules80 - according to which advisors are required to operate in such a way that the clients as well as the market are at all times adequately informed and that information to actual or potential clients is "fair, clear and not misleading"81. A comprehensive, yet specific disclosure is required of all relevant facts, business practices and risks associated with the way in which the activity is carried out.
Disclosure is most likely to take place on the firm's website and it should be in a language that the public can easily understand. It should specify which processes in the business model are automated and what criteria are used to select investment products, as well as clarify any potential for human interaction in the process. Firms should identify technically the type of algorithms they use, as well as the hypotheses, the assumptions and the limits of the chosen model, so as to explain, at least briefly and comprehensibly, the functioning of the robo-advisor, how it is designed and governed82.
Transparency duties thus become central83. They are not meant to allow the clients to understand how the algorithm works; indeed, this is highly unlikely because of the technical skills required. Instead, they have the purpose of enabling an independent expert to verify ex post the reliability of the algorithm's programming. There are no shared, let alone codified rules for financial software programming. Therefore, the verifiability of the choices made upstream must be ensured as far as possible84. Especially in case of a dispute over the correct functioning of the program, one has to assess the correctness of the software processing85.
When the service is provided by a platform that interacts exclusively online or with limited human involvement, these communications take place electronically and the disclosure should occur before the clients sign up on the portal. From the beginning, the firm has to make clear the amount of human interaction available. As for professionals using b2b platforms, they should inform clients about the automated tools they employ, the criteria used in choosing the software, and the control mechanisms.
D. Duty of Diligence and Governance Controls
On top of all the above, it is necessary to identify the level of diligence and fairness that should be guaranteed even in cases where no humans are involved. According to these duties of conduct - overall relevant but specified in Article 24 MiFID II86 - advisors should carry out the service respecting the standards of diligence relevant in this field87.
Accordingly, the algorithmic models should be based on proven scientific evidence and supported by statistical methods. At the same time, they should be regularly checked to ensure their continued reliability and controllability. The duty to regularly review and revise them is essential: the adequacy of the algorithms and statistical analysis factors should be checked, and they should be reprogrammed if problems arise as to the predictive power of the advice or if there are systematic discrepancies with the range of results that might be reasonably expected.
To this extent, the European legislation implementing MiFID II88 provides the duty to establish, apply and maintain adequate governance controls, with a consequent marked proceduralisation of the activity of investment advisors. In the context of total or partial automation, these organizational duties should be complemented by the duty to prevent structural conflicts of interest, in such a way as to inhibit them from finding their way into the program codes.
Effective compliance programs are essential. In addition to the general rules regarding the duty to allocate adequate financial and human resources in order to ensure the correctness of the advice, there should be a reporting mechanism able to trigger an alarm if the result of the automated tool differs from what could reasonably be expected89. It is thus necessary to monitor and promptly revise the digital tools in case of unforeseen alterations in the markets (for example, a sudden collapse of the markets due to unexpected causes, such as the Brexit vote or the outbreak of the Covid-19 pandemic), as well as in the event of a significant discrepancy or repeated inconsistencies in the results obtained by means of the automated procedures. The managers and the members of supervisory bodies, as well as all staff charged with material aspects of the process, compliance or audit, should have an adequate level of skills, knowledge and expertise regarding the digitization of services, so as to be able to perform their duties in this new context90.
Lastly, cyber-security issues emerge in these situations, and they must be carefully assessed, considering the need to introduce organizational and structural requirements that do not only look at the assets, but rather at the IT apparatus and the interface platform with the client, so as to guarantee the solidity, continuity and above all the impenetrability of the IT platform.
VI. Key Challenges of Private Enforcement
The MiFID framework does not regulate the enforcement of the rules it provides, even though enforcement plays a fundamental role when it comes to investor protection91. Both the public and the private enforcement of the MiFID II framework are subject to the law and the regulators of the single Member States. As for the former, Articles 67 ff. MiFID II harmonize the competences and powers of the national authorities in case of breaches of the regulation, but the reaction still depends heavily on the actual supervision and enforcement capabilities of the Member States. The differences are even more noticeable when investigating the remedies allowing investors to effectively seek compensation for violations of the duties of conduct. When it comes to private enforcement, in fact, there is no harmonized framework: only national law comes into play, and it is well known how much the rules on liability vary among EU Member States.
Even though the specific rules will have to be identified according to the national legislation applicable to each case, some general considerations may still be useful when considering the core problems of private enforcement, which are common to all Member States92. Indeed, ensuring proper and effective enforcement of market rules is also fundamental to guarantee the proper functioning of the markets93.
Therefore, it is very important that market participants, especially investors, have the means to initiate civil litigation and obtain compensation when rules are violated. This liability may be contractual or pre-contractual, depending on the relationship between the damaging and the damaged party and on the phase in which the breach of a duty of conduct takes place. The complexity of these cases, however, increases the uncertainty regarding the causal sequences generating errors and, consequently, possible damages.
These questions of causation have to be solved according to the peculiarities of the individual cases. Robo-advisors use machine learning models built with neural networks, fuzzy logic or genetic algorithms. These include elements of randomness and have the capacity to 'learn' from the data received, elaborating predictive decisional schemes that appear to be autonomous. Specific skills are required to understand the sometimes technically opaque path that led the algorithm to a given result, but even for experts it is not easy to assess the correctness of the procedure followed and the output obtained in absence of proper and exhaustive technical details. Nor can it be overlooked that the quality of this output does not depend solely on the instructions given, but also on the data supplied to the algorithm.
This raises the question of identifying who is liable when the investment advice is not appropriate, resulting in financial damage. The attribution of liability deriving from the harmful action of a procedure based on artificial intelligence systems, together with the rules that these should follow, are, moreover, essential issues for all reflections on the use of any robotic tool94, such as self-driving cars95.
Three alternatives can be identified: there may be a liability of the person who programmed the automated system, of the professional who uses it as a tool in providing a more complex service, or finally one may conceive of a 'liability' of the algorithm itself to the extent that the harmful result is derived from its autonomous calculations.
It is not possible de iure condito to recognize a specific legal status for robots. This might be the case in the future, if the national legislators decide to follow the lead of the European Parliament96, who calls for the creation of an 'electronic personality' that would allow them to be held liable in their own right, through assets they would 'own', for the damage caused by their activity and their autonomous decisions97. However, the decision-making process of artificial intelligence systems is inherently different from that of humans98. Indeed, as early as 1950 Wiener, the father of modern cybernetics, noted: "for the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not, is to cast his responsibility to the winds, and to find it coming back seated on the whirlwind"99.
Moreover, identifying specific assets for the compensation of damage caused by artificial intelligence does not necessarily entail the recognition of digital personhood. On the contrary, this would only increase the legal complexity of the case without answering the real question: the identification of criteria for assessing liability100. Ultimately, there seems to be no choice but to consider artificial intelligence systems as tools used in giving the advice101. Firms using AI systems are not relieved of the duty to control them and are therefore responsible for any damage these robots might cause.
As far as hybrid models are concerned, a principle emerges from various regulations on the subject102 according to which, regardless of the way in which the elements of automation are integrated into the financial service, the damaged party can always sue the advisor for damages. Advisors are therefore liable for compensation for the damaging consequences deriving from incongruous advice, and the use of automated systems is irrelevant for excluding or limiting this liability.
Accordingly, advisors using automated financial tools are liable both in cases where they tamper with them or use them improperly (e.g., limiting the options to be provided to clients or inputting incomplete data), and in all other cases where they fail to comply with their duties of conduct. At the same time, they also bear the risk of malfunctioning and are liable for programming defects. Except in cases where a client has, for example, negligently transmitted incorrect data, advisors are always liable for the entire financial damage and cannot be exempted by adducing technological errors or operating defects, in the same way as they are liable for any errors and failures caused by those who help them carry out the evaluation and express the advice. They might then, in turn, seek recourse against those who directly caused the damage, such as the programmers in the event of a defect in the algorithm.
Finally, as regards the programmers, according to the general principles they can be called upon in their own right to answer on a tort level for the harmful consequences deriving from their actions. In these cases, however, the loss is purely economic103. The well-known and still highly debated question arises regarding how to compensate financial loss when there is no infringement of a legally protected right or interest104.
Even when there is no human interaction, the complete automation of the advice cannot affect the liability of the platform owners vis-à-vis their contractual counterparties. Nonetheless, the final allocation of the liability has to consider the individual cases. The accuracy and consistency of automated or semi-automated advice depends, in fact, not only on the quality and completeness of the information available, but also on the correctness and repeatability of the models.
Conclusions
Legal predictability is fundamental in order to guarantee an effective investor protection. Clear and precise rules are therefore needed to uniformly address the problems posed by technological innovation in all fields of financial advice. It is however highly unlikely that the EU might choose to move away from the technology neutrality approach of the MiFID II system. Moreover, even if the new AI framework has not yet been finalized, one can assume it will not alter this approach by introducing new technology-specific provisions in the context of financial services law.
However, this does not mean market players should be left alone, relying only on the scarce soft-law measures currently available. Even if no substantial amendments to MiFID II regarding automation seem necessary, it might be advisable, at least in the main areas of concern such as conflicts of interest or suitability assessments, to enact binding rules specifically concerned with automated financial advice105, possibly through more flexible Delegated Regulations that could be amended faster when confronted with new and unexpected developments in the markets106. When it comes to private enforcement, it does not seem necessary to introduce new and harmonized forms of liability. Indeed, resorting to traditional structures common to European Member States and considering AI systems as tools (or maybe even going back to Roman law107) seems a better solution than creating a specific legal personality for robots, which would relieve those who manage them of their responsibilities.
Cross-disciplinary cooperation between the competent authorities in this field is of central importance, as is happening in Europe through the joint initiatives of the ESAs108. However, it would be more appropriate to adopt legally binding standards, not just guidelines or joint reports.
The ESAs have argued that there is currently no need for specific regulatory intervention in this area because the phenomenon is not yet widespread. This falls far short of guaranteeing sufficient protection for investors increasingly relying on automation. The regulation, of course, needs to be balanced in order to avoid discouraging innovation and to ensure that there are no barriers to entry into the automated advice market. At the same time, the safety and transparency of the markets have to be guaranteed, also limiting the systemic risks that these new forms of financial advice may entail. In fact, once completely automated recommendations have become more and more present on the market, as seems likely in the near future109, the feared risks will become significant. Waiting until then, however, would mean shutting the stable door after the horse has bolted.
The size and complexity of the financial sector, as well as the strong competition that characterizes it, have in the past prevented regulators from promptly taking legislative action and from timely responding to concerns expressed by the most attentive observers regarding risky practices and hazardous situations. It is to be hoped that European authorities will move quickly and adopt hard-law measures before it is too late110. Specific duties of conduct and private enforcement mechanisms can guarantee the solidity of the markets and, consequently, allow the gradual stabilisation of a single European market for financial services. Only in this way will it be possible to profitably take advantage of the massive technological developments that, also in this field, represent an opportunity that should certainly be exploited, but with caution, both for financial operators and investors.