

The Legal Services Board podcast – Talking Tech

Episode two transcript – What can legal services learn from medicine and finance?

 

DAVID FOWLIS: Welcome to the Legal Services Board’s series of podcasts on developing approaches to regulation for the use of technology in legal services.

 

Applications of technologies like artificial intelligence and blockchain are increasingly enabling lawyers to provide services to their clients in new and innovative ways and have the potential to transform the market for legal services in England and Wales.

 

In this series of podcasts, the LSB will be speaking to experts in legal services, technologies and regulation about the challenges that new technologies present to legal services regulation, and how legal services regulators in England and Wales can approach these challenges.

 

I’m David Fowlis, a regulatory policy manager at the Legal Services Board, and I’m joined today by Roger Brownsword, Professor of Law at King’s College London. We’re going to discuss how technology is being used and regulated in other professional services sectors in the UK and what lessons legal services regulators can learn from the experiences of other professions.

 

Roger, if you want to introduce yourself.

 

ROGER BROWNSWORD: Thanks David. My name is Roger Brownsword, and as an academic lawyer I’ve been interested in researching law, regulation and technology for about 30 years. My interest initially was in the patentability of new products and processes arising from genetic engineering, so it was largely in the area of health, the environment and biotechnologies, but over the years my interest has become much more general as more technologies have come onto the radar: information technologies, e-commerce, then nanotechnologies and neurotechnologies, and the most recent wave of AI and machine learning, blockchain, additive manufacturing, AV, and goodness knows what else that is now on the regulatory radar.

 

It’s against that background that I was very pleased indeed to accept the invitation to write a paper for the LSB exploring what regulators within the legal services sector could learn about the regulation of new technologies from other service sectors.

 

DAVID FOWLIS: Roger, very many thanks for joining us today. Before we get into the detail, it’d be really helpful to understand which professional sectors’ experience of technology and regulation you consider to be most relevant to legal services, and why that is?

 

ROGER BROWNSWORD: Well, before I wrote the paper, I would not have known quite how to answer that question. Even having written it, it’s still quite a tricky one. The main candidates do seem to be health and medicine, and finance. In health and medicine there’s long experience of engaging with new technologies, particularly biotechnologies, but also information technologies. In finance there is, again, a considerable record of engaging with new technologies. In both cases we’re now talking about engagement with AI and, in finance, blockchain. They are important sectors, I think – important comparators for law.

 

DAVID FOWLIS: Okay, great. If we look to the medical sector; what technologies are being used in this sector, and what issues has their use raised for consumers of medical services and practitioners?

 

ROGER BROWNSWORD: Okay, that’s quite a long question. I’ll break it down. First of all, the technologies being used. There’s a very broad sweep of new technologies being employed in medicine – in research, in clinics, in reproductive settings. There are biotechnologies, info-technologies, imaging technologies. Nanotechnologies have been employed in nanomedicines. Robotics – anybody who hasn’t seen the da Vinci robot in operation on YouTube should have a look at that. It’s wondrous to see how it peels a grape. I think the hope today is that the use of AI and big datasets, together with an improved understanding of genetics, will lead to more personalised treatments. So, there’s a very broad sweep of technology: almost anything you can think of is potentially applicable in health, and there’s a real appetite to explore these technologies there.

 

The question then is what issues these have raised, first of all for consumers of medical services and then for practitioners. For consumers, the dominant issues in health are about patient health and safety. Not harming people. Not killing them. We don’t want dangerous technologies! There have also been concerns about the way in which some research and some treatments impinge on fundamental values like human dignity.

 

Wherever digital technologies are being employed – as they are, increasingly, in health – there are concerns about the security of information, about hacks and viruses, privacy and data protection. It’s not all sweetness and light. One commentator says that the reality is that, behind the façade of superficial wonder, modern hospital IT is too complicated for its own good, for the good of patients and for the good of staff.

 

So, there are plenty of issues there. Of course, a lot of AI is only recently coming onto the radar as an applicable technology, and there have already been scandals, shall I say, such as the agreements made between the Royal Free Hospital and Google DeepMind about the use of patient data. Although it’s a bit unusual for this sector, there are also concerns in health about consumers’ financial interests – a concern that consumers shouldn’t be spending their money unwisely. The market for IVF, in particular: it really is a business, and there are worries that consumers are spending more money than they probably should.

 

On the other hand, for practitioners – let’s start with researchers in health. I hope this isn’t unfair to say, but they seem to complain a lot about the pathway for research approval. It is time consuming. They have to go through all sorts of hoops and over hurdles, mountains of paperwork. But if the regulatory aspiration is, as it is, to encourage innovation while making sure that the interests of prospective patients aren’t jeopardised, you can’t just wave things through. There has to be some checking and reporting, and that regulatory burden falls on researchers and on practitioners.

 

There’s an interesting reflection on this. Simon Fishel was, with Steptoe and Edwards, one of the pioneers of IVF. In a recently published autobiography, he says this:

 

‘To develop something innovative that will help people in a way that’s not been done before, we need to do new things. These new things may not work, and they might even offend people. The regulator doesn’t like that uncertainty. But how can we know if they work on humans, unless we try them on humans?

 

You can see the catch-22 situation I’ve worked in for most of my career.

 

Here’s a thought: in regulated countries such as the UK, IVF couldn’t be invented today. The regulatory bodies that govern medical research would forbid it.’

 

This is the view of a researcher who’s saying regulation doesn’t actually help us, versus the view of the regulators – and probably the consumers’ and patients’ associations – who think that by and large we couldn’t manage without regulation, and it’s the regulatory apparatus here that’s given the UK the reputation that it has for a good regulatory environment for health research.

 

DAVID FOWLIS: We’ve probably partly answered this already, but how have medical regulators responded to the introduction of technology? If we took AI as an example, and the issues that it’s raised, what approaches are they taking there?

 

ROGER BROWNSWORD: Regulators in the medical sector are characteristically precautionary, and that precaution means that where something gets through the net, they adjust the regulatory framework. It’s not AI, but PIP breast implants are a good illustration: something went seriously wrong with a product that had passed through the regulatory system, and we now have a new medical devices regulation in Europe which has been extended to cover cosmetic or aesthetic devices.

 

But on your question about AI – it’s still relatively early days. In Europe, the European Union set up a high-level expert group on the ethics of AI, and there’s been a huge amount of focus on the ethical questions arising from AI. I say ethical in a very broad sense: these are questions about what AI needs to look like for it to be socially acceptable. A couple of months ago we had the report from the high-level expert group on basic principles for trustworthy AI, which advocates a human-centric approach. That’s echoed almost to the word by the OECD in a report that’s come out within the last fortnight, where again they’re saying just the same sort of things. Here in the UK, we don’t have a standard operating procedure for dealing with new tech like AI.

 

I was on the Royal Society working party on machine learning, because I think it’s the machine learning aspect of AI that makes it really interesting. That was quite fortuitous in a way. The parliamentary committees that have looked at AI point the same way: there’s a certain convergence towards these ideas about openness and transparency for explainability. How can you justify deciding the case this way?

 

I think it’s still early days for AI. In the UK, AI has also got caught up in reflective thinking about data governance more generally. Certainly, the Royal Society working party was very mindful that a working party on data governance was in operation at the same time. I did hear at the time the data governance people asking, ‘Should we do something like another Warnock Committee?’ – the committee of inquiry that was set up to look at IVF. It took two years for that committee to deliberate, and then five years for Parliament to discuss and debate it before the legislation came into place. So, probably 10–12 years from IVF first coming onto the radar to legislation. Now, I just don’t think that kind of timeframe is feasible for AI. We can’t sit on our hands and deliberate for 12 years. Regulators in the legal services sector will find that by the time they are seriously engaging with AI, a lot of the preparatory work has probably been done elsewhere.

 

DAVID FOWLIS: It’s interesting what you said there because I think from the legal side of things we tend to think of medicine as an area where there’s obviously been technological development over decades, and it’s interesting that when confronted with something like AI there wasn’t a way for the sector to say: okay, this is a new thing, how do we handle it?

 

ROGER BROWNSWORD: No.

 

DAVID FOWLIS: It’s quite an interesting observation because obviously in the legal sector, technology is quite new. There aren’t many things that lawyers do today that couldn’t have been done with pen, ink and paper. But it’s interesting that, despite the fact that you have medical devices regulation and you have drug regulation, AI wasn’t caught – was it because it fell between two stools in some way? Did it not fit the criteria, or was it just that AI was so different?

 

ROGER BROWNSWORD: I’m not sure about this. There are some technologies, and AI is definitely one of them, which have across-the-board potential applications. They don’t neatly fall into anybody’s regulatory jurisdiction, as it were, and there might be too many possible candidates to deal with them. On the other hand, it might be, as you say, that it slips through the cracks. I think again there is a certain reluctance to institute new regulatory arrangements for a technology unless you’re convinced that existing arrangements can’t already handle it.

 

DAVID FOWLIS: Yeah, which isn’t unreasonable.

 

ROGER BROWNSWORD: Yeah. Going back to the Royal Society working party on machine learning, the view there was that we need to get the general principles for data collection and data processing right, and of course the General Data Protection Regulation in Europe is a very important new piece of law there. That will regulate how the data is collected – and for AI and machine learning you need all the training data and big datasets to do the job. So, the thought was that that might be sufficient, and that you would then have domain-specific regulation sensitive to the context in which AI was being applied. Now, I’ve got to say that I was never entirely convinced that that was right. I thought maybe AI went beyond simple questions of data governance, particularly where you’re talking about the kind of processing that can’t really be explained – that seemed to me to be a bit of a leap into the dark.

 

DAVID FOWLIS: Do medical regulators’ concerns about AI relate to the practitioner perspective – is there a concern that practitioners might rely too much on the AI when making diagnoses?

 

ROGER BROWNSWORD: Yeah, okay, so sticking to medicine, I’m sure that wherever there is a diagnostic and then an advisory role to be played by the professional, there is a concern that we become over-reliant on AI. There’s also a concern where AI is being sold not so much as a professional aid for practitioners but as a product for consumers – an add-on to Alexa or whatever it might be at home, so that you get your home diagnosis through a smart device. There, I’m not sure that over-reliance will be the problem so much as accuracy. For example, I heard a doctor some time ago, who was also interested in the development of AI tools, talking about an app on your phone which had a store of photographs of marks on people’s skin – the sort of thing that might worry someone into thinking they ought to see the doctor and ask whether it needs treating. You take a photograph of whatever is troubling you, the AI checks it against its library, and it tells you either that it’s benign or that you really ought to get further investigation. Some people worry about that, particularly if the answer is, ‘You ought to get this checked out,’ without a medical counsellor there at the time. I think there are several dimensions of concern about AI just plugging into…

 

DAVID FOWLIS: So, again, it’s that human interaction with the professional.

 

ROGER BROWNSWORD: Yeah, exactly. At some stage there would need to be some human interaction to, you might say, validate the diagnosis in the first place. But just the human interaction itself, as you say, is something that people will value. Particularly where the news is worrying.
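
 

[Editor’s note: as a purely illustrative sketch of the kind of triage app Roger describes above – not any real product’s code – the Python fragment below matches a photo against a small labelled library by similarity and errs towards referral. The feature extractor, the 0.8 threshold and the labels are all assumptions made up for this illustration; a real system would use a trained model and clinically validated thresholds.]

import numpy as np

def embed(image_pixels):
    # Stand-in feature extractor: in practice this would be a trained
    # neural network; here we just flatten and normalise the pixels.
    # (Assumes all images are the same size.)
    v = np.asarray(image_pixels, dtype=float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def triage(photo_pixels, library, caution_threshold=0.8):
    # library: list of (feature_vector, label) pairs built from
    # previously reviewed images, labelled "benign" or "investigate".
    q = embed(photo_pixels)
    score, label = max((float(q @ vec), lab) for vec, lab in library)
    # Err on the side of caution: unfamiliar or worrying marks get
    # referred to a human rather than reassured automatically.
    if label == "investigate" or score < caution_threshold:
        return "you ought to get this checked out"
    return "looks benign"

# Toy usage with random 8x8 'photos':
rng = np.random.default_rng(0)
library = [(embed(rng.random((8, 8))), "benign"),
           (embed(rng.random((8, 8))), "investigate")]
print(triage(rng.random((8, 8)), library))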

 

DAVID FOWLIS: Is there an overall rationale to how medical regulation works? Perhaps in terms of protecting consumers, is there a basis on which regulation tends to be founded and then is applied?

 

ROGER BROWNSWORD: I think there are two things here, David. I was going to start off earlier on with a very helpful comment from Martin Wheatley of the FCA. Martin’s point was that we’ve got all these technologies promising new beneficial developments for consumers in finance; the phrase he used is, ‘We don’t want this to be a tech wild west.’ We have to monitor this; we have to regulate it. On the other hand – and this is the other horn of the dilemma – we don’t want to over-regulate to the point where innovators are put off innovating. I think the rationale underpinning the approach in all these sectors, including medicine, is that there’s something of a seesaw here: you don’t want to overburden innovators, researchers and developers with regulatory requirements, lest they don’t innovate at all.

 

But on the other hand, you’re concerned that if you lighten that load, you expose consumers to risks – or it might even be the whole integrity of the system that’s at stake. You can’t be too laissez-faire. Europe has the reputation for being very precautionary, so it is very sensitive to that; whereas in the States the stereotype is of a regulatory system that leans much more the other way. ‘Let the internet develop’ was their philosophy: we don’t need to regulate it; we’ll worry about any problems it gives rise to as and when we meet them.

 

DAVID FOWLIS: If we look at the lessons that the legal services sector could learn from medical regulation and its history, what do you think those are?

 

ROGER BROWNSWORD: So, because medical regulation is very much orientated towards protecting the health and safety of patients, and because there have been some hard cases about fundamental values in medicine, I could say that that material doesn’t really help legal regulators very much, because it’s telling them how to tackle problems they are not going to face very often. I just don’t think legal services are going to throw up those issues.

 

But I still think that the precautionary approach that is characteristic of European health regulation is something that legal services regulators should take note of because some of these technologies, particularly AI, are really unknown quantities. Like Stephen Hawking said in his valedictory book, we don’t really know whether this is the best thing ever or the worst thing ever for humanity.

 

You mentioned earlier on that human interaction is something that people might value, and in legal services they might well value it. In the paper, I talk about a reported case from California where a family visiting an older man in hospital, Ernest Quintana, were shocked to find a robot in his room conveying the news that the medics really couldn’t do any more for him and that he would be dying very shortly. Apparently he’d had face-to-face interaction at some earlier stage – the family weren’t aware of that at the time – but even knowing that, they still didn’t think this was the right thing to do.

 

There’s also a concern about the way we rely on technologies which look extremely benign, designed to help us, and find in the end that we’ve corroded the context in which we are responsible for our own actions. There are deep lessons like that just in the way people worry now about what impact the internet or social media might be having on our children. We’ve conducted a massive global experiment without really appreciating what sort of impact it might have.

 

DAVID FOWLIS: Just briefly, we’ve talked a lot about how medical regulators have the interests of patients and ultimately consumers of medical services. How important is consultation in developing regulation in medical services?

 

ROGER BROWNSWORD: I think consultation is one of the most important things here. In any sector, the social acceptability of the regulation will hinge on whether the regulators are permitting red lines to be crossed, or balancing up interests in ways that are just not acceptable. And they won’t really know whether there are red lines out there, or which balances will be unacceptable, until they’ve done some consultation.

 

There’s no magic formula for doing the consultation. It has to be done right – and that’s something we could say more about – but assuming it is done right and you have a fair idea of how the land lies, you shouldn’t have too many nasty shocks when you proceed with the regulation. Care.data in medicine is a constant reminder that if you don’t do the consultation and the communication right in the first place, patients or consumers might resist and frustrate what on paper looks like a really smart idea.

 

DAVID FOWLIS: Okay, thank you. Roger, if we now turn to look at the financial sector – again, perhaps similar questions. What technologies are being used here, and what issues has their use raised for consumers of financial services and – I guess practitioners isn’t quite the right word – suppliers?

 

ROGER BROWNSWORD: In finance, you’ve got pretty much the whole range of technologies that you find in medicine, apart from biotech. No doubt there’s biometrics in finance too, but biotech doesn’t dominate the landscape in the way it does in medicine. Of course, all the IT is very much a part of finance, along with the whole infrastructure for e-commerce and those sorts of technologies. Of the ones we’re interested in right now – AI, machine learning and blockchain – I think blockchain is far more important in finance than it is in medicine. So you’ve got a big sweep of technologies here again. Looking at the issues these technologies give rise to, for consumers first of all: unlike medicine, where the main risks are to health and safety and possibly to some fundamental values, here in finance the risks are financial. You’re going to lose your money if you’re not very careful; you put your money in the wrong place. Consumer protection for the FCA in this sector is very much about protecting consumers’ financial interests. Whilst lots of money can be made with Bitcoin or whatever other venture you’re in, it can also be massively lost.

 

In the paper I talk about the hack at Mt. Gox in 2014 – I think it was at that time the largest Bitcoin exchange in the world – where very large sums were lost by people who were investing there. Again, I’m not as familiar with this sector, so I’m not sure whether practitioners or suppliers complain as much about the pathway to approval as in medicine, where researchers are notorious for thinking the regulatory burden is too high. Clearly, when you read the background to the sandbox, you can see that the intention is to make the regulation more proportionate relative to the risks and to ease the way to approval. To me, on paper, the approval process still looks pretty extensive, intensive and demanding for suppliers, and I can imagine some concern that, for the sake of protecting consumers, the FCA or other regulators are asking too much of new entrants into this sector. There is also a concern that the legal position is unclear in many respects in relation to the new products coming on with blockchain – and not just in the UK. In all legal systems in the world, regulators are asking: how does our regulatory regime map onto this new stuff? And it doesn’t always fit very neatly. Sometimes of course there will be immediate responses to prohibit or permit or whatever it might be. But I think for suppliers there is, in this sector, some uncertainty about exactly what the legal position is.

 

DAVID FOWLIS: Okay, if we maybe just talk about blockchain for a moment. Is there something particularly novel about it that makes it challenging for financial regulators – something really different from the ways transactions have been done previously?

 

ROGER BROWNSWORD: Yeah, I think there might be. The blockchain technology itself, unless you’re a cryptographer, is probably a bit puzzling. The early accounts of blockchain, where it’s just blockchain and Bitcoin, are extremely puzzling as to exactly how bitcoins are produced and exactly what sort of a money system this is. Of all the technologies we talk about, this is perhaps the most difficult one to get your head round initially. But once you’ve got your head round the basic idea of blockchain, and then the possibility of applications running on blockchain, it looks a bit more like the internet: you’ve got an infrastructure, and you can build on it to do all sorts of different kinds of transactions or activities. To that extent it raises the same sort of issues as the internet did – the potential applications are a bit uncertain.
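
 

[Editor’s note: as a purely illustrative sketch of the basic idea Roger describes – not any real cryptocurrency’s code – the Python fragment below shows how each block in a chain commits to the hash of the block before it, so that tampering with an earlier record invalidates everything that follows. On a real blockchain, the validating network discussed later in this conversation decides whose copy of the chain counts as authoritative.]

import hashlib, json

def block_hash(block):
    # Hash the block's entire contents, including the previous hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    # Each new block commits to the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain):
    # Recompute every link; tampering upstream breaks all later links.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                     # True
chain[0]["data"] = "Alice pays Bob 500"    # tamper with the record
print(is_valid(chain))                     # False - the chain no longer verifies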

 

DAVID FOWLIS: Do you think then it’s a case perhaps of regulators being concerned more about the activities and the applications that blockchain is used for rather than the technology itself?

 

ROGER BROWNSWORD: Yeah, I would be inclined to say that but, again, I’m mindful that the genesis of blockchain seems to be a reaction to the global financial crisis, with the originators saying you can’t trust the traditional banking sector: we don’t actually need these intermediaries; we can do it ourselves. I suppose this is a bit unusual, in that it’s a technology that seems to be set up against the establishment.

 

I know in the early days of the internet, the so-called cyber-libertarians were very keen on the idea that this should be an unregulated space, and indeed could not be regulated. It might be that similar thoughts operate in relation to blockchain, but I think it’s probably the applications rather than the technology that will concern regulators. Although, obviously, the problems of accessing what’s in the blockchain are going to raise questions – more in the area of security and law enforcement, I would say.

 

DAVID FOWLIS: Obviously in legal services you have a number of types of transactions – conveyancing, contracts, patent registration – all of which blockchain could be applicable to. Is there perhaps a concern that regulators sometimes become fixated on the technology itself rather than the applications it’s being put to, which are things that have been done for many years? Is that something we’ve seen in financial services as well, and what’s your view on that?

 

ROGER BROWNSWORD: First of all, if the application is something we’re familiar with, and the technology is a tool doing something functionally equivalent to what a human operative would have done in the past, that tends to ease concerns about what’s going on – and yes, in financial services you would accept that idea. Secondly, with some of the functions we’re contemplating for blockchains – whether they’re registry functions or record functions, authoritative records – we do need to be careful about who validates these, because the original Bitcoin model seems to allow the whole world to get involved in the validating network, and that might not seem the greatest trust mechanism. The third thing is smart contracts, where payments are being made using blockchain applications. There might be situations where the technology does something which a court would not order to be done. The extent to which you’ve got parallel codes, as it were – one in the blockchain and one in the law of contract – is a question currently being explored in the academic literature.

 

DAVID FOWLIS: Roger, I want to ask again about consumers in financial services. What steps have regulators taken to protect them, and how successful have those been?

 

ROGER BROWNSWORD: When we start talking about consumer protection today, I think it’s important to differentiate between two views. On one view, what you’re trying to protect is consumers who want to make their own choices in the marketplace but need proper information to do that, and the responsibility of regulators is largely to make sure that consumers are properly informed. If you look at the FCA’s webpages you will see that from time to time there are warnings about dangerous investments or risky products that consumers need to be aware of, not least in the area of cryptocurrencies. The other meaning of consumer protection is more paternalistic: not so much letting consumers make their own choices, but actually removing certain choices from the table where we think they are overly risky. In medicine or health, where we’re worried that some product might actually be dangerous to your physical wellbeing, it’s perhaps easier to say we just shouldn’t leave that as a choice for consumers; in finance, you might think that, however volatile or risky the product, provided consumers know exactly what the score is they should make their own call on it. So when you ask to what extent consumers are being protected, to really follow that through you would need to see whether consumers have been informed about the right products at the right time, and whether certain products have been taken off the table altogether.

 

My understanding is that here the FCA is taking it very steadily indeed in deciding just how to intervene in relation to blockchain and cryptocurrencies. They don’t want to stifle what might be potentially beneficial lines of innovation. I know in several presentations they’ve given they’ve talked about certain sorts of products that they are thinking maybe should be put off limits for consumers altogether.

 

DAVID FOWLIS: In coming to those decisions about what products are maybe just too dangerous for consumers, have they also looked at how much consumers actually understand, or demonstrate understanding, or can make informed choices? I’m going to declare a past life here, having spent quite a bit of time with the CMA group of energy consumers. One of the issues there was that consumers found it very difficult to judge whether they should switch energy suppliers. There were some consumers who went and engaged, and there were many who were just put off by what they saw as the complexity. I imagine that’s probably something that happens in financial services too. Do we have ideas about just how much consumers can actually understand, and how informed their choices can really be? Are these things tested when, for example, the warning notices on financial products are put in front of consumers?

 

ROGER BROWNSWORD: Yeah, the FCA talk about their consultations, but I don’t know to what extent they have investigated consumers’ understanding of the warnings they give. There’s a whole literature from behavioural economists on what they would see as irrational consumer decisions: favouring the short term over the long term, being guided by whatever heuristics happen to have salience at a particular time. This body of literature is extensive, but it’s only within the last 10 years or so, I’d say, that it’s achieved any real prominence. It makes you think that the premise of consumer protection policy – that we are dealing with rational people who, given the right information, will make the right decisions – is seriously misguided. That then encourages a not openly paternalistic but nudging sort of approach: if we can set the defaults in a clever way, then the inertia of consumers who would otherwise make bad choices will mean they just stay with the default. The great example is pensions. If, when you join a new organisation, the welcoming letter says, ‘We have a pension scheme here to which, if you wish, you can opt in and contribute, and these will be the benefits,’ versus the letter which says, ‘We have a pension scheme from which you can opt out if you so wish’ – the evidence is that the opt-out letter will leave about 80 per cent of employees in the pension scheme, whereas with the opt-in letter you’re only picking up maybe 20 per cent.

 

DAVID FOWLIS: It’s almost reversed.

 

ROGER BROWNSWORD: Yeah, it’s reversed. Even if we say regulators have a responsibility to make sure that consumers are properly informed, it doesn’t follow that consumers will then act in the way you expect them to act.
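
 

[Editor’s note: the default effect Roger describes can be illustrated with a toy simulation. The 20 per cent ‘acts on the letter’ figure below is an assumption chosen purely to reproduce the 80/20 pattern he quotes, not data from any study.]

import random

def enrolment_rate(default_in, n=10_000, act_prob=0.2):
    # Assumption for illustration: only ~20% of employees ever act to
    # change whatever the default is; the rest stay put (inertia).
    enrolled = 0
    for _ in range(n):
        acts = random.random() < act_prob
        # Acting flips you away from the default; inertia keeps you on it.
        in_scheme = (not acts) if default_in else acts
        enrolled += in_scheme
    return enrolled / n

print(enrolment_rate(default_in=True))    # ~0.80 with an opt-out default
print(enrolment_rate(default_in=False))   # ~0.20 with an opt-in default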

 

DAVID FOWLIS: Perhaps expanding on that a little bit. The overall rationale we’ve seen in financial regulation has been largely about information – that’s fair to say as far as consumers are concerned – but perhaps it’s moving now towards something that’s a bit more interventionist, or perhaps more protective. Would that be a fair summing up?

 

ROGER BROWNSWORD: I would be fairly cautious about broad-sweep generalisations, but I think that’s right. I know we’re going to talk about the sandbox in a minute; it seems to me to be a smart ex ante check on things. In health and medicine there’s a great premium on ex ante checking of things before they get out there and can harm anybody. The sandbox is trying to do that in an area which is conspicuously innovative.

 

DAVID FOWLIS: Well, let’s move on and talk about sandboxes. In terms of lessons that legal services regulators can learn from financial regulators, obviously the sandbox seems to be one of those things, and there is some preliminary application going on in the legal sector now. Perhaps if we just talk about that and why it’s relevant to the legal sector.

 

ROGER BROWNSWORD: Take innovative products which could potentially harm consumers but also benefit them, which have uncertain characteristics, and which don’t really get developed because the regulatory requirements for approval or authorisation are just too stringent – the result is that we never find out whether those products would or wouldn’t have benefited consumers. The sandbox seems to me a really clever way of taking account of the interests of innovators, getting them to first base, while not neglecting the interests of consumers, because various arrangements can be made to protect them. At the same time, the regulators are sitting closely alongside these pilots or trials to see how they go. Although it’s quite resource intensive, I suppose, and still doesn’t involve that many parties, potentially it looks like a really good idea.

 

DAVID FOWLIS: I suppose the logic of it is that you don’t really know how something’s going to perform until you expose it to the market. This is a sort of controlled way of doing that. Is there an issue though that because it is under observation, that you might still get different results, and that once you start looking at something it behaves differently than when you’re not looking at it?

 

ROGER BROWNSWORD: Absolutely. Although it’s not an example from the services sector, think about the environment: GM crops trialled in laboratory conditions first of all, then in contained conditions. It looks okay, and then you have controlled releases into the environment, and things don’t pan out quite the way you expect, because the environment out there is different. By analogy you can say, yes, the controlled environment of the sandbox is one thing; once a product gets out into the real world, different variables may come into play.

 

DAVID FOWLIS: The sandbox is better than just trying to predict what would happen – better, for example, than lawyers sitting around a table trying to guess how consumers are going to behave. You do get an idea of how the company might behave, how the consumer might behave, how the product might function, but it isn’t absolutely 100 per cent like putting it out into the market.

 

ROGER BROWNSWORD: No, and again, you could compare this with clinical trials. Phase one trials: the first time a drug has been tried in humans. We’ve tried it in animals before, but we don’t know how humans will react. Okay, it doesn’t kill them – although sometimes you find that people can be made very ill. But suppose it gets through phase one. You could say at that point, why don’t we now let people get hold of it? In the paper I talk about the right-to-try laws and debates in America, where people do want to get hold of highly experimental drugs before they’ve reached phase two, which tests whether they are at all efficacious, and phase three, which tests whether they are more efficacious than comparable drugs already available. In drug development, it’s a very long road indeed from phase one right through to approval – far too long for these financial products, I would say.

 

After all, once it’s out in the market, if the product doesn’t attract the interest or the demand you had expected, it’s a failed product. That’s not the end of the world – unlike a drug that turns out to be a complete dud when you’ve invested a huge amount of money in it.

 

DAVID FOWLIS: Are there generic challenges and principles that legal services regulators should be aware of that you can derive from medicine, from finance, and any other sectors?

 

ROGER BROWNSWORD: Yeah, absolutely. In the paper I say that there are three high-level generic challenges that regulators face. We’re particularly thinking about national or regional regulators who perhaps have the first stab at setting the framework for a new technology, as with the Data Protection Act or the Human Fertilisation and Embryology Act. The first is the problem of regulatory connection and sustainability. Here’s a technology coming over the horizon; we don’t really know what to make of it, what the benefits are, what the risks are. There’s pressure to do something about it, but we don’t really know what to do, how to do it, or when to do it. And when we do act, particularly if it’s in a hardwired legislative reform, people demand a lot of precision, and before you know where you are, the technology or its application has moved on. You see this time after time: the big legislative schemes have to be brought in for serious overhaul because they’re no longer fit for purpose.

 

That’s the first challenge. The second is the challenge of compliance – regulatory effectiveness. A huge amount of regulatory scholarship is dedicated to finding what works. What does work? How can we be smarter? How can we be more responsive? What does better regulation look like? The LSB is committed to principles of ‘better regulation’ – or ‘good regulation’ in the case of the FCA – and there are any number of theories about how to do it better and more effectively. New technologies certainly present their challenges here. I think one of the keys to effectiveness is whether regulatees are on board with the regulation that’s being proposed; if they’re not, there’s going to be resistance. You can see this around new technologies. One of the classic examples – I don’t talk about this in the paper – is file-sharing: youngsters using the technology to share their music, clearly in breach of copyright laws, and the intermediaries facilitating it clearly in serious breach too, all on an industrial scale. What are regulators supposed to do when so many people are involved in infringing the law?

 

The third challenge for regulators is connected to that, but it’s about the legitimacy and acceptability of the regulation. The problem here is that people have very different priorities, very different interests, and very different values. Particularly in the area of medicine, we’ve seen some very dramatic divisions between more conservative ethical views and more liberal, progressive ones. You see these in debates about abortion or euthanasia, in the use of embryos for research and, once upon a time, when stem cells were all the rage. In the United States, the UK and continental Europe there were huge debates and divisions around these issues. So those are the three generic challenges.

 

DAVID FOWLIS: To sum things up a little bit, what approaches and practices in terms of technology regulation from other sectors do you think legal services regulators will want to emulate?

 

ROGER BROWNSWORD: In the paper, I’ve suggested that the way the government and the ABI handled the problems presented by genetic testing for insurance is something we might learn from. I’m not quite sure how much we can learn from it, but potentially that was going to be a major problem for insurers – and not just for individual prospective insureds, who found that, having to disclose a higher-risk genetic profile, either they couldn’t get insured or they were facing huge premiums. Where this scaled up and you had adverse selection in insurance pools, those pools would no longer be economically sustainable, because the insurers would end up with a pool of very high-risk people. It just wouldn’t be good business for them. And it wasn’t just economic questions at stake; there were questions of fundamental fairness about people being unable to get important heads of insurance cover simply because they hadn’t done well in the genetic lottery. That just didn’t seem fair. Initially this was handled by a sort of moratorium, in which the industry agreed it wouldn’t ask for genetic tests and wouldn’t ask for genetic information except in the case of certain very high-value policies. I think there was a feeling at the time that that wasn’t entirely satisfactory, because it was just for now. We’re now in the sixth iteration of the code and although, again, I don’t have inside information on this, it looks to me from the outside as though this has managed the risks pretty well.

 

The characteristics here are that the industry is able to modify the terms and conditions of the code when the scenery changes and when the risks change, which gives them flexibility – they’re not locked into something that isn’t viable for them. There’s a certain amount of flexibility for the regulatees, but equally, I suppose, for the regulators on the government’s side. The other thing is that nothing’s been hardwired here. I know that that kind of close cooperation is perhaps just the kind of thing that’s difficult for the legal services regulators to emulate. But insofar as they’re working with their regulatees, provided that this stays within the parameters of protecting the interests of consumers as well, I don’t think there’s a risk associated with it; that would seem to me to be a desirable thing to do. But I know that it is challenging.

 

On consultation: in other sectors there have been examples of consultations that haven’t worked well, but on the other hand there have been plenty that have, I think, worked pretty well – particularly from the HFEA. Where you know you’re on sensitive territory, it’s very important to consult; but equally – and I’ve said this before in relation to AI, where we don’t really know what the space looks like – consultation is a way of trying to inform ourselves. Unfortunately, I don’t think regulators in other sectors can offer very great advice on how to run a consultation. You can say some very general things. Clearly, if what you are doing is just trying to get your message across more effectively, then it isn’t really consultation at all – it’s not about hearing what other people have to say; it’s just persuasion. People see through that, and it blows the credibility of the exercise. You have to be in good faith: you really want to know what consultees think. You can’t act on everybody’s view, because you’ll get a plurality of views back, but you want to hear what they feel strongly about and what they’re more relaxed about. In the paper, I do talk about some guidelines from the Royal Academy of Engineering and the Royal Society in a joint report on nanotechnologies 15 years ago. That’s one of the best things out there, I think, on how to consult. So consultation is certainly something that’s important.

 

DAVID FOWLIS: So, we’ve got to try and come up with a way that we can be flexible in how we regulate as technologies develop.

 

ROGER BROWNSWORD: Yeah.

 

DAVID FOWLIS: And then consult and make sure that people, whether that’s the public or the industry sector, are on board.

 

ROGER BROWNSWORD: Yeah. That sounds like slightly thin gruel, but if you don’t do those things the ramifications can be quite serious. You would regret not having done the simple things that need to be done.

 

DAVID FOWLIS: Roger, thank you very much for joining us today. It’s been extremely useful and interesting. If you haven’t already, please listen to our podcast with Alison Hook on international legal services technology regulation. We’ll also be releasing more podcasts and papers later this year, so please keep an eye out on our channels for those.