One simple example is the disposability of consumer products built on rapidly developing technologies, whereby items that need specific batteries simply fade out of circulation as manufacturers cease production when demand slumps. While this makes eminent sense to the manufacturers’ shareholders, it can be infuriating for consumers who may have invested heavily in, say, photographic or hi-fi equipment. On a recent holiday in the Outer Hebrides, we stumbled on a shop that mended all types of electrical equipment – an increasingly rare find in today’s throwaway society – but these everyday frustrations are nothing compared with the bigger ethical and moral considerations we must wrestle with if we are to assimilate accelerating technology into people’s real lives.
The technology supporting driverless cars is a good example. In modern life, we have developed a powerful impulse to find someone to blame for every adverse eventuality, and when fatal accidents involving self-driving cars occurred in Florida and Arizona, manufacturers involved in the underlying technology were forced to suspend testing; Uber was later banned from autonomous vehicle testing in Arizona. Yet these concerns have not materially slowed the global pace of autonomous driving technology. Perhaps this is because the manufacturers are focused on the bigger prize of securing a precocious grasp on the commercial rewards of an inevitable development, one in which the technology will deliver political and social solutions to the ‘bigger’ problems of road safety and ecological damage that blighted the twentieth century. The question then becomes ‘Quis custodiet ipsos custodes?’ – ‘Who will guard the guards themselves?’ The Roman poet Juvenal succinctly phrased a concern that is even more pressing now than it was two thousand years ago. For today’s rapidly developing technologies, who will set out the moral and ethical framework necessary to protect society?
To maintain standards in medicine and other critical fields, we now rely on evidence-based medicine (EBM). Until comparatively recently, a commonly used definition was ‘the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients’ – but this presupposed that decision making was hierarchical and then imposed on the patients.
Perhaps not surprisingly, in 2005 a working party looking at EBM came up with a more rounded ambition in the ‘Sicily statement on evidence-based practice’, which included the following definition: ‘Evidence-Based Practice (EBP) requires that decisions about health care are based on the best available, current, valid and relevant evidence. These decisions should be made by those receiving care, informed by the tacit and explicit knowledge of those providing care, within the context of available resources’ – thus ensuring that those receiving the care have an equal opportunity to determine the outcome.
Of course, as in any field, there are those who argue for and against any framework imposed to determine how advances can be applied in real life, and one ethical conundrum continually raises its head: just because we can do something, should we?
This applies just as much in the field of data sharing and storage as it does in other, seemingly more dramatic areas. Internet companies have followed Facebook’s lead in using the data they amass to make informed decisions – a commercial form of evidence-based practice – and the approach is now widely used by supermarkets, airlines and, most recently, Nike Inc., which has also acquired a data analytics operation because, quite clearly, data is the currency of the future. However, how data is stored, protected and used – securely and transparently – is a matter both of commercial sensitivity and of moral resilience.
On a small scale, receiving emails from companies that bought your data on a list sold by a company that acquired it for another purpose is both irritating and invasive. At the other end of the scale, having one’s personal or financial data stolen from a trusted partner such as a bank is worrying and potentially damaging. Taking the argument a stage further: in a world where Alexa and her friends monitor much more than our purchasing patterns – merging those observations with the music we like, the TV programmes we watch, what we store on our laptops and other devices, and which apps and websites we favour – we are all exposed to a rewriting of the concept of personal privacy. None of this is an issue if those who collect and store our data behave with consideration and within a moral framework we would be happy to support. But, as with driverless cars, where the commercial rewards of developing technologies have driven the industry forward without a backwards glance at the ethical considerations – which should be properly debated not just by those in the industry but also by those affected by their decisions – the big moral and ethical questions of the data industry are not being debated in a public forum. As these are issues that affect us all, every minute of our days, there has never been a time when trust in our data partners has been more critical.
Ross Tiffin, Social Observer