Discover the importance of ethics in digital technology. Explore its impact, ethical dilemmas, and solutions.
Digital technology and its associated devices, processes and systems have transformed the modern world, not least the way we work, relax, travel, talk, shop and save. The potential and power are undeniable, but as a wise man once said: “With great power comes great responsibility”.
While it is most famously associated with Spider-Man, this proverb is no less relevant in the context of digital technology. As the pace of technological change continues to increase, we explore the importance of ethics and why it is essential to balance the benefits and harms digital technology can bring.
People talk about a digital revolution because of the scale, speed and impact digital technology advancement has made on society and business. Consider digital services and technologies, such as artificial intelligence (AI), big data, cloud computing, the Internet of Things (IoT) and robotics.
The various systems and devices that incorporate these technologies have changed our personal and professional lives dramatically. In many cases these have been changes for the better. But they also raise serious ethical issues. Social media platforms connect us, but they also commodify our data and render us vulnerable to misinformation; cloud computing services let us access files from anywhere, anytime, but they are powered by carbon-hungry data centres; gig economy apps have made it easier to order a taxi or a takeaway, but they have also pushed down wages and eroded workers’ rights.
Did you know that our ancestors spent the best part of 2.4 million years learning how to control fire and use it for cooking? It was 66 years from the Wright Brothers’ first flight (1903) to the moon landing (1969), and 43 years from the discovery of DNA’s structure (1953) to the cloning of Dolly the Sheep (1996).
The digital technology timeline is even more dramatic. We’ve moved from the first digital computers built in the 1940s to the first web browser and website in 1991 and the iPhone in 2007. In less than 10 years, AI has developed to achieve language and image recognition capabilities with some predicting that human-level AI systems will be a reality by the 2060s.
The problem with too much technological change, too fast, is that it may not give us the time to reflect on the forces driving these changes - or their longer-term impacts.
AI advocates might promise that machine learning will eliminate human error and risk, reduce costs, and accurately diagnose diseases and injuries, for example. But these promises require serious scrutiny. And we are already witnessing the harmful impacts AI can have in the here and now, from the massive amounts of natural resources required to keep generative AI systems running to the mental toll on the Kenyan ‘clickworkers’ who spend their days tagging toxic content used to train AI filtering mechanisms. Then there are privacy violations from deepfakes - like the disturbing AI-generated videos of the deceased toddler James Bulger that caused further heartache and distress to his mum in 2023.
According to the United Nations (UN), “responsible technology is an approach that aligns technological advancements and organizational conduct with the wellbeing of people and the planet.”
Also referred to as ethical technology, it essentially calls for foresight, planning, and in-depth consideration of the consequences of our decisions and actions in relation to technology – before we make them.
The purpose of responsible technology is to identify and mitigate the risks of implementing technology, such as privacy violations, discrimination or inequality. As part of the process, individuals are expected to seek out best practice and guidance.
The UN, for example, as part of its own operations, has produced Principles on Personal Data Protection and Privacy and the Ethical Use of AI, as well as a Responsible Tech Playbook.
In 2009, Professor Rafael Capurro first coined the term ‘digital ethics’ when describing the impact of digital information and communication technologies on societies and the environment at large. The main concerns – intellectual property, privacy, security, information overload, digital divide, gender discrimination, and censorship – still ring true today.
One of the biggest risks in digital technology centres on data management and data integrity. The way data is collected, managed, stored and used must be designed to safeguard individuals’ privacy rights and protect personal data. How many times has personal data been lost or leaked from all sorts of organisations? Examples range from genetic testing service 23andMe (2023) and project management software platform Trello (2023) to social media platform Facebook (2021) and Capital One bank (2019). And even when data doesn’t go astray, there are legitimate questions to be asked about how our data is being used and who stands to benefit.
More recently, the World Economic Forum has highlighted the importance of digital technology in supporting the newer agenda of sustainability and minimising environmental impact. A 2022 survey of 400 executives from various industries and regions found that digital technology was already being used to measure and track sustainability progress, optimise the use of resources, and reduce greenhouse gas emissions.
The King’s online Digital Futures MA explores both the positive and negative effects of digital technology. You’ll discuss important issues around ethics, risks, regulation, sustainability and more, covering digital technologies across different industries and contexts. You’ll develop the creativity and critical thinking to lead and optimise digital innovation and development.
Do you want to develop the digital skills predicted to be among those in greatest demand in coming years? On our MA, you’ll learn alongside others eager to build a future where technology betters our lives: