Regulating AI: The Moral and Practical Imperatives
Written by
Martin Bell
Tech & Digital Policy Manager at Panasonic Europe B.V.
Artificial Intelligence will be the most important innovation society has embraced since the steam engine. Although AI technologies are already widely available and are having a dramatic impact on society and industry, the true potential of AI is yet to be realised or even fully understood. However, as with most major technological developments, there are potential risks, and this is the challenge we face today. Both as a society and at Panasonic, we are discussing the opportunities and challenges posed by AI, with the aim of arriving at a common governance structure that allows industry to continue its rapid innovation while also ensuring security, trust, safeguards and, in some situations, outright boundaries on AI.
In light of this rapidly changing world, regulators are moving quickly yet carefully to develop and implement rules around AI. Take, for example, the European Union (EU): its co-legislators have recently reached a political agreement on the first ever legislation on AI, more commonly known as the AI Act.
The AI Act takes a risk-based approach to AI systems deployed on the EU market. This approach ranges from the outright prohibition of certain AI systems to transparency and control requirements for others.
The EU's approach to AI regulation is twofold: to protect EU citizens' fundamental rights and shield them from harmful or discriminatory AI practices, and to foster the development and use of AI across member states, thereby positioning Europe as a global leader in trustworthy AI.
Ethics is at the heart of Europe's approach to AI. Discussions on AI's potential are invariably intertwined with concerns about its moral implications, social impact, and inherent risks.
In April 2019, the EU published the "Ethics Guidelines for Trustworthy AI." The guidelines set out seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
These Ethics Guidelines were a foundational document that set the tone for future European AI regulation. Their principles have since formed the backbone of the proposed AI Act.
In addition, the United Nations has started its own detailed journey into proposed international AI governance models. Panasonic Connect was invited to attend an exclusive event as part of the 78th Session of the United Nations General Assembly. The event, titled 'Governing AI for Humanity', marked the beginning of a UN-led forum discussion on best practices, opinions and next steps for the governance of AI.
This road to AI governance is essential to get right. The benefits of AI for society and industry are truly limitless, but they come with risks. It is imperative that all voices are heard during the discussions, including not only industry but also academia, civil society and NGOs. Advocacy groups, technology corporations, and members of civil society have all played critical roles in framing the narrative and ensuring a balanced approach.
It is through such open discussion that we at Panasonic have been able to hold detailed exchanges on the opportunities and challenges posed by AI, and on the measures Panasonic is currently taking to ensure that our customers will always come first.
In addition, it is imperative that training and upskilling remain a top priority, keeping pace with the evolution and adoption of AI within our societies. We must ensure that the workforce is well equipped to adapt to this new industrial revolution. Panasonic in particular has placed great emphasis on this through multiple training sessions and workshops covering not only the opportunities brought about by AI but also its challenges. Accordingly, Panasonic has implemented extensive training for all employees, regardless of role or level, as well as role-specific sessions in areas such as HR, Marketing, and Sales, ensuring a thorough understanding of AI across the organization.
The complex road towards AI legislation and ethical agreements involves a delicate balance between innovation and protection. While AI has the potential to transform industries, improve lives, and drive economic growth, it also carries threats. Governments seek to ensure that as AI technologies grow, they do so in a way that prioritises human rights, data protection, and social well-being. The ultimate aim is for AI systems to serve society, respect human rights, and innovate responsibly.
The role of AI in our future cannot be overstated, especially when it comes to global challenges like climate change. The EU's AI Act, which is nearing completion, will set clear rules for AI, protect users, and ultimately strike a balance between protecting EU citizens and allowing AI innovation in the EU to grow. At Panasonic, we believe that the true power of AI lies not just in its innovation but in its ethical application. Every AI solution we develop is rooted in our core ethical principles, ensuring the utmost protection and value for our customers, consumers, and partners. For Panasonic, it's not just about leading in technology, but leading with responsibility and integrity.