We recently attended CogX Festival’s AI & DeepTech Summit 2023, billed as the world’s leading festival of inspiration, impact, and transformational change, at The O2 in London. Three days of keynotes and panel discussions from experts from all over the world, all with one question in mind: how is AI going to affect our future? Here’s a summary of what we heard, written in collaboration with ChatGPT, which helped synthesise the team’s notes from the conference and a group discussion.
Opportunities vs risks
The first step in navigating the AI landscape is comprehending the inherent risks. The duality of AI as a tool for both constructive and harmful purposes necessitates careful consideration. On one hand, AI promises to revolutionise industries, boost productivity, and solve complex problems in healthcare, education, and beyond. On the other hand, there are concerns regarding bias, privacy, job displacement, and the ethical implications of AI systems making critical decisions.
This duality demands a delicate balance: harnessing AI’s opportunities for societal benefit while implementing rigorous regulations, ethical guidelines, and responsible practices to mitigate its associated risks, ensuring that the technology serves humanity’s best interests rather than causing harm or exacerbating inequalities.
Importance & urgency of policymaking
Policy-making is the cornerstone of managing AI’s impact on society. While businesses and organisations increasingly take control of AI applications, policy frameworks must evolve to keep pace. These policies should address data privacy issues, algorithm transparency, and ethical AI use. It is an ongoing process that demands time and dedication, but it is vital for ensuring that AI benefits all of humanity. However, managing this development is no small feat.
One of the main challenges of AI is its lack of borders. AI technologies transcend geographical boundaries, making them a global phenomenon. This borderless nature of AI has significant implications for issues such as international collaboration, data privacy, and regulation. It highlights the need for coordinated efforts among nations to establish ethical standards, legal frameworks, and guidelines that can effectively govern the global AI landscape while ensuring that AI technologies are developed and deployed responsibly and in a manner that respects individual rights and societal values.
It’s essential to recognise that policy-making will likely always lag behind technological developments. AI’s rapid evolution makes it challenging for regulations to keep pace. However, this doesn’t mean that policymakers should relinquish their efforts. Instead, it emphasises the need for flexible and adaptable policies that can evolve with the technology. A policy sandbox, suggested by one of the presenters, could offer a safe space for policymakers, industry leaders and others involved in AI development to try out new policies and learn from mistakes in a way that minimises their impact. This would call for collaboration at both global and local levels: different governments will want different degrees of local control, yet some form of global agreement still has to be reached.
The UK will host the very first global summit on Artificial Intelligence this November. Many speakers noted that the scale of the event and the parties involved clearly demonstrate that we are at a pivotal moment in AI development. We are in a position to maintain agency over this rapidly developing technology if we establish global policies that ensure its ethical application.
Call for responsible business and practitioners
Much of the development and deployment of AI technologies rests with professional businesses and organisations. These entities are the driving force behind AI innovation, and their actions significantly influence AI development. Therefore, these stakeholders must act responsibly and ethically, recognising their pivotal role in shaping AI’s impact on society.
Mustafa Suleyman, co-founder and former head of applied AI at DeepMind, argued that businesses, which currently seem to be driving AI development and acting as guardians of its ethical implications, should also take responsibility for ensuring a socially responsible approach to AI development and commercialisation, rather than leave the matter entirely to policymakers.
An interesting analogy from Reid Hoffman compares this stage of AI development to accelerating a car. When accelerating, the driver should become more alert, watch for signs of potential risk, and respond fast and thoughtfully. We don’t accelerate a car with our eyes closed, waving our hands manically in the air, so why would we do so when developing new AI applications? Everyone (every company) building something with AI should take a shared-responsibility approach to what they hold in their hands. We all know this is a powerful tool that can bring both mass disruption and great benefit, so we should stay alert and act thoughtfully.
Exciting AI futures
The conference also gathered an exciting crowd of business people, researchers, scientists and AI experts from around the world. We spotted a few trends that really interest us and could bring change to our work at Stby.
The healthcare sector is one of AI technology’s most promising beneficiaries. AI’s capacity to swiftly process and analyse vast volumes of medical data has the potential to catalyse a profound transformation in healthcare, making it more efficient, personalised, and accessible. This transformative power translates into tangible benefits, from drug discovery and diagnosis to outbreak prediction, social care, and tailored treatment or preventative plans for both physical and mental health. AI’s ability to analyse natural language also offers vast potential for new ways of interacting directly with doctors, nurses, carers and patients. However, how do AI tools work with human emotions? How do they deal with the anticipation, rejection and bias of healthcare professionals, patients, and carers? While we are not healthcare experts, introducing human factors into a fast-evolving technology and finding that middle ground is something Stby has been exploring with various clients for the past 20 years. We very much look forward to joining the conversation on how to maximise the collaboration between AI and healthcare, for the benefit of countless individuals, together with our future clients and collaborators.
The opportunity for AI in education is substantial, with AI technologies poised to revolutionise the learning experience. AI-powered tools can personalise education by adapting content and pacing to individual student needs, fostering more profound understanding and engagement. Intelligent tutoring systems provide real-time feedback and assistance, improving student performance and retention. Additionally, AI-driven analytics can help educators identify at-risk students early, enabling timely interventions and support. As AI continues to evolve, it offers the potential to make education more accessible, equitable, and effective, shaping a future where every learner can access tailored, high-quality educational experiences that cater to their unique strengths and challenges.
As noted earlier, the AI tools that students have access to are developing at a rate that education frameworks and guidelines cannot keep up with. Jenny Taylor MBE, IBM UK Foundation Leader, stated that a lecture may be outdated within just a few weeks. One way of mitigating this challenge is for universities and schools to build better relationships with businesses and organisations that have the knowledge, experience and capacity to facilitate this learning. Degree apprenticeships are one approach IBM is championing, and it encourages other tech-focused businesses to do the same.
The AI-driven personal assistant is no longer a vision of the future but a reality. Looking beyond the technical hiccups of the current AI personal assistants that live on our phones and home devices, we are also forced to consider what these tools reflect about us, as individuals and as a society.
The idea of an AI assistant that does not pretend to be another human is an interesting one. One business, Inflection AI, is exploring how to create a respectful AI that helps its human user set their boundaries, with its AI assistant product, Pi: an AI with a moral compass that it neither imposes on the user nor surrenders under abuse, yet is highly tailored to the user’s needs and acts to reflect the best of humanity. We don’t know if this is possible, but the intention to build a respectful AI is already a big step towards an open and continuous conversation about AI ethics.
By Ed Louch, Jeanne Renoult, Qin Han, ChatGPT