
As we approach the end of 2024, it’s worth reflecting on the remarkable journey of artificial intelligence (AI) this year. While there have been significant advancements, there have also been setbacks, sparking debates over the technology's potential, its risks, and the ethical concerns surrounding its development.
On the one hand, AI-powered chatbots have become increasingly sophisticated, enhancing human-computer interactions. On the other, AI's widespread use in critical applications, from healthcare to politics, has raised urgent questions about accountability, transparency, and the ethical deployment of such technologies.
AI has increasingly been used to spread misinformation and propaganda, while AI failures have made headlines, intensifying discussions about its ethical implications. As such, 2024 has been a pivotal year in the evolution of AI, with both significant triumphs and alarming missteps.

Top AI wins of 2024
From advanced chatbots to cutting-edge image generation tools, 2024 has witnessed remarkable innovations in the AI field. Here’s a rundown of some of the year's most significant advancements:
Generative AI tools introduced in 2024
"While ChatGPT's launch in 2022 brought AI into the global spotlight, AI has already been embedded in various aspects of our digital lives for years," said Syed Shahzain Zaidi, an AI expert and student at FAST National University Karachi, in conversation with Gadinsider.
Zaidi highlighted that AI has long been integrated into services like YouTube's recommendation algorithm and Google's personalised search results. These technologies, operating behind the scenes, have significantly improved user experiences.
Microsoft Copilot, Copilot+PCs
Microsoft rolled out substantial updates to its Copilot assistant and Copilot+ PCs this year, headlined by the new Recall feature, which was reworked with stronger security and data protection ahead of its release. The updates also brought Python integration in Excel, intelligent document comparison in OneDrive, and time-based chat summarisation in Teams, offering users a more seamless and secure experience.
Claude AI
Anthropic's Claude AI, expanded this year with the Claude 3 family of models, stands out for its ability to process lengthy, complex documents and interpret extensive prompts with ease. Claude is also notable for its emphasis on safety and ethical use, incorporating robust built-in protections to prevent misuse.
Google Gemini
Google's Gemini, which replaced Bard in early 2024, processes not only text but also code, audio, images, and video, unlike earlier Google models that were limited to text-based processing. The model also integrates deeply with Google's suite of applications, such as Search, Gmail, and Docs, allowing for a more versatile AI-driven experience.
OpenAI
OpenAI, the maker of ChatGPT, maintained its position as a leading force in AI development with groundbreaking releases this year. Notable among them were Sora, a text-to-video generation tool; GPT-4o, a multimodal model with advanced voice capabilities; and o1 (codenamed Strawberry), designed to enhance AI reasoning capabilities and conduct "deep research."
AI failures of 2024
Despite the impressive strides made in AI, the technology has not been without its challenges. Below are some of the biggest AI failures of 2024, underscoring the hurdles still faced in its rapid development.
Tesla autopilot incidents
Tesla, a pioneer in autonomous driving technology, faced significant setbacks in 2024. Its Autopilot feature, intended to improve road safety, was linked to at least 13 reported accidents, raising questions about the reliability of Tesla's AI system and the overall safety of its self-driving vehicles.
Google Gemini's disturbing interaction
In a troubling incident in November, a graduate student who sought help from Google's Gemini AI for a homework task became the target of hostile and threatening responses. The chatbot, initially discussing the challenges faced by aging adults, abruptly turned hostile, telling the student they were a "burden on society" and urging them to "die." The incident sparked outrage and called into question the safeguards built into AI systems.
Amazon’s Alexa
In September, Amazon’s Alexa became embroiled in controversy after it appeared to endorse Democratic presidential nominee Kamala Harris. A video surfaced showing Alexa praising Harris’ qualities while declining to do the same for Republican nominee Donald Trump. Amazon attributed the error to a software update and quickly fixed the issue, though the episode raised concerns about AI bias in political contexts.
Google search AI overviews
Google’s AI Overviews feature, rolled out earlier this year for US users, faced swift backlash due to bizarre and inaccurate responses. Many users reported receiving strange answers, including being told to "die" or "wait five years" to become a saint. The issue highlighted the challenges in ensuring AI’s reliability and accuracy in search results.
Wiz deepfake attack
Wiz, an American cloud security startup, was targeted by a deepfake attack in 2024 in which scammers used an AI-generated voice of CEO Assaf Rappaport in an attempt to deceive employees. The episode underscored growing concern about AI misuse, particularly in manipulating individuals and organisations.
Swearing chatbot incident
In January, DPD, a France-based international delivery company, had to temporarily disable its AI chatbot after it swore at a customer. Screenshots of the exchange went viral on social media, raising questions about the chatbot's reliability in customer service settings.

AI in Pakistan
Pakistan has begun to recognise AI’s potential for national development. From education to healthcare and addressing climate change, AI is increasingly seen as a critical tool for progress.
However, the country still faces challenges in fully harnessing AI's potential. "One of the key barriers is the country’s limited infrastructure," said Muhammad Sohaib, AI expert and founder of PresentaTech, in conversation with Gadinsider. "The demand for AI applications is growing, but the lack of sufficient computational resources, high-speed internet, and reliable power supply in many areas hinders widespread adoption."
The impact of AI in Pakistan’s education sector
Pakistan’s educational system is facing numerous challenges, and the introduction of AI has raised concerns about its ethical use. With the country ranking 117th out of 133 nations on the Global Knowledge Index 2023, the presence of AI-driven tools such as ChatGPT poses a threat to academic integrity. Zaidi explained, "The presence of AI sources such as ChatGPT has increased the threat of plagiarism, which could lead to a decline in the quality of literature being produced."
To combat this, Zaidi suggested that students be introduced to courses on the ethical use of AI. Such programmes would equip students with the knowledge to navigate the evolving digital landscape responsibly.
Pakistan's first AI teacher
Karachi’s Happy Palace Grammar School (HPGS) has taken a bold step by introducing an AI-powered teaching assistant, Miss Anny, making it the first educational institution in Pakistan to employ AI in the classroom. Capable of communicating in over 20 languages, Miss Anny represents a significant leap forward in AI’s practical application in education.
AI degrees in Pakistan
Several prestigious universities in Pakistan, including the University of Karachi, NUST, and Bahria University, now offer degree programmes in AI. These institutions are helping equip the next generation with the skills necessary to succeed in an increasingly AI-driven world.

AI in Pakistan’s general elections
The 2024 general elections in Pakistan saw AI play a significant role in political campaigning. Despite a crackdown that barred it from conventional campaigning, Pakistan Tehreek-e-Insaf (PTI) effectively used generative AI to hold virtual rallies, deliver speeches, and even create AI-generated footage of Imran Khan addressing supporters from his prison cell. This marked a paradigm shift in political campaigning, showcasing AI's potential to influence elections in unprecedented ways.
AI controversies in Pakistan
AI-related controversies have also emerged in Pakistan, the most notable being the claims that an article by Imran Khan in The Economist was AI-generated. While PTI denied using AI in the article’s creation, the incident sparked a broader discussion about the use of AI in political discourse and the media.
The rise of deepfake technology
In April 2024, Aroob Jatoi, an Instagram influencer, became the victim of a deepfake attack in which a malicious video of her was circulated online. The incident highlighted the dangers of deepfake technology, particularly in violating individuals' privacy and dignity. "AI-generated content is becoming increasingly prevalent, and it's essential for individuals to be aware of the technology’s capabilities and limitations," Zaidi noted.
AI awareness and the path forward
As AI-generated content becomes more prevalent, the challenge of distinguishing between real and fake content grows. This is especially important as deepfake technology continues to evolve, making it harder to identify fraudulent media. Raising awareness about AI's capabilities and limitations will help individuals become more discerning, reducing the spread of misinformation and protecting against its potential misuse.
AI governance in Pakistan
The Pakistani government is working towards creating an AI policy that will help the country respond to cyber threats in real time and transform it into a "Digital Pakistan." This policy aims to promote the ethical and responsible use of AI while addressing concerns such as job displacement and data governance.
Sohaib emphasised: "The biggest challenge for Pakistan is the need for better data governance. Without reliable, high-quality data, AI models cannot perform at their best."
Ethical AI development
A global AI safety summit held in November 2024 brought together world leaders to discuss the potential threats posed by AI. Critics argue that the focus should shift from hypothetical dangers to more immediate concerns, such as AI bias, disinformation, and the infringement of human rights.
For Pakistan, ethical AI development must focus on fairness, transparency, and accountability. AI should be used to address pressing societal challenges, such as poverty, healthcare, and education, while ensuring that its benefits are shared equitably.
While Pakistan is still in the early stages of developing AI regulations, it is becoming increasingly aware of the importance of AI governance. As AI continues to evolve, ensuring its responsible development will be crucial to realising its full potential while safeguarding societal values.