Can AI Outgrow Its Creators? Towards Self-Optimising Networks


Mohamed Shahawy · Published 25/03/2024 · 3 min read

The Road to Adaptive AI

The fusion of Neural Architecture Search (NAS) and Continual Learning (CL) led me into a rabbit-hole where AI's adaptability and autonomy are pushed beyond traditional boundaries. The intersection of these two domains is not just another merge of two disjoint areas; it's a potential paradigm shift in how we approach AI development.

Imagine an AI that evolves, learns continually, and adapts without human intervention, much like living organisms adapting to their environment. A deep dive into NAS and CL reveals a future where AI systems can autonomously refine their architectures, ensuring they remain optimal as they encounter new tasks or data. It's analogous to a human child growing, learning, and adapting to the world around them, but at an accelerated pace. We have formalised these frameworks as Continual Neural Architecture Search (CNAS).

Where Neural Architecture Search Meets Continual Learning

CNAS frameworks begin with NAS, a technique that automates the design of neural network architectures. The traditional workflow, in which layers and hyperparameters are tweaked by hand in a tedious trial-and-error process, could soon be behind us: NAS introduces methodologies where an AI designs its own successors, iteratively optimised over generations.
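To make that idea concrete, here is a minimal, framework-free sketch of an evolutionary NAS loop in the spirit described above. The search space (a list of hidden-layer widths), the mutation operators, and the stand-in fitness function are illustrative assumptions rather than the actual CNAS implementation; in practice, fitness would involve training and evaluating each candidate network.

```python
import random

# Toy search space: an architecture is a list of hidden-layer widths.
SEARCH_SPACE = [16, 32, 64, 128]

def random_architecture(max_depth=4):
    """Sample a random architecture."""
    depth = random.randint(1, max_depth)
    return [random.choice(SEARCH_SPACE) for _ in range(depth)]

def mutate(arch):
    """Perturb one layer's width, or add/remove a layer."""
    child = list(arch)
    op = random.choice(["widen", "add", "remove"])
    if op == "widen":
        child[random.randrange(len(child))] = random.choice(SEARCH_SPACE)
    elif op == "add":
        child.insert(random.randrange(len(child) + 1), random.choice(SEARCH_SPACE))
    elif op == "remove" and len(child) > 1:
        child.pop(random.randrange(len(child)))
    return child

def evolve(fitness, generations=10, population_size=8):
    """Simple keep-the-best-half evolutionary search loop."""
    population = [random_architecture() for _ in range(population_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[: population_size // 2]
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return max(population, key=fitness)

# Placeholder fitness: normally this would train and validate each candidate.
best = evolve(fitness=lambda arch: -abs(sum(arch) - 200))
print(best)
```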

The second half of CNAS is where it truly shines: Continual Learning capabilities. CL approaches allow these AI systems to learn from new data without forgetting previous knowledge, avoiding a common pitfall known as catastrophic forgetting.

By creating NAS / CL hybrid frameworks, we unlock the potential for AI systems that are not only self-improving but also adaptable to new challenges in a domain-agnostic manner. One popular Continual Learning approach is Elastic Weight Consolidation (EWC), which steers training towards the intersection of the "good solutions" for two tasks in their shared solution space; a minimal sketch of its penalty term follows the figure below.

Elastic Weight Consolidation

(credit: Kirkpatrick et al. 2016)
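For readers who prefer code, below is a minimal PyTorch sketch of the standard EWC recipe from Kirkpatrick et al.: estimate a diagonal Fisher matrix on the old task, snapshot the old weights, and add a quadratic penalty while training on the new task. The function names, the `lam` coefficient, and the loop structure are illustrative choices here, not the CNAS codebase.

```python
import torch

def estimate_fisher(model, old_task_loader, loss_fn):
    """Diagonal Fisher estimate: average squared gradients over the old task's data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for inputs, targets in old_task_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(old_task_loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, anchor_params, lam=1000.0):
    """Quadratic EWC regulariser: (lam / 2) * sum_i F_i * (theta_i - theta_A_i)^2."""
    penalty = torch.zeros(())
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - anchor_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# After finishing task A:
#   anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = estimate_fisher(model, task_a_loader, loss_fn)
# While training on task B, the regularised objective becomes:
#   loss = loss_fn(model(x_b), y_b) + ewc_penalty(model, fisher, anchor)
```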

Furthermore, these systems could also generalise learned knowledge to new, unforeseen tasks. For instance, when humans learn to perform a task, such as riding a bicycle, they can transfer that knowledge to similar tasks, like riding a motorcycle.

How CNAS Overcomes the Caveats of Traditional Methodologies

The current, widely adopted methodologies in Deep Learning are optimised for a single task with strict boundaries. When the data sources shift even slightly, we typically experience some form of Data Drift. Several types of drift can occur in a deep learning problem, but they all tend to have the same result: an abrupt drop in model performance.
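As a concrete (and admittedly simplified) illustration of what such a shift looks like in practice, the sketch below uses SciPy's two-sample Kolmogorov-Smirnov test to flag when a feature's live distribution has drifted away from the training-time reference. The synthetic data, the significance threshold, and the function name are assumptions made purely for the example.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, live, alpha=0.05):
    """Flag drift if a two-sample KS test rejects 'same distribution' at level alpha."""
    result = ks_2samp(reference, live)
    return result.pvalue < alpha, result.statistic

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time feature values
live = rng.normal(loc=0.4, scale=1.0, size=5000)       # shifted production values

drifted, stat = detect_drift(reference, live)
print(f"drift detected: {drifted} (KS statistic {stat:.3f})")
```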

Machine Learning Engineers are expected to re-tune models continually as the datasets' underlying distributions change. With the adoption of CNAS frameworks, we:

  1. Ensure scalability through large-scale autonomous optimisation
  2. Improve model reliability and overcome Data Drift
  3. Build more optimal models autonomously
  4. Democratise AI for non-expert usage

This fusion also addresses a critical challenge in AI: the resource-intensive nature of training and maintaining models. With NAS and CL working hand in hand, AI can self-optimise in a more resource-efficient manner, potentially making powerful AI tools more accessible and sustainable.

Looking Ahead

By leveraging and combining approaches from Neural Architecture Search and Continual Learning, we can develop more robust and adaptive agents. Through self-development and lifelong plasticity, CNAS aims to overcome numerous limitations posed by human factors and to produce more optimal models.

In doing so, Continual Neural Architecture Search could potentially introduce a new spectrum of applications where the functionality of the model is pliable even after the deployment phase, when remote access is often limited or unavailable.

The implications of this are vast and varied, from autonomous vehicles that adapt to new driving conditions in real-time, to personalised AI tutors that evolve with a student’s learning pace. The possibilities are as limitless as they are exciting.

Thanks for reading, and stay tuned for more updates on our journey with CNAS! 🌌


Tags: Neural Architecture Search · Continual Learning · Deep Learning
Written by Mohamed Shahawy

Mohamed Shahawy is a dedicated Ph.D. Candidate at Staffordshire University, where he is deeply immersed in the exploration of cutting-edge technologies within the realms of Neural Architecture Search, AutoML, Continual Learning, and Adaptive AI. His thesis, entitled **"Evolving Intelligence: Designing Adaptive AI Systems through Automated Neural Architecture Search and Continual Multimodal Learning"**, is supervised by [Prof. Elhadj Benkhelifa](@e.benkhelifa). His research journey is marked by a strong commitment to advancing the understanding and application of artificial intelligence to address complex challenges, and his work has been recognized through various scholarly contributions highlighting his innovative approaches to leveraging AI for real-world impact.

Throughout his academic career, Shahawy has made significant contributions to the scientific community, evidenced by his involvement in various research projects and publications. Notably, his work on "Mining and analysis of air quality data to aid climate change," co-authored with L Babu Saheer and J Zarrin and presented at AIAI 2020, showcases his commitment to utilizing AI for environmental sustainability and underlines the potential of artificial intelligence in analyzing air quality data to combat climate change. Additionally, his exploration of the intersection between Neural Architecture Search and Continual Learning, as well as his development of the HiveNAS framework using Artificial Bee Colony Optimization, reflects his dedication to pushing the boundaries of AI research.

Shahawy's versatility in AI research is further exemplified by his work on a self-supervised approach for urban tree recognition in aerial images, presented at the IFIP International Conference on Artificial Intelligence Applications, which highlights his ability to apply AI solutions to diverse domains, from environmental conservation to urban planning. His contribution to the development of SIoTSim, a simulator for the Social Internet of Things, underscores his foresight in anticipating and shaping the future of IoT systems. As he continues his journey at Staffordshire University, his research not only contributes to the academic community but also paves the way for practical AI applications that address pressing global issues.