The Faculty of Engineering, Computing and the Environment is seeking to make a substantial investment in new one-year full-time MSc by Research and three-year full-time PhD studentships for January 2025 entry. Applications for specific pre-approved projects will be given priority for funding (see Project proposals below). In addition, the Faculty welcomes applications for projects advertised by its staff on the Find a PhD website.
A project proposal is not required with the application, but you must upload a Word document with the title of the selected project and supervisor's name instead (see How to Apply section below).
Those shortlisted for interview will need to send a full proposal and timeline before the interview. The timeline should be for a three-year PhD project, but it should have a tangible outcome by the end of the first year.
After the interviews, three-year full-time PhD studentships will be offered to the strongest candidates. Other candidates may be offered one-year MSc by Research studentships instead that may be extended for a further two years, subject to the student's performance and funding availability.
Applications for MSc by Research only will not be considered.
Applicants are strongly encouraged to contact the project's first supervisor to discuss their interest before making an application.
Closing date for applications: 13:00 on 7 November 2024
You must include the following in your application:
If you are applying for more than one project, you must submit only one application form, but include the titles of all the projects for which you are applying. If more than one application form is received, only one of them will be considered.
Please ensure that all required documents are submitted together with your application form as we are unable to consider incomplete applications or documents sent separately.
If you have not heard from us within four weeks of the closing date, your application has been unsuccessful.
Please note:
Please see the project proposals for PhD study, listed under the Faculty's Schools.
With the promise of unprecedented data speeds, near-zero latency, and ubiquitous connectivity, 6G networks are poised to revolutionise industries ranging from healthcare to autonomous vehicles. This represents a pivotal moment in wireless communication history, offering a canvas for innovation that few could have imagined. However, with great power comes great responsibility. The surging demand for wireless connectivity and the proliferation of energy-intensive devices pose profound technical and environmental challenges.
By focusing on the millimetre wave (mmWave) band and energy efficiency, this project aims to provide targeted and impactful contributions to improving the performance and sustainability of networks, especially in the context of IoT and emerging high-data-rate applications, with a particular focus on advanced medical wearable devices. Such technology will enable a new generation of applications, including remote health monitoring, early disease detection from health data collected in real time, and wearable implants.
At the intersection of mmWave communication, energy efficiency, and healthcare, this research not only addresses critical challenges in wireless technology, but also has the potential to save lives, improve healthcare outcomes, and enhance the quality of life for individuals worldwide.
This PhD project will centre its research on the following specialised objectives, with a primary focus on the mmWave band and energy efficiency in wireless communication:
The project will involve theoretical research, simulations, and practical implementations. Machine learning algorithms, especially deep learning, will be leveraged to tackle the unique challenges of wireless communication. Real-world datasets and experiments will validate proposed solutions in the context of medical wearable devices.
This research initiative seeks to contribute to the advancement of wireless communication technology through the application of machine learning. The anticipated outcomes include improvements in spectrum efficiency, network security, and energy-efficient protocols for emerging IoT applications, especially in the area of medical wearable devices.
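As a minimal illustration of the energy-efficiency metric such work typically optimises (bits per joule), the sketch below computes it from Shannon capacity for a single link; the bandwidth, SNR and power figures are illustrative assumptions, not project parameters.

```python
import numpy as np

def spectral_efficiency(snr_db: float) -> float:
    """Shannon spectral efficiency in bit/s/Hz for a given SNR in dB."""
    return np.log2(1 + 10 ** (snr_db / 10))

def energy_efficiency(bandwidth_hz: float, snr_db: float,
                      tx_power_w: float, circuit_power_w: float) -> float:
    """Energy efficiency in bits per joule: achievable rate / total consumed power."""
    rate_bps = bandwidth_hz * spectral_efficiency(snr_db)
    return rate_bps / (tx_power_w + circuit_power_w)

# Hypothetical mmWave link: 400 MHz of bandwidth, 10 dB SNR,
# 0.2 W transmit power and 0.8 W circuit power.
ee = energy_efficiency(400e6, 10.0, 0.2, 0.8)
print(f"Energy efficiency: {ee / 1e9:.2f} Gbit/J")
```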
Supervisor: Dr Neda Ahmadi
This PhD opportunity explores the exciting intersection of AI and Iris Recognition, aiming to revolutionise biometric security for a safer world. In an era where security threats are continually evolving, biometric authentication methods have gained prominence as robust safeguards for protecting sensitive information and assets. Among these methods, iris recognition stands out as an exceptionally secure and efficient approach.
While iris recognition has indeed matured as a technology, it continues to confront multifaceted challenges and opportunities. These challenges encompass several critical domains, including:
By addressing these multifaceted challenges and leveraging the power of AI, this self-funded PhD project seeks to not only advance iris recognition technology but also contribute to the broader field of biometric security. Through innovation and research, we aim to enhance the security, fairness, and accessibility of iris recognition systems, ultimately making biometric authentication more reliable and inclusive for a safer world.
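As a point of reference for the matching stage that underpins classical iris recognition, the sketch below compares two binary iris codes with a normalised Hamming distance; the codes are synthetic, the decision threshold is illustrative, and occlusion masks and the feature-extraction step are omitted.

```python
import numpy as np

def normalised_hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary iris codes."""
    assert code_a.shape == code_b.shape
    return np.count_nonzero(code_a != code_b) / code_a.size

# Toy binary iris codes (a real system would derive these from Gabor
# filtering of a segmented, normalised iris image).
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, size=2048)
probe = enrolled.copy()
probe[rng.choice(2048, size=200, replace=False)] ^= 1  # simulate acquisition noise

hd = normalised_hamming_distance(enrolled, probe)
print(f"Hamming distance: {hd:.3f}")
print("Match" if hd < 0.32 else "No match")  # illustrative threshold only
```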
Research Objectives:
This PhD project will focus on the following broad research objectives:
Methodology:
The methodology for this research will encompass:
Supervisor: Dr Neda Ahmadi
Connected and automated vehicle (CAV) technology plays a significant role in the transportation revolution under way in the UK today, offering improved road safety, higher efficiency, lower emissions and a better user experience. To maintain the UK's world-class CAV research base, the government and industry have invested £440m in more than 90 projects involving over 200 organisations. As a result, the UK industry could be worth £62bn by 2035, representing 40% of the entire automotive industry.
CAVs are considered "data centres on wheels". More than 100 electronic control units (ECUs) for different functions are controlled and managed by various software; the digitisation of CAVs brings new challenges beyond the conventional one-off security audits used in automotive security. Traditional cyber security approaches are no longer appropriate because the threats are changing and moving fast. In this project, we propose digital twins as a cyber security analysis tool, well suited to realising a holistic approach to the cyber-physical system security of CAVs.
Digitisation in the automotive industry, especially in connected and autonomous vehicles, has made the cyber security of CAVs a central topic in vehicle software management. Several international regulatory bodies are currently working on security guidelines for CAVs. The benefits of autonomous systems have been explored extensively and continue to be developed; however, this rapid advancement has come somewhat unchecked. At the same time, vehicles are controlled by different software with increasingly autonomous functions, such as driverless operation, yet understanding of the broader threat landscape and the attack surfaces remains limited. OEMs must establish processes and systems that follow the new cyber security requirements as quickly as possible. The "Digital Twin" can provide significant support in this area. This proposal will help ensure the security of new functions delivered by OEMs in CAVs.
This proposal complements the Hub and Nodes' work by bringing in expertise in digital twins coupled with cyber security; an examination of the expertise and departments in the Hub and Nodes shows none from economics. The proposal has touchpoints with the Security, Trust and Resilience nodes.
Supervisor: Dr Hu Yuan
Large Language Models (LLMs) have revolutionised various fields with remarkable performance across diverse tasks. However, their training often relies on massive datasets scraped from the internet, which may contain sensitive personal information. This raises significant privacy concerns, as LLMs can inadvertently memorise and regurgitate such information. Differential Privacy (DP) is a promising framework to mitigate these risks.
The Challenge: Balancing Privacy and Utility
While traditional DP mechanisms offer a way to quantify and limit privacy risks by adding noise to the training data, directly applying them to LLMs can significantly degrade their performance. This necessitates research efforts to develop efficient DP algorithms that minimise this performance loss while maintaining robust privacy guarantees. This could involve exploring novel noise addition techniques or selectively applying DP mechanisms to areas with minimal impact on model utility.
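A minimal sketch of the core DP-SGD idea referred to above (per-example gradient clipping followed by calibrated Gaussian noise), shown here for logistic regression in NumPy rather than an LLM; the clip norm, noise multiplier and toy data are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One differentially private SGD step for logistic regression.

    Per-example gradients are clipped to `clip_norm`, summed, and perturbed
    with Gaussian noise scaled by `noise_multiplier * clip_norm`.
    """
    if rng is None:
        rng = np.random.default_rng()
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X            # shape (n, d)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X)

# Toy data: 64 examples, 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
print("Trained weights:", np.round(w, 2))
```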
Optimising the Trade-off
Finding the optimal balance between privacy and utility is crucial. This requires a two-pronged approach:
Adaptive Mechanisms for Enhanced Protection
Adaptive DP mechanisms, which adjust the level of privacy based on data sensitivity or query context, hold promise for LLMs. Research in this area could explore dynamic privacy budget allocation methods, where more sensitive data or queries receive stronger privacy guarantees.
Beyond Training: Ensuring Privacy Throughout the Pipeline
While most DP research focuses on the training phase, ensuring privacy during model fine-tuning and inference is equally important. Developing mechanisms to apply DP during these phases can prevent privacy leaks from fine-tuned models or models generating predictions on sensitive data.
Addressing Scalability Challenges
Implementing DP in the context of LLMs presents significant computational challenges due to the immense size of models and datasets. Research is needed to develop scalable DP solutions that can be efficiently implemented without prohibitive computational resources.
Impact and Applications
Implementing DP mechanisms in LLMs can profoundly impact various sectors, particularly those where privacy is paramount, such as healthcare, finance, and legal. For instance, DP-enabled LLMs could generate medical reports, financial advice, or legal documents without compromising the privacy of sensitive information. Furthermore, these models can help organisations comply with stringent data protection regulations like GDPR and HIPAA.
Supervisor: Dr Hu Yuan
This project does not require any previous knowledge of Biology.
Proteins are the building blocks of all living organisms' cells. Many types of cancer, as well as Alzheimer's disease, Parkinson's disease, mad cow disease, and others, are associated with protein misfolding. Everyone has memorised the 'logo' of COVID-19: that grey Styrofoam ball dotted with red spikes. Those red spikes are considered the most 'famous' part of the virus, since they are crucial not only to its 'attack job' in human bodies but also as the target of most vaccines. COVID-19's red spikes are simply proteins. Therefore, any new insight into the structural and functional features of proteins is invaluable to biologists and to drug and vaccine designers.
In the past few years, AI, using deep learning techniques, has successfully and brilliantly solved a puzzle that had challenged scientists for more than five decades. DeepMind developed a computer program called AlphaFold that is capable of predicting the structure of any protein from its amino acid sequence alone. Since then, protein structures have been pouring into the Protein Data Bank (PDB) at an unprecedented rate. Ideally, all structures should be annotated and classified in two popular databases: CATH and SCOP. One of the main reasons for annotating protein structures is to gain insight into the molecular basis of their functions. Additionally, such annotations may help connect different proteins with possible evolutionary relationships, a crucial step toward fully understanding any protein. In both databases, annotations involve some human intervention, which has created a new challenge: the need for a fully automated way to classify the vast number of protein structures being deposited every day.
There have been several attempts to build a machine learning/deep learning model for this task. Earlier this year, the CATH team developed a sequence-based neural network model to predict a protein's superfamily, called CATHe, where 'e' stands for embeddings. This term denotes the numerical representations of protein sequences obtained from protein language models (pLMs), a fast-moving area inspired by the well-known field of Natural Language Processing (NLP). The best model was an artificial neural network (ANN), which reported an F1 score of approximately 0.72 on a dataset containing the largest 1,773 superfamilies. Despite the relatively high F1 score across such a large number of classes (1,773), such a model cannot be considered reliable for annotating proteins.
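A minimal sketch of an embedding-based superfamily classifier in the spirit described above, assuming per-protein pLM embeddings are already available as fixed-length vectors; the scikit-learn model and the randomly generated data are placeholders, not the CATHe architecture or dataset.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Placeholder: rows are per-protein pLM embeddings (e.g. 1024-d vectors) and
# labels are superfamily identifiers. Real embeddings would come from a
# protein language model; here both are random, so the score is near chance.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 1024))
y = rng.integers(0, 50, size=2000)          # 50 toy superfamilies

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=50, random_state=0)
clf.fit(X_tr, y_tr)

print("Macro F1:", round(f1_score(y_te, clf.predict(X_te), average="macro"), 3))
```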
This project aims to leverage state-of-the-art techniques used in NLP to develop two new pLMs: one for protein sequences and the other for the sequence of Structural Alphabets (SA). The goal is to build a model that accurately annotates more than 200 million protein structures deposited so far in the AlphaFold Database – AlphaFold DB.
Supervisor: Dr Jad Abbass
We are looking for a motivated PhD candidate to join our innovative research project focused on developing cutting-edge solutions for cybersecurity risk assessment and mitigation in the Artificial Intelligence (AI) life cycle. In recent years, AI has seen rapid growth in its development and integration across industries, economies, and societies worldwide. This progress is mainly due to breakthroughs in machine learning and deep learning, and to the massive advancement of computational capabilities. While AI technologies have advanced and enhanced efficiency and productivity, they remain susceptible to an ever-growing number of security threats and vulnerabilities. The rapid advancement of AI therefore demands a robust understanding of the evolving risks specifically associated with AI.
In the literature, there is no single comprehensive evaluation of AI-specific cybersecurity risks across each stage of the AI lifecycle. This project aims to fill this gap in cybersecurity risk assessment and mitigation in the AI life cycle by thoroughly evaluating the potential exploitation, impact, and AI-specific cyber security risks associated with each lifecycle phase. As a research team member, the candidate will have access to state-of-the-art research facilities and collaborate with a highly interdisciplinary team of researchers. The candidate will also have opportunities to present their research findings at top-tier conferences and publish in high-impact academic journals.
The ideal candidate should have or expect to achieve at least a 2:1 Honours degree (or equivalent) in Computer Science, Cyber Security, or a related subject. Relevant experience in AI will be an advantage. The applicant should be familiar with C/C++, various scripting languages, machine learning and more particularly deep learning (PyTorch, TensorFlow, etc.) and with the Linux environment.
Supervisor: Dr Razi Arshad
We are seeking a motivated PhD candidate to join our cutting-edge research project on developing post-quantum cryptographic systems for securing communication in Internet of Things (IoT) systems.
Classical cryptographic systems, which are used to protect our sensitive data, are now under threat from the advent of quantum computing. Quantum computers use the principles of quantum mechanics to perform computations that are not possible for classical computers. This immense processing power poses a significant risk to traditional cryptographic algorithms, such as RSA and ECC, which form the backbone of our current security infrastructure.
Post-quantum cryptography (PQC) involves the development of cryptographic algorithms that can provide security against the power of quantum computing. The National Institute of Standards and Technology (NIST) has led an initiative to evaluate and standardise PQC algorithms to replace or augment existing standards. Prominent PQC algorithms include lattice-based schemes such as Kyber (a key encapsulation mechanism) and Dilithium (a digital signature algorithm), both of which offer robust security against quantum attacks while maintaining efficiency and performance.
In general, PQC algorithms can be hard to compute on IoT systems, which usually consist of lightweight devices with limited computational power. This project will take up these new PQC algorithms and their implementations and test, evaluate, and scrutinise them against a wide range of fundamental design constraints and implementation requirements for securing communication in IoT systems.
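A minimal sketch of the key-encapsulation round trip such evaluations exercise, assuming the open-source liboqs Python bindings (imported as `oqs`) are installed; the method names follow that library's KeyEncapsulation interface, and the algorithm identifier may differ between liboqs releases (newer versions expose the standardised ML-KEM names).

```python
# Key-encapsulation round trip between an IoT device and a gateway,
# assuming the liboqs Python bindings (liboqs-python) are available.
import oqs

algorithm = "Kyber512"  # identifier may vary with the liboqs version

with oqs.KeyEncapsulation(algorithm) as device:
    device_public_key = device.generate_keypair()

    # The gateway encapsulates a shared secret under the device's public key.
    with oqs.KeyEncapsulation(algorithm) as gateway:
        ciphertext, gateway_secret = gateway.encap_secret(device_public_key)

    # The device recovers the same shared secret from the ciphertext.
    device_secret = device.decap_secret(ciphertext)

assert device_secret == gateway_secret  # both ends now share a symmetric key
```

On a constrained device, the quantities to measure would include the time and memory consumed by key generation and decapsulation, and the sizes of the public key and ciphertext exchanged over the air.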
As a research team member, the candidate will have access to state-of-the-art research facilities and collaborate with a highly interdisciplinary team of researchers. The candidate will also have opportunities to present their research findings at top-tier conferences and publish in high-impact academic journals.
The ideal candidate should have or expect to achieve at least a 2:1 Honours degree (or equivalent) in Computer Science, Cyber Security, or a related subject. The applicant should be familiar with C/C++, Python and similar languages, and with the Linux environment.
Supervisor: Dr Razi Arshad
The aim of this project is to develop and analyse novel steganographic blockchain protocols integrated with Secure Multi-party Computation (SMPC) and Artificial Intelligence (AI) privacy-preserving approaches, with an emphasis on developing a robust framework for strengthened privacy and security in decentralised systems. The synthesis of these techniques aims to meet the need for enhanced security solutions in the ever-evolving landscape of the digital world.
The foremost aspect of this project is to develop novel steganographic protocols that incorporate covert techniques for embedding transaction details or confidential information. Unique security characteristics such as immutability make the blockchain a strong candidate for such enhancement, allowing a synthesis of confidentiality and integrity of information. Through this approach, information can be hidden securely within the blockchain, free from unauthorised disclosure.
Secure Multi-party Computation (SMPC) complements the steganographic blockchain by enabling secure computations on concealed data by various participating entities. The fundamental goal of SMPC within this framework is to facilitate collaborative computations while keeping individual inputs private, adding an extra layer of security in multi-party environments, where revealing sensitive data is not an option.
In order to improve transaction hiding and privacy optimisation within the blockchain domain, the project will utilise Artificial Intelligence (AI) to develop adaptive steganographic algorithms and privacy-preserving methods that aim to keep confidential information undetectable and to refine the steganographic techniques continuously to mitigate new threats.
The final aspect of this project will involve an extensive security analysis, using established cryptographic and mathematical formal approaches, to verify the security of the developed protocols against different attack types.
Supervisor: Dr Obinna Omego
The need for robust information security has never been more urgent, given the proliferation of digital data and sophisticated cyber-attacks. Steganography, the art of concealing information within seemingly benign media, has emerged as a powerful tool for secure communication. However, traditional steganographic techniques, while effective, are becoming increasingly vulnerable to modern steganalysis methods. Attackers and analysts are leveraging advanced tools to detect hidden data within media files, compromising the security and confidentiality of sensitive information. This project aims to tackle these challenges by introducing a novel approach: Hybrid Black-Box Steganography Enhanced with Artificial Intelligence (AI).
In traditional steganography, static algorithms are used to embed data, often leading to predictable patterns that adversaries can exploit. To address this limitation, this project proposes a hybrid model that combines multiple steganographic techniques, such as spatial domain, frequency domain, and transform domain methods. These methods will be applied dynamically and intelligently based on the characteristics of the cover media. By using a combination of techniques, the system can increase resilience against detection while optimising data embedding for different media types.
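A minimal spatial-domain example of the kind of technique mentioned above: least-significant-bit (LSB) embedding in a grayscale image array. A hybrid system would select among this and frequency- or transform-domain methods adaptively; the cover image and payload here are synthetic.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message_bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least significant bits of a grayscale image."""
    flat = cover.flatten().copy()
    assert message_bits.size <= flat.size, "message too large for cover"
    flat[: message_bits.size] = (flat[: message_bits.size] & 0xFE) | message_bits
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits hidden by embed_lsb."""
    return stego.flatten()[:n_bits] & 1

# Toy 8-bit grayscale cover image and a short payload.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
payload = rng.integers(0, 2, size=128, dtype=np.uint8)

stego = embed_lsb(cover, payload)
assert np.array_equal(extract_lsb(stego, payload.size), payload)
print("Max per-pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))
```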
The true innovation of this project lies in the integration of Artificial Intelligence (AI) to enhance steganographic processes. AI, particularly machine learning, can be used to analyse patterns in both the cover media and hidden information, enabling the system to make intelligent decisions on how to apply steganographic methods. The AI component allows for real-time adaptability, where the system can respond to varying media types, data sizes, and security needs without requiring human intervention.
The black-box aspect of this project refers to a design approach where the internal processes and mechanisms of the system are kept concealed from users and external parties. This means that while the system functions efficiently and securely, its internal decision-making, algorithms, and techniques remain hidden from both authorised users and potential attackers. This approach enhances security because it prevents adversaries from understanding or reverse-engineering the method by which data is hidden, making it significantly harder to detect or breach the system.
To validate the effectiveness of this Hybrid Black-Box Steganography system, we will conduct a series of evaluations, focusing on key metrics such as imperceptibility, payload capacity, and resilience to steganalysis.
In conclusion, this project aims to develop an advanced steganography system that leverages the strengths of hybrid models and artificial intelligence to create a secure, adaptive, and intelligent solution for concealing sensitive information.
Supervisor: Dr Obinna Omego
The healthcare industry is increasingly moving towards personalised medicine, which tailors medical treatment to the individual characteristics of each patient. This approach can significantly improve treatment efficacy and patient satisfaction. However, achieving true personalisation requires the collection and analysis of vast amounts of patient data, often from IoT-enabled health devices such as wearables and mobile apps.
Traditional cloud-based systems for data processing present challenges, including latency, bandwidth constraints, and data privacy concerns. 6G mobile edge computing (MEC) offers a promising solution by processing data closer to the source, reducing latency, improving responsiveness, and enhancing data security.
This project aims to harness the potential of edge computing and AI to develop a personalised healthcare platform that provides real-time, customised health insights and recommendations. This research will contribute to the advancement of personalised healthcare by leveraging cutting-edge technologies in AI and edge computing. The proposed system has the potential to transform healthcare delivery by providing timely, personalised insights that empower patients and healthcare providers to make informed decisions. Furthermore, the focus on data privacy and security will address critical concerns associated with health data management, paving the way for more widespread adoption of AI-driven healthcare solutions.
Research Objectives:
Supervisor: Dr M Arslan Usman
With research gaining momentum in 6G wireless networks, the landscape of enriched multimedia content delivery is poised for a transformative change. 6G aims to provide ultra-low latency (less than 1 ms), unprecedented data rates (terabits per second), and massive connectivity, enabling a new era of immersive and interactive applications such as augmented reality (AR), virtual reality (VR), and 8K video streaming. These advancements present both opportunities and challenges for video quality assessment (VQA), a critical component in ensuring optimal user experiences.
Traditional video quality assessment methods, which often rely on full-reference (FR) or reduced-reference (RR) techniques, are inadequate in the context of 6G, where real-time processing and adaptation are paramount. No-reference video quality assessment (NR-VQA) emerges as a promising solution, capable of evaluating video quality without access to the original content. However, existing NR-VQA models face limitations in accuracy, efficiency, and scalability, particularly when confronted with the diverse and complex scenarios enabled by 6G networks.
This project aims to develop novel NR-VQA algorithms that capitalise on the unique capabilities of 6G, including ultra-low latency, extremely high data rates, and massive device connectivity. By integrating cutting-edge machine learning techniques and adaptive streaming protocols, this study seeks to enhance video quality assessment and optimise Quality of Experience (QoE) for users. Through comprehensive user studies (subjective testing) and real-world testing, this research will provide valuable insights into the intersection of 6G technology and video quality, ultimately guiding the design of next-generation multimedia services that deliver seamless and high-quality user experiences.
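A minimal sketch of one no-reference cue that NR-VQA models typically build on: a per-frame sharpness feature mapped to a toy quality score; the mapping function and synthetic frames are illustrative placeholders, not a validated metric.

```python
import numpy as np

def frame_sharpness(frame):
    """Mean gradient magnitude of a grayscale frame (higher = sharper)."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def no_reference_score(frames):
    """Toy NR quality score: average per-frame sharpness squashed into [0, 1].

    A real NR-VQA model would combine many spatial and temporal features
    (blockiness, flicker, motion consistency) via a learned regressor.
    """
    sharpness = np.mean([frame_sharpness(f) for f in frames])
    return float(sharpness / (sharpness + 10.0))   # illustrative squashing

# Toy "video": 30 random grayscale frames, plus a 2x2 box-blurred copy.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(120, 160)) for _ in range(30)]
blurred = [f.reshape(60, 2, 80, 2).mean(axis=(1, 3)).repeat(2, 0).repeat(2, 1)
           for f in frames]

print("Original :", round(no_reference_score(frames), 3))
print("Blurred  :", round(no_reference_score(blurred), 3))
```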
Research Objectives:
Supervisor: Dr M Arslan Usman
The project will focus on addressing the crucial need for security in the rapidly evolving field of AI-enhanced healthcare systems, particularly those operating over next-generation 6G networks. With the integration of AI into healthcare, systems become vulnerable to adversarial attacks that can manipulate or mislead AI decision-making processes. The research will fortify AI systems against such attacks within a decentralised health data management context.
The selected candidate is expected to achieve the following objectives:
Supervisor: Dr Muhammad Rehan Usman
View Dr Muhammad Rehan Usman's profile and contact details >
The research will focus on developing integrated AI algorithms that will drive adaptive beamforming and dynamic resource allocation to optimise interference management in cell-free massive MIMO configurations within 6G networks.
The key areas of investigation would include:
Supervisor: Dr Muhammad Rehan Usman
View Dr Muhammad Rehan Usman's profile and contact details >
An ultrasonic monitoring system has been built that is able to characterise liquids based on the growth of pendant liquid drops. The system operates as an ultrasonic interferometer, whereby part of the acoustic wavefront interacts with the growing liquid drop and is then recombined with another part of the wavefront which acts as a reference path.
The technique has been able to distinguish different types of soft drink, lagers and many other liquids. It operates quickly and can provide a result within a minute or two of sample addition. The capability of identifying and distinguishing test liquids quickly is especially important, and the potential applications for such a device are very wide. These range well beyond beverages to environmental monitoring, biomedical monitoring, and quality control as well as many others.
The project requires modelling the interaction of the acoustic wave both with the drop head through which the liquid is transmitted and with the liquid drop itself, to gain a better understanding of the behaviour of the system and to inform design improvements. It is anticipated that the modelling will use a combination of Finite Element Modelling for the drop head and liquid drop and a newer technique, the Boundary Element Method, for the wave propagation.
This project would suit graduates in the areas of physics, engineering, computing and mathematics, and would provide an excellent training in mathematical and computational modelling of real-world problems. If you are interested in being involved in developing a leading-edge system that has multiple practical applications, then you should apply.
Supervisor: Dr Michal Bosy
Modelling of many modern applications leads to linear systems whose size is too large to allow the use of direct solvers. Thus, parallel solvers are becoming increasingly important in scientific computing. A natural paradigm to take advantage of modern parallel architectures is the Domain Decomposition method that uses iterative solvers based on a decomposition of a global domain into subdomains. At each iteration, one (or two) boundary value problem(s) are solved in each subdomain and the continuity of the solution at the interfaces between subdomains is only satisfied at convergence of the iterative procedure.
Another powerful and quickly developing tool is Machine Learning. The combination of Machine Learning and Domain Decomposition methods can be used in various ways. On the one hand, Machine Learning techniques are used to improve convergence or computational efficiency. On the other hand, "deep" neural networks are used as discretisation methods. Considering the potential advantages and identifying the most suitable approach for fluid dynamics systems, we will propose an efficient solver based on a combination of an appropriate Machine Learning technique and a Domain Decomposition method.
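A minimal sketch of the iteration described above: an alternating Schwarz method for a 1D Poisson problem on two overlapping subdomains, using dense linear algebra for brevity; the grid size and overlap are arbitrary choices.

```python
import numpy as np

def solve_poisson(f, left, right, h):
    """Solve -u'' = f on a uniform grid with Dirichlet boundary values left/right."""
    n = len(f)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f.copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

# Global problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0.  Exact: u = x(1-x)/2.
N = 99                       # interior points
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)
f = np.ones(N)
u = np.zeros(N)

# Two overlapping subdomains (index sets) sharing an overlap region.
left_idx = np.arange(0, 60)
right_idx = np.arange(40, N)

for it in range(20):
    # Subdomain 1 uses the current value of u at its right interface node.
    u[left_idx] = solve_poisson(f[left_idx], 0.0, u[left_idx[-1] + 1], h)
    # Subdomain 2 uses the freshly updated value at its left interface node.
    u[right_idx] = solve_poisson(f[right_idx], u[right_idx[0] - 1], 0.0, h)

print("Max error vs exact solution:", float(np.max(np.abs(u - x * (1 - x) / 2))))
```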
This project would suit graduates in the areas of computing, mathematics, physics or engineering, and would provide an excellent training in mathematical and computational modelling. If you are interested in being involved in developing a leading-edge system that has multiple practical applications, then you should apply.
Candidates should have experience of programming in at least one high-level computer language, such as C/C++/C# or Python, and have some knowledge of Finite Element methods. Some background in Numerical Analysis, Fluid Dynamics or Machine Learning would be an advantage.
Supervisor: Dr Michal Bosy
Sketching has long served as a powerful tool for communication and expression throughout history, from early cave drawings to renaissance masterpieces. From childhood, many of us have sketched on surfaces frowned upon by guardians – walls, furniture, or school desks. This natural inclination to create and communicate through drawing underscores sketching's fundamental role in human development. Today, digital technology has revolutionised sketching, offering dynamic, interactive experiences with tablets and styluses that enhance creativity and accessibility.
For individuals with disabilities, especially those who are neurodivergent – such as those with autism, ADHD, or dyslexia – sketching becomes a unique mode of communication and self-expression. Visual mediums like sketching offer a pathway to express complex thoughts and emotions that words may not easily convey. However, traditional sketching environments often lack accommodations for sensory sensitivities or physical disabilities.
Individual vs. Collaborative Sketching
Sketching individually allows creators to find a private space for self-expression, enabling them to articulate thoughts and emotions visually without external pressures. In contrast, collaborative sketching fosters the exchange of ideas and perspectives, promoting innovation and creative problem-solving. It also nurtures peer support and learning opportunities, boosting collective knowledge and experiences.
When Virtual Reality (VR) Meets Sketching
Digital platforms, while flexible, can fall short of providing the personalised support that neurodivergent users require. However, the integration of advanced technologies like Virtual Reality (VR) in HCI offers new possibilities for exploring sketching in immersive environments. Customised VR sketching environments can cater to the specific needs of neurodivergent users: they can simulate calming, distraction-free spaces conducive to individual sketching, enhancing focus, reducing sensory overload, and potentially improving social interaction and cooperative skills.
This PhD research will delve into a comparative exploration of individual versus collaborative sketching in VR environments tailored explicitly for neurodivergent sketchers. It aims to investigate various VR tools and platforms, assessing their effectiveness in meeting these users' unique needs.
Join the project supervisors in exploring how VR can transform sketching from a solitary activity into a collaborative, inclusive experience that empowers neurodivergent users to express their creativity.
Supervisor: Dr Makayla Lewis
The rising demand for sustainable and efficient energy management has brought attention to the vast amounts of waste heat produced by industries. Heavy industries emit a significant portion of their consumed energy as waste heat. Harnessing this energy through storage characterised by high energy density and minimal energy loss during extended heat storage holds transformative potential for industrial sustainability and environmental conservation.
The central focus of this PhD research lies in the comprehensive study and enhancement of Salt-In-Matrix (SIM) thermochemical heat storage materials. Thermochemical sorption using salt hydrates offers promising avenues due to its high energy density and near-zero energy loss during prolonged storage. However, pure salt hydrates face thermodynamic and kinetic challenges during their hydration and dehydration phases. The project aims to address these challenges and optimise the materials by:
The future of energy conservation and sustainability lies in innovative solutions. This PhD position, deeply embedded in advanced material science and thermochemical research, promises to pioneer new pathways for harnessing waste energy. Through the extensive study and enhancement of SIM materials, this position aims to contribute a vital piece to the puzzle of sustainable energy management.
The ideal candidate for this PhD position should have a strong background in renewable energy and material science with a keen interest in thermochemistry. Proficiency in simulation software such as COMSOL Multiphysics, ANSYS Fluent, or MATLAB for modelling thermochemical processes is crucial. Their commitment to addressing waste heat challenges, along with analytical skills and innovative thinking, is essential. Collaborative teamwork and prior experience in heat storage and simulation are advantageous.
Supervisor: Dr Sahand Hosouli
Floatovoltaic systems are photovoltaic (PV) installations that float on bodies of water. This approach has a number of merits, as the water provides a cooling effect on the PV system, thereby increasing the efficiency of converting solar energy to electricity as part of renewable power generation. Additionally, land is not required for installation of the solar panels. However, there is presently a lack of empirical studies that have comprehensively evaluated the performance of Floatovoltaic systems. Therefore, this PhD project will conduct a comprehensive engineering analysis and modelling study to determine the performance of Floatovoltaic systems. This will initially include engineering modelling using software such as ANSYS to determine the extent of the cooling effect on PV systems caused by water. This will be followed by techno-economic analysis (TEA) to establish the economic performance of Floatovoltaic systems. Furthermore, the sustainability of Floatovoltaic systems will also be considered (including the ecological impact of installing solar panels on bodies of water), as well as different installation scenarios (e.g. geographical locations and types of water bodies, such as lakes or seas). The study will be underpinned by a systematic literature review (SLR) in order to identify the state of the art on Floatovoltaic systems.
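A minimal sketch of the levelised cost of electricity (LCOE) calculation that such a techno-economic analysis typically starts from; all input figures below are hypothetical placeholders rather than measured floatovoltaic data.

```python
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, lifetime_years,
         degradation=0.005):
    """Levelised cost of electricity (currency per kWh).

    Discounted lifetime costs divided by discounted lifetime generation,
    with a simple percentage annual degradation of output.
    """
    costs = capex
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        discount = (1 + discount_rate) ** year
        costs += annual_opex / discount
        energy += annual_energy_kwh * (1 - degradation) ** (year - 1) / discount
    return costs / energy

# Hypothetical 1 MWp floating array: higher capex than ground-mount, but a
# few percent more annual yield from the water-cooling effect.
print("Floating    :", round(lcoe(900_000, 12_000, 1_150_000, 0.06, 25), 4))
print("Ground-mount:", round(lcoe(800_000, 10_000, 1_100_000, 0.06, 25), 4))
```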
The study will generate impact in terms of academic developments in this emerging area of renewable energy as well as being of industrial relevance to PV manufacturing and installation companies and power generation companies concerned with improving the performance of solar panels.
Supervisor: Professor Simon Philbin
View Professor Simon Philbin's profile and contact details >
Artificial intelligence (AI) is being adopted across many applications, and more recently this has been accelerated by the widespread adoption of large language models (LLMs), which are based on deep learning algorithms designed to perform natural language processing (NLP) tasks. AI is beginning to reshape how organisations conduct many processes, including how projects are managed, although much project management practice is still based on traditional techniques such as Gantt charts and earned value management (EVM). Project management is therefore in a prime position to adopt AI technologies as part of the digital transformation of the subject and in the context of the wider Industry 4.0 paradigm. Adoption of AI in project management offers various potential benefits, including more efficient planning and monitoring of progress; automated decision-making; reduced workload pressure on project managers; and the development of new project management processes and capabilities.
This PhD project will investigate how AI can be effectively adopted to improve the project management process by identifying specific technology pathways to enable this adoption. The project will begin with a systematic literature review (SLR) to capture the state of the art on digitalisation in project management. This will be followed by the capture of empirical data through interviews with project management practitioners. These interviews will help to build an overall framework to support the adoption of AI in project management, including identification of specific use cases and corresponding technology pathways. The findings will be validated through an interactive session with project management professionals to provide additional empirical data. The project has the capacity to enable a significant breakthrough in how projects are managed in different industrial sectors, thereby leading to improved project performance and delivery of wider stakeholder benefits.
Supervisor: Professor Simon Philbin
View Professor Simon Philbin's profile and contact details >
The field of robotics is rapidly advancing, with an increasing focus on cooperative and networked robotic systems. Distributed control and communication play a critical role in enabling effective coordination, decision-making, and task allocation among multiple robots. This research work aims to investigate and develop novel approaches for distributed and network control in swarm robotics, addressing the challenges associated with scalability, robustness, and real-time performance.
The effectiveness of swarm robotics heavily relies on the distributed and network control mechanisms that enable efficient communication, collaboration, and decision-making among swarm members. The use of distributed control paradigms enables robots to make autonomous decisions, adapt to dynamic environments, and optimise task allocation and resource utilisation. As the number of robots involved in a task increases, centralised control becomes less practical. Distributed control allows for scalability, where robots can coordinate and make decisions locally, leading to robust robotic systems. Furthermore, real-time decision-making is crucial in dynamic environments, where robots must adapt to changing conditions and unforeseen events. Distributed control facilitates collaboration and coordination among robots, enabling them to work together towards a common goal. This is particularly important in applications such as swarm robotics, where robots must work collectively to achieve complex tasks.
The main objective of the project is to develop novel distributed control algorithms and protocols specifically designed for swarm robotic systems, addressing key issues such as swarm formation, task allocation, motion coordination, obstacle avoidance, and resource optimisation. The developed distributed control algorithms should be integrated with existing swarm robotic platforms, enabling real-time coordination and communication among swarm members. Potential applications of the proposed research work could span across different industries such as manufacturing, logistics, healthcare, and agriculture.
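A minimal sketch of one distributed-control primitive such algorithms build on: a synchronous average-consensus update in which each robot repeatedly nudges its state towards its neighbours' over a fixed communication graph; the ring topology and gain are illustrative.

```python
import numpy as np

def consensus_step(states: np.ndarray, adjacency: np.ndarray, gain: float = 0.2):
    """One synchronous average-consensus update: x_i += gain * sum_j a_ij (x_j - x_i)."""
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    return states - gain * laplacian @ states

# Five robots on a ring network, each starting with a different scalar estimate
# (e.g. a locally sensed quantity the swarm must agree on).
adjacency = np.array([[0, 1, 0, 0, 1],
                      [1, 0, 1, 0, 0],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [1, 0, 0, 1, 0]], dtype=float)
states = np.array([2.0, 8.0, 5.0, 1.0, 9.0])

for _ in range(50):
    states = consensus_step(states, adjacency)

print("Converged states:", np.round(states, 3))   # all approach the mean, 5.0
```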
Supervisor: Dr Nadia Djaid
Industry 5.0 has emerged as the next wave after Industry 4.0. The emergence of Industry 4.0 has revolutionised the manufacturing sector, leading to the integration of advanced technologies such as robotics, automation, and artificial intelligence. While Industry 4.0 focuses on optimisation and efficiency through technology, Industry 5.0 aims to reintroduce the human element into the equation. In this context, cooperative robots will collaborate with humans to perform complex tasks to improve productivity, flexibility, and efficiency. However, achieving seamless cooperation between robots and humans in a dynamic and uncertain environment poses numerous challenges that need to be addressed. Model Predictive Interactive Control (MPIC) presents a promising approach to tackle these challenges and optimise the performance of cooperative robotic systems in Industry 5.0 settings.
This research aims to explore the application of MPIC techniques to enhance the cooperative capabilities of robots and enable efficient coordination and decision-making in real-time scenarios. The proposed work should consist of designing a novel MPIC framework that optimises the cooperative behaviour of multiple robots with humans while considering system dynamics, constraints, and coordination objectives. The project objective is to develop an algorithm for trajectory planning, obstacle avoidance, collision detection, and task allocation within the cooperative robotic environment. Comprehensive simulations should be conducted to evaluate the performance of the proposed MPIC framework, followed by implementation of the framework on a real-world multi-robot industrial setup to validate its effectiveness, robustness, and scalability.
The developed model predictive interactive control algorithms will enable more efficient coordination, optimisation, and adaptability of multiple robots working collaboratively with humans in industrial settings. The research outcomes will benefit industries by enhancing productivity, reducing downtime, improving resource allocation, and enabling the effective integration of cooperative robotic systems within the Industry 5.0 framework.
Supervisor: Dr Nadia Djaid
In the UK we each create a huge amount of electronic waste – the second highest in the world. A large and critical component of Waste Electrical and Electronic Equipment (WEEE) comprises printed circuit boards (PCBs). The typical substrate for PCB manufacturing and lamination (i.e., FR4) alone contributes roughly 70% of the waste and poses a major environmental challenge, as it contains toxic substances such as brominated flame retardants (BFRs) and non-metallic materials. The wastage of highly purified metals (copper, silver, nickel, etc.), which have limited reserves, also raises concerns about sustainability. Manufacturing PCBs using fewer materials could be a potential solution to these challenges.
The vision of this project is to develop eco-friendly PCBs through additive manufacturing and provide a lasting solution for the e-waste generated by conventional PCBs during manufacturing and at end-of-life (EOL). When the PCB reaches its EOL, the layers of ink, adhesive, and polymers will degrade naturally, and the electronic components will be available for reuse. This work proposes a resource-saving way of manufacturing PCBs through additive layer manufacturing (ALM). It will essentially investigate the following:
The outcome of this work will be resource savings and a resource-efficient way of manufacturing printed circuit boards.
The ideal candidate for this PhD position should have some knowledge of electronic circuits and material characterisation.
Supervisor: Dr Moupali Chakraborty
Our reliance on renewable and sustainable energy is undoubtedly becoming more prevalent, especially with the current worldwide climatic challenges and the decarbonisation targets set by many countries. Therefore, the development of more efficient, more sustainable and more economically viable energy systems is inexorably becoming a must in order to meet such worldwide goals. The use of solar energy is a very attractive option, as it offers not only the ability to produce electricity but also the possibility of illumination, also known as daylighting, the benefits of which extend beyond energy and cost savings.
Although solar energy is one of the most established and dominant renewable energies, there is still a need to improve the efficiency of its harvesting and delivery. The use of optical fibres has been demonstrated to be a viable option for such an application. However, such models can have complex designs, resulting in high-priced systems.
The main target of the project is to design and develop an innovative technique for the efficient capture, transport and delivery of solar energy using optical fibres. To achieve this, novel structures of optical fibres will be theoretically and analytically investigated prior to a comprehensive laboratory-based experimental work on the proposed designs. Numerical modelling and simulation will be performed using MATLAB, optical design software LightTools and other available software tools. Machine Learning techniques would be utilised to improve the operation of the proposed system.
The project will require strong analytical, practical and computational skills, and would suit a graduate in one of the following areas: Electrical and Electronic Engineering, Mechanical Engineering, Optical Engineering, Applied Physics or other similar disciplines. Programming skills with MATLAB, C/C++ will be considered favourably. The candidate should be able to work in a collaborative environment, with a strong commitment to reaching research excellence and achieving assigned objectives.
Supervisor: Dr Kevin J Munisami
The construction sector accounts for a high share of energy-related carbon emissions globally due to building operation, posing a great threat to achieving net zero carbon targets by 2050. According to the Global Status Report for Buildings and Construction, in 2021 the buildings and construction sector accounted for about 34% of energy consumption and circa 37% of energy-related carbon emissions globally. The share of operational carbon emissions from residential and non-residential buildings is around 28% (with the remaining 9% from the construction industry). The operational carbon emissions of new buildings in compliance with current Building Regulations in the UK dominate the building's whole-life carbon (circa two-thirds) when conventional passive solutions are used for the building envelope design. To tackle the challenge of developing near or net zero carbon buildings for environmental sustainability, it is promising to develop thermally active building envelopes, which have great potential to reduce building energy demands and operational carbon by 50% or more.
The research project aims to delve into integrated design approaches of novel thermo-active building envelope for near or net zero carbon buildings, using the iconic building Town House at Kingston University London as an archetypal building. Both passive and active solutions for energy-efficient building design will be explored and analysed based on building performance modelling and simulation (e.g. EnergyPlus, IES VE, TRNSYS). It is further expected to extract new key thermal indexes that can characterise the thermal behaviour of thermo-active building envelope in a simple way for engineering applications. Climate-responsive integrated design approaches of buildings in the UK will be ascertained, especially for commercial buildings with the archetypal building Town House.
The Town House, built in 2020 and located on the Penrhyn Road Campus at Kingston University London, is recognised as an energy-efficient building, with the features of good thermal insulation and built-in heat recovery systems, as well as rooftop solar photovoltaics. In particular, a thermo-active system was embedded in the concrete slabs as part of the building envelope, allowing the building mass to passively heat and cool the interior. The Town House won the 2021 Royal Institute of British Architects (RIBA) Stirling Prize (an indicator of Britain's best new building), as well as the 2022 EU Prize for Contemporary Architecture – Mies van der Rohe Award (recognised as the highest accolade in European architecture). It is of interest to assess the performance of its thermo-active building envelope (or components) to inform practical building energy efficiency and to compare it against the current Building Regulations for environmental sustainability assessment. In addition, other novel active solutions such as thermally active insulation, pipe-embedded walls, PCM heat storage walls, facade-integrated solar photovoltaic/thermal collectors and thermoelectric batteries can be integrated to improve the thermal performance of thermo-active building envelopes, further reducing building energy demands and carbon emissions. The research will provide insights into developing near or net zero carbon buildings, in contrast to new building design under the current Building Regulations in the UK. It will help inform contemporary construction methods for architects, building engineers, and construction-related stakeholders, as well as contribute to decarbonising new buildings towards the net zero carbon targets in the long run.
Supervisor: Dr Jie Deng
We are seeking an ambitious PhD candidate to join a cutting-edge research project aiming to map the solar photovoltaic (PV) potential for existing building stocks in the UK through the application of deep learning on satellite and aerial image data. The anticipated research outcomes will provide valuable insights for the construction industry to strategically promote solar PV installations across the UK. The findings will significantly contribute to enhancing sustainability, aligning with the UK's goal of achieving net-zero carbon targets by 2050.
Solar PV technology stands out as one of the most promising renewable energy technologies in the global energy markets. Over the past two decades, the installation capacity of solar PV systems in the UK has experienced rapid growth. As a representative active solution for sustainable buildings and infrastructure, it holds great potential to significantly reduce operational carbon emissions in buildings. Simultaneously, it contributes by generating renewable electricity, easing the loads on the state grid. However, a notable research gap exists regarding the extent to which rooftop solar PV panels can be installed on existing building stocks in specific regions of the UK for renewable electricity generation and the decarbonisation of building energy use. Research inquiry of the proposed project includes assessing the available space for rooftop solar PV installations and mapping regional PV capacity potential across the UK.
For this purpose, it is of interest to apply Artificial Intelligence (AI) techniques to estimate the regional rooftop areas that currently have no solar PV installed. AI, currently a focal point across diverse industries, notably in the construction sector, drives digital transformation, enhancing efficiency, productivity, and quality assurance from the planning stage through construction to post-construction stages [1]. One of the advanced AI techniques, deep learning, will be employed to map the regional rooftop solar PV potential for early construction planning in the construction industry.
The candidate will be expected to collect the GIS (Geographic Information Systems) data and relevant information, as well as use deep learning and semantic segmentation techniques to identify the regional potential of rooftop areas for existing building stocks from satellite and aerial images [2]. Then, regional potential of rooftop PV installation capacity across the UK will be estimated statistically and mapped using GIS analysis software (e.g. QGIS, or ArcGIS). The research results will establish a robust foundation for strategically promoting rooftop PV installation in the UK construction industry, fostering long-term sustainable development. The findings will not only inform the regional potential of PV installation capacity for renewable electricity generation, but also draw attention from government, stakeholders, and policymakers in relevant industry. This, in turn, can guide the formulation of effective incentive policies for solar PV installation, driving economic growth in both upstream and downstream production chains of solar PV systems.
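A minimal sketch of the final estimation step, assuming a semantic-segmentation model has already produced a binary rooftop mask for one aerial image tile; the ground sample distance, usable-area fraction and power density are illustrative assumptions.

```python
import numpy as np

def rooftop_pv_capacity_kwp(mask: np.ndarray, gsd_m: float,
                            usable_fraction: float = 0.6,
                            power_density_kwp_per_m2: float = 0.2) -> float:
    """Estimate installable PV capacity (kWp) from a binary rooftop mask.

    mask            : HxW array, 1 = rooftop pixel (e.g. deep-learning output)
    gsd_m           : ground sample distance of the imagery in metres/pixel
    usable_fraction : share of roof area assumed suitable for panels
    """
    roof_area_m2 = float(mask.sum()) * gsd_m ** 2
    return roof_area_m2 * usable_fraction * power_density_kwp_per_m2

# Toy mask standing in for a segmentation model's output on one 256x256 tile
# of 0.25 m/pixel aerial imagery.
rng = np.random.default_rng(0)
mask = (rng.random((256, 256)) < 0.15).astype(np.uint8)

print(f"Estimated capacity: {rooftop_pv_capacity_kwp(mask, gsd_m=0.25):.1f} kWp")
```

Summing such per-tile estimates over a region, and attaching them to building footprints in a GIS package, would give the regional capacity maps described above.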
The ideal candidate should have obtained an Honours degree classified above 2:1 (or equivalent) in renewable energy, architectural engineering, geographic information systems, computer science or other relevant engineering subject areas. Research experience in deep learning, big data analysis and programming languages (e.g., Python, MATLAB) will be a plus in the application.
Supervisor: Dr Jie Deng
Although collaboration has been recognised as a necessary component of success in the construction industry, concepts like building information modelling (BIM), integrated project delivery (IPD) and target value design (TVD) are still emerging. Yet collaborative working (CW) is reported to be fading within the UK construction industry, largely because of 'vested interest' and commercial behaviours. These behaviours are reinforced by the dominant procurement arrangements and 'institutional' factors which surround the project delivery model in construction. Accordingly, construction clients, contractors and supply-chain organisations struggle to realise the full benefits of CW across the value chain (upstream and downstream processes). In fact, these behaviours create costing approaches marred with irregularities and uncertainties and little shared understanding amongst stakeholders. Invariably, the prevailing approach remains discrete and fragmented, which continues to guide stakeholders towards a narrow view of costing and design activities as separate functions. This lack of a collaborative approach to costing arguably accounts for much of the cost overrun that is still prevalent in the industry. Conversely, the integration of design and construction processes creates an opportunity for stakeholders to be more deeply engaged in CW approaches, thus removing the major barriers to performance improvements.
The construction industry continues to face considerable uncertainty due to the economic downturn and the growing complexity and low-carbon demands of construction projects. To confront these challenges, industry professionals are exploring new methods of working, and the role of quantity surveyors and cost managers has become even more critical. These professionals are responsible for ensuring the financial stability of a project through tasks such as cost estimation and project control. They are also expected to extend their role beyond traditional functions and engage in activities such as measuring social value, implementing environmental, social and governance (ESG) principles, adopting whole-life and whole-asset thinking, and calculating carbon footprints across various projects and assets. Such efforts will need the use of collaborative tools like BIM and other integrated practices, which are currently reshaping project delivery. The growth of collaborative approaches opens new opportunities for stakeholders to create value for clients. However, this requires the development of cross-functional collaboration among stakeholders, of which cost management is an integral part.
This doctoral study aims to explore and develop strategies that would guide stakeholders (clients, designers, cost consultants, contractors, and supply chain groups) towards a collaborative costing approach. For example, how would cost consultants engage in a cross-functional collaborative team to assist other stakeholders in managing the costing function during early stages? The project will identify barriers, benefits, opportunities, and ethical implications of working in this collaborative fashion. The successful applicant will not only investigate the "collaborative costing" approach but also develop framework strategies with recommendations on how to assess the capabilities of stakeholders costing projects collaboratively. Research in this topic can develop into multiple possible PhD avenues, using either qualitative or quantitative research, at the organisational level (e.g., level of interactions on cost-related decisions, associated risks, etc.) or operational level (e.g., software to help gauge the level of effectiveness).
Supervisor: Dr Sa'id Ahmed
Visit Dr Sa'id Ahmed's profile and contact details >
The construction industry is identified as an intense resource-use sector and a great contributor to waste generation globally. It is estimated that in the next nine years, there will be 2.2 billion tonnes of construction and demolition waste in the world. Recent regional conflicts are likely to have increased this figure further. Research into the application of circular economy (CE) principles to mitigate these challenges in the construction industry has increased in the past five years; however, not much has been done in terms of actual implementation of the strategies. The integration of circular economy principles necessitates a collaborative effort between the government and the various stakeholders involved in the construction and demolition processes. It is important to consider both macro and micro factors in implementing some of the available solutions, while also acknowledging the diverse challenges faced by different firms within the industry. Regulatory controls have been a driver for growth in some instances; however, value creation and benefits to the entire spectrum of stakeholders are a better motivation. Collaboration among the actors along the supply chain, from planners to designers and project managers to suppliers and clients, is necessary for enhancing CE implementation in the construction industry. Key evaluators for CE implementation in construction are the Life Cycle Assessment (LCA) method, energy analysis, material flow analysis, and life cycle costing assessment, which have produced varying results.
CE frameworks proposed recently have not presented the holistic factors needed for the successful handling of concrete demolition waste in a developing nation's context.
Prospective doctoral applicants are therefore invited to undertake this study.
This project will design approaches for mapping and locating concrete demolition waste, and will propose resource-efficient methods of collection and sorting. It will draw on the application of life cycle assessment, using structural equation modelling (SEM) to optimise the factors that are critical to ensuring the intended benefits are delivered, while also incorporating experimental design with characterisation of demolition waste from a developing nation.
Supervisor: Dr Bukunmi Ogunsanya
Background and Current Challenge
The modern Olympic Games take place every four years and require the construction or repurposing of numerous buildings in the host city's Olympic Park to accommodate over 200 national teams and extensive service staff. These facilities include stadiums, accommodation and catering buildings, as well as professional service buildings. In addition, large-scale regional infrastructure renovations and upgrades are often necessary to meet the heightened transportation and logistics demands during the event.
Low-carbon design principles have been applied to Olympic facilities in previous games, and there have been efforts to establish guidelines for Olympic carbon auditing, such as the "Carbon Footprint Methodology for the Olympic Games" published by the IOC in 2018. However, these measures primarily focus on reducing carbon emissions at the individual building level or on ex-post carbon emission statistics for the entire event. There is little research on preliminary carbon forecasting that could be applied before the detailed planning and design stages of the Olympic Park. As a result, decision-makers often lack a comprehensive understanding of the carbon footprint and environmental impact—both direct (from the proposed Olympic buildings) and indirect (from the long-term change of use in nearby areas due to the Olympic Park). This insufficient understanding can lead to suboptimal decisions during the bidding process and early planning stages.
To address this challenge, this project aims to develop a preliminary carbon prediction method applicable to Olympic Park buildings and nearby communities during the early planning stages. This data-driven method, powered by historical Olympic Park design, energy records, and machine learning algorithms, will provide carbon predictions with reasonable accuracy without requiring detailed design input, even when many design parameters are still uncertain. Accurate carbon predictions in the early bidding and planning stages will offer crucial insights for detailed Olympic infrastructure design in later phases, helping to deliver a low-carbon Olympic Games and fostering a more sustainable Olympic legacy for local communities.
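A minimal sketch of the data-driven prediction step described above, fitting a gradient-boosting regressor to coarse features available at the bidding stage; the feature set and synthetic data are illustrative placeholders, not historical Olympic records.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for a table of historical venues: coarse features known
# at the bidding stage (floor area, seating capacity, new-build flag, climate
# index) and whole-life carbon in ktCO2e as the target.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(5_000, 120_000, n),   # gross floor area (m2)
    rng.uniform(1_000, 80_000, n),    # seating capacity
    rng.integers(0, 2, n),            # 1 = new build, 0 = retrofit/temporary
    rng.uniform(0, 1, n),             # climate severity index
])
y = 0.0008 * X[:, 0] + 0.0004 * X[:, 1] + 15 * X[:, 2] + 10 * X[:, 3] \
    + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

print("MAE (ktCO2e):", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```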
The simplified research steps and objectives:
The Novelty:
Supervisor: Dr Zishang Zhu