Research Demos / Tools

[ Pervasive AI Hardware Design ]
We are designing a new generation of pervasive AI hardware based on the principle of learning automata. We are aiming for our first ASIC-based chip design by March 2020, supported by a suite of design-exploration and automation tools implemented on other platforms, such as FPGAs and microcontrollers. Our early results have demonstrated AI hardware up to 1000x more energy efficient than state-of-the-art solutions. The research is currently funded by two grants, from LRF-ICON and EPSRC IAA. More updates on this to follow soon.
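To illustrate the underlying principle (not our chip design), the sketch below implements a classic two-action Tsetlin learning automaton: a small finite-state machine that strengthens its current action on reward and drifts toward the opposite action on penalty. The class name and state layout are illustrative assumptions.

```python
class TsetlinAutomaton:
    """Two-action Tsetlin automaton with 2*n states (illustrative sketch).

    States 1..n select action 0; states n+1..2n select action 1.
    Deeper states (towards 1 or 2n) mean stronger confidence."""

    def __init__(self, n=6):
        self.n = n
        self.state = n  # start at the boundary, on the action-0 side

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Move deeper into the current action's half (reinforce).
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Move towards the boundary; may flip the chosen action.
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1
```

Because decisions and updates are simple increment/decrement operations on a bounded counter, the automaton maps naturally onto very small, low-power logic, which is what motivates its use in hardware.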

[ Embedded Genomics ]
Using hardware/software co-design, we are developing algorithms and implementation solutions for embedded genomics. The aim is to make whole-genome sequencing highly energy efficient, portable and low-cost, enabling personalised healthcare. We are liaising with industry to translate our research (please write to us if you are interested) and have already published a journal paper in IEEE TCBB (see Publications), with a number of other papers to follow. The project is funded by grants from the Royal Society and EPSRC IAA.
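As a flavour of the kind of kernel such pipelines accelerate (this is a generic textbook sketch, not the method of the TCBB paper), seed-based read mapping first indexes the reference by k-mers and then looks up each read's k-mers to find candidate alignment positions:

```python
def kmer_index(reference, k):
    """Build a hash index mapping every k-mer in the reference
    to the list of positions where it occurs."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def seed_matches(read, index, k):
    """Return (read_offset, ref_position) seed hits for a read;
    a full mapper would then extend/verify these candidates."""
    hits = []
    for i in range(len(read) - k + 1):
        for pos in index.get(read[i:i + k], []):
            hits.append((i, pos))
    return hits
```

The lookup-heavy, data-parallel structure of this step is exactly what makes it a good target for energy-efficient hardware acceleration.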

[ Power-infused, Real-Power AI Hardware ]
A substantial amount of energy is lost in traditional systems at the system boundaries: from batteries or energy harvesters to power managers, and then on to the integrated circuits. We are developing power-infused integrated hardware solutions to minimise this loss: highly power-elastic AI hardware that operates in tandem with variable power scavengers and has the natural capability to operate across a dynamic power domain (which we call Real-Power AI Hardware). The aim is to enable a new generation of AI hardware that can operate in pervasive environments. The project is funded by EPSRC DTP grants.

[ LITTLE compute controlling the BIG convolution algorithms ]
Dave Burke (our Stage 4 PhD researcher) explains how inferring the significance of image regions prior to processing them can substantially reduce the total amount of data processed, improving performance and reducing energy consumption by an order of magnitude or more. This is a classic example of a LITTLE compute routine (significance inference) controlling a BIG compute algorithm (image convolution filtering).
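The LITTLE-controls-BIG idea can be sketched in a few lines (a simplified 1-D stand-in, with illustrative thresholds, not the actual demo code): a cheap significance test decides, block by block, whether the expensive filter runs at all.

```python
def significance(block):
    """LITTLE compute: a cheap proxy for how much information a block holds."""
    return max(block) - min(block)

def smooth(block):
    """BIG compute stand-in: 3-tap mean filter (edge samples copied through)."""
    out = block[:]
    for i in range(1, len(block) - 1):
        out[i] = (block[i - 1] + block[i] + block[i + 1]) / 3
    return out

def adaptive_filter(signal, block_size=4, threshold=1.0):
    """Run the expensive filter only on blocks the cheap test marks
    significant; pass near-constant blocks through untouched."""
    result = []
    skipped = 0
    for s in range(0, len(signal), block_size):
        block = signal[s:s + block_size]
        if significance(block) >= threshold:
            result.extend(smooth(block))
        else:
            result.extend(block)
            skipped += 1
    return result, skipped
```

Every skipped block is convolution work (and energy) that never happens, which is where the order-of-magnitude savings come from on sparse or smooth inputs.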

[ Significance-driven Adaptive Approximate Computing ]
Based on the work of Dave Burke, Dainius Jenkius and Issa Qiqieh of the Lab, we presented an invited special-session paper at CODES+ISSS 2017. The work is one of our ground-breaking approaches to inferring data significance in order to adapt computational effort, minimising energy consumption while meeting specified performance and quality requirements.

[ Power, Energy and Reliability Trade-off Tool Demo by Dr Ashur Rafiev ]
The video created by Dr Ashur Rafiev demonstrates how power, energy and reliability trade-offs can be modelled using a Region of Reliable (RoR) operation, through the PER tool. Our team was actively involved in supporting the tool validation through extensive characterisation experiments. The tool can be found here:

[ Many-Core Speedup and Parallelisation Models Using Performance Counters ]

Performance counters can be used directly to accurately estimate the speedup and parallelisation of many-core applications, without any intrusive application instrumentation or software modification. See: for more details. This is work in progress; part of the results and validations were submitted to PARMA-HIPEAC 2017. The benchmark application (pthreads.c), which can control the parallelisation and speedup, was written by Ashur Rafiev. The work is part of the PRiME research project.
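A toy illustration of the idea (Amdahl's-law based, not the project's actual estimator): given cycle counts measured by hardware counters on 1 core and on n cores, one can invert Amdahl's law to estimate the parallelisable fraction of the workload, then predict speedup at other core counts.

```python
def parallel_fraction(t1_cycles, tn_cycles, n):
    """Invert Amdahl's law: from cycle counts on 1 and n cores,
    estimate the parallelisable fraction p of the workload."""
    speedup = t1_cycles / tn_cycles
    return (1.0 / speedup - 1.0) / (1.0 / n - 1.0)

def predicted_speedup(p, m):
    """Amdahl's law: speedup on m cores for parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / m)
```

For example, a run that takes 1000 Mcycles on 1 core and 325 Mcycles on 4 cores implies p = 0.9, from which the model predicts roughly 4.7x speedup on 8 cores, all without touching the application's source.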

[ Power-Aware Performance Optimisation of Concurrent Many-Core Applications ]

Matthew Travers, an industrial intern of the lab, demonstrates a novel approach to saving energy in many-core systems executing concurrent applications. The approach was presented at ACSD'16. See Publications.

[ Power Adaptive Distributed Power Governor for Many-Core Applications ]
A distributed Linux power governor was demonstrated as part of the PRiME research project at DATE'15 and also at HiPEAC'15. The governor accepts an annotated power budget, based on which it controls per-core voltage and frequency. As an example demonstration, an Intel Xeon E5-2650 v4 processor (12 cores) was used, running the PARSEC raytrace benchmark on Ubuntu 12.04. The source code of this governor will be available shortly. A Likwid-based power monitor (run with root privilege) was used to measure CPU power consumption.

Power Budget: 8 watts (the per-core governor limits the application to 2 cores at low voltages/frequencies; application performance drops in terms of fps)

Power Budget: 25 watts (the per-core governor enables the application to use more cores at low voltages/frequencies; application performance improves)

Power Budget: 55 watts (the per-core governor allows the application to use the highest number of cores with scaled-up voltages/frequencies; application performance improves further)
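The budget-driven behaviour above can be sketched as a simple allocation policy (the operating-point table and greedy search are illustrative assumptions, not the governor's actual algorithm): given a power budget, pick the core count and DVFS level that maximise aggregate frequency without exceeding the budget.

```python
# Hypothetical per-core DVFS operating points: (frequency_GHz, power_W).
LEVELS = [(1.2, 2.0), (1.8, 4.0), (2.4, 7.0)]

def allocate(budget_w, max_cores=12):
    """Pick the (cores, level) pair with the highest aggregate frequency
    whose total power stays within the budget."""
    best = (0, LEVELS[0], 0.0)  # (cores, level, aggregate frequency)
    for cores in range(1, max_cores + 1):
        for freq, power in LEVELS:
            if cores * power <= budget_w and cores * freq > best[2]:
                best = (cores, (freq, power), cores * freq)
    return best[0], best[1]
```

With this toy table, a tight budget yields few cores at low voltage/frequency, while a generous budget unlocks all cores at a higher operating point, mirroring the three demonstrations above.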

[ Study of the Impact of L1 Cache Size on Application Performance: using the MBENCH FFT application in gem5 ]
A demonstration of how gem5 can be used to compile and execute an application with a given L1 cache size.

A demonstration of how gem5 can be used to compare two given cache sizes. Note that the difference in performance is clear (audio to be added soon).
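The intuition behind the comparison can be captured with a back-of-the-envelope model (a toy sketch with made-up constants, not gem5 output; in gem5's classic example configs the cache size is typically set with options such as `--caches --l1d_size=32kB`): a larger L1 covers more of the working set, cutting the miss rate and hence cycles per instruction.

```python
def miss_rate(size_kb, working_set_kb=64):
    """Toy model: misses fall linearly as the L1 covers more of the
    working set, down to a 1% compulsory-miss floor."""
    coverage = min(size_kb / working_set_kb, 1.0)
    return 0.30 * (1.0 - coverage) + 0.01

def cpi(size_kb, base_cpi=1.0, miss_penalty=20):
    """Cycles per instruction with an L1 of the given size."""
    return base_cpi + miss_rate(size_kb) * miss_penalty
```

Under these assumed constants, a 16 kB L1 gives a CPI of 5.7 versus 1.2 for a 64 kB L1, the same qualitative gap the gem5 runs make visible with real statistics.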