SustainML
SustainML: Application Aware, Life-Cycle Oriented Model-Hardware Co-Design Framework for Sustainable, Energy Efficient ML Systems

We envision the development of a sustainable, interactive ML framework for Green AI that comprehensively prioritizes and advocates energy efficiency across the entire life cycle of an application and avoids AI waste. Developers can describe the tasks they want to solve, and the framework will analyze the needs of each task, then divide and encode the problem into an abstract functional semantic catalogue. The framework will suggest several ML models, with knowledge transfer and recycling from its collection of neural network functional knowledge cores. During the interactive design process, developers can reconfigure a model with other cores, optionally with pre-trained parameters, or design their own models in popular neural network languages. The knowledge cores can shrink or expand to meet the needs of the specific problem. By understanding the ML task and the functional segments of the model under design, the framework will convey the experience of top-tier AI experts by suggesting best practices and more efficient alternatives, and by steering users away from previous negative results.

However, the sustainable ML framework is not yet another AI studio tool. We will investigate the detailed footprint of various computing and data hardware and develop novel hardware accelerators optimized for different layers and operations. Users will therefore see the estimated CO2 footprint, hardware resources, and training time during the design process, before these costs actually occur.

Such a framework can be used by anyone who needs to develop ML solutions. A novice user will learn what functionality the different parts of a neural network provide and why certain model structures are better suited to specific tasks. Intermediate users, or experts from other fields who want to use AI to solve their problems, can leverage the framework to develop efficient models that are optimized for their goals with the knowledge of AI experts. AI researchers can skip the problem-description process and leverage the efficiency and transparency of the framework to benchmark and optimize the carbon footprint of their ML models. It also becomes easy to interchange and adapt efficient methods from similar domains, which may accelerate and stimulate more novel and efficient methods. By combining application task understanding, comprehensive optimization methods, knowledge recycling, and efficient hardware layers, the SustainML framework will bring energy efficiency to the forefront for every AI researcher and developer across the entire life cycle of their applications.
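As a rough illustration of the kind of up-front estimate the framework could surface during the design process, the sketch below computes training energy and CO2 from accelerator power draw, estimated training time, data-centre overhead (PUE), and grid carbon intensity. The function name, parameters, and default values are illustrative assumptions for this sketch, not part of the SustainML design.

```python
# Minimal sketch (assumptions only): a pre-training estimate of energy and CO2
# for a planned training run, of the kind a user could see before training starts.
# Power draw, PUE, and grid carbon intensity below are placeholder values.

def estimate_training_co2(
    gpu_count: int,
    gpu_power_watts: float,            # average board power per accelerator
    training_hours: float,             # estimated wall-clock training time
    pue: float = 1.5,                  # data-centre power usage effectiveness
    grid_kg_co2_per_kwh: float = 0.4,  # carbon intensity of the local grid
) -> dict:
    """Return estimated energy (kWh) and CO2 (kg) for a training run."""
    compute_energy_kwh = gpu_count * gpu_power_watts * training_hours / 1000.0
    total_energy_kwh = compute_energy_kwh * pue   # include cooling and overhead
    co2_kg = total_energy_kwh * grid_kg_co2_per_kwh
    return {"energy_kwh": total_energy_kwh, "co2_kg": co2_kg}

# Example: 4 GPUs at ~300 W each, trained for an estimated 48 hours.
print(estimate_training_co2(gpu_count=4, gpu_power_watts=300, training_hours=48))
```

In practice such an estimate would be driven by the model and hardware choices made in the framework (e.g. estimated FLOPs per layer and the profiled efficiency of the selected accelerator), rather than by hand-entered constants as in this sketch.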