Machine learning is applied to a wide range of challenges in climate tech, from optimising renewable energy to forecasting energy demand and predicting solar production. As we come to rely more on these models, we often overlook a critical piece: their security. What happens if someone tampers with your model’s inputs, poisons your training data, or sneaks malicious code into an open-source package you’re using? These attacks can throw off predictions and disrupt energy systems, or even the grid itself.
In this talk, I’ll walk you through the OWASP Machine Learning Security Top 10, using real-world examples from climate tech to show how these attacks happen. I'll cover cases like manipulating energy consumption forecasts, poisoning training datasets, and sneaking malware into open-source libraries used for climate modelling. These aren't hypothetical threats; the risks are real, and the consequences can be serious.
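To make the poisoning scenario concrete, here is a minimal toy sketch (illustrative only, with made-up numbers rather than material from the talk) of how a handful of injected rows can bend a temperature-to-demand regression:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Toy energy-demand data: demand rises with temperature (think cooling load).
temperature = rng.uniform(15, 35, size=200).reshape(-1, 1)
demand = 2.0 * temperature.ravel() + rng.normal(0, 1.5, size=200)

clean_model = LinearRegression().fit(temperature, demand)

# Poisoning: an attacker injects a few extreme, mislabelled rows claiming
# absurdly low demand at peak heat.
poison_temp = np.full((10, 1), 35.0)
poison_demand = np.full(10, -100.0)
X = np.vstack([temperature, poison_temp])
y = np.concatenate([demand, poison_demand])

poisoned_model = LinearRegression().fit(X, y)

# Roughly 5% poisoned rows are enough to visibly bend the peak forecast.
peak = np.array([[35.0]])
print(f"clean forecast at 35°C:    {clean_model.predict(peak)[0]:.1f}")
print(f"poisoned forecast at 35°C: {poisoned_model.predict(peak)[0]:.1f}")
```

In this toy setup, the poisoned rows don't just nudge the forecast: they can flatten or even invert the learned relationship exactly where the grid is under the most stress.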
I’ll also share practical defences you can apply as a Python developer, data scientist, or data engineer to protect your models and systems: securing your ML supply chain, validating incoming data, and monitoring your pipelines for suspicious activity. You'll leave with concrete strategies to defend your work, so you can build systems that are not only smart but also safe and reliable.
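As a taste of the data-validation theme, here is a minimal sketch, assuming a hypothetical solar-production feed whose column names and bounds I've invented for illustration, of rejecting malformed rows before they reach training or inference:

```python
import pandas as pd

# Hypothetical schema and physical bounds; tune these to your own feed.
EXPECTED_COLUMNS = {"timestamp", "irradiance_wm2", "output_kw"}
BOUNDS = {"irradiance_wm2": (0, 1500), "output_kw": (0, 5000)}

def validate_batch(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows that violate the schema or physical bounds."""
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"batch missing columns: {missing}")
    mask = pd.Series(True, index=df.index)
    for col, (lo, hi) in BOUNDS.items():
        mask &= df[col].between(lo, hi)
    rejected = int((~mask).sum())
    if rejected:
        # A sudden spike in rejects is itself a signal worth alerting on.
        print(f"warning: dropped {rejected} out-of-range rows")
    return df[mask]

batch = pd.DataFrame({
    "timestamp": pd.date_range("2024-06-01", periods=3, freq="h"),
    "irradiance_wm2": [450.0, 9999.0, 610.0],  # second row is suspicious
    "output_kw": [120.0, 130.0, 155.0],
})
clean = validate_batch(batch)  # drops the out-of-range row
```

On the supply-chain side, one concrete first step is pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`), which refuses to install any package whose archive doesn't match a pinned hash.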
Why does this matter? Because in climate tech the stakes are incredibly high: the predictions we make and the systems we build influence the grid, energy policies, resource allocation, and consumer trust.
Outline of the Talk:
Key Takeaways
Climate tech is one of the most exciting and meaningful areas to work in. The systems we’re building have the potential to shape a more sustainable future, but if we don’t make security a priority, we risk undermining the trust our customers place in them. This talk will give you the tools and confidence to keep your machine learning models safe and to make sure they’re as reliable and impactful as they need to be.