When Your AI-powered Energy Optimization Doesn't Work, Who's Liable?

Tapeesh Sood

Director of Product, Univers

Published on 25 May 2025

Your customer deployed an AI system that promised 30% energy savings. Six months later, their bills are higher than before. The finger-pointing begins:

  • The AI vendor says it needs better data quality
  • The sensor company says its hardware is performing fine
  • The cloud provider says its platform is running perfectly
  • The building management system vendor says the integration is working
  • Everyone sends invoices for “troubleshooting time”

Meanwhile, your customer asks you one simple question: “Who’s going to fix this, and who’s paying for it?”

Welcome to the contract crisis that’s paralyzing building tech deals.

The Real Problem: No One Owns the Outcome

Traditional building contracts were written for a simpler world. You bought an HVAC system, you got HVAC support. You bought lighting controls, you got lighting warranties. But today’s AI-powered building systems span five different technology layers:

  • Physical sensors measuring temperature, occupancy, and air quality
  • Edge devices processing data locally
  • Gateways sending information to the cloud
  • Cloud platforms storing and analyzing everything
  • AI algorithms making optimization decisions

Each layer has a different vendor, each vendor has a different contract, and each contract points liability somewhere else.

The result? When the system fails to deliver promised results, everyone is responsible for their piece, but nobody is responsible for the whole.
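
To make the fragmentation concrete, here is a minimal sketch in Python. The layer names mirror the list above; the vendor names and contract scopes are invented for illustration, not drawn from any real contract.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str            # which part of the stack this is
    vendor: str          # the company contractually responsible for it
    contract_scope: str  # what that vendor's warranty actually covers

# A hypothetical five-layer deployment: every layer has its own vendor,
# and every contract stops at that layer's boundary.
stack = [
    Layer("Physical sensors", "SensorCo", "hardware defects only"),
    Layer("Edge devices", "EdgeCo", "local processing uptime"),
    Layer("Gateways", "GatewayCo", "data transmission to the cloud"),
    Layer("Cloud platform", "CloudCo", "storage and platform availability"),
    Layer("AI optimization", "AICo", "algorithm execution, not outcomes"),
]

# Whose contract covers the promised energy savings? None of them.
owners = [layer.vendor for layer in stack if "energy savings" in layer.contract_scope]
print(owners)  # -> [] : nobody is on the hook for the result the customer paid for
```

The point of the sketch is only this: each contract’s scope ends at a layer boundary, so the question “who owns the promised savings?” has no answer the customer can actually enforce.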

Why AI Makes Everything Worse

AI adds a whole new layer of accountability confusion:

Data Quality Issues: When AI underperforms, is it because sensors provided bad data, cloud storage corrupted information, or algorithms need retraining? Good luck proving which vendor caused the problem.

Black Box Decisions: Your AI system decides to shut down cooling in the middle of summer, causing equipment damage. The AI vendor says the algorithm worked correctly based on available data. But who validates that claim?

Continuous Learning Confusion: AI models that “learn and improve” create moving targets for performance guarantees. How do you hold vendors accountable for systems that change their behavior over time?

Explainability Requirements: New regulations demand AI decision transparency, but most vendors consider their algorithms proprietary. Who’s liable when you can’t explain why your building’s AI made costly decisions?

Univers Is Rewriting the Rules

Building technology customers don’t want to manage vendor liability—they want results. At Univers, we’ve recognized this fundamental shift and designed our approach accordingly.
Rather than forcing customers to coordinate between multiple vendors for sensors, analytics, AI optimization, and system integration, Univers takes end-to-end responsibility for decarbonization outcomes. When a customer implements our platform to optimize their building portfolio or renewable energy systems, they work with a single point of accountability.

This end-to-end approach means we manage the complexity behind the scenes, coordinating our connected sensors, AI algorithms, and system integrations to deliver guaranteed results. Instead of customers navigating contracts with separate IoT vendors, software providers, and integration specialists, they get one contract, one relationship, and one team accountable for success. The pattern is clear: customers want partners who absorb integration liability instead of passing it along.

Questions This Raises for Building Tech

For sales teams: Are we making it easier or harder for customers to understand who’s responsible when systems underperform? There’s a tension between showcasing technical capabilities and simplifying accountability.

For product development: Each integration point we add creates another potential liability handoff. How do we balance innovation with the contract complexity we’re creating for customers?

For company strategy: As AI capabilities evolve rapidly, how do we structure partnerships and contracts that can adapt to changing technology without leaving customers stranded?

Emerging Contract Approaches

Some companies are experimenting with new models, though it’s early days:

  • Single integrator approaches put one vendor in charge of coordinating everything behind the scenes, which concentrates risk and requires deep partnerships.
  • Shared accountability frameworks align multiple vendors around common outcomes, although defining fair liability splits remains difficult.
  • Performance-based contracts emphasize measurable results rather than technical deliverables, although methods for validating AI-driven performance are still maturing (a simple verification sketch follows this list).
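
For the performance-based model in particular, the open question is how savings get verified at all. Below is a minimal, purely illustrative sketch of one common-sense approach: compare metered consumption against a weather-adjusted baseline agreed before the AI went live. The coefficients, numbers, and the simple degree-day adjustment are assumptions for illustration, not any standard methodology or any vendor’s actual verification process.

```python
def expected_baseline_kwh(degree_days: float, base_load_kwh: float,
                          kwh_per_degree_day: float) -> float:
    """Consumption the building would likely have used, from pre-project coefficients."""
    return base_load_kwh + kwh_per_degree_day * degree_days

def verified_savings_pct(metered_kwh: float, degree_days: float,
                         base_load_kwh: float, kwh_per_degree_day: float) -> float:
    """Savings relative to the weather-adjusted baseline, as a percentage."""
    baseline = expected_baseline_kwh(degree_days, base_load_kwh, kwh_per_degree_day)
    return 100.0 * (baseline - metered_kwh) / baseline

# Hypothetical month under a contract that guaranteed 30% savings:
savings = verified_savings_pct(
    metered_kwh=82_000,      # what the utility meter actually recorded
    degree_days=310,         # how hot the month was
    base_load_kwh=40_000,    # baseline coefficients agreed up front,
    kwh_per_degree_day=200,  # before the AI system went live
)
print(f"Verified savings: {savings:.1f}%")  # ~19.6%, short of the guaranteed 30%
```

Even a calculation this simple raises the contractual questions above: who fixes the baseline coefficients, who audits the meter data, and what happens when a continuously learning system drifts away from the assumptions the baseline was built on.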

The Bottom Line

AI-powered building systems deliver incredible value when they work. But “when they work” depends on flawless integration across multiple vendors who traditionally blame each other when problems arise. The companies winning enterprise deals aren’t building better AI—they’re building better contracts.

Your customers want smart buildings that deliver promised results. They don’t want to become contract lawyers to get them. The vendors who figure out how to absorb integration liability while delivering guaranteed outcomes will capture the market. Everyone else will keep fighting over who’s responsible for the last system failure.
