Sunday, April 26, 2026

Solar + AI: The Prediction Illusion — Why Operations, Not Forecasting, Define Performance

AI has become one of the dominant narratives in the solar industry.

The idea is simple and compelling:
collect large volumes of historical data, apply machine learning, and predict failures before they happen.

It sounds logical.
Predictive maintenance has delivered real value in industrial sectors shaped by companies like GE and Siemens.

But there is a fundamental question that is rarely asked:

👉 Does solar actually behave like the systems where prediction works?


Solar Failures Are Not Predictable in the Same Way

In real-world solar operations, failures are rarely systematic.

They tend to be:

  • A single underperforming module
  • A loose or degraded connector
  • A cable fault
  • Localized soiling or shading
  • An inverter that stops unexpectedly

These events share three defining characteristics:

👉 They are local, random, and non-reproducible

This creates a structural limitation:

  • Increasing data volume does not necessarily improve predictability
  • Historical patterns often fail to generalize
  • Early warning signals are weak or inconsistent

Yet much of the industry continues to assume that with enough data, these problems will become predictable.

That assumption deserves closer examination.


The Problem with “Average Degradation”

Another widely accepted concept is long-term degradation:

  • 0.5%–1% annual decline
  • Smooth performance curves over time

While statistically valid, this is not what drives operational outcomes.

In practice:

  • Systems operate normally → near full output
  • A fault occurs → sudden, discrete loss

👉 Performance is driven by exceptions, not averages

Focusing on average degradation can obscure the real drivers of loss:
localized, event-based failures.
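The gap between a smooth average and discrete events can be made concrete with a back-of-the-envelope comparison. All figures below are illustrative assumptions, not field data:

```python
# Sketch: smooth "average degradation" vs. discrete fault losses
# over one year, for a hypothetical plant. Illustrative numbers only.

ANNUAL_DEGRADATION = 0.007   # 0.7%/year, mid-range of the 0.5%-1% figure
DAILY_OUTPUT_KWH = 400.0     # nominal daily output of the hypothetical plant
DAYS = 365

# Degradation accrues gradually, so the average shortfall over the
# first year is roughly half the end-of-year rate.
loss_degradation = DAILY_OUTPUT_KWH * (ANNUAL_DEGRADATION / 2) * DAYS

# Two discrete faults in the same year:
loss_string_fault = DAILY_OUTPUT_KWH * 0.08 * 20   # one string down, -8%, 20 days
loss_inverter_trip = DAILY_OUTPUT_KWH * 1.00 * 3   # inverter offline, 3 days

print(f"degradation loss : {loss_degradation:6.0f} kWh")
print(f"string fault     : {loss_string_fault:6.0f} kWh")
print(f"inverter trip    : {loss_inverter_trip:6.0f} kWh")
```

Under these assumptions, two short-lived faults cost more energy than a full year of average degradation — which is the point: the exceptions dominate.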


Weather Data Does Not Eliminate Uncertainty

AI models often incorporate weather data to improve forecasting.

This is useful — but limited.

The reality is simple:

👉 Weather itself is uncertain

Prediction error remains a dominant factor in output variability.


What Actually Matters: Knowing the Present

If prediction is structurally limited, the optimization target changes.

The critical capability is not predicting the future, but:

👉 Understanding the present with high precision

This means:

  • Module-level visibility (ideally)
  • String-level visibility (at minimum)
  • Real-time awareness

Technologies such as module-level monitoring (e.g., SolarEdge) move in this direction.

👉 If you can pinpoint the issue immediately, prediction becomes less critical.
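As a sketch of what "knowing the present" can look like at string level, the following compares each string's instantaneous output to the median of its peers. Peer comparison cancels shared conditions such as irradiance and temperature, so no forecast is involved. The string names, readings, and threshold are hypothetical:

```python
# Minimal sketch of string-level anomaly detection via peer comparison.
# String names and kW readings are hypothetical.
from statistics import median

def flag_underperformers(readings: dict[str, float],
                         threshold: float = 0.10) -> list[str]:
    """Return strings producing more than `threshold` below the median
    of all strings at the same instant. Comparing peers cancels shared
    conditions (irradiance, temperature), so no forecast is needed."""
    med = median(readings.values())
    if med <= 0:  # nighttime / no output: nothing to compare against
        return []
    return [name for name, kw in readings.items()
            if (med - kw) / med > threshold]

print(flag_underperformers({"S1": 4.8, "S2": 4.9, "S3": 3.6, "S4": 4.7}))
# → ['S3']  (about 24% below the 4.75 kW median)
```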


Not Everything Should Be Fixed

There is another uncomfortable but essential reality:

👉 Fixing every issue is economically irrational

Solar O&M is not about eliminating all faults.
It is about making decisions:

  • What to fix
  • When to fix
  • What to ignore

Key variables include:

  • Energy loss
  • Repair cost
  • Operational timing

👉 O&M is fundamentally about prioritization, not perfection
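That prioritization can be sketched as a simple economic comparison: value of the energy recovered versus the cost of the repair. The energy price, loss figures, and issue names below are illustrative assumptions:

```python
# Sketch of the repair-or-ignore decision as an economic comparison.
# Prices, losses, and issue names are illustrative assumptions.

ENERGY_PRICE = 0.12  # $/kWh, assumed sale rate

def repair_priority(daily_loss_kwh: float, repair_cost: float,
                    horizon_days: int = 365) -> float:
    """Net benefit of fixing now: value of energy recovered over the
    horizon, minus the repair cost. Negative = not worth fixing yet."""
    return daily_loss_kwh * horizon_days * ENERGY_PRICE - repair_cost

issues = {
    "inverter down (whole array)": (400.0, 1500.0),  # (kWh/day lost, repair $)
    "one string offline":          (30.0,   400.0),
    "single weak module":          (1.5,    350.0),
}

for name, (loss, cost) in sorted(issues.items(),
                                 key=lambda kv: repair_priority(*kv[1]),
                                 reverse=True):
    print(f"{name:28s} net benefit ${repair_priority(loss, cost):9.0f}")
```

Under these assumptions the weak module comes out negative: ignoring it is the rational choice, which is exactly the "what to ignore" decision above.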


This Is Not an Argument Against AI

It is important to be clear:

👉 This is not an argument against AI

AI is valuable — but only when its role is correctly defined.

The problem is not the technology.
The problem is how it is positioned.

Rather than treating AI as a prediction engine, its real value lies in:

  • Assisting anomaly detection
  • Prioritizing issues based on impact
  • Supporting operational decision-making

👉 AI should optimize response, not attempt to predict randomness


Where This Becomes Critical: Second-Hand Modules

This perspective has direct implications for one of the most debated topics in solar:

👉 The use of second-hand (reused) modules

They are often dismissed because:

  • Quality varies
  • Failure risk is perceived as higher
  • Extensive pre-screening is assumed necessary

But this reasoning is based on a flawed premise:

👉 That quality must be guaranteed upfront


The Limits of Pre-Screening

Even new modules are not immune to:

  • Early failures
  • Random defects
  • Performance variability

👉 Perfect pre-screening is impossible.

So the question becomes:

👉 Why invest heavily in upfront filtering when variability cannot be eliminated anyway?


Quality Through Operations

A more robust model is:

  • Accept variability as a given
  • Detect issues immediately
  • Replace components selectively based on economics

With module-level control:

  • Faults can be isolated
  • System impact can be contained
  • Replacement can be targeted

👉 Variability becomes manageable, not prohibitive
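A rough illustration of the containment argument, using hypothetical numbers and deliberately ignoring bypass-diode effects and real IV-curve behavior:

```python
# Sketch: how module-level control contains the impact of one weak module.
# Worst-case simplification: in a plain series string without mitigation,
# the weakest module drags the whole string toward its level.

MODULES = 20
MODULE_W = 400.0      # rated watts per module (hypothetical)
WEAK_FRACTION = 0.5   # one module producing 50% of rated power

# Plain series string, worst case: output sags to the weak module's level.
string_plain = MODULES * MODULE_W * WEAK_FRACTION

# Module-level control (per-module MPPT): only the weak module loses
# output; the other 19 keep producing at full power.
string_mlpe = (MODULES - 1) * MODULE_W + MODULE_W * WEAK_FRACTION

print(f"plain series : {string_plain:6.0f} W")
print(f"module-level : {string_mlpe:6.0f} W")
```

The absolute numbers are invented; the structural point is that with module-level control the loss is bounded by one module, so replacement can be decided per module on economics alone.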


The Real Conclusion

Solar is often framed as a hardware-driven industry.

It is not.

👉 Solar is an operations-driven business

Once this is understood:

  • AI shifts from prediction to operational support
  • Data shifts from volume to granularity
  • Second-hand modules become viable
  • System design becomes more flexible and scalable

Final Thought

The industry is trying to answer:

👉 “Can we predict failures before they happen?”

But the more relevant question is:

👉 “How quickly can we detect and respond when they do?”

Solar systems do not fail on average.
They fail in exceptions.

And performance is not determined by how well you predict —
but by how well you respond.
