
Nameen

Solar panel AI inspector

I've become quite interested in using machine learning to automatically detect defects in solar panels.

A few friends of mine work as consultants in renewable energy, and they told me there is a ton of investment in huge arrays of solar panels, and that defect detection is an important part of these projects.

An array of solar panels

Over time, defects appear on solar panels and reduce their output. It's important to detect them because:

  • you can tell investors the "health" status of the solar farms they financed
  • you can replace defective solar panels
  • you can get a better understanding of which solar panels become defective and why, in order to improve the overall efficiency of your solar farm

Today, most defect detection is done manually: operators go out and check the status of the solar panels visually. As you can guess, this is quite expensive and time-consuming.

So people started using drones to fly over solar farms and do defect detection. However, as you can see in the image above, it's hard to notice defects visually from the sky: a defect can be a bit of broken glass, or a purely internal problem.

But defects have one thing in common: when a cell of a solar panel is defective, it doesn't absorb the sun's energy as efficiently, and so it radiates more of it back as heat. If you look at the panel in infrared, the defect stands out much more clearly, as in the picture below.

Aerial view of solar panels in infrared
The photo above was taken by a drone in infrared.

You can see three defects, in the form of three yellow dots, which correspond to solar cells that are hotter than the others. I've circled them in blue for you.
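Just to make the idea concrete (this is only a sketch, not an actual detection pipeline): once the infrared frame is an array of pixel intensities, hot cells can be flagged as statistical outliers against the rest of the frame.

```python
import numpy as np

def find_hot_spots(ir_frame: np.ndarray, n_sigma: float = 3.0) -> np.ndarray:
    """Flag pixels much hotter than the frame average.

    ir_frame: 2D array of infrared pixel intensities (hotter = brighter).
    n_sigma:  how many standard deviations above the mean counts as "hot";
              3.0 is a guess, the right value depends on camera and scene.
    Returns a boolean mask of candidate defect pixels.
    """
    mean, std = ir_frame.mean(), ir_frame.std()
    return ir_frame > mean + n_sigma * std
```

Of course, a fixed threshold like this gets fooled by anything else that is hot in the frame (like the grass around the panels, more on that below), which is part of why clean data and a learned model matter.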

So at this point I'm thinking people must be using machine learning to detect these defects automatically. But actually, very few organisations seem to do that. Taking infrared photos with drones is quite new, and most people who use it still have humans manually review the thousands of photographs the drones take!
It's better than operators on the ground, but it's still a lot of expensive work. It doesn't scale well.

So I've been spending some of my free time seeing if I can create a model for infrared defect detection on solar panels. The focus right now is getting clean data, so I've been using cvat.org to crop and annotate images. It's a long process, but I've got some nice datasets now :)

If anyone has suggestions or wants to participate, tell me. I can also post some follow-ups on my progress!

Top comments (6)

Rayan Nait Mazi

I would love to see your progress on this project. What are you doing to clean up the data?

Nameen

Sorry for the late reply, I see the community is growing! Maybe I can do a second post about it.

One of the difficulties is that you cannot remove the metadata the drone stamps onto each photo (annoying... I know...). So I crop the metadata out of every picture with commands on the terminal (it has the same position and size in all images). Then I need to crop out the grass (the yellow areas around the solar panels), but that is different in every photo. So I'm making a quick ML model to segment those areas, and to do that I need to annotate a bunch of photos for training. I have 3000 photos in total, so I took 150 of them, annotated the solar panel arrays (using cvat.org), and then trained a model to do the same on all the other photos.
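To give an idea, the metadata crop is roughly this (shown in Python rather than the terminal commands I actually use; the crop box here is made up, it depends on where your drone puts its overlay):

```python
from pathlib import Path
from PIL import Image

# Hypothetical geometry: the drone burns its metadata into a fixed strip
# at the bottom of every frame, so one crop box works for every photo.
CROP_BOX = (0, 0, 640, 460)  # (left, top, right, bottom) in pixels

src, dst = Path("raw_photos"), Path("no_metadata")
dst.mkdir(exist_ok=True)
for photo in src.glob("*.jpg"):
    Image.open(photo).crop(CROP_BOX).save(dst / photo.name)
```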

This worked, so now I have 3000 photos and their masks, which mark the areas covered by solar panel arrays. Next I need a quick script to crop each photo along its mask. Then I'll take these 3000 cleaned-up photos and train a model to detect the defects on them, which should work much better.
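That script should be simple: blank out everything outside the mask, then cut the photo down to the mask's bounding box. Something like this (a sketch, assuming each mask is a binary image the same size as its photo):

```python
import numpy as np
from PIL import Image

def crop_to_mask(photo_path: str, mask_path: str, out_path: str) -> None:
    """Keep only the solar panel arrays: zero the background, crop to the mask."""
    photo = np.array(Image.open(photo_path))
    mask = np.array(Image.open(mask_path).convert("L")) > 0

    photo[~mask] = 0  # remove the grass around the arrays
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
    left, right = np.argmax(cols), len(cols) - np.argmax(cols[::-1])

    Image.fromarray(photo[top:bottom, left:right]).save(out_path)
```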

Tell me if that is all clear! It's interesting work, but I'm doing this in my free time, so progress is slow (and annotating photos takes a while).

Rayan Nait Mazi

Super interesting project!
I looked it up, and very few people seem to be working on this besides RaptorMaps, who seem to have a strong platform: raptormaps.com/

joel

Interesting project, let us know when you have first results, which model you end up using, and how useful it is down the line :)

I don't know how useful this might be, but I saw this paper this week that discusses segmentation and a cool way to use semantics to generalize to unseen data (in particular, it could be useful here in the zero-shot setting).

joel

Another idea now that I've thought about it a bit more: maybe all you care about is whether each individual panel is fine or not. That is both easier to label (you only have to mark each panel instead of each pixel), and it lets you build a solution that only looks at axis-aligned panel crops (after detecting the corners and applying a homography to flatten each panel).
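Rough sketch of the flattening step I mean, with OpenCV (the corner detection is the actual hard part and is left out here; the output size is arbitrary):

```python
import cv2
import numpy as np

def flatten_panel(image: np.ndarray, corners: np.ndarray,
                  out_w: int = 128, out_h: int = 64) -> np.ndarray:
    """Warp one panel into an axis-aligned crop, given its four corners.

    corners: 4x2 array of (x, y) points in the order top-left, top-right,
             bottom-right, bottom-left (from whatever detector you use).
    """
    target = np.array([[0, 0], [out_w - 1, 0],
                       [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
    H = cv2.getPerspectiveTransform(corners.astype(np.float32), target)
    return cv2.warpPerspective(image, H, (out_w, out_h))
```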

joel

Oops, forgot the link: arxiv.org/pdf/2201.03546.pdf
Language-Driven Semantic Segmentation, by Li et al.