When I think back on the beginning of my career in wind, it’s amazing to see how far we’ve come in understanding and predicting wind performance. Compare how we measure and model wind energy today with our methods of the late 1990s and early 2000s, and the shift is so dramatic it’s like a caveman finally moving out of his cave into a high-rise apartment. And like that cave dwelling, our early methods for evaluating the wind were crude.
In the early days, companies sent grizzled (but experienced) surveyors into the field to look for trees bent over by the wind. They set off flares and smoke bombs, or put up long streamers, and watched what the wind did to them to gauge turbulence. If the site survey turned up enough crooked trees to look promising, the gold standard in those days for actually measuring the wind conditions at a site was to erect a 10-meter to 40-meter meteorological tower. Greater heights were unnecessary because those were roughly the maximum hub heights of so-called utility-scale turbines at the time.
Once the wind was measured at the site, the methods for translating those measurements into a long-term estimate of wind speed at every turbine location were also fairly crude. The field of wind resource assessment has long relied on the concept of measure-correlate-predict (MCP) to estimate wind power performance. For those not familiar, this framework collects short-term (months to years) wind measurements at a prospective site, finds a long-term (hopefully decades) wind reference, and uses a statistical approach to correlate the reference with the site measurements to produce a long-term prediction of wind resource at the measurement location.
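In its simplest form, the MCP idea described above can be sketched in a few lines of code: fit a linear relationship between concurrent reference and site records, then apply it to the full long-term reference. This is only a toy illustration with invented numbers; real assessments use sectorwise, distribution-based or machine-learning variants.

```python
# Minimal measure-correlate-predict (MCP) sketch using ordinary least
# squares. All data values are illustrative, not real measurements.

def fit_linear(reference, site):
    """Fit site = slope * reference + intercept by least squares."""
    n = len(reference)
    mean_r = sum(reference) / n
    mean_s = sum(site) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(reference, site))
    var = sum((r - mean_r) ** 2 for r in reference)
    slope = cov / var
    intercept = mean_s - slope * mean_r
    return slope, intercept

def predict_long_term(long_term_reference, slope, intercept):
    """Apply the fitted relationship to the long-term reference record."""
    return [slope * r + intercept for r in long_term_reference]

# Concurrent short-term records (e.g. monthly mean wind speeds, m/s)
reference_short = [5.0, 5.5, 6.0, 6.5, 7.0, 6.2]   # airport / reanalysis
site_short      = [5.8, 6.4, 7.1, 7.6, 8.2, 7.3]   # on-site met tower

slope, intercept = fit_linear(reference_short, site_short)

# Long-term reference climatology scaled to the site
reference_long = [5.2, 6.1, 6.8, 5.9, 6.4]
site_long_term = predict_long_term(reference_long, slope, intercept)
print(f"long-term site mean: {sum(site_long_term) / len(site_long_term):.2f} m/s")
```

The point of the exercise is that the short concurrent period anchors the site to a reference that spans decades, so the prediction reflects long-term climate rather than one unusually windy or calm year.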
The next step is to use another set of models, again fairly rudimentary in the early days, to estimate the wind potential at each turbine location, which is always some distance from the measurement location. Once the wind resource is understood, we can begin estimating power, which requires modeling how turbines interact with each other and how turbine performance varies under different atmospheric conditions. To some degree, all methods of conducting an assessment fit into this MCP framework, even those of yore. However, our approaches have progressed from the nearly laughable to the highly sophisticated.
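The turbine-interaction step mentioned above is, at its simplest, handled with engineering wake models such as the classic Jensen (Park) model, which assumes a linearly expanding wake behind each rotor. The sketch below is a single-wake illustration with assumed values for the thrust coefficient and wake decay constant; it does not represent any particular consultant's method.

```python
import math

# Jensen (Park) single-wake model sketch: one of the simplest engineering
# wake models used in layout studies. Parameter values are illustrative.

def jensen_wake_speed(u0, x, rotor_diameter, ct=0.8, k=0.075):
    """Wind speed x meters directly downstream of a turbine.

    u0: free-stream speed (m/s); ct: thrust coefficient;
    k: wake decay constant (~0.075 onshore, ~0.04 offshore).
    """
    deficit = (1.0 - math.sqrt(1.0 - ct)) * (
        rotor_diameter / (rotor_diameter + 2.0 * k * x)
    ) ** 2
    return u0 * (1.0 - deficit)

# Speed seven rotor diameters behind a 100 m rotor in an 8 m/s flow
u = jensen_wake_speed(u0=8.0, x=700.0, rotor_diameter=100.0)
print(f"waked speed: {u:.2f} m/s")
```

Modern assessments layer far more physics on top of this, but the basic intuition survives: downstream turbines see a speed deficit that decays with distance, and layout design is largely about managing that deficit.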
The old way of conducting MCP used a short met tower at the site for about a year and then correlated the tower measurements with a long-term reference dataset, usually from a publicly available airport station. These airport stations were never designed for this purpose: they measured low to the ground (10 meters), were frequently too far from the site, and sometimes offered only short records (five to seven years) or inconsistent ones because of changes in instrumentation. Once a point correlation was made at the met tower, very basic methods were used to extrapolate that estimate to all of the turbine locations, each some distance from the tower.
The field of atmospheric sciences knew that wind flow could be incredibly complex over even short distances and used supercomputers to understand those differences, but in the wind energy industry, that complexity was distilled into highly simplified models that could run quickly on a consultant’s laptop. It was truly astounding to me and my colleagues that wind development companies and financiers were staking hundreds of millions of dollars on these approaches. With hindsight being 20/20, we now know that the risk was real: these simplistic techniques routinely and dramatically over-predicted actual performance.
The MCP framework has certainly evolved over time as the industry has matured in its understanding of risk and the cost benefits of conducting more rigorous wind resource assessments. For measurement, we moved to installing one or two 60-meter towers with good-quality anemometers, at a rough cost of $30,000. Today, most companies are now willing to invest the capital to install one to four 80-meter towers (or even higher to chase the ever-increasing hub heights), where the permitting costs of one tower alone can sometimes exceed $50,000 in the California market.
With modern wind turbines reaching 100 meters or higher, met towers are becoming impractical. They are expensive to permit, install and maintain, and they increase safety risks for everyone working on or near them. If a site is abandoned due to low wind resources, you are then left with a high decommissioning cost or a stranded asset.
Mobile remote sensing technologies, such as ground-based SoDAR and LiDAR that can measure up to 200 meters, overcome these measurement challenges. They can be deployed in the early stages of a prospecting campaign, typically with no permitting required, and then moved to provide better spatial characterization of the site or to a new project if the site is abandoned. They significantly reduce vertical extrapolation uncertainty when used in conjunction with a more cost-effective traditional 60-meter met mast. Today, remote sensing information is sometimes even used as the primary measurement data in securing financing for a wind project. This is something we are seeing much more of, as the industry’s understanding of the technology has matured.
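The vertical extrapolation mentioned above is commonly done with a power-law shear profile: fit the shear exponent from two measured heights on the mast (or validate it against a lidar profile), then project the speed up to hub height. The sketch below uses invented readings purely for illustration.

```python
import math

# Power-law vertical extrapolation sketch. All values are illustrative.
# v(z) = v_ref * (z / z_ref) ** alpha, where alpha is the shear exponent
# fitted from two measurement heights.

def shear_exponent(v1, z1, v2, z2):
    """Fit alpha from speeds v1, v2 measured at heights z1 < z2."""
    return math.log(v2 / v1) / math.log(z2 / z1)

def extrapolate(v_ref, z_ref, z_target, alpha):
    """Project the speed measured at z_ref up to z_target."""
    return v_ref * (z_target / z_ref) ** alpha

# Hypothetical 40 m and 60 m anemometer readings on a mast
alpha = shear_exponent(v1=6.5, z1=40.0, v2=6.9, z2=60.0)
hub = extrapolate(v_ref=6.9, z_ref=60.0, z_target=100.0, alpha=alpha)
print(f"alpha = {alpha:.3f}, estimated 100 m speed = {hub:.2f} m/s")
```

The further the projection above the top anemometer, the more the result depends on the assumed profile shape, which is exactly the uncertainty a lidar or sodar measuring to 200 meters removes.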
The correlate component has also seen major strides with the transition to gridded reanalysis data archives such as NNRP, MERRA and ERA-Interim. Reanalysis data offer much stronger and more consistent references drawn from a variety of observational sources, and they benefit greatly from long-term records of 30-40 years rather than a decade or less. Most developers have also staffed up, hiring more meteorologists, GIS professionals and environmental specialists to help uncover fatal flaws in a project, such as endangered species or other siting concerns, much earlier on.
Technologies that more accurately capture weather’s complexity and use advanced approaches to predict the wind speed at turbine locations from the on-site observations have also substantially improved. Physics-based numerical weather prediction (NWP) models, such as WRF (Weather Research and Forecasting), offer realistic wind flow information fed by numerous observational sources and are rapidly processed on powerful supercomputers. These models were initially developed within the realm of atmospheric science and then applied first to wind energy forecasting and later (and now increasingly) to wind energy assessment.
The current state-of-the-art of wind resource assessment combines the rich weather datasets obtained from NWP weather simulation models with numerous on-site measurement locations using modern data mining and machine learning approaches. These data and computationally intensive approaches also offer a much more sophisticated way of modeling turbine interactions and the time-varying effects of site weather conditions on turbine performance. All of this effort is designed to scientifically explain the weather and climate at a proposed wind energy project and to dramatically reduce the uncertainties of long-term power production estimates.
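As a toy illustration of the idea of learning the mapping from modeled weather to on-site measurements, here is a tiny k-nearest-neighbor regressor that predicts the measured site speed from NWP features. Everything here is invented and deliberately minimal; production systems use far richer features and far more capable models, and nothing below reflects any vendor's actual approach.

```python
import math

# Toy k-nearest-neighbor regressor mapping modeled (NWP) conditions to
# the wind speed actually measured on site. Data are illustrative.

def knn_predict(train_x, train_y, query, k=3):
    """Average the measured speeds of the k most similar modeled states."""
    ranked = sorted(
        (math.dist(x, query), y) for x, y in zip(train_x, train_y)
    )
    return sum(y for _, y in ranked[:k]) / k

# Features: (NWP 100 m speed in m/s, NWP direction scaled to 0-1);
# targets: concurrent on-site measured speeds (m/s)
nwp_features = [(6.0, 0.50), (7.0, 0.55), (8.0, 0.60), (6.5, 0.52), (7.5, 0.58)]
site_speeds  = [6.4, 7.3, 8.5, 6.9, 7.9]

estimate = knn_predict(nwp_features, site_speeds, query=(7.2, 0.56))
print(f"estimated site speed: {estimate:.2f} m/s")
```

The appeal of this family of methods is that the NWP model supplies physically consistent weather everywhere on the site, while the measurements correct its local biases, which is precisely the combination the state of the art exploits.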
However, despite their immense potential and proven application, remote sensing and NWP were met with strong skepticism when first introduced to the wind industry; they were seen as some kind of voodoo or magic. Yet today, all of the largest players are investing heavily in them, either through third parties or by bringing them in-house. NextEra Energy Resources led the way in 2006 by acquiring WindLogics, an early NWP modeling provider. Now, many others are following that lead and using the most sophisticated technology available.
Today, large players can even leverage 10-15 years of turbine and wind farm operational records and, with the power of “big data” analysis, consider the question, “If we had to do this all over again, what would we do differently?” Which sites produced adequate energy, and which ones didn’t? What do realistic wake effects look like, and how can they be mitigated through different turbine layouts or operational strategies?
Currently, wind leases for most wind projects are 30-50 years because companies know that the technology is going to improve dramatically during that time period. Forward-thinking companies are even signing 99-year leases. This is because they know that turbines will get better, repowering will be increasingly important, and using the latest in wind assessment technology will help them better harness their available wind resources.
So let’s learn from the past and embrace the brave new world that is out there today to better address the challenge of wind resource assessment. This technology is here right now. Advanced approaches for wind resource assessment are ready to go whenever and wherever you are planning your next project.
Lee Alnes is global manager of measurement systems within Vaisala’s energy division and has a wind industry career spanning nearly two decades. Alnes has supported wind energy developers and operators all over the world to better understand wind resource variability for assessment, forecasting and many other applications. Prior to Vaisala’s acquisition of Second Wind, he served as its vice president of sales and marketing. He previously served as chief operating officer at WindLogics. He can be reached at email@example.com