As a transplant from the data communications industry, I’m often asked how that experience relates to the renewable electricity industry. In truth, the similarities between the two industries are greater than the differences. From a technical standpoint, the story of the data communications market of the ’90s was fundamentally about a shift from highly centralized systems to highly distributed systems; from “dumb” telecommunications networks to highly intelligent, adaptive “smart” data networks; and from small, inefficient private company structures to large, heavily leveraged, capital-intensive corporate entities.
Behind all these changes were two key enabling technologies, namely cheap computers and cheap hard drives – devices that effectively acted as the Internet equivalent of small distributed generation and storage systems. These technologies enabled the creation of entirely new types of service providers. Today, an equivalent set of enabling technologies – cost-effective distributed generation and energy storage – is fundamentally changing the structure of the grid, and creating an entirely new type of utility provider.
All infrastructure distributes
In the early-to-mid ’90s, traffic congestion on backbone Internet transmission links became a severe problem, with many websites becoming essentially unusable and digital commerce grinding to a virtual halt. The response of the traditional providers was to build bigger pipes in their transmission facilities and to increase the size of their web servers. New customers, however, were joining the Internet ranks, and consuming exponentially more capacity with video and music, faster than the service providers could keep up. The problem was eventually addressed when a new group of service providers emerged with a structural solution: a hierarchy of cheap computers and hard drives – something akin to distributed “data generators” – spread across the Internet, replicating website content and delivering it to local users. No longer did users have to digitally travel around the globe to reach their web destination. Because computing and storage costs were relatively low, web destinations could be digitally and dynamically “moved” to the points of heaviest demand. This architecture had the effect of segmenting the market – leaving the smaller companies behind to deal with cheaper but less effective Internet services. Those who had money to spend could buy their way into a better, non-congested experience for their customers, using the services offered by this new group of providers. This segmentation effectively established a hierarchy of communication based on access to cash – the net result has been an Internet that has become increasingly stable, reliable, and cost-effective for the various market segments.
In the context of the electricity markets, the transmission congestion that plagues much of today’s grid bears a striking resemblance to the problems of the data networks in the mid ’90s. And in a similar way, the traditional utilities are trying to solve the problem by building bigger transmission facilities and larger power plants. However, the rules of the game are changing with the emergence of a highly distributed system of renewable generation and storage technologies located close to the point of consumption. The GEs, Evergreens and Capstones of the world are ensuring that the energy equivalents of the cheap CPU – distributed generation technologies such as wind, solar, or gas turbines – will be available for years to come. Just as critical are the evolutionary leaps in electricity storage technologies being delivered by companies like VRB Power, A123 and EEStor. These companies, and others like them, are providing the electrical equivalent of the cheap hard drive. Combined, these two technologies are enabling the creation of an entirely new type of service provider, something akin to a Utility Service Provider, delivering services that existing utilities can’t or won’t deliver. The end result will be a more reliable, higher-quality grid with more predictable pricing and a much larger variety of differentiated offers based on quality of service and price.
The infrastructure’s edge is further out than you think
When a similar transition occurred in the ’90s with these new data communications providers, a key point of debate was how close to the customer these “data generators” should be moved. Said differently, where was the infrastructure’s “edge”? Was it at the central office, in the customer’s home, somewhere in between, or some combination of all three? The decision here was critical – move too many systems out to the edge, and infrastructure control became difficult and expensive. Keep the systems too close to the core, and end-user reliability and performance suffered. At the time, only a few service providers believed these systems would extend much beyond the central office, because the complexity of system management seemed unachievable. However, the structure that evolved was a fairly complex, hierarchical system of distributed computers, moving ever further out toward the edge of the network. As processor and storage costs continued to drop, the move finally came to rest in the TiVos and cable boxes resident in the living rooms of individual customers.
Amongst the more forward-thinking utilities, a nearly identical debate is taking place. Should there be distributed generation at the substation? At the home? Should there be any distributed generation at all? The reality is that distributed generation is already displacing existing infrastructure, and is doing so even in the absence of market subsidies. Companies like Carmanah are finding areas of the utility grid where the marginal costs of installation and maintenance are so high that the customer is virtually forced to go with a distributed generation solution. Consider a lamppost on the edge of town. Hook it to the grid, and you need to trench. You need to hire IBEW union utility professionals to get it connected. Once it’s hooked up, you need to service it and pay for the electricity to keep it running. Compare that to setting up a self-powered, industrial-strength solar LED lamppost: install the lamppost, then come back in five years to change the battery. As the next generation of batteries comes online, the maintenance cycle will extend out even further, making the offer that much more compelling. The economics on the edges of the traditional grid are fairly inflexible, and as utility rates continue to rise, more and more connections will simply break away to become standalone elements. As both battery performance and distributed generation technologies continue to improve, this will become more the norm than the exception.
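As a rough illustration of the break-even logic at work here, the sketch below compares the lifetime cost of a grid-tied lamppost against a standalone solar LED unit. Every dollar figure in it is a hypothetical placeholder rather than a quote from Carmanah or any utility; readers should substitute their own local trenching, labor, energy, and battery costs.

```python
# Hypothetical back-of-envelope comparison of a grid-tied lamppost versus a
# standalone solar LED lamppost. All dollar figures are illustrative
# placeholders -- substitute real local costs before drawing conclusions.

def grid_tied_cost(years, trenching=12_000, hookup_labor=3_000,
                   annual_energy=150, annual_maintenance=100):
    """Total cost of ownership for a grid-connected lamppost."""
    return trenching + hookup_labor + years * (annual_energy + annual_maintenance)

def solar_led_cost(years, unit_install=6_000, battery_swap=600,
                   battery_life_years=5):
    """Total cost of ownership for a self-powered solar LED lamppost."""
    swaps = years // battery_life_years  # one battery change per service cycle
    return unit_install + swaps * battery_swap

if __name__ == "__main__":
    for horizon in (5, 10, 20):
        print(f"{horizon:>2} yrs  grid: ${grid_tied_cost(horizon):>8,}  "
              f"solar: ${solar_led_cost(horizon):>8,}")
```

Under these assumed inputs the standalone unit wins at every horizon; lengthening the battery-swap interval, as the paragraph above notes, only widens the gap.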
Think locally, but act globally
As more of these elements break away from the “connected” grid, the real challenge becomes management and control. Customers aren’t interested in running their own power plant. They just want the lights to go on when they hit the switch. And in this sense, the approach of the service providers in the ’90s is very instructive.
The distributed infrastructure they implemented was revolutionary only in the context of public data networks – distributed systems had been considered best practice amongst network engineers within large private data networks for years. In those private networks, control of the infrastructure was completely under the engineer’s purview. The revolution of the ’90s was building a control infrastructure for thousands of cheap computers and hard drives across the public Internet, creating an easily and cheaply manageable “virtual” server, distributed across the globe, that delivered highly reliable, fast service for literally hundreds of millions of end-users.
With the grid, these same challenges are being addressed by a new host of managed generation and storage solution providers. Companies like Gridpoint and Gaia Power are delivering the benefits of high-quality, reliable power in a simple way by storing energy at the point of consumption, coupled with remote management of those systems. This allows these companies to sell the consumer a better energy experience and, once deployment levels reach sufficient size – say a thousand homes in a given distribution grid – to act as a “virtual” peaking power plant for utilities by drawing on the energy stored in those systems. Here again, as battery storage capacity and cycle life increase, the proposition becomes increasingly compelling. And while these customers may not fully realize it, they are essentially joining the equivalent of a USP – or Utility Service Provider – where they enjoy the benefits of clean, green power without any of the hassles of maintenance and service. From the standpoint of these new utility service providers, a key requirement for their success can be gleaned from the experiences of the service providers of the ’90s.
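To make the “virtual peaking plant” arithmetic concrete, the sketch below aggregates the dispatchable capacity of a fleet of home storage systems. The fleet size comes from the thousand-home example above, but the per-home battery capacity, inverter rating, and availability figures are assumed placeholders, not specifications from Gridpoint, Gaia Power, or any other vendor.

```python
# Hypothetical sketch of how an aggregator might size a "virtual" peaking
# resource from distributed home storage. All ratings are illustrative.

def virtual_peaker_mw(homes, inverter_kw=5.0, availability=0.8):
    """Dispatchable power (MW) if a fraction of homes respond to a call."""
    return homes * inverter_kw * availability / 1_000

def dispatch_hours(homes, battery_kwh=10.0, usable_fraction=0.8,
                   inverter_kw=5.0, availability=0.8):
    """Roughly how long the fleet can sustain that output."""
    energy_mwh = homes * battery_kwh * usable_fraction * availability / 1_000
    return energy_mwh / virtual_peaker_mw(homes, inverter_kw, availability)

if __name__ == "__main__":
    fleet = 1_000  # "say a thousand homes in a given distribution grid"
    print(f"~{virtual_peaker_mw(fleet):.1f} MW for ~{dispatch_hours(fleet):.1f} h")
```

With these assumptions, a thousand homes look like a few megawatts for an hour or two – modest, but exactly the kind of short, sharp capacity a utility pays a premium for at peak.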
All network infrastructure drives to scale
When customers find something they like, they tend to want it everywhere. And for networked infrastructure, this demand tends to increase exponentially with the number of end-points, as does the value of the network itself – a relationship known as Metcalfe’s Law. In other words, a phone network with fifty phones is worth 100 times as much as a phone network with five phones. These forces – consumer demand and Metcalfe’s Law – work in concert to drive the creation of infrastructure that enjoys tremendous economies of scale, but which requires very high levels of capital. This was something the providers of the ’90s knew intuitively, so they aggressively pursued capital and then rapidly undertook large-scale deployments.
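As a quick check of that arithmetic, using the common n-squared statement of Metcalfe’s Law (the value of a network grows roughly as the square of its number of end-points):

$$ V(n) \propto n^{2} \quad\Longrightarrow\quad \frac{V(50)}{V(5)} = \frac{50^{2}}{5^{2}} = \frac{2500}{25} = 100 $$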
With regard to the grid, an end-point would be defined as a participant in the network, acting both as a provider and as a consumer of the network’s product. Viewed in this context, the grid actually has a fairly limited number of end-points – basically the generators, the various T&D operators, and a few other players. Thousands of end-points, yes. Millions of end-points, most definitely not. With the arrival of distributed generation and storage, however, the number of end-points goes up tremendously, theoretically into the tens of millions. This has big consequences for the industry. Scale becomes absolutely critical. Those who “scale up” will be able to deliver significant peaking power to the traditional utilities, increasing their valuations relative to their competitors. Those who do not “scale up” will either be put out of business or acquired. A corollary is the need for deep access to large amounts of capital.
In the United States, the early stage of this capital aggregation is already manifesting itself in the form of companies like Sun Edison and Renewable Ventures. These emergent players understand the need for capital formation, and are beginning to impose themselves as the necessary middlemen between the utility and the customers. And while it is not yet clear exactly how the domestic operations will play out over the months and years ahead, what is clear, with regard to the domestic solar market at least, is that there is a high degree of inefficiency, with literally dozens if not hundreds of small operators offering installation solutions across the market spectrum. Virtually without exception, all of them are undercapitalized, without the cash needed to break away from the pack. Probably fewer than 1 in 500 offer some sort of energy storage solution, arguably the most critical piece of the puzzle. In the US market, one can count on one hand the number of installers and integrators who have seized a market segment and leveraged their size to shape pricing options in their own favor.
The golden rule
Which brings us to the Golden Rule: he who has the gold makes the rules. Said differently, in a fasting contest between the fat man and the skinny man, the fat man is king. These are not so much sayings as they are insurmountable rules of business, and when it came to the data communications service providers of the late ’90s, all of these rules were tossed out the window. The market poured billions of dollars into data communication infrastructure contenders with truly astronomical burn rates, relatively small customer bases, and generally modest revenue growth. As the old players finally began to catch on, they found themselves in a unique position to capitalize on the markets that others had spent huge sums slowly prying open. Once the crash came, these older players were able to buy their way into these markets at a significant discount. There is a reason why the name AT&T is still around, while virtually every other player has fallen from the national scene. A few were able to exit the market gracefully, but most simply evaporated as if they had never existed.
From the standpoint of the emergent utility service provider market, there is a key lesson to take home. Ignore the old guard at your peril – there is a reason why many utility companies were founded before your parents were born, yet survive, and in some cases thrive, still today. They may be slow to act, but they have the patience, memory, and weight of an elephant. The sooner you can make them your allies, the better. As natural gas and oil prices rise, and the cost of expanding their legacy infrastructure grows, these companies will be looking for allies.
Those companies that survived the great data-communications bust managed to do so because they understood how to quickly build a world-class operational system, manage their costs, maintain the discipline of market focus, and extend their value proposition to the traditional players. None of this is necessarily easy, and there are no magic tricks here. But aligning a business with the existing utility providers will be a key first step.
The path comes full circle
The data communications service providers segmented into their respective markets over the course of roughly six years, between 1993 and 1999. The core architectural shifts that occurred then are still in place today, and will likely remain relatively unchanged for years to come. The leaders who emerged from this period tended to be well capitalized, knew who their customers were, and had a clear vision of where they were going. They correctly identified the impact that cheap storage and CPU technology was having on their industry, and they used that insight to consistently position themselves 12 to 18 months ahead of their competitors and undercut their business models. By 2003, with the exception of one provider, all had been consolidated under the legacy telecommunications companies. With the re-emergence of AT&T in 2005, the clock had come full circle.
The clean energy segment finds itself in interesting times, not at all dissimilar to the structural transition that occurred with the service providers of the ’90s. Just as in that era, the potential for financial gain is tremendous, the window of opportunity is relatively small, and the challenges are significant. Installers, integrators and manufacturers with clear vision have a unique opportunity to align themselves with the emerging utility service provider market, and to realize the dream of clean energy production well into the new millennium.
About the author…
Mark Culpepper is a published author and 15-year marketing and sales veteran of the data communications sector, having worked at industry-leading firms such as Cisco, Digital Island, Cable & Wireless and Symbol Technologies. With a focus on market analysis, competitive positioning, message management, and process development, he recently left high tech to pursue opportunities in the renewable energy sector and currently works full time as a freelance business and marketing consultant within the solar industry. A native Californian, Mark lives with his wife in the San Francisco Bay Area.