Wednesday, March 6, 2019

Amazon's Eero Acquisition and the Future of Home

Amazon's announcement a couple of weeks ago that it has agreed to buy Eero, one of the early players in mesh WiFi systems, was hailed as a significant event for the smart home market. As the CNET article on the news rightly points out, it is a clear indication of Amazon's ambition to control as many aspects of the connected home as it can. But there is much more to this acquisition than just another device in the house. Of course, it's about the smart home ecosystem. But it's much more than that: it speaks to Amazon's service ambitions. Amazon is likely going after subscription services such as Eero Plus, a $99-per-year subscription providing threat scanning, ad blocking and content filtering. Subscription services today are just the beginning of this new opportunity.

What Makes Routers Special in the Smart Home?


I have always believed that the router is a special and important device in the smart home. As the first stop in the connectivity path inside the home, routers (and mesh WiFi systems) are different from other devices: they see all traffic, providing connectivity to other devices while also protecting them from the big bad world out there. Over the years, this position has been the basis for routers incorporating features such as internet security and parental controls. With the popularity of smart home devices, especially WiFi devices, we can now add home automation and home security to the list. All these services sit on top of the device management and firmware update services that are necessary for the healthy functioning of the device.

The advent of mesh routing has added to the workload on the old girl! Spotty WiFi has been the bugbear of large homes in the US; remedying it has lately been lucrative for Google, Linksys and Netgear, and has caught the eye of internet service providers everywhere. While the emphasis in mesh routing today is on coverage, as internet speeds continue to climb, optimizing throughput across the home and among the devices will be the name of the game in 2020 and beyond, in the US and everywhere else. We should expect to hear a lot more about Self Organizing Networks (SON) shortly.

Modular OS for the Home


With this explosion of services managed by the router, the prior model of custom devices with custom operating systems (OS) has outlived its purpose. While many vendors have contributed to and depended on OpenWrt, most commercial products are only loosely based on it: the specific needs of each vendor's service roadmap required deviation from the standard OpenWrt release. Over the years, as the number of services grew, this model became unsustainable.

The complexity of delivering multiple services over different versions of hardware is driving the need for a modular OS. A modular OS could allow for write-once-and-done client applications, which would improve the user experience and decrease both the time to market and the maintenance cost of service delivery. Many service providers recognized this early, which led to RDK and SoftAtHome. More recently, prpl has jumped on the bandwagon trying to solve the same problem, but based on the OpenWrt code base.

At a high level, RDK, SoftAtHome and prpl are all implementing some version of the modular OS layers shown in the figure below. The layering accomplishes two things. By introducing a hardware abstraction layer (HAL), the OS becomes largely independent of the hardware. Application frameworks and containers further free the applications from both the hardware and the OS, allowing application developers to truly "write once and be done".
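As a concrete illustration of that layering, here is a minimal, hypothetical sketch (the class and method names are mine, not from RDK, SoftAtHome or prpl): applications are written against an abstract HAL, so the same application code runs unchanged on any hardware whose vendor supplies a HAL implementation.

```python
from abc import ABC, abstractmethod

class WifiHal(ABC):
    """Hardware abstraction layer: each chipset vendor implements this."""
    @abstractmethod
    def connected_clients(self) -> list:
        ...

class VendorAWifi(WifiHal):
    """Stand-in for chipset vendor A's driver-backed implementation."""
    def connected_clients(self) -> list:
        return ["phone", "thermostat"]  # a real HAL would query the driver

class VendorBWifi(WifiHal):
    """Stand-in for chipset vendor B's driver-backed implementation."""
    def connected_clients(self) -> list:
        return ["laptop"]

def parental_control_app(hal: WifiHal) -> int:
    """A 'write-once' application: it only ever talks to the HAL."""
    return len(hal.connected_clients())

# The identical application code works on both hardware platforms.
print(parental_control_app(VendorAWifi()))  # 2
print(parental_control_app(VendorBWifi()))  # 1
```

The application framework layer plays the same trick one level up, so the OS underneath can also change without touching the application.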

Modular OS for Home Device

History is Repeating!


If this diagram feels familiar, it should! Other than the routing and WiFi services, it resembles a typical smartphone OS today. The similarity is not accidental. Just as with home devices today, the number of services on cellphones (what we now call feature phones) exploded in the early 2000's. In response, cellphone manufacturers like Nokia, BlackBerry, Motorola and others started experimenting with modular operating systems for their phones.

Home devices are at the same crossroads as cellphones were in 2005-2006. Evidently, several OS options are already available. So, what can we learn from that era?

The Battle of the Phone OS'

Mobile operators, just like today's internet service providers, were incentivized to control the devices and the services that flowed through them. And they had a history of working together to create interoperable, standards-based network technologies through the 3GPP. Options, then as now, were abundant: apart from Symbian from Nokia, Windows Mobile from Microsoft and BlackBerry OS, there were the multi-vendor initiative LiMo and the open source MeeGo. Yet iOS and Android prevailed.

Creating 3GPP standards, I would contend, is different from creating a modular OS. Carefully orchestrating the roadmap for application frameworks and the HAL, as developers' needs evolve and hardware technology leaps ahead, is difficult without a strong entity investing in the OS by employing thousands of software developers, which Google and Apple were. The ecosystem required not just the initial standardization and coding, but continuous support for all the ecosystem partners. That demands a kind of leadership that is difficult to achieve in a collective process such as the 3GPP or a Linux Foundation. Google and Apple filled this role nicely.

We also cannot downplay the importance of the outsider status of Google and Apple in that battle. While service providers and chipset and hardware vendors participated in Android, Google was the outsider and the lead protagonist. Every service provider, chipset vendor and hardware vendor knew that none of their direct competitors had an advantage on the Android platform.

How Will the Home Devices Battle Shape Up?


These two lessons are important for the home device OS market. While RDK and SoftAtHome are driven by different service providers, prpl is driven by chipset and hardware vendors. None of them enjoys the advantages that Google or Apple had in the phone OS battle.

So, who are the contenders this time? Amazon is clearly throwing its hat in the ring with the Eero acquisition and its vast experience with Fire OS. We cannot discount Apple and Google, despite Amazon's lead in smart speakers: their vast experience with phone and computer OSes and their obvious ambitions in the smart home make them difficult to ignore.

So, who might be our dark horse? I nominate Facebook. Why? Facebook has already dipped its toes into the smart home market with the Facebook Portal. With Terragraph, it has some street cred in the internet service provider market. The icing on the cake is Oculus, the VR business Facebook acquired, which builds the OS for its own headsets such as the Oculus Go. So, OS experience: covered. With 5G, the time is ripe for convergence between home (fixed) and mobile VR experiences. So the question is not whether Facebook would want to step into this market. The question is: why would it not?



Tuesday, January 23, 2018

Last Mile Considerations: Cable, Fiber and 5G

As 5G gets closer to reality and is expected to play a major role in home broadband, it is interesting to compare it to the current home broadband options, particularly with respect to the user experience and the cost of servicing the customer in the last mile.

While 5G promises low latency (tens of milliseconds) and high throughput (in excess of 1 Gbps), current cable and fiber offerings already deliver the same benefits. 5G's improvements are really relative to 4G mobility performance: it brings to mobile devices the capabilities home broadband has long had. For mobile devices, this is indeed a quantum improvement; for home broadband, it is not. So 5G fixed wireless will have to innovate in cost and user experience to win customers over from their current alternatives. Hence, this comparison is topical and interesting.

Current Economics of Broadband Market


Comcast, the US broadband market leader, has close to 26M broadband subscribers, while Charter has close to 23M. Both operate at EBITDA margins of roughly 40%. The alternative service providers, AT&T and Verizon, have close to 14M and 6M broadband subscribers respectively and operate at EBITDA margins of roughly 20% each.

Some of the difference in EBITDA margin can be attributed to the scale advantage of the industry leaders: lower overhead cost and higher negotiating power definitely provide part of the explanation. Another advantage the cable industry has over the rest of the competition is that its members routinely share development costs, either through CableLabs standards or directly with each other. Since none of them compete with each other, there is little friction in this cooperative model. While Comcast and Charter together have almost 50M subscribers, their power (and cost advantage) is multiplied by the fact that the same infrastructure architecture and customer equipment is used by cable operators worldwide.

I suspect these advantages do not explain everything. Some of the difference is likely explained by the custom customer equipment that AT&T and Verizon launched while trying to keep up with the industry leaders. As a side effect, ease of installation and upgrade was lost. What do I mean? Let me explain by first detailing the last mile in cable and then comparing it to that of fiber, as an example. I will then explain the implications for 5G.

Cable Last Mile

Figure 1: Cable connection

In a typical broadband household, the cable connection comes from a piece of neighborhood cable equipment called the head-end. The bandwidth of the coax connection is shared by the customers in the neighborhood. Once the coax gets into the house, a cable modem translates the coax (DOCSIS) signal to Ethernet; a router then connects the Ethernet and WiFi devices in the home. The connection to the modem is via a standards-based coax connector.

Consider what happens when a customer needs a modem upgrade. When the DOCSIS standard that determines the speed of the connection is upgraded in the infrastructure, along with the head-end, customers can simply be shipped a new modem. Unscrew the coax connector from the old modem, screw it into the new one, and the customer is done upgrading to the latest technology.

Missed Opportunity in Fiber


Figure 2: Fiber connection

Contrasting this with the cable connection shows how the fiber upgrade experience results in much higher costs. A typical fiber broadband connection to the home starts with an optical splitter that splits the optical signal from the Optical Line Terminal (OLT). The Optical Network Terminal (ONT) is the device at the customer's premises; it converts the optical signal into an electrical one, either coax or Ethernet. A modem and/or router takes in the electrical signal and connects the devices in the house using WiFi or Ethernet.

Unfortunately, the way the ONT has been designed does not lend itself to an easy upgrade. Optical lines are directly inserted into the ONT during installation. It is understandable that optical cables with a standard optical connector cannot be pre-shipped, given the uncertainty in the length of cable required. But if the cables were terminated with a standard optical connector as a part of the installation, the ONT could potentially be user-replaceable. Instead, a technician needs to visit the customer to replace it. Given customers' disdain for the wait involved in an installation visit, and the cost of the proprietary hardware, these design choices are difficult to fathom. Not only do they add cost, they also result in a much worse customer experience.

Implications for 5G


Figure 3: 5G connection

Why does this matter for 5G? As you will see, 5G is, in some ways, similar to the fiber connection above. A 5G fixed wireless connection will come to the customer's premises from a 5G base station mounted on a pole in the neighborhood. Fixed wireless 5G will use high-band RF in the 28 and 39 GHz bands. This signal needs line of sight to the 5G customer premises equipment (5G CPE), and hence the installation will be critical: the 5G CPE needs to be mounted to account for the altitude, the location and the angle of the mount with respect to the 5G base station. Like the other implementations, the 5G CPE then needs to connect in some form to the router that serves the customer's devices.

For an upgrade to be painless, what needs to be true? It should be easy to unmount the 5G CPE, which means the mount has to be standardized and foolproof. The connection to the router also needs to be standardized, so that the customer could potentially replace the equipment themselves.

Given that the equipment is just a means of delivering the service, standardization will also lower costs by allowing others to build it. If standardization can be achieved, it will erase the scale advantage of the cable giants and allow the 5G fixed wireless operators to compete on an even keel. The key question is how one goes about this!

Wednesday, December 20, 2017

Fallacy of Current Cost Accounting Of Data Service at Wireless Carriers

Service pricing at all the major carriers betrays the underlying cost accounting method. Even in unlimited plans: the T-Mobile footnote for its Unlimited plan, for example, says that “…the small fraction of customers using >50GB/mo. may notice reduced speeds until next bill cycle due to data prioritization.” Essentially, the carrier is using GB of data usage as a means of assessing the long term value (LTV) of the customer and, ergo, what is economical and what is not. But GB-based cost accounting isn't accurate, and that has consequences going forward.

Why isn’t it Accurate?


It isn't accurate because the underlying assumption is that GB of data usage closely approximates the cost of delivering those bits. The cost of carrying the bits can be split into the cost from the customer's device to the cell tower and the cost from the cell tower to the internet. The cost from the cell tower to the internet is the same irrespective of the type of device the customer uses, and this backbone transport cost pales in comparison to the billions of dollars spent acquiring spectrum and constructing cell towers. So, let's ignore it for the moment.

Given that spectrum is a scarce resource, the cost of carrying bits to the cell tower is based on the amount of spectrum used and the amount of time that spectrum is occupied. This isn't the same across all devices. The amount of spectrum used is a function of the spectrum available at the cell tower and the quality of the chipset on the device: typically, the more expensive (and premium) the device, the better the chipset and the more spectrum it can use at the same time through carrier aggregation. The amount of time the spectrum is occupied is a function of the customer's distance from the cell tower and the quality of the antenna on the device. The less signal the device receives (whether because it is far from the tower or its antenna is bad), the more time it spends occupying the channel.
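To make this concrete, here is a back-of-the-envelope airtime model; the numbers are my own illustrative assumptions, not carrier data. The scarce resource is spectrum-time, so two customers consuming identical GB can impose very different costs.

```python
def airtime_seconds(gb, aggregated_mhz, spectral_efficiency):
    """Seconds of spectrum occupancy needed to deliver `gb` gigabytes,
    given the aggregated bandwidth (MHz) the device can use and its
    achieved spectral efficiency (bits/sec/Hz). Illustrative model only."""
    bits = gb * 8e9
    return bits / (aggregated_mhz * 1e6 * spectral_efficiency)

# Same 10 GB of monthly usage:
premium = airtime_seconds(10, aggregated_mhz=60, spectral_efficiency=5)  # carrier aggregation, good antenna
budget = airtime_seconds(10, aggregated_mhz=20, spectral_efficiency=2)   # single carrier, weak antenna

print(round(premium))  # 267 seconds of airtime
print(round(budget))   # 2000 seconds -- same GB, 7.5x the spectrum cost
```

GB-based accounting would price these two customers identically, even though the budget device ties up seven and a half times more spectrum-time.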

Why Wasn’t the Cost Accounting a Problem Before?


Ok, the cost accounting might be inaccurate. Why wasn’t it a problem before?

Pre-2006, when voice was king, there was no carrier aggregation; virtually all devices used the same limited spectrum. The amount of time the spectrum was used was a direct function of the number of minutes a customer used that month. If a customer had a poor antenna or was too far from the cell tower, the real-time nature of voice packets meant the call quality degraded rather than the airtime growing. So the time the spectrum was occupied closely mirrored the customer's minutes, and cost accounting based on minutes of voice calls matched actual costs very well.

Why is it a Problem Now?


When smartphones took over and data started being billed by the GB, the device subsidy model masked the inaccuracy of usage-based cost accounting. Subsidies enabled customers to buy the best phones at an affordable price, which meant most customers had similar, good phones. Also, the better the phone, the higher the subsidy typically was. So subsidy amortization and more homogeneous device capabilities together masked the cost-price mismatch of usage-based cost accounting.

What is the Impact?


With correct cost accounting, network planning teams at carriers will be able to allocate capital for cell tower construction better: they can trade off capital cost against the lower operating costs for the customers helped by new cell towers.

Also, without correct cost accounting, carriers cannot expect to compete in the hyper-segmented world that the wireless market will soon be. If the costs are wrong, how can the price be right?

With BYOD arriving in a big way and consumers conscious of device cost, not all devices will be of the same quality. A customer consuming a given amount of data on a device with a low quality antenna costs the network dramatically more than a customer consuming the same amount on a much better device. This is why, in the hyper-segmented world, price will be a function of the device as well. Today, a heavy data user with the latest phone (say, an iPhone X or a Samsung Galaxy S9) might pay the same as someone with a Moto G; that's like the guy with a Ferrari paying the same insurance rate as the guy with a Honda Civic, despite the different costs they put on the network. In the car world, vendors wiped out that difference by improving reliability. Device vendors can similarly wipe the difference out by ensuring that the antenna is not compromised to save cost.

Tuesday, September 19, 2017

Unlimited - What Comes Next for the US Wireless Carriers?

The more things change, the more they remain the same! The wireless market until now played out much like the dial-up internet of the old days, with data limits and multiple competitors. Could that offer a window into the future of wireless carriers?

A Brief History of Landline Business


The customer access and the economics were similar. Service providers had to invest in bandwidth locally, much as wireless carriers today invest in cell towers and spectrum. Customers weren't typically mobile, but they could connect from anywhere; service providers like AOL could serve customers anywhere, as long as the customer had a local service number the provider had invested in. Otherwise it was too expensive for the customer, due to long distance charges. Customers initially self-selected based on which service provider was cheaper overall; over time, service providers built capacity across all local service areas.

Two key things made this a competitive market: regulations that allowed any business to have a local service number, and sufficient peer-to-peer bandwidth (built by the telephone carriers) for anyone to provide service. This made the connection a commodity. Internet service was the differentiation, and the service providers segmented their customers based on usage: the more you used, the more you paid. This was the world of the wireless carriers until unlimited came around.

Then came the shorter-range technologies, DSL, cable and fiber, which were no longer peer to peer. No regulation was enacted to ensure that the local connection was universally accessible, so no one other than the service provider that built it could provide an alternative service. The internet was a commodity, but the local connection no longer was. Something else had changed, too: for the service provider, it no longer made sense to segment customers by usage. The real costs were driven by speed rather than usage, as the cost of carrying bits over long distances became extremely low, while higher speeds required higher investment in local access infrastructure. So speed, instead of usage, became the segmentation parameter. This has parallels to wireless, as I will explain later.

Economics of Unlimited


Coming back to wireless: unlimited is tough on the carriers. With spectrum still the primary cost driver and revenue limited to the line access charge, revenues don't match up with the costs of customers who consume disproportionately. With every carrier now offering unlimited, US customers are likely to "super size" on data. Of course, every carrier has come up with a band-aid: network prioritization above a certain level of usage. But that only reins in perhaps the top users of data; affecting a significant portion of customers with a low prioritization threshold would risk dissatisfaction and backlash. You can create speed and usage tiers, but pricing quickly becomes complicated. So, what's a carrier to do to match pricing to costs?

A New Model for Wireless?


Let's now look a little farther afield, to another retail market. Consider that the service plan revenue is a given and that carriers don't know how much risk a prospective customer presents in terms of network cost. If we treat the service plan revenue as the premium and the data usage cost as the risk cost, the wireless market looks a lot like a huge market we know very well: the consumer auto insurance market.

Consumer auto insurance companies are able to assess the risk posed by different segments of the population and present a personalized quote to individual customers. The costs are matched to the price, thanks to hyper-segmentation and data analytics on each potential customer.

In a market such as this, it's important for the company to know which customers to bring on, and how to move them off the insurance if their risk profile changes. This is why Progressive runs an insurance comparison site: it helps prospective customers that fit a desired risk profile become current customers, and later nudges current customers away if they no longer fit the risk profile Progressive wants to insure. Segment purity might be a key differentiating factor in keeping churn low - an important factor for investors.

Segmentation and Future of Retail Wireless


Taking a leaf out of the auto insurance market's book, wireless carriers need to segment customers based on usage and speed needs, and offer pricing that caters to each segment. A customer that consumes a lot of video presents a different risk profile from a voracious reader of news and e-mail. To be successful, wireless companies need to hyper-segment the customer base, just like the insurance market does. This will avoid the morass of ever more complicated price offerings.

In a world of such hyper segmentation, each brand might mean different things to customers. One that is known for its customer service will not likely be the cheapest. You might argue that that's true today too. But retaining segment purity will require the right combination of channel, customer service, network service and, of course, pricing, along with the means of identifying the customers that belong to the segment. A carrier going after urban millennials might have a purely online/mobile presence, advertise on social media, make it easy for customers to BYOD, and focus on WiFi offloading; it will look very different from a carrier going after baby boomers, which might use TV and radio ads, provide phone and store support, and offer low cost phones. This is no different from how Esurance or AARP Auto Insurance target their customers. Identifying the customers in a segment requires not just demographics, but also data usage and data type behaviors, device ownership, etc. This ensures that the discover, evaluate, buy, use, support and exit phases of the customer journey are properly matched to the segment.

Unlike the auto insurance market, which depends on reinsurance companies to insure the insurers, wireless carriers today are their own reinsurers. So that's where the similarity might end. But the interesting thing about the auto insurance market is that it is a whole lot more competitive than the wireless carrier market. To compete effectively and maintain segment purity, would a carrier need more than one brand - perhaps as many as three or four, to ensure a match between the customer journey and the segment? Or a carrier could spin off its retail operation, become a pure network operator (the reinsurer of the wireless market) and let the others duke it out!

An interesting question is what happens to customers when carriers hyper-segment this way. How do customers compare their service/price combination with their friends'? The consumer auto industry again provides a clue: they can't! And if they can't, would that lead to more churn or less?

Potential Disruptors


Who could upend this happy story? Clearly, carriers need help prospecting new customers and understanding their data usage. Nielsen and other data collectors will be happy to help. The retail carriers that best use predictive analytics to understand their customers will win.

The interesting players are Google and Apple, which have a hold on the OS and on the underlying amount and type of data. Could they use that information to either stand up their own retail carrier or help others compete? Given my previous prediction about Google, you can guess who I think will jump into the fray!

The biggest disruptor, though, might be technology again, similar to how the landline market unfolded.

5G: A Rewind?


5G, as currently implemented in the US, is based on shorter-range frequencies such as 28 GHz and 39 GHz, with the potential of providing up to 3 Gbps. But the range of a 5G cell is much lower than that of a typical LTE tower: closer to 200 m than the 20 km of LTE. This means only highly urban areas will have deployments dense enough to enjoy the high throughput, and coverage will vary wildly across the US. It will take a long time for a country like the US to attain 5G coverage similar to that of LTE; highly urbanized countries such as Korea and Japan will get there much sooner. As with the shorter-range technologies in landline, 5G will have a profound effect on the cost of mobility. Will 5G change how the industry evolves in the short term?

So, What Comes Next?


My bet is that, in the US, 5G adoption will not be fast enough to impact the industry forces discussed above. So I expect the US wireless market to evolve into hyper-segmented, multi-brand competition within the next four years, before 5G could ever become dominant.

Monday, October 31, 2016

Driverless Cars: End of Road Rage?

A recent BBC article got me thinking about driver etiquette in the presence of driverless cars. Would drivers really be rude to autonomous vehicles (AVs)? I am more optimistic than Matthew Wall from BBC about the prospects of AVs and the behavior of human drivers. Here's why ...

Prospects of AVs


Let us first address people's potential skepticism towards driverless cars. Demand for driverless cars will be driven (pun intended) by people that don't want to drive - people that either don't have a car or don't want one. If you are a skeptic of driverless cars, sorry, you really don't have a choice: barring a legislative initiative to outlaw driverless cars (for which there's scant evidence), you just have to live with the AVs. Considerations of safety, the huge number of cab rides taken and the mobility rights of the elderly will likely trump the resistance of the skeptics.

Driver Behavior


Clearly, AVs are still learning to deal with human drivers. Yes, it might be frustrating for human drivers to deal with AVs, especially, on local roads with slow and unexpected traffic.

But I am optimistic that human drivers will not bully the AVs for long. Why? Like most technology products, we should expect that AVs will collect a lot of data: the places they were at a particular time; the people, cars and other sights they have seen; all nicely meta-tagged and searchable. This will be a treasure trove for all manner of governmental agencies.

Transportation departments would want some of this information to predict traffic patterns and plan future construction. Welfare departments would want to understand passenger patterns and what service improvements they can provide.

The most intriguing use, in the context of driver behavior, is the video footage collected by the AVs. As with (private) video recordings of public spaces, AV footage can likely be compelled for a police investigation through a subpoena. Even more intriguing is the use of meta-tags: will law enforcement agencies be able to search and quickly get through all the AVs' footage from a certain time and place? I don't see why not! In fact, we shouldn't be surprised if law enforcement submits the subpoena directly to the manufacturer (or software provider) of the car, regardless of who owns it. I am sure there will be other law enforcement uses as well, such as improved gun shot detection and other major crime detection.

Coming back to driver behavior: if you think you are going to have a bout of rage at an AV, you might want to think of all the video footage being captured. :) You should expect the video of the incident ready and packaged for the police before you are done with your rage.

I don't know about you, but, once AVs become prevalent, I am going to put my legs up on the dashboard and let the machines do the driving. I love driving, but, it won't be as much fun with big brother watching!

Sunday, March 27, 2016

A Point System to Promote Competition in F1 Racing

F1 bosses have agonized over the last decade about how to make the field more competitive. The dominant teams have changed, but the lop-sided results have continued. The recent change in qualifying, and the change back, have not gone down well. But there might be a way to promote competition by simply changing how the points are awarded. Change the reward system, and I think driver and team behavior will change!

Under the current system, the winner of the race is awarded 25 points regardless of the margin between the first and second place drivers. The points are awarded in the following order: 25, 18, 15, 12, 10, 8, 6, 4, 2 and 1. A driver at the front can't hope to get more than 25, even if he is very, very good. A driver at the back can't close the gap (of at least 7 points) to the front runner even if their pace is closely matched. This state of affairs can be changed!


The Relative Position based Scoring (RPS) for Multi-player Multi-event Competition


The new point system I propose will still award 1 point to the 10th placed driver. But each of the remaining drivers is only guaranteed 1 more point than the driver below him. The rest of the points are awarded based on how far each driver's time is from the 10th placed driver's, measured as the difference in their final race times, treating the race times of the top 10 drivers as a sample from a normal distribution. Over multiple events (21 in 2016), with 10 scoring drivers per race, the normality assumption is likely to hold even though an individual race might not adhere to it.

If you are the driver in the i-th position, your points are

RPS_i = (11 - i) + (101 - 55) * (Race time_10 - Race time_i) / Σ_j (Race time_10 - Race time_j)

This formula distributes the total number of points (101) among the 10 scoring drivers: the first term guarantees each driver one point more than the driver behind him (10 down to 1, summing to 55), and the remaining 46 points are shared in proportion to how far ahead of the 10th placed driver each driver finished.

To generalize this to an N-competitor event that awards T total points, with the Nth competitor getting 1 point, the i-th competitor's points are

RPS_i = (N + 1 - i) + (T - N*(N+1)/2) * (Race time_N - Race time_i) / Σ_j (Race time_N - Race time_j)

So, How Does This New System Change the Points?


To illustrate the RPS points, the 2016 Australian GP race points would be modified as follows:

Pos Driver Team Grid Race Time Points RPS Points
1 Nico Rosberg Mercedes 2 1:48:15.565 25 22
2 Lewis Hamilton Mercedes 1 +0:08.060 18 20
3 Sebastian Vettel Ferrari 3 +0:09.643 15 19
4 Daniel Ricciardo Red Bull 8 +0:24.330 12 15
5 Felipe Massa Williams 6 +0:58.979 10 9
6 Romain Grosjean Haas F1 19 +1:12.081 8 6
7 Nico Hulkenberg Force India 10 +1:14.199 6 4
8 Valtteri Bottas Williams 16 +1:15.153 4 3
9 Carlos Sainz Jnr Toro Rosso 7 +1:15.680 2 2
10 Max Verstappen Toro Rosso 5 +1:16.833 1 1
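As a sanity check, the RPS column above can be reproduced with a short script. This is just a sketch: the gaps are the race-time differences quoted in the table, and standard rounding to whole points is assumed, since the post does not specify a rounding rule.

```python
# RPS (Relative Position based Scoring) for the top N finishers.
# gaps_to_winner[i] = seconds behind the race winner (0.0 for the winner).
def rps_points(gaps_to_winner, total_points=101):
    n = len(gaps_to_winner)
    base = n * (n + 1) // 2            # guaranteed points: N, N-1, ..., 1
    pool = total_points - base         # points distributed by time gaps
    # Gap by which each driver beat the Nth-placed finisher.
    lead_over_last = [gaps_to_winner[-1] - g for g in gaps_to_winner]
    total_lead = sum(lead_over_last)
    return [(n - i) + pool * lead_over_last[i] / total_lead
            for i in range(n)]

# 2016 Australian GP: gaps to Rosberg, P1 through P10 (seconds).
gaps = [0.0, 8.060, 9.643, 24.330, 58.979,
        72.081, 74.199, 75.153, 75.680, 76.833]

print([round(p) for p in rps_points(gaps)])
# -> [22, 20, 19, 15, 9, 6, 4, 3, 2, 1], matching the RPS column above
```

Note that the unrounded points always sum to exactly T (101 here), since the proportional shares of the 46-point pool sum to the whole pool.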

With a priori knowledge of the point system, the race would likely be more competitive and exciting! Why do I say that?

Provides Excitement, Even When a Driver is Really Better than the Rest


The first placed driver can potentially get 56 points if he is much better than the rest of the field, because the remaining drivers could end up with 9, 8, 7, 6, 5, 4, 3, 2 and 1 points. I am assuming, of course, that the race winner is way ahead and the rest of the field is bunched up together. This provides a huge motivation for the front runner to keep pushing, even when a win is guaranteed. The front runner racing hard till the end of the race has got to be exciting for the fans!
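To see the 56-point ceiling in action, consider a hypothetical race (the numbers below are made up purely for illustration) where the winner finishes about a minute and a half clear and P2 through P10 cross the line within a tenth of a second of each other:

```python
# Hypothetical runaway win: the 46-point "gap" pool collapses almost
# entirely onto the winner, on top of his 10 guaranteed points.
def rps_points(gaps_to_winner, total_points=101):
    n = len(gaps_to_winner)
    pool = total_points - n * (n + 1) // 2   # 46 points for n = 10
    lead = [gaps_to_winner[-1] - g for g in gaps_to_winner]
    return [(n - i) + pool * lead[i] / sum(lead) for i in range(n)]

# Winner 89.72s ahead of P2; P2..P10 separated by 0.01s each.
gaps = [0.0] + [89.72 + 0.01 * k for k in range(9)]
print([round(p) for p in rps_points(gaps)])
# -> [56, 9, 8, 7, 6, 5, 4, 3, 2, 1]
```

The tighter the pack behind, the closer the winner's score gets to the theoretical maximum of 10 + 46 = 56.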


Rewards Consistency of Car and Driver


Given that a race winner can win big, the cost of inconsistency (of car or driver) can be pretty high. A driver (and car) that is consistently good can ultimately win over a driver that has large ups and downs.


Low Risk of Race Manipulation


The 10th placed driver is fighting for his 1 point. So, there is little risk of race manipulation from the bottom.


Makes Team Orders Near Impossible


Even in a team that's head and shoulders above the rest of the field, like Mercedes was last year, the two drivers will constantly be battling each other, because there's always the possibility of scoring high (56 points) when your opponent (and teammate) could crash out of the event. If the drivers are evenly matched, a team order can potentially gain only an extra point or so, instead of the 7 points it guarantees now. So, team orders will be less likely, given all the emotional cost involved. (Think Red Bull at its peak, with Mark Webber getting the raw end of the deal!)


So, What's the Downside?


The biggest downside is that fans won't know the points automatically and immediately after the race winner is known. The 10th driver needs to finish the race before the math can be completed, even for the race winner. But, that's a minor downside compared to the advantages the change in reward system brings. 

Also, RPS doesn't prevent situations where two drivers from the same team are at the front of the race, but one of them is just helping the other win the championship. (Think Schumacher and Barrichello in the 2000s!) In a race like Monaco, where overtaking is hard, RPS might even exacerbate the situation. But many things have to go right for this to happen: two drivers in a dominant team, one of them with no championship aspirations, at a race where overtaking is hard, running in a 1-2 position. And even in that situation, the driver at the front is pushing hard till the end of the race to maximize his points. That has to be exciting for the fans!

RPS Can be Implemented Now!


No changes are needed to teams, drivers or race venues - RPS is a simple change in scoring. Changing the reward system is always the best way to change competitive behavior; this applies to individuals and teams (and corporations). Drivers and teams will be motivated to keep pushing as hard as they can till the end! So, is F1 ready for RPS?

Wednesday, July 15, 2015

Death of Subsidy: Winners and Losers

Friends, Americans, countrymen, lend me your ears; I am here to bury device subsidy, not to praise it!

Alright, burying is probably a little premature, but clearly, device subsidy (in the form of 2-year contracts) is on its death bed. Ever since T-Mobile pioneered equipment installment plans in the US in 2013, Americans have been going the way of the rest of the world. Today, equipment installment plans are the norm for the majority of customers upgrading their phones. So, how does this change the mobile industry landscape, and who does it benefit most?

What Was Good about Device Subsidy?


Before we jump to how the landscape has changed, I want to give a little thought to what was good about device subsidy. (Even though it's not dead, I will stick to the principle of not speaking ill of the dead. ;) )

If you agree that supplier competition is good for the consumer, then, more choice in original equipment manufacturers (OEMs such as Samsung, Apple, LG, Motorola etc) and more operating systems (OS) is better for the consumer. Device subsidy allowed the operators to level the playing field somewhat for the smaller OEMs and less adopted OSs by offering more subsidy to the devices offered by the weaker players. Why would operators be interested in doing this? Higher diversity in OEMs and OSs increases operators' leverage and helps them differentiate themselves from others. So, the interests of the consumer and the operators are well aligned in this case.

Obviously, life wasn't perfect even with this benefit of device subsidy. Ever since the iPhone was first released in 2007, operators vied with each other to carry the most popular phone brand. Operators were willing to provide more subsidy for devices from Apple, which was able to bring in higher-value customers. This clearly tilted the playing field against some of the same weaker players that device subsidy should have helped.

The other benefit of device subsidy was that it sped up the technology adoption cycle for the mobile industry. With subsidy, most customers upgraded like clock work every two years. So, OEMs and other players could plan for the demand for new technology based on this upgrade cycle. This was also good for mobile operators in managing their networks since transition between technologies (2G to 3G to 4G) could be managed smoothly with device technology transition via the upgrade cycle. Unfortunately, this resulted in higher costs for the consumers because the costs of upgrades were ultimately borne by them.

So, How Might the Future Be Different?


I see three main ways in which the future could be different from today - a likely lengthening of the phone upgrade cycle, a focus on the long term value of the device, and a potential play by OEMs to get closer to the customer with their own equipment installment plans.

Lengthening of Upgrade Cycle, Especially in the Premium Segment


With the consumers directly responsible and cognizant of phone pricing, some have theorized and I tend to agree that the consumers are likely to upgrade less frequently. The evidence for this theory till now has been scant. Driven by their sudden ability to upgrade on a whim, consumers have actually pushed up the upgrade rates at most operators since the introduction of the equipment installment plans. But, I would hypothesize that the consumers who want to upgrade more frequently (than every two years) are a small fraction of the total consumers.

With AT&T's Next (equipment financing) program completing two years since launch and a high installed base of iPhones, a large majority of its customer base will be off their 2-year contracts. If the hypothesis is correct, I would expect to start seeing a slowdown in the upgrade rate in AT&T's Q2 results.

One possibility is that the upgrade cycle will be different between the premium segment of phones such as iPhone 6 or Samsung Galaxy S6 and lower segment phones driven by the difference in the monthly cost of ownership between the two tiers. At higher monthly costs, consumers might be more sensitive to upgrades than otherwise. This would be contrary to the past where the premium segment upgraded much more frequently.

Focus on the Long Term Value of Device to the Consumer


Reduction in upgrade rates will spur some of the OEMs to provide a device with a better long term value for the money. But, unfortunately, not all OEMs will be able to play this game. Apple, with its tight coupling of software and hardware, is much better placed in this aspect. We can already see this in Apple's plans for iOS 9 which will be compatible all the way back to iPhone 4S. At the time of the likely official release of iOS 9, iPhone 4S will be four years old. Assuming that Apple would want the device to work for at least another year, the life of iPhone 4S will be almost five years.

Now, one might wonder if this would be a good strategy, even for Apple! With a saturating smartphone market, a longer upgrade cycle will decrease the volumes even further. Apple is already preparing for this eventuality by pushing harder into services such as music and news. Even though the customer might keep the device longer, app and subscription sales will provide a steady stream of revenue that, hopefully, will compensate for revenue lost due to lower device sales.

Another way OEMs are likely to demonstrate long term value of the device is by supporting the prices in the secondary markets. We have seen this in the automotive industry for a number of years whereby some of the premium car manufacturers have participated in the secondary market by acquiring customers' vehicles and offering used certified vehicles. Apple, again, has shown early signs of intervening in the secondary markets and we should expect much more of this behavior in the future.

Equipment Installment Plan as a Means to Enhance Customer Relationship


Another advantage of the proliferation of equipment installment plans and the gradual death of device subsidy is the ability of customers to bring their own devices. Clearly, consumers have been taking advantage of this - AT&T announced in its Q1 results that 313,000 of its gross adds were bring-your-own-device customers. This provides an avenue for anyone that desires a closer relationship with the mobile customer to realize it through their own equipment installment plan.

Traditionally, Android OEMs, despite their best attempts, have had a tenuous relationship with their customers, because the customers identified themselves first and foremost with Google Play through their Google ID. The reasons for signing up for an account with the OEMs weren't very strong. Equipment installment plans, if offered by OEMs, might now provide a stronger reason to be truly contract free. Motorola has already gone down this path. Using this financing relationship, OEMs could potentially understand their customers' needs better and drive better customer retention. Again, Apple is ahead of its competition, even without offering an equipment installment plan of its own, because of its App Store and OS relationship with the customer.

So, What Does This All Mean?


Despite potentially being at higher risk from a lengthening upgrade cycle, Apple is likely the best prepared and positioned to take advantage of a future with equipment installment plans. While other OEMs might be able to replicate some aspects of Apple's strategy, they would do so without the cushion of services revenues (which Google is likely to claim in the Android ecosystem) and the strong customer relationship Apple has enjoyed. The net result is likely a continued race to the bottom in the Android ecosystem while Apple continues its domination of the higher end. Apple will continue to expand the reach of the higher end by selling its older generation devices to those that can't afford the latest devices and by improving the long term value of its devices with OS support and secondary market intervention. Equipment installment plans are likely to strengthen the status quo in favor of Apple!