Serverless – The “Uber” of cloud computing

It is interesting to see the rapid evolution of cloud computing models and the different computing abstractions that have come up in recent years. Operating systems abstracted hardware, virtual machines provided another layer of abstraction over hardware, containers and container orchestration technologies provide yet another layer of abstraction. Each layer of abstraction provided its own set of benefits.

As if those levels of abstraction were not enough, AWS pioneered yet another abstraction with "serverless" computing (AWS Lambda), which lets you specify the events on which your code should run. You pay only for the compute time consumed while the code runs and manage nothing – no infrastructure, no VMs, no containers – literally nothing. Microsoft Azure and Google Cloud Platform have since come up with their own implementations of serverless computing.
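To make this concrete, a serverless function is just a handler the platform invokes when a configured event (HTTP request, queue message, timer) fires. The sketch below follows AWS Lambda's Python handler convention; the event fields are illustrative:

```python
# Minimal Lambda-style handler. You deploy only this function; servers,
# scaling, and patching are handled by the provider, and you are billed
# only for the milliseconds it actually runs.
def handler(event, context):
    # 'event' carries the trigger payload; 'context' carries runtime info.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

The platform wires this handler to its trigger at deployment time; there is no process to keep alive between invocations.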

I'd like to draw an analogy with the transportation industry. People have always owned cars and rented one when they travelled. This continued for quite some time until ridesharing companies such as Uber and Lyft appeared and disrupted the entire industry. The impact of this disruption is far from over, as the coming years will tell.

To give you an example of the impact from my own experience: it has been close to two years since I last rented a car on business travel, for several reasons:

  1. It is cheaper to Uber than to rent a car, especially since I drive straight to work and then to my hotel.
  2. It is convenient. An Uber arrives in a few minutes, and I am off to my destination. No need to plan extra time to drop off the car at the airport.
  3. Frictionless experience. I get charged automatically through the app.
  4. I don't need to manage my rental – no need to check for chips or scratches before and after, no need to refuel, etc.

So essentially, renting a car is like renting a VM: you get the flexibility of using the VM the way you like, it's yours to keep for as long as you like, and you pay the rental. Ubering is like running an AWS Lambda or Azure Function: you ride from point A to point B and pay for just that – nothing to rent or maintain. Serverless computing works the same way: you pay for your code's execution and don't rent or maintain anything.

Are ridesharing services going to kill the car rental market?

Well, cabs and car rentals have coexisted for quite some time. Ridesharing services disrupted both, but there are still use cases for cabs as well as car rentals. For instance, if I have to make five or six trips in a day, a car rental is probably more efficient and economical than ridesharing.

We have a very similar case in cloud computing: renting a VM (or, for that matter, deploying containers) will continue to make sense if you can drive high utilization of your rented assets. For other cases, "serverless" is the way to go – and there is a long tail of apps in most public clouds that are well suited to moving to serverless.
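The utilization argument can be made concrete with back-of-the-envelope arithmetic. The prices below are invented for illustration, not actual cloud rates:

```python
# Break-even sketch: at what utilization does a rented VM beat
# per-invocation serverless pricing? Illustrative prices only.
VM_COST_PER_HOUR = 0.10           # flat rate, paid whether busy or idle
SERVERLESS_COST_PER_SEC = 0.0001  # paid only while code actually runs

def monthly_cost_vm(hours=730):
    return VM_COST_PER_HOUR * hours

def monthly_cost_serverless(busy_seconds):
    return SERVERLESS_COST_PER_SEC * busy_seconds

# The VM costs the same regardless of load; serverless only matches it
# once the code is busy ~730,000 seconds a month (~28% utilization):
breakeven_secs = monthly_cost_vm() / SERVERLESS_COST_PER_SEC
utilization = breakeven_secs / (730 * 3600)
```

Below the break-even utilization, serverless wins; above it, the flat-rate VM does – exactly the rental-car-vs-Uber trade-off.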

Many companies have already begun to use serverless to drive further efficiencies in their cloud-native apps. Netflix, for instance, uses serverless for many use cases.

Other companies are building new businesses entirely on "serverless" architecture at dramatically lower price points. A Cloud Guru is a good example in the training space, with an insanely low cost of operation.

Just as the Uber phenomenon is in its early stages, so is serverless. With autonomous vehicles on the horizon, the disruption to the transportation industry is far from over.

Serverless computing is a profound shift in cloud computing and only time will tell if this disruptive paradigm goes mainstream.


Software Defined Storage – Trends, Opportunities, Challenges

Overview

There is a strong movement in the industry towards “software defined everything” – compute, network, security, and storage, leading to a vision of software defined data center. In this blog post, I share my perspectives on the major storage trends and the opportunities/challenges that lie ahead in realizing the vision.

Server virtualization brought efficiencies to the compute infrastructure. However, as compute virtualization in the datacenter matured and provided a flexible and programmable compute architecture, it exposed the rigidity of networking and storage functions. Software Defined Networking (SDN) rose to prominence in 2012 and promised to bring the same level of flexibility and programmability to network and security functions as server virtualization did to compute.

The next target was storage, so the industry coined the term Software Defined Storage (SDS), borrowing concepts from SDN. Before we get into the SDS discussion, it is worth briefly looking at SDN and drawing parallels between the two.

VMware recently announced NSX, a network virtualization (NV) platform, which isn't quite the same as SDN. Martin Casado, founder of Nicira and CTO of Networking at VMware, explains the differences and acknowledges that the lack of a clear definition of SDN is causing confusion among customers. Cisco responded with ACI, sparking an ACI vs. NSX debate that will be a topic for another blog post.

Just as server virtualization provides software constructs such as vCPU, vRAM, and vNIC decoupled from hardware, network virtualization provides L2-L7 software constructs for switches, routers, load balancers, and firewalls to allow programmatic provisioning of the network along with compute.

Similar trends can be seen in the telecom industry, where functions such as the session border controller (SBC) are moving off appliances onto general-purpose servers. The industry term for this trend is network function virtualization (NFV). What drove it is essentially operators getting tired of dealing with lifecycle management issues across multiple appliances from multiple vendors, and seeing an opportunity to virtualize these functions on commodity hardware. Again, I'll cover this topic in a separate blog post.

Software Defined Storage Trends

The reason I bring SDN, NV, and NFV into the discussion is that I see a broader IT trend here, one that is shaping the storage industry too.

  1. Agility: Enterprise IT organizations need agility in managing the infrastructure to meet the business needs, or risk losing the business to public cloud providers.
  2. Cost: Datacenter architecture is trending towards increasingly general-purpose and homogeneous hardware, with virtualized infrastructure services layered on top to allow automation and flexible provisioning to serve the application needs of the business. This trend is disruptive to traditional hardware appliance vendors.

So how is this broader IT trend shaping the storage industry? I would classify the impact into four major areas:

  1.  Storage Evolution will follow Networking Evolution

Storage, in many ways, is similar to networking, and you'll see software defined storage causing the same terminology confusion as SDN did. Look around and you'll find many definitions of Software Defined Storage, mostly tweaked by vendors to suit their own interests. Established vendors such as NetApp say they have had Software Defined Storage (SDS) features for years – they just didn't call it SDS.

EMC is taking a different approach and recently introduced ViPR, a new software-only offering in the form of a controller, similar in approach to an OpenFlow controller. And then there is a whole slew of startups, each defining SDS in its own terms. For instance, Nexenta provides a storage solution based on OpenSolaris and ZFS running on commodity JBODs, adding yet another angle to the SDS definition.

Regardless of the lack of a clear definition of SDS, the key objective the industry is trying to accomplish, just as with SDN, is to reduce the complexity of managing storage in a virtualized world of compute and network. Server virtualization has created a new set of challenges for storage management, where the granularity of management has moved from the physical server to the VMs hosted on it. Existing storage boxes had no notion of a VM, creating an opportunity for startups such as Tintri to build VM-aware storage. Meanwhile, VMware announced vVols to allow storage vendors to build VM awareness into their appliances.

  2. Converged storage and compute emerging as a SAN/NAS killer for certain workloads

Motivated by the architectures of hyperscale companies such as Google and Facebook, another key trend is bringing compute and storage together in a scale-out architecture that obviates the need for traditional SAN/NAS devices. Nutanix is one of the leading players in that space, and VMware recently announced VSAN, which is built on the same principles and brings SAN-like benefits at much lower cost. Microsoft Windows Server 2012 has several new features, such as Storage Spaces and the SMB 3.0 Scale-Out File Server (SoFS), built on similar architectural principles.

Maxta is another startup that recently came out of stealth mode. It provides a storage pool created from local storage and exposed as an NFS appliance to each hypervisor.

  3. Object storage as the fastest-growing segment, driving major architectural changes

Object storage is the fastest-growing segment in cloud storage. Unlike file/block storage, object storage stores the entire object and its metadata together and provides an object identifier for access. Object storage implementations typically provide REST interfaces with simple put, get, and delete operations.
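Those semantics are easy to picture with a toy in-memory store (illustrative only, not any vendor's API): objects are written whole with their metadata and addressed by an opaque identifier rather than a path or block range.

```python
import uuid

class ObjectStore:
    """Toy in-memory object store illustrating put/get/delete semantics:
    each object is stored whole, together with its metadata, and is
    retrieved by an opaque identifier, not a filename or block address."""
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, metadata: dict) -> str:
        oid = uuid.uuid4().hex              # opaque object identifier
        self._objects[oid] = (data, dict(metadata))
        return oid

    def get(self, oid: str):
        return self._objects[oid]           # data and metadata together

    def delete(self, oid: str) -> None:
        del self._objects[oid]

store = ObjectStore()
oid = store.put(b"report contents", {"content-type": "text/plain"})
data, meta = store.get(oid)
```

Real systems expose the same three verbs over HTTP (PUT/GET/DELETE), which is why object stores scale out so naturally behind REST gateways.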

While one can build object-based storage arrays with software and commodity JBODs, Seagate is removing many layers of the hardware/software stack by taking the object interfaces directly to its drives, through a new platform called Seagate Kinetic.

SwiftStack, a company providing scale-out object storage based on OpenStack Swift, has teamed up with Seagate to support Seagate's new Kinetic drives.

  4. Flash will deliver IOPS while HDD delivers capacity

Finally, flash is emerging as a mainstream storage technology in the enterprise. One can find a variety of flash-based products, both as pure flash arrays or PCIe cards and in hybrid disk arrays. Companies such as Skyera are trying to bring the cost of flash down to $1-$2 per GB without deduplication and compression (close to the price point of HDD) in their 1U, 65 TB to 250 TB appliances offering up to 5 million IOPS. The price, performance, and density economics of these devices will start changing the disk vs. flash array mix in datacenters in a big way.
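For a rough sense of those economics, take the midpoint of the quoted price range against the smallest quoted appliance (back-of-the-envelope figures, not vendor pricing):

```python
# Back-of-the-envelope arithmetic on the figures quoted above.
cost_per_gb = 1.50               # midpoint of the $1-$2/GB range
capacity_gb = 65 * 1000          # 65 TB appliance, decimal TB
appliance_cost = cost_per_gb * capacity_gb    # ~$97,500 for the box
cost_per_iops = appliance_cost / 5_000_000    # ~2 cents per IOPS
```

At roughly two cents per IOPS, flash undercuts spinning disk on performance-dominated workloads even before deduplication and compression improve the effective $/GB.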

Opportunities and Challenges

While the SDS space is still evolving, customers can start taking advantage of some of the offerings for certain types of workloads. However, so much has changed in the storage space over the last few years that it is difficult for customers to make the right decisions for their business problems.

The key dilemma: storage decisions must be made along many dimensions – location (cloud or on-prem), technology (flash, disk, or hybrid), type of storage (file/block or object), and so forth. Imagine making all of these decisions at scale across thousands of VMs and applications, and managing the dynamic storage needs of these workloads to meet application SLAs while optimizing cost.

In an ideal world, a CIO would buy storage capacity in all of these categories from best-of-breed vendors, then throw a storage controller on top to manage application requirements (performance, availability, etc.) dynamically at optimal cost. That would be true zero-admin storage nirvana.

With the current trends, we seem to be heading in that direction, but we are a few years away from mature solutions. Can EMC ViPR evolve into that intelligent storage controller? Or will it be a new offering from another startup?

Another key challenge is enforcing application SLAs in a complex and heterogeneous datacenter environment, especially from a storage perspective. In a virtualized world, where an IO request from a VM traverses many layers, it is hard to define and meet end-to-end policies. Just as in OpenFlow, one needs to define the notion of a "storage flow" and instrument all layers – OS, drivers, hypervisors, network, and storage endpoints – to rate-limit the flow and manage storage policies through a control plane. Microsoft Research has recently published an architecture that attempts to address this challenge.
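One way to picture the rate-limiting mechanism is a token bucket applied per tagged "storage flow" at each instrumented layer. The sketch below is a toy illustration of the idea, not the Microsoft Research design:

```python
import time

class TokenBucket:
    """Toy token-bucket limiter: the kind of mechanism a driver,
    hypervisor, or storage endpoint could apply to a tagged storage
    flow to enforce a control-plane IOPS policy end to end."""
    def __init__(self, rate_iops: float, burst: float):
        self.rate = rate_iops        # tokens (IOs) replenished per second
        self.capacity = burst        # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, n: int = 1) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n         # admit the IO(s)
            return True
        return False                 # IO must queue or be throttled

bucket = TokenBucket(rate_iops=100, burst=10)
admitted = sum(bucket.allow() for _ in range(50))  # burst drains quickly
```

A control plane would set `rate_iops` per flow from the SLA and push it to every layer on the IO path, so no single hop can be oversubscribed.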

To conclude, there is a lot of innovation happening in storage software that is going to disrupt the traditional storage industry once again. It's an exciting time to be in the IT industry, where everything is becoming "Software Defined". As Marc Andreessen explained in "Why Software Is Eating the World", the storage industry appears to be software's next prey.

 


New Productivity Suite in the Mobile Era – Who Will Be the Winner?

In this blog post, I discuss the applicability of Christensen's framework, as outlined in his book "The Innovator's Dilemma", to the productivity suite market represented by products such as Word, Excel, and PowerPoint. My analysis leads me to believe the conditions are ripe for another disruption in the productivity suite, though it remains to be seen who will win in the mobile era.

Christensen discusses a product evolution model in which the basis of competition moves from functionality to reliability to convenience and then to price. He shows that when functionality from multiple vendors overshoots market demand, and customers can no longer differentiate based on functionality, the basis of competition moves to reliability. Similarly, when two or more vendors offer highly functional and reliable products, customers choose based on convenience. He uses examples from multiple industries, but the most interesting and closest to the productivity suite is Intuit's QuickBooks, which changed the basis of competition from functionality to convenience and captured 70% of the small-business accounting software market within two years of its introduction.

Let us try to apply this model to productivity suite.

From the late 1980s to 2000, Microsoft ruled the world with its Desktop/Office suite. A series of sustaining innovations kept Microsoft in the leadership position while the basis of competition was still functionality.

The widespread adoption of the web in the late 1990s and early 2000s allowed companies such as Google to disrupt Microsoft by creating a new productivity suite that was designed to leverage the web as a delivery medium.  Google designed a “good enough” productivity suite that met the functionality needs of its mainstream market (consumers). The basis of competition therefore changed from functionality to convenience. Simplicity of function, access from anywhere, ease of content sharing and collaboration enabled by “cloud” were “convenience” differentiators that allowed Google to start capturing Microsoft’s consumer market. However, the consumption end points that the suite was designed for were still primarily desktops. Microsoft soon responded with its own offering that now competes with Google’s.

Since the introduction of the iPhone in 2007, we have seen a massive shift from computing devices such as PCs to alternatives such as tablets and mobile phones. In addition, the desire to stay socially connected can clearly be seen as new apps such as WhatsApp, WeChat, and Vine gain viral adoption.

Mobility and social networking are driving a new behavior where people want to get things done quickly, in small chunks. Chris Dixon, a serial entrepreneur and investor, puts it succinctly in his blog post "The internet is for snacking": the successful products took big meals and converted them to snacks. The internet likes snacks – simple, focused products that capture an atomic behavior and become compound only by linking in and out to other services. This has become even more so with the shift to mobile: people check their phones frequently, in short bursts, looking for nuggets of information.

This raises the question of whether conditions are ripe for another disruption in the productivity suite, one that redefines how people want to be productive on tablets and phones. Simple, quick, in short bursts, socially enabled – these are just a few of the new "convenience" attributes that would likely become the basis of competition.

The fact that we are beginning to see new productivity tools with some of these convenience attributes is indicative of a potential disruption. For instance, Quip, a new social word processor, was designed from the ground up for mobile devices with social collaboration built in. It was started by ex-Google folks who saw an opportunity to reimagine productivity in a mobile and social world; they began with a word processor but have a mission to create the modern productivity suite for the mobile era. Another example: Box recently announced Box Notes, a new product that takes a similar approach and builds on the assumption that existing word processors have overshot the market.

By the way, the impact of this paradigm shift in computing isn't limited to consumers. The confluence of cloud, social, and mobility is not only transforming organizations culturally (e.g. from hierarchical to flat, from silos to open and connected, and from closed workspaces to flexible work styles) but also driving us toward a new era of continuous productivity, where work lives and personal lives are intermixed and, to quote from the blog post, where "People have the ability to time slice, context switch, and proactively deal with situations as they arise, shifting from a world of start/stop productivity and decision-making to one that is continuous".

To sustain this transformation, we need to challenge fundamental assumptions baked into productivity suites written decades ago for desktop environments. Retrofitting the old stack onto the new world of socially connected people with mobile devices is a band-aid that will last only until the industry creates a new, modern stack that meets the functionality and reliability needs and has most of the convenience attributes discussed earlier. The modern tools enabling these new experiences in enterprises will in most cases be adopted or derived from consumer apps, and in a few cases may need to be designed exclusively for enterprises. As an example, Cotap is a startup funded to bring WhatsApp-type capabilities to the enterprise.

Given this paradigm shift in end-user computing, a disruptive change in productivity software appears inevitable. However, it remains to be seen who creates the new modern productivity software for the mobile era. Will it be startups like Quip or Box, or incumbents such as Microsoft and Google?

Christensen describes five laws of disruptive technology that companies must understand to deal with disruptive change appropriately. The laws are listed below. A detailed explanation is beyond the scope of this blog post, but the reader is advised to read their description in the book.

1) Companies Depend on Customers and Investors for Resources
2) Small Markets Don't Solve the Growth Needs of Large Companies
3) Markets That Don't Exist Can't Be Analyzed
4) An Organization's Capabilities Define Its Disabilities
5) Technology Supply May Not Equal Market Demand

If the incumbents understand these laws, and can harness the forces underlying them rather than fight them, they may be able to maintain their market position in the productivity space in the mobile era.

So who do you think the winner will be?



Enterprise Social Chaos

There is plenty of debate and discussion on whether enterprise social network (ESN) technologies create real and measurable value in the enterprise.  If you think enterprise social is a fad, you may want to stop reading because in this post, I assume there is value in bringing social technologies into the enterprise. However, the value realization is contingent upon the industry solving some key challenges that enterprises face today in rolling out social technologies and integrating them with their current business processes.

I help enterprise customers adopt social technologies that drive efficiency into their current processes. Through these engagements I have come to realize the immature state of the enterprise social market, and have developed an appreciation for the challenges these enterprises face in aligning social technologies with business processes.

In this post I discuss these gaps and challenges and the need to develop better solutions and/or standards to address them.

Let's examine the different ways social technologies can be integrated into business processes. Rawn Shah provides a good overview in this article. He discusses models ranging from a completely disjoint model (social activity happening entirely outside the business processes) to a fully integrated model in which business processes are embedded in the social activities and appear in the normal flow. The latter provides the most business value but requires that business processes be redesigned. Given the investment cost of redesigning business processes, managers aren't likely to choose the fully integrated model, at least not now.

With this background, let’s examine the problem of social adoption in the enterprise. There are two angles to this problem:

  • Managers are asking – How can I take advantage of social collaboration across the company without disrupting or redesigning my business processes? 
  • End users are demanding –
    • I need to collaborate in the context of the business process I am in. Don’t ask me to switch tools.
    • As I switch from one business process to another, my collaboration experience should remain seamless and consistent

With the existing set of social technologies, it is quite challenging to meet everyone’s expectations and requirements. Why?

In order to answer that, let's first take a look at the ESN vendor landscape. There are essentially two types of vendors. The first consists of pure-play ESN vendors vying for wall-to-wall ESN ownership, not tied to any application; Yammer, Jive, and IBM Connections fall into this category. The second consists of business application vendors such as SAP, Salesforce, and Oracle, who are building application-centric social networks – Jam, Chatter, and OSN respectively.

Most enterprises today have multivendor business applications running different business processes. If enterprises implement the social technologies offered by business application vendors, the end result is fragmented communities and silos with inconsistent user experiences. For instance, an account exec who lives in the Salesforce world collaborating on opportunity management (using Chatter) can't collaborate with finance on a PO or invoice in SAP, since the finance folks are all on SAP Jam.

This defeats the enterprise goal of promoting social collaboration across the company. It also fails to meet user expectations of a seamless and consistent collaboration experience across multivendor applications.

If they were to implement pure-play ESNs such as Yammer or Jive, which take a more user-centric view, the resulting solution would still not quite meet user expectations: users want to collaborate in the context of the business process, not outside it. Yammer Embed mitigates this by embedding Yammer feeds into existing business applications, but it can be argued that the resulting solution is still not optimal from a user experience standpoint.

If we extend this problem to partners and customers, we have another set of challenges. If Company X partners with five other companies, each of which has adopted a different ESN, a Company X employee would need to switch across five different experiences to collaborate with them all.

How do we solve these challenges as social technologies move up the maturity curve? Social is a fabric that is expected to bring employees, enterprises, partners, and customers together. That is best done with a unified fabric and a consistent experience, similar to what we have with email or the phone: you don't switch email systems when communicating with company A vs. company B, and you don't use a red phone to call company A and a blue phone to call company B. So why should your social experience change as you interact with different companies?

In an ideal world, one should be able to use social tool A to invite people who use social tool B into its communities and vice versa. Participation into these communities shouldn’t force anyone to switch tools of their choice.

The real answer lies in interoperability standards. However, given the immaturity of this market and vendors competing to entrench themselves, it will be some time before pressure from enterprises forces vendors to come together and define interoperability standards.

OpenSocial and Open Graph are standards that address different aspects of ESN standardization, but to my knowledge there is little interest from ESN vendors in defining interoperability and consistency of experience from one network to another.

I am curious to see how the solutions and/or standards emerge to address these challenges and enable enterprises to get the true value of social by having one social platform bring all of their business processes, employees, customers and partners together.


Mobile Application Management complexities – What's the right solution?

The consumerization of IT wave has created new opportunities; vendors have rushed in with offerings, and the market is still in flux. In this post, I share my opinions and learnings from large enterprise customers trying to keep corporate data secure while allowing users to adopt a flexible work style with devices of their choice (BYOD).

In 2008, when Windows 8 planning was kicking off, I had the opportunity to be part of the team thinking about improving end-user experiences with software distribution. Instant gratification through streaming, eliminating the procedural model of application installs in favor of declarative models, and application virtualization to carry legacy applications into the new application world were some of the themes we discussed. We also discussed a new Windows app store for distributing Windows apps and designed both the developer and end-user experiences for it.

Four years later, Microsoft announced Windows 8 with an integrated app store experience targeted at tablets and PCs, and a new application model. Enterprises provided feedback on the initial Windows 8 release, and in response, a year later, Microsoft announced Windows 8.1, adding enterprise features such as Workplace Join, Work Folders, and an open MDM API that allows third-party Mobile Device Management (MDM) products to manage Windows 8.1 devices. It's a welcome gesture on Microsoft's part to level the playing field for all MDM vendors, including Microsoft itself, when it comes to managing devices – Windows and non-Windows alike.

As we know, most MDM vendors are moving into the Mobile Application Management (MAM) space as well. Consumerization of IT poses several challenges to IT leaders, one of which is securing corporate data on consumer devices. There are several MAM solutions on the market, but whole-app management (acquiring, distributing, and managing apps) in a customer scenario is still immature. Gabe Knuth has done a brilliant job of describing the complexities of managing apps in this article. Even with good MAM solutions, acquiring and distributing apps to devices is still a complex business.

In my view, the "containerization" approach, including app wrapping and SDKs, is a stop-gap arrangement. The real answer is for all operating systems to standardize on and support MAM capabilities natively. This is the natural evolution cycle of a new market need: the solution shows up first as a band-aid before the cure is found.

If we look at Apple iOS and Android, Apple is further along, including both MDM and MAM support in iOS 7. That means no app wrapping or SDK is needed on iOS 7, which eliminates a lot of complexity in how apps are acquired and distributed. Android, BlackBerry, and Windows still need to follow suit.

If we look at this problem from the standpoint of multiple vendors, it makes perfect sense for MAM support to be a native capability in all operating systems.

Stakeholder benefits of Native MAM

  • App developers: Writing to each MAM-specific SDK is wasted effort. If they can write once to an OS API that different MAM vendors plug into, it is a much cleaner solution – it saves cost and extends reach by enabling their apps for multiple MAM products.
  • MAM vendors: It saves time and effort, since they no longer have to invest in building an application vendor ecosystem. In fact, each MAM vendor gains access to a much larger application ecosystem enabled by the OS or device vendor. They will, however, not be able to differentiate on the number of apps supported, so they will need other differentiators such as enhanced user experience.
  • IT pros: A wider app selection to offer their users.
  • End users: A seamless experience and a broader choice of apps, with applications that always stay up to date.

The good news is that MDM/MAM vendors have started to take notice of the complexities of the current model and are moving in the right direction, abandoning the "containerization" path. For instance, VMware just made a strategic shift embracing iOS 7. This is a welcome move that benefits everyone: end users, IT pros, and app developers.

While Citrix, through XenMobile, is trying to create an app ecosystem with its Worx gallery, it will need to change direction for iOS 7 – and eventually Android, once Android offers those MAM capabilities natively.

I expect similar moves from other leading vendors such as AirWatch and MobileIron.

I also wonder whether MDM will be a dead space in a few years – why manage devices when I can manage just the things I care about, i.e. corporate apps and data? Many enterprises today require user devices to be fully managed before they are enabled for enterprise use, and users aren't comfortable with this model for privacy reasons. As the MAM/MDM world matures and device types proliferate, enterprises will care less about devices and more about securing the corporate apps and data on them.

Comments?


Can Lync with SDN address user experience challenges?

Microsoft Lync is a unified communications (UC) solution being rapidly adopted by enterprises. However, deployment and user adoption of any UC solution – and Lync is no different – is very challenging.

A UC technology such as Lync has several benefits. It takes out the cost of running traditional telephony and voice conferencing solutions, and it drives user productivity by supporting "work anywhere" scenarios.

Anyone who has deployed Lync understands the user adoption challenges that come with it. A user with a desk phone and a PSTN connection is now given a PC and a headset and is told to “work anywhere”. The user is excited: he can now carry his phone with him anywhere and be productive with IM, phone and conferences wherever there is a network connection. His excitement, however, quickly fades the first time he has a poor experience due to a bad network connection at a remote site.
Needless to say, if the user experience is not managed well, the UC rollout will fail miserably.

A service such as Lync is heavily dependent on the quality of the network. Providing an end-to-end quality experience on Lync is non-trivial, especially when one of the endpoints is bandwidth-challenged.

Fortunately, SDN holds the promise to solve this challenge.

HP and Microsoft recently demo’d Lync with SDN at Interop 2013. Here’s the demo video and a post that explains how it works.

Lync is probably one of the few real-time applications that talks to the network: it communicates information in real time through an SDN controller, allowing the network to dynamically tag the flows and assign bandwidth. This significantly improves the end-user experience without the overhead and cost of re-engineering the network. Lync can also dynamically provide QoE information to the SDN controller, allowing the controller to re-route flows as needed.
Conversely, the SDN controller can provide Lync with network information that Lync can use to manage user expectations before the user initiates a media session. This is a big deal for the end-user experience, since this information allows Lync to proactively set expectations with the user.
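To make the two-way exchange concrete, here is a minimal sketch of the idea in Python. Everything in it is hypothetical: the class names, method names and bandwidth thresholds are illustrative inventions, not the actual Lync SDN API or any vendor's controller interface. It only models the handshake described above: the app registers a media flow so the controller can tag it (using the standard DSCP values for voice and video), and the app queries the controller for link conditions before starting a call so it can set user expectations.

```python
# Illustrative sketch only -- not the real Lync SDN API.
# Models: (1) app -> controller flow registration and DSCP tagging,
#         (2) controller -> app network info for pre-call expectation setting.

# Standard DiffServ code points: EF for voice, AF41 for video, best-effort otherwise.
DSCP = {"voice": 46, "video": 34, "data": 0}

class SdnController:
    def __init__(self, link_bandwidth_kbps):
        self.link_bandwidth_kbps = link_bandwidth_kbps
        self.flows = {}  # flow_id -> (media_type, dscp)

    def register_flow(self, flow_id, media_type):
        """The app announces a new media flow; the controller tags it."""
        dscp = DSCP.get(media_type, 0)
        self.flows[flow_id] = (media_type, dscp)
        return dscp

    def link_quality(self):
        """Network info the app can use before a session starts.
        Thresholds here are made up for illustration."""
        if self.link_bandwidth_kbps >= 1500:
            return "good"
        if self.link_bandwidth_kbps >= 300:
            return "voice-only"
        return "poor"

class UcClient:
    """Stand-in for a Lync-like client that consults the controller."""
    def __init__(self, controller):
        self.controller = controller

    def start_call(self, flow_id, media_type):
        # Check network conditions first so the user can be warned up front.
        quality = self.controller.link_quality()
        if quality == "poor":
            return (flow_id, None, "warn user: expect a degraded experience")
        dscp = self.controller.register_flow(flow_id, media_type)
        return (flow_id, dscp, f"call started, flow tagged DSCP {dscp}")

controller = SdnController(link_bandwidth_kbps=2000)
client = UcClient(controller)
print(client.start_call("call-1", "voice"))  # voice flow tagged with EF (46)
```

In a real deployment the first leg would go over the controller's northbound API and the tagging would be pushed to switches via a southbound protocol such as OpenFlow; the sketch collapses all of that into direct method calls to keep the interaction pattern visible.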

An average user doesn’t understand (nor is he expected to) the intricacies of application and network interaction. For him, if the call or video doesn’t go through well, he ends up blaming the application and the IT team behind it.

By having the application and the network talk to each other, the user experience can be managed much better, leading to faster adoption of UC in the enterprise.

IMHO, applications such as Lync have the potential to bring out the value of SDN and take it from hype to reality.

Posted in Uncategorized

The evolving role of IT

Cloud, Social, Mobility and BigData are megatrends that are driving radical transformations of businesses. Much has been written about these trends and how they are helping businesses be more productive and competitive.

In this post, I share my thoughts and experiences on how these megatrends are impacting the relationship between lines of business and IT organizations.

As we all know, in an enterprise, IT is a service organization providing IT services to the businesses, enabling them to run efficiently. Unfortunately, in most cases, the perception of IT has always been that of a slow, antiquated, “blocking the way” organization.

There are several reasons for this perception, but here are the two major ones:

  • IT is a cost center, and as a result IT organizations are severely budget-constrained. They are always under pressure to do more with less.
  • They are seldom set up organizationally to respond to business needs dynamically. Planning and budgeting happen at the beginning of the year; if business needs change during the year, it is difficult for IT to respond.

As a result, IT orgs have always lagged and have often been slow to respond to business and employee needs. More energy gets spent on keeping the lights on and less on innovation.

When innovation is stifled inside, it happens outside. Creative market forces figure out a way to deliver innovative IT services without being part of an IT organization. Splunk, Evernote, Dropbox and Yammer are some of the companies that target enterprise users with a “freemium” model. When these services become popular among users, enterprises take notice and are forced to make them a corporate standard. In their “free” form, these services are a big nuisance to IT and cybersecurity teams, posing serious security and compliance challenges.

The fact is, users today are no longer dependent on IT to meet their needs. It is easier and faster to get IT services from cloud vendors offering infrastructure, platform and software as services. Users are no longer dependent on the tools and machines provided by IT. They bring their own devices (BYOD) and use cloud services (such as Dropbox, Yammer and Google Apps) to get their jobs done.

On the core business side, SaaS services such as Salesforce and Workday are replacing on-prem business applications. Even for developing and testing custom apps, businesses are finding it easier and faster to spin up a machine on a public cloud such as Azure or AWS than to request capacity from IT.

So where is this trend leading us? How are IT organizations likely to adapt to this change, and what role will they play in the new “post-PC” world of devices and cloud services?

I think we are moving towards a world where IT’s role will be more that of a facilitator and broker of cloud services than a provider of IT services to businesses. The megatrends are breaking the dependency of businesses on IT and giving them choices, so businesses can adapt to a fast-changing world without IT being a drag on their success. IT can best succeed by partnering with the businesses: advising them, guiding them and brokering relationships with service providers. This will require IT investments to shift towards security, governance, compliance and vendor contracts, because enterprises will likely be dealing with more vendors and utilizing more hosted services than ever before.

The sooner IT orgs embrace this new reality, focus on where they add value (such as security, compliance and vendor contracts) and get out of running traditional IT services, the better off enterprises will be.

Posted in Business and Technology