Independent Telecommunications Consultants

Microsoft and the Exxon Valdez

Finally, here is the last of my three-part series of advice to the Microsoft CEO du jour. In my first and second posts, I went through the easy separations. Now it gets interesting.

The third group and separate business?  Applications.

To be clear, I mean both desktop- and server-level applications. This is where I really think Microsoft failed big time. There is one application that everybody uses: email. And Microsoft owns corporate email – to this day, even with the erosion that they have caused themselves. They doubled their failure by ignoring the application on both the mobile device and the mail server end. How, you ask? Easy – go ahead, find a version of Outlook that will run on Android or Linux. We hear our Apple folks talk about Outlook on iOS, but even that’s not the same. Going a bit further, go find a copy of Exchange that runs on a headless Linux server.

Let’s go a bit further. How many versions of Office that run on Android are there? How about ANY Microsoft server-based application that will run on Linux? By ignoring non-Microsoft-based operating systems, Microsoft itself has opened the door to greater competition. Everyone wants to do things cheaper, and in the server space, Linux gets pretty darn cheap! Add to that the performance and availability characteristics of Linux over Windows in server environments, and this becomes a no-brainer.

Author note: While Linux retains portions of this advantage today, the gap – to Microsoft’s credit – has narrowed significantly. I don’t want anyone to think that Microsoft’s server OS was unreliable in any specific application. Linux certainly has its own shortcomings.

Given this, if you were a start-up developer and had a choice between Linux, Apache, MySQL, and PHP (commonly referred to as a LAMP stack) or Windows Server 2008, SQL Server, and IIS, which would you use? Your start-up costs are $0 vs. thousands of dollars, respectively. On top of that, your customers’ deployment costs, short of your application, are also $0 vs. whatever Microsoft’s licensing costs are to that customer.

What this really shows is that Microsoft is, down deep, an operating system software company, not an application development company. By spinning the applications into a separate group, or even sub-groups, the resulting company would be free to innovate to its heart’s content. It would also eliminate the whole argument that Microsoft-developed applications use undocumented calls in APIs.

Frankly, I’d love to have Exchange running on Linux. I think the end result would be a much better solution. I would argue that SQL Server would also benefit from a Linux core.

The real question becomes, where do you put things like SharePoint, Lync, and other applications that rely on Active Directory, SMB, Internet Explorer, and some of the other proprietary pieces of the Microsoft puzzle? And what about the “consumer” products like Xbox, mice, keyboards, and whatnot?

Well, this is where the fourth company comes into play. While Active Directory isn’t required for either desktop or server products to operate securely, SMB and Internet Explorer are required, and can’t be easily separated in the short term.

This is where the rubber really meets the road in the difficulties that Microsoft has to surmount. My recommendation would be that key technologies, particularly Active Directory and SMB, be spun into a company on their own. The good news is that now, the possibility of better control and sharing for non-Windows devices becomes a larger reality.

For what it does, Active Directory’s features, manageability, and user security are excellent. In this day of BYOD, wouldn’t it be nice to be able to manage a Linux, Android, or iOS device as part of an Organizational Unit? While LDAP and Samba work pretty well, they’re not always easy to configure. This spinoff would allow an independent business unit to develop, and probably more importantly, to license the protocol and database technology to just about anyone who wants to write to it.
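To give a sense of what that configuration burden looks like on the Linux side today, here is a minimal sketch of an SSSD configuration for a Linux host joined to Active Directory. The domain name example.corp is a placeholder, and a real deployment would typically start with something like `realm join example.corp`; treat this as an illustration of the moving parts, not a drop-in config.

```ini
# /etc/sssd/sssd.conf -- minimal sketch for a hypothetical domain "example.corp"
[sssd]
config_file_version = 2
domains = example.corp
services = nss, pam

[domain/example.corp]
# Use the Active Directory provider for both identity lookups and access control
id_provider = ad
access_provider = ad
ad_domain = example.corp
krb5_realm = EXAMPLE.CORP
# Give AD users a predictable local home directory
fallback_homedir = /home/%u@%d
```

Even this "easy" case assumes working Kerberos, DNS, and time synchronization underneath – which is exactly the configuration friction a licensed, first-party client could remove.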

What do you do with the consumer products? Well, the only real product is Xbox, and while I’m sure it’s profitable, it’s really not a big part of the Microsoft empire. I’d sell it off, or make it the fifth company. Everything else, just dump it. Really, when I go to buy a mouse, keyboard, or headset, I’m not looking for “Microsoft” on the label. I doubt that any of these items is manufactured in-house anyway.

While I’m sure I’ve left tons out, and that anybody in the world could pick this series and its ideas apart, I think the concept I’m trying to put across is clear. Microsoft has become the Exxon Valdez of its own destiny. A similar scenario was IBM in the ’80s and early ’90s. Finally they (IBM) realized that they couldn’t own it all, and much of the corporate culture changed for the better. Microsoft can do this too, but only if they take my advice…

Microsoft Mitosis – to survive, Microsoft must divide.

Mitosis – one of the many forms of biological cellular division.

In my first post I said I had 3, possibly 4 new separate companies that Microsoft must become to survive. Let’s take a look at what I came up with for the first two.

First, spin off the desktop operating system into a separate organization.

Yep, I said it – let the desktop be its own group. Why, you ask, would I let the desktop folks fly on their own?

Let’s look at Microsoft’s products, the market, and the changes coming. Microsoft’s core product is the Windows kernel – the core code that really runs everything. It is basically the same across the desktop and server operating systems. By nature, both are focused on Intel (or compatible) processors and a standardized, open driver and hardware architecture, with APIs so close that many drivers for a given piece of hardware are identical between server and desktop.

Unfortunately, the device, and even the desktop environment, is changing – and to a large extent, away from an Intel-compatible CPU core. While there will always be “power users,” realistically, the average business person doesn’t need all the processing resources that they once did. Additionally, mobile and device processors are now approaching the speeds desktops had when Windows 7 was first released.

Unfortunately, the move away from Intel compatibility in mobile devices has left Microsoft (and Intel) in a bit of a bind. In the Linux/Android world, building the kernel for a new processor only requires an appropriate compiler – this is validated by all the various Linux derivatives for ARM and MIPS processors. Not to mention that the licensing for this is easy, and CHEAP!

Additionally, small, embedded processing devices do not lend themselves to an open driver/architecture environment, further complicating development for this industry segment. Again, Microsoft is trying to innovate through acquisition with the purchase of Nokia’s mobile business.

Microsoft suffers here because they have limited resources, and since core kernel development is shared between desktop and server, they can’t assign enough resources to the task while maintaining compatibility across all development platforms. It becomes a limitation of size – smaller companies can be more flexible and reactive. By shedding the weight of the rest of the company, an organization focused on the desktop or end-user environment can develop for new processors and architectures, and maybe even more importantly, ease the licensing and legal portions of using the product(s).

Second – Make the server OS a standalone organization

Well, without belaboring the previous comments, let’s face it: do I really need themes, Aero, or DirectX on a server?

But we can take that even a bit further. The I/O requirements for a server are completely different from those of an end-user device. The fact that I can load a desktop version of Windows on server hardware – and that all the cheap servers out there are nothing more than a desktop with server software loaded – speaks volumes about the overhead from a kernel, driver, and I/O standpoint. A split would allow the server developers to really tighten up who, what, and how the product is deployed, particularly with 3rd-party application developers, memory utilization, and how the system works in a virtualized environment.

By leaving the desktop behind, the server kernel can be substantially more aggressive in I/O, memory, and threading. The developers can focus on lower-level code, thereby increasing capacity and improving how services and core hardware are monitored and executed. Add to that the fact that less code means less to secure, and it only makes sense. This, like the desktop group, allows for smaller, more responsive development in reaction to technology improvements.

In my forthcoming 3rd and last post on the ultimate demise of Microsoft – positive or negative depending on your perspective – I’ll go through the last separate company, and an optional company that no one ever really thinks about when talking about Microsoft.

Microsoft – just another software company

Part 1 – To survive, Microsoft must commit suicide

I’ve seen this coming for a while. When I first had the outline of this series, Steve Ballmer was still CEO, and I was going to write this as an open letter to him. This was just after some of the bad news about Microsoft stock had hit, and frankly, I was going to leverage that into (hopefully) a nice blog series. But maybe the next CEO will get my message. And my message is so straightforward that even an Ivy League MBA CEO can understand it.

As anyone who looks at my bio can tell, I date back a ways. This is why I make such a good Independent Technology Consultant. I date back to when Digital Research wouldn’t sell IBM CP/M, but Bill Gates would sell them QDOS with a few frills of his own. Admittedly, it was a shrewd and gutsy move on Mr. Gates’ part. But the important piece to know is that Mr. Gates (can I call you Bill?) and I stomped a lot of the same primordial microprocessor mud. I think if I dig, I’ll even find an old copy of BYTE Magazine that still shows Wayne Green (who I HAVE met) as the publisher, not to mention the stacks of [Steve] Ciarcia’s Circuit Cellar that I finally recycled. Not to mention all the coffee cups from companies that no longer exist. One of the perks of a history of complex network consulting, I guess.

[Side note to Steve Wozniak: Woz, did you ever wonder where and what would have happened if you had continued with the 6800 on the Apple I instead of the 6502? My curiosity really ties back to the whole Lisa development, since it was 6809-based – BTW, I have some MC68B09’s in a tray over here….]

So back on topic about Microsoft –

Microsoft, along with a few other technology companies born from the original Silicon Valley/Redmond boom, has the problem of largeness, and to a certain extent, a belief that they can and want to own a technology segment – and that they DESERVE to own that segment.

And to a large extent, they have owned the desktop, server and some key application environments for quite some time. The problem is that after a time, companies that are leaders begin to believe their own marketing hype.  Not that they are not innovative, although they usually end up doing more innovation through acquisition than via true ground-breaking deep thought.

But now they are in trouble: Their forays into mobile devices have failed miserably in comparison to their competitors. Their sales of server software have been consistently challenged by the various (free) Linux variants. The strong adoption of HTML5 and JavaScript has allowed both desktop-based and desktop/server applications like SharePoint, CRM, and SQL Server to become high-value targets for enterprise developers.

This competition has been affecting their stock price and some of their development, as has, frankly, the strong adoption of cloud computing (see my series on the cloud starting here) – this from the company that still owns both the desktop and office suite environments. Their foray into cloud computing is an obvious follow-the-leader/600 lb. gorilla move to try to thwart additional loss of market share and leadership to Google.

So how does Microsoft regain leadership, or even simply survive? Unfortunately, in their current form, my opinion is that they won’t. To survive – honestly, to survive the next 20 years – they have to make the hard decision to abandon the “Everything Microsoft” ideal. To do this, they will need to break up Microsoft along some key industry and market lines. I’ve come up with 3, and possibly 4, new companies and segments that will allow Microsoft to regain much of its innovation and market leadership.

Over the next two posts I will explain how and why Microsoft needs to destroy itself in order to survive. While I’ve never been accused of being clairvoyant, I do have a unique perspective, and I end up being right more often than not. Being an independent technology consultant means doing a lot of educated guessing – I guess. Check back in a couple of weeks for Part 2!

Important recall alert! APC power strips

APC (American Power Conversion) issues product recall

It has come to our attention that there has been a recall on several models of power strips with embedded surge protection manufactured by APC. Since many of our customers utilize this type of item, we wanted to make sure that we alerted them appropriately.

I know this is just another little chunk out of your day, but please walk around and check under desks, in cubicles, and in your equipment racks and cabinets for the following styles of APC power strips.

They will look like this:


[Images: APC 8 Series surge-protect power strip; APC 7 Series power strips]

According to the company: “The affected products may present a fire hazard under infrequent, abnormal building wiring and electrical conditions. This hazard has been reported in a small percentage (less than 0.01%) of the units sold and included reports of property damage, mostly involving damaged nylon carpeting.”

The Consumer Product Safety Commission states that the affected models were manufactured prior to 2003. While this seems long ago in the technology world, these are the type of generic items that seem to last forever. There are really no “wear” items internally, and short (no pun intended) of being dropped or some other odd physical damage, the units can last a substantially long time.

If you do have one, check the APC Recall web site to ensure that your specific model is included and place a claim.

As always, we’re here to help!


To Cloud or Not to Cloud

Know what works in the cloud, and what doesn’t


In my previous posts on the cloud (first and second) I may have come off as not a big fan of cloud-based applications. I’m actually neutral – I’m more about what the right solution is for each particular customer and situation. After all, I am an independent telecom consultant, and unless I’m objective, I’m doing my customers a disservice.

So let’s look at some basic things about using a cloud application, how, and where it potentially could affect your business.

First off, remember that a cloud application’s availability is only as good as the weakest link in the connectivity chain. In other words, an outage anywhere between you and wherever the server(s) are will affect your ability to use the application. Typically, the weakest link is the “last mile” access between you and whoever your internet service provider is. Their availability is directly related to the application’s availability. And even the most reliable providers can have issues. As an example, one of our customers recently had to deal with a total failure of their internet service – for almost 8 hours – due to a local fiber cut. Their email, and more importantly their credit card authorization mechanism, were unavailable for most of the business day. Luckily, their order system is on a local server, so all they dealt with was a delay in the confirmations of the transactions.

Obviously, the point here is that if it’s a business-critical application, make sure that in a cloud, your business has multiple diverse paths to access the system. This can create problems, particularly if you do not have a firewall that supports multiple WAN interfaces, or if a secondary provider has to rely on T1 or lower-speed xDSL access.
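As a sketch of what “multiple diverse paths” means in practice, here is the kind of logic a monitoring script or a multi-WAN firewall applies: probe the cloud application’s endpoint over an ordered list of paths and use the first one that answers. The host names below are hypothetical, and a real firewall does this per-flow in its own firmware; this is only an illustration of the decision logic.

```python
import socket

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_path(paths, probe=reachable):
    """Return the name of the first WAN path whose probe target answers.

    paths is an ordered list of (name, host, port) tuples, preferred first.
    Returns None if no path is up -- the case where the application is simply
    unavailable, no matter how good the cloud provider's own uptime is.
    """
    for name, host, port in paths:
        if probe(host, port):
            return name
    return None
```

For example, `choose_path([("fiber", "app.example.com", 443), ("dsl-backup", "app.example.com", 443)])` would return the backup path only when the primary probe fails – which is exactly the behavior you want the edge device to apply automatically.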

This brings up the second point, which is bandwidth itself. It’s one thing to have one or two users accessing an application as you evaluate the application/provider, but what happens to bandwidth when you deploy it for everyone? Again, this is where intelligence in your edge security device/firewall is mandatory. Of course, your other option is to buy more bandwidth (again, see my second post).

Critical business applications aside, there are some applications I would not put in the cloud as technology stands today, and some things that I think cloud is great for.

Great in the Cloud

Server data backup

I always recommend that my customers do a bare metal backup to local media of their servers. However, backing up to the cloud allows for a redundant backup solution. But I would only back up critical application data, not the entire server or application. This is due to limitations in effectively doing a bare metal restore without media.

Unified Messaging, Voice Mail, Fax Services

I’m going to bundle all three of these together, since most voice mail systems are really much more than just voice mail these days. And let’s face it, with email, text messaging, and collaboration systems, these are not as critical as they once were. Note that I am NOT talking about actual telephony or PBX switching, just the adjunct messaging services.

Collaboration Systems

This can be inclusive or exclusive of email, depending on your environment, business size, how it’s used, and any business or regulatory requirements. Unfortunately, many businesses don’t realize the effectiveness of chat, instant messaging, and presence applications. Allowing employees to use social media for internal business communications, regardless of how informal it may be, is just a risk that many businesses shouldn’t take.

No Go in the Cloud

Telephony and Call Center

I’m not talking about basic call processing. Basic home telephony features and delivery have been around long enough, and the expectations are low enough in that application, that it has now become acceptable. But a business that has any kind of call volume, and/or advanced call processing features such as handling call center agents, should not SOLELY rely on cloud-based call processing. Yes, they all can forward to a different number in case of an outage. But can you imagine your poor attendant trying to answer even 3 or 4 calls on a cell phone? Or the lost revenue because your customer service agents are trying to do business on an analog phone?

There are some cloud providers that can provide some localized backup call processing. Just make sure, in this case, that sophisticated features are available even when connectivity to the cloud is lost.

Network loaded firmware

I’m not talking about Software Defined Networks. I’m talking about any device that, in order to operate, has to download a firmware image from the cloud. Well, you might say, as long as it has its image, and it remembers it, that’s all I need. Besides, there are not a lot of those types of devices around. Funny thing about power and network outages: they tend to occur together. As for devices that require the cloud to operate, see the next heading.

Operating Systems

Yes, if you haven’t been reading about the grand goals for cloud, just go read about Google’s Chrome OS. Like everything else in cloud, the original concept isn’t new. The original devices were called thin clients, and the server always needed to be on a local LAN due to the amount of data needed for the initial loading of the device. With the bandwidth available now, cloud-based thin clients have already become a reality.

Never say Never…

I’ve covered a bunch of ground in these posts about cloud. I’ve said some not so nice things, and some things where cloud makes sense. But I’ve been a technology consultant for a long time, and I know that as soon as I click the publish button, entropy in the world of technology will change – for someone, somehow. Following the industry and looking into a crystal ball is what I do. Frankly, I’m darn good at it. I know that someone out there will have exceptions to any and all of the items I’ve put into this article. There are some things that I’ve put in here that I expect to change as well, but not in the 18 to 36 month future that I use as my tactical visions for my customers.

Just remember clairvoyance is a subjective thing.


continuing on – Cloud Computing

Why do they want to sell you cloud?

No surprise, it’s not always for your benefit.

In my previous article I explored a little history and came up with my own definition of cloud computing. For those who have not read it, or can’t remember, my definition was as follows:

Cloud Computing –

“An application that is provided to a computing device where the execution of the core application is done separate from the (application) client, and outside the user’s private secure network, instead accessed by a shared public network.

The core application is owned and executed by some other 3rd party. The core application is also developed or configured such that it appears to be a single instance of the application to the user or specific group of users. The application client may be a standards based interface, a custom software application, or embedded firmware based.

The key difference between cloud computing and the client-server model is that in the client-server model, the core application and hardware are maintained (typically) by the user’s organization and located inside the private secure network.”

And checking a few web references, I was pretty close!

So now I will attempt to explain WHY there is such a push to sell everybody Cloud Computing, and to no one’s surprise, it’s all about money. But the money is coming in a different way.

Let me explain –

There has been a lot of talk lately about how PC sales have dropped off significantly. This has ripple effects. No new PCs equals no new copies of Windows sold. How many people, when they purchase a new PC, also purchase a copy of the latest version of Office? And while we’re at it, let’s look at Office itself. What actual new features in Office do most people use? Frankly, I can do 99.9% of everything that I need to do with my copy of Office 2003. This is the same reason that Windows XP is still in use, and frankly, I have yet to have any customer embrace Windows 8. Combine that with the availability of mature and free open-source applications like OpenOffice that are generally equal in features, and it doesn’t take much to look out a few years and see your product becoming a commodity item.

But it’s more than just desktops, isn’t it? Server horsepower, memory, and storage technology have still maintained the Moore’s Law paradigm. However, with efficient hypervisors like VMware and Xen, combined with server operating system kernel improvements by both Microsoft and the open-source world, older hardware and applications are remaining in service longer. After all, if a server-based application works, and you can easily migrate it from one hardware platform to another, why replace it? Or if I have to replace it, is there a low-cost alternative that I can just drop in as a virtual machine? This too has not gone unnoticed in the industry.

As for a real world example, as an Independent Technology Consultant, I have done both scenarios for my customers. I have taken an older application that required (at the time) a dedicated server and migrated the entire system to operate as a virtual machine. I tell my customers that generally, if the hard drive(s) partitions are still good enough to boot and run the application, it can be migrated.

I have also recommended and helped customers evaluate and install open source alternatives to applications where the vendor was mandating an upgrade upon a customer to maintain support. I don’t intend to denigrate the vendor here – I understand why this is done. In most of these types of situations, the customer themselves have ignored the situation for far too long, and have put themselves AND the vendor between a rock and a hard place. This is why engaging or recommending a business technology consultant as part of a long term business relationship is good for customers and vendors (so much for the shameless plug).

Now returning to your regularly scheduled program….

Hardware, and now software, is becoming a commodity item. Virtualization has allowed a large number of server applications to reside on a single efficient server. Storage has become cheap, which allows for large redundant arrays, to the point where the storage itself is virtual in a storage network. But these two revelations are not enough to push a large portion of the industry to adopt a cloud-based model. If these two items alone were the drivers, it would have happened long ago. No, there are two more pieces to this puzzle.

The final technology piece of this puzzle is bandwidth. I’m not talking about local area network bandwidth. The cost of enterprise-grade gigabit switches capable of low-latency, non-blocking operation has been well within reach for some time. I’m talking about last-mile bandwidth, or in layman’s terms, high-speed access.

Twenty years ago, when I told my friends and associates in the telecommunications carrier world that their biggest coming competitor would be the cable companies, they scoffed in derision. When I would try to explain the aggregate bandwidth available in an enclosed RF distribution system, and the concept of Frequency Division Multiplexing I would generally get blank stares. Some of these associates understood the theory, but would comment about the ability of the cable companies to execute and maintain such systems.
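To put rough numbers on that aggregate-bandwidth argument: a North American cable plant divides its downstream RF spectrum into 6 MHz channels, each of which can carry roughly 38 Mb/s of usable data with 256-QAM modulation under DOCSIS. The figures below are illustrative round numbers, not any particular operator’s plant:

```python
# Back-of-the-envelope aggregate capacity of a cable RF plant.
# Assumptions (illustrative, not from any specific system):
#   - downstream spectrum from 54 MHz to 750 MHz
#   - 6 MHz channel width (North American standard)
#   - ~38 Mb/s usable per channel (DOCSIS 256-QAM downstream)
SPECTRUM_LOW_MHZ = 54
SPECTRUM_HIGH_MHZ = 750
CHANNEL_WIDTH_MHZ = 6
USABLE_MBPS_PER_CHANNEL = 38

channels = (SPECTRUM_HIGH_MHZ - SPECTRUM_LOW_MHZ) // CHANNEL_WIDTH_MHZ
aggregate_gbps = channels * USABLE_MBPS_PER_CHANNEL / 1000

print(channels)        # 116 frequency-division channels
print(aggregate_gbps)  # ~4.4 Gb/s of aggregate downstream capacity
```

That is the Frequency Division Multiplexing point in a nutshell: over a hundred independent channels sharing one coax plant, for gigabits of aggregate capacity – long before fiber to the home.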

Fast forward to approximately 5 years ago. Technology proved me correct, and the broadband networks installed by cable companies are able to compete in available last mile bandwidth. However, 1.5 to 3 megabits still really isn’t enough to drive a nearly seamless application experience to a remote host. The last thing a hosting company, or bandwidth providers want is to make promises they can’t keep. Hence, the Application Service Provider or Software as a Service model didn’t work.

Without going into a discussion about changes in head-end and node routing and processing, Fiber to the Premises technology, et cetera, suffice it to say the bandwidth environment has changed. 3 to 5 megabits is the minimum service available, with standard services in excess of ten times that. Network latency is now low enough not to affect the application experience. Broadband providers now have no reason to dissuade a customer from a hosted application. To the contrary, they have a vested interest in an increased sale for a service that requires minimal additional infrastructure to the customer.

With buy-in from bandwidth providers, and hardware and software becoming commodity market items for the large software companies, we reach the last realization of why all the excitement in the industry is centered around cloud-based products. It’s pretty obvious that it’s financial. What is not obvious is that there are two parts to the financial reasoning. Every business person knows that to make money, you have to sell something, and you have to continue selling something. It’s called cash flow. In the example I gave earlier in this article, if my old copy of Microsoft Office continues to work, after the initial sale, Microsoft garners no additional revenue. In contrast, if I purchase a three-year subscription to Office 365 and I don’t renew at the end of the term, I don’t have an office application suite anymore. So, at the minimum, the user has to make a decision on renewing the subscription. While this is a rather simplistic example, expand the concept to, say, a hosted Customer Relationship Management application billed monthly based on the number of users, and the cash-flow incentive is pretty large.
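A toy cash-flow comparison makes the point. The prices here are hypothetical round numbers, not actual Microsoft list prices:

```python
# Vendor revenue from a single customer over a 10-year horizon.
# Hypothetical pricing: $400 one-time perpetual license vs. $100/year subscription.
YEARS = 10
PERPETUAL_PRICE = 400
SUBSCRIPTION_PER_YEAR = 100

perpetual_revenue = PERPETUAL_PRICE            # paid once, then nothing more
subscription_revenue = SUBSCRIPTION_PER_YEAR * YEARS

print(perpetual_revenue)     # 400
print(subscription_revenue)  # 1000 -- and the software stops working if the customer stops paying
```

Even at these made-up prices, the subscription model more than doubles the take per customer, and it converts a one-time sale into a recurring, predictable revenue stream.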

The second not-so-secret financial reason to promote a cloud-based product is ownership. Not of the application, as alluded to above, but of the customer and the data. This isn’t some giant conspiracy theory. This is about inertia and change. Migrating from one application to another is effort (and cost) above and beyond the cost of the application itself. With an in-house application, a customer has access to the raw data. This allows an in-house programmer or a competitor to access and massage the raw data as part of a migration. With a cloud-based application, this is substantially more difficult. There is no guarantee that your company’s data is stored in a separate instance of a database application rather than as just another set of tables in one large database. So, by increasing the cost and potential complexity of migration from a “service-based/hosted application,” the odds of losing a revenue source are reduced, and hence, the hold on the end customer is tighter.

So much for the second part of my overview of Cloud Computing.

In my next post “To Cloud or Not to Cloud“, I’ll speak to how to determine if a cloud solution is right for your business, some of the pitfalls and how to avoid them.


Rural Broadband and RECs

Electric Cooperatives are the best choice to provide Rural Broadband

Here is an interesting post from Bloomberg with regard to rural broadband access. Politics aside, I’ve always wondered why Rural Electrics did not see this as an opportunity to serve their customers. Even without Broadband over Power Lines (BPL), implementing this technology would not be difficult for an REC: Electric Cooperatives own rights of way, serve rural customers, and have the equipment to support an outside cable plant.

Now, I know that in some states, electric cooperatives cannot provide telephone services. However, data services are typically not regulated the same as telephone. Additionally, data services do not have the overhead of 911 services and can be easily managed remotely as part of a services agreement.

Backhaul infrastructure does not even have to reach far and wide through the service area. There is no reason that you could not build a ring arrangement utilizing high-speed microwave backhaul to provide connectivity to a primary and a secondary point of main service. This backhaul network could also be used as part of the Public Safety Broadband Network (FirstNet).

Distribution can also be done wirelessly. The advantage here is that utilities are exempt from planning and zoning, and again, you have access almost everywhere. This is an excellent application for TV White Space technology, and it could even replace an existing SCADA or meter-reading network.

Small municipalities, remote subdivisions, and even, when designed properly, truly rural subscribers could have moderate-speed service.

In larger metropolitan areas, the opportunity is to provide Metro Ethernet services where the local RBOC or telephone company may not be capable of (or interested in) providing service.

The final opportunity is to become a “carrier’s carrier,” either providing dark fiber (not advisable) or bandwidth from areas where access may be limited back to other, larger long-haul providers (Sprint, AT&T, Verizon). These organizations are always looking for alternate routes, particularly into underserved regions.

So, who would be your competition? Obviously, the local telephone company, but there are a few others that may surprise you. A number of small local ISPs have tried to provide wireless service, but they typically fail due to poor engineering and a lack of capital to sustain themselves long enough to show a profit. The other surprising competitors are, first and foremost, the cable TV companies. They have access to broadband within a subset of the region, and many of them have moved to a model of a limited set of head-end equipment distributed out over fiber to their service areas. But their service areas do not always include the smaller communities and larger rural subdivisions, so there is already an established underserved market.

Longer term, competition will most likely come from the wireless cellular companies. With analog cellular service discontinued, the spread of 3G and 4G services will provide adequate bandwidth, but rural areas are typically the last to receive these services. Cooperatives would have an established market in these areas before the cellular companies came to market. As we all know, it’s easier to keep a customer than to acquire a new one.

Lastly, there is satellite. Having worked in two separate VSAT networks, I can personally attest to some of their limitations. First and foremost is bandwidth. Let’s face it, you just can’t go up to the satellite and upgrade it for higher bandwidth or new network-level features the way you can on a terrestrial network. Secondly, the initial cost of equipment is high. Finally, latency can become an issue, particularly with delay-sensitive applications such as streaming audio, video services such as YouTube, and IP phone services such as Vonage, Skype, and MagicJack.
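The latency problem is physics, not engineering. A quick back-of-the-envelope sketch of the best-case round-trip delay through a geostationary satellite (the figures are standard constants, not measurements from any particular VSAT network):

```python
SPEED_OF_LIGHT_KM_S = 299_792.458
GEO_ALTITUDE_KM = 35_786  # geostationary orbit height above the equator


def geo_round_trip_ms(altitude_km: float = GEO_ALTITUDE_KM) -> float:
    """Best-case round-trip propagation delay through one GEO satellite hop.

    Ground -> satellite -> ground, then back again: four traversals of the
    orbital altitude. Real VSAT delay is higher once modem processing,
    queuing, and a non-overhead slant path are added.
    """
    return 4 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000
```

A single GEO hop costs roughly half a second before any equipment delay at all, which is why interactive services like Vonage or Skype suffer on satellite links no matter how much bandwidth is purchased.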

In summary, I think this is an opportunity for Rural Electric Cooperatives to provide a needed service to their customers. A business model could be built to determine how viable it is. Technology is the easy part; in the end, it’s up to the Cooperatives to decide whether they are willing to expand into non-traditional areas of service.


Same Service, Different Day – Cloud Computing

Cloud Computing – Really?

Cloud-based applications aren’t all that new, but the marketing sure makes it seem so.

Yes, we made a play on the old S.S.D.D. acronym. But if you look at technology and timelines, you’ll see why this isn’t such a bad description.

But first, let’s define Cloud Computing. Before I write it down, I want you to know that I didn’t search for “Cloud Computing definition” before starting this post. Let’s see how close I come.

Cloud Computing –

“An application provided to a computing device, where execution of the core application is done separately from the (application) client and outside the user’s private secure network, and which is instead accessed over a shared public network.

The core application is owned and executed by some third party. It is also developed or configured so that it appears to be a single instance of the application to the user or a specific group of users. The application client may be a standards-based interface, a custom software application, or embedded firmware.

The key difference between cloud computing and the client–server model is that in the client–server model, the core application and hardware are maintained (typically) by the user’s organization and located inside the private secure network.”

So, did I come close?

I’ve been around computing and networking for a long time, which is why I still call myself an Independent Telecommunications Consultant, even though I’m really a Technology Consultant. So long that I remember setting up UUCP connections at the blazing speed of 2400 bps. I’ve seen a lot of industry trends and how they focus on buzzwords. I’m not saying there haven’t been great innovations that deserved the marketing hyperbole they received, but I’ve also seen some really creative spins on concepts that have been around for years.

One of the better spins is Cloud Computing.

Cloud computing has been around for decades, just called something else. Let’s build a list of the “key features” of my definition of cloud applications.

    1. Primary application processing is not on the user’s computing device.
    2. The primary application runs outside the private network, on third-party hardware/software.
    3. The application appears as a single instance to the user(s).
    4. The user accesses the application with a standards-based client.

My first example of a cloud application that’s been around for years? Email.

Probably the first ubiquitous application anybody used on the Internet – and in my case, back to when I ran a MajorBBS and AX.25-based packet bulletin boards on VHF radio. (For those who don’t know, packet radio is very similar to SCADA radio systems.)

Unless you’re a total techno-geek, or part of an organization with a user base large enough (or other requirements) to keep your email server in house, your email is probably hosted by your ISP, your web hosting company, or some other email host. You access it using an email client like Outlook, Thunderbird, or even your smartphone, via the standard POP3 or IMAP protocols. The server acts as the mail transfer agent and (generally) stores your email. This is especially true if you access your email via a web browser. And it’s YOUR email server if it’s on a dedicated domain name – at least that’s how it looks to you and your email client. Hmm, sure looks like a cloud app to me.
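To see how thin the client side really is, here is a minimal sketch of checking a hosted mailbox with Python’s standard-library POP3 client. The host name and credentials are hypothetical placeholders, not any real provider:

```python
import poplib


def mailbox_summary(host: str, user: str, password: str) -> tuple[int, int]:
    """Return (message_count, total_bytes) from a hosted POP3 mailbox.

    The client only speaks the standard protocol; all of the storage and
    mail-transfer work happens on the provider's server -- which is exactly
    why hosted email behaves like a cloud application.
    """
    conn = poplib.POP3_SSL(host)  # POP3 over TLS, default port 995
    try:
        conn.user(user)
        conn.pass_(password)
        return conn.stat()  # server reports message count and mailbox size
    finally:
        conn.quit()


# Usage (hypothetical host and credentials):
# count, size = mailbox_summary("pop.example.com", "me@example.com", "secret")
```

Swap in an IMAP client or a web browser and nothing changes on the server side; the “core application” stays with the third party.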

My second example of a cloud app that’s been used for a long time? E-commerce.

Why e-commerce?

Because calling any web page a cloud app would be too obvious. Besides, e-commerce has a lot more going on in the background than the serving of a web page, even one dynamically generated by WordPress, Joomla, or Ruby on Rails. An e-commerce site has a database-driven content system, but it also has a credit card authentication process and a mechanism that hooks back into the merchant’s inventory and accounting systems. Advanced e-commerce sites have API code snippets tied into the merchant’s contact center for customer-service chat applications and click-to-call services. And if you don’t think the e-commerce site collects all these customer metrics for a Customer Relationship Management system to create and track marketing campaigns, you aren’t paying attention to what your browser does on a sophisticated e-commerce site. As someone who provides contact center consulting, I can tell you billions of dollars have been spent on this in the last 20 years.
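The back-end steps described above can be sketched as a simple pipeline. Every function here is a hypothetical stub standing in for a real integration (payment gateway, inventory/ERP system, CRM); the names are mine, not any vendor’s API:

```python
from dataclasses import dataclass


@dataclass
class Order:
    sku: str
    quantity: int
    card_token: str


def authorize_card(token: str) -> bool:
    """Stand-in for a call to a payment gateway."""
    return token.startswith("tok_")


def reserve_inventory(sku: str, qty: int) -> bool:
    """Stand-in for a hook into the merchant's inventory system."""
    return qty > 0


def record_customer_metrics(order: Order) -> None:
    """Stand-in for the CRM event logging behind campaign tracking."""
    pass


def process_order(order: Order) -> dict:
    """The checkout pipeline: authorize, reserve, then log for the CRM."""
    authorized = authorize_card(order.card_token)
    reserved = authorized and reserve_inventory(order.sku, order.quantity)
    record_customer_metrics(order)
    return {"authorized": authorized, "reserved": reserved}
```

The point of the sketch: the shopper sees one web page, but the “single instance” they interact with fans out across several third-party systems, which is the cloud pattern in miniature.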

Most merchants don’t host their own e-commerce sites. Instead, they rely on the hosting company for the “processing” and development, or on a separate web development company to maintain the web application. The user accesses all of this transparently through nothing more than a web browser.

I could go into a lot more, like telnet/ssh, ftp/sftp/scp, and all these “cloud backup” services, which are nothing more than fancy rsync services. If I really wanted to delve deep, I’d get into collaboration services like Basecamp and ProjectPier, or one of the organizations that many consider really made the industry pay attention to “The Cloud” –
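The “fancy rsync” point is easy to demonstrate. Here is a toy version of the core logic behind those backup services: walk a source tree and copy only the files whose contents have changed. (Real rsync is far smarter – it transfers only the changed blocks within a file – so treat this purely as an illustration of the idea.)

```python
import hashlib
import shutil
from pathlib import Path


def _digest(p: Path) -> str:
    """Content fingerprint used to decide whether a file changed."""
    return hashlib.sha256(p.read_bytes()).hexdigest()


def sync(src: Path, dst: Path) -> list[str]:
    """Copy files from src to dst only when their contents differ.

    Returns the relative paths that were actually copied -- on an
    unchanged tree, a second run copies nothing, which is the whole
    appeal of incremental 'cloud backup'.
    """
    copied = []
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or _digest(f) != _digest(target):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied.append(str(f.relative_to(src)))
    return copied
```

Point `dst` at storage somebody else owns, add a schedule and a billing page, and you have the product.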

Cloud computing has gone by a lot of names. The first one I can recall being used is Application Service Provider, or ASP for short. A lot of hosted Microsoft server applications were marketed this way. Software as a Service, or SaaS, had better marketing, and it’s still used on occasion. It rolls off the tongue, and the acronym looks cool in print – SaaS. But in the end, it just didn’t catch on either, and I can give a really good reason – but that will have to wait for the follow-up to this post.

In the end, Cloud Computing got the marketing and the buy-in from multiple vertical segments of the technology industry. In my next post on this, I’ll explain why the industry is pushing cloud computing and what parts might be good for business. More importantly, I’ll tell you where you should “Run, not Walk!” away from cloud applications, particularly in small-business technology situations.

And as I promised, now you can check my definition of Cloud Computing against Wikipedia. Looks like I was pretty close!

Make sure you read my follow-on posts about the cloud, “continuing on – Cloud Computing” and “To Cloud, or Not to Cloud”.

Telecom Consultants – Consulting or selling?

Telecommunications Consultant or a Service Broker/Agent?

Beware companies that call themselves Telecom Consultants but sell you a product or recurring services.

It’s no secret that effectively marketing anything nowadays requires knowledge and effort to make sure you show up properly on the Internet.

This requires review of your competition and how they appear on web search engines. We are no exception, and regularly review who shows up in search results in our industry.

What is so interesting is who calls themselves a Technology Consulting company or a Telecom Consulting Service, and who is really doing consulting. If you search for a consultant, you get a lot of results. Unfortunately, most of them are not for true consulting agencies, but for systems integrators, resellers, bandwidth brokers, or software companies – all promising to save you money by buying something other than consulting!

Usually, they are easy to spot. When you read the web site, you see a link called “Partners”, “Providers”, or sometimes “Affiliates”. The other companies that fall into this category are the staffing firms that provide “IT Consultants” – typically software developers or contract-based technical support personnel. The more difficult ones to identify are the organizations that offer savings analysis, RFP development, or other services, but then offer to sell or broker the service based on the savings you receive.

The secret they don’t tell you is that they also earn revenue from brokering the service. That makes their objectivity suspect – is that the best solution or service for you, or for them?

True telecom expense management consultants can work on a savings-based structure, but they don’t broker the service.

An independent telecommunications consultant, however, provides nothing but knowledge. A better description would be Business Technology Consultant – someone who is paid to think not only about cost savings, but about the long-term goals of the client’s business. They should be vendor-neutral, recommending solutions across vendors and multiple technologies. While they may provide other services like project management or implementation support, their primary service should always be to guide your business technology goals.

Praecom Consulting has written two white papers that address issues around hiring a consultant. “Why Hire a Consultant” gives 10 reasons a business should consider hiring an independent consultant, and “The Truth about IT Certifications” reviews the background of IT vendor certifications and why they are not always the best guide for hiring technology consultants.

Both of these papers are available on our White Papers page.

Phantoms & Ghosts Just In Time For Halloween

A few months ago, a client called with an unusual problem. They were in the process of adding a new radio system to their dispatch operation.

Their radio tower was a few hundred feet from the 911 center and was connected via a private telephone cable that we had engineered a number of years ago. The problem was that they had run out of cable pairs. Their growth over the last few years, as they moved to county-wide dispatch, had consumed all of the spare capacity we had allowed for in our planning.

The radio system was a simple simplex system with tone remote control, and the cost of adding a new telephone cable was not in this year’s budget. The administrator came to us and asked what other alternatives were available. The service shop had offered a number of high-tech solutions, including adding digital multiplexing to the cable to derive the needed circuit; this was also cost-prohibitive. We offered to solve the problem for only a few hundred dollars plus our hourly fees.

In years past, the telephone company derived “phantom circuits” on physical copper cable pairs through the use of transformers (in telephone parlance, repeat coils). This method is still a viable solution: it yields a third audio circuit for every two copper cable pairs, using six 600-ohm center-tapped transformers, which are still readily available.

We made up a drawing and an equipment list for the local radio shop to order and install. When it was completed, we applied the new radio system to the derived phantom circuit, and the system was made operational without the expense of additional cable or multiplex equipment.

The explanation of how this works is as follows:

Both wires of the circuit labeled Circuit A in Figure 1 become one conductor of the phantom circuit. Because the current flow in these two wires is identical and in phase, there is no voltage differential across the pair, and no audio is heard in the Circuit A audio circuit. The two wires of the Circuit B pair work in an identical manner. By taking the center taps of the two transformers and using them as the conductors for the phantom transformer, a third audio channel is derived that is heard in neither the Circuit A nor the Circuit B audio paths. Note: the resistances of the conductors in each of the two side circuits must match within 2 ohms of each other, and ideally within 0.5 ohm. If they do not, the side-circuit audio and phantom-circuit audio will crosstalk.
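The balance requirement is worth checking before committing to the design. A small sketch of the tolerance test, using the 2-ohm and 0.5-ohm figures from the article (the wire-resistance numbers in the comment are illustrative; check your actual cable specifications):

```python
def conductor_balance(r_a_ohms: float, r_b_ohms: float,
                      limit_ohms: float = 2.0,
                      ideal_ohms: float = 0.5) -> str:
    """Classify the resistance match between the two conductors of a side circuit.

    Within 2 ohms is the working limit, under 0.5 ohm is ideal; a worse
    match lets side-circuit and phantom-circuit audio crosstalk.
    """
    imbalance = abs(r_a_ohms - r_b_ohms)
    if imbalance <= ideal_ohms:
        return "ideal"
    if imbalance <= limit_ohms:
        return "acceptable"
    return "will crosstalk"


# Example: 300 ft of 24 AWG copper runs roughly 7.7 ohms per conductor
# (about 25.7 ohms per 1000 ft -- an illustrative figure, not a measurement
# from this installation).
```

In practice this is why the two side-circuit pairs should come from the same cable and gauge; splicing in odd lengths of mismatched wire is what pushes the imbalance past the limit.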

There is even one more circuit that can be derived. By adding another set of transformers, connected between the center tap of the phantom transformer and ground, an additional audio circuit is obtained. This circuit is unbalanced and can be susceptible to audio hum; however, it can be used as an intercom circuit between the radio equipment building and the dispatch center. This gives four separate voice circuits over two copper cable pairs and some inexpensive center-tapped audio transformers.

A PDF version of this article is available for download in the White Papers section of this website.

Copyright 2009 all rights reserved.