Happy Tuesday everybody!
I recently switched my trusty Nokia N900 for the more mainstream (but not as geeky) Samsung Galaxy S3.
Don't get me wrong, I really loved my N900. I used it as my primary cell phone for over three years. But sadly, it was supported by only one carrier where I live, a carrier which was gouging me for a very basic plan (I didn't even have data). So when a new carrier launched locally which offered everything I already had on my plan (plus data) for a little over half the cost, I could no longer ignore the economic argument of switching.
I'd done a fair bit of research on the S3 (and Android in general) before switching as I wanted to make an informed decision. The S3 was appealing since it was on for $99 with a two-year agreement (and recent changes in the law allow people to quit contracts early simply by paying off the device balance, which I think is fair). I was also lucky enough to get the $99 purchase fee waived as a special opening day offer, so I effectively got the phone for free. I also considered the S4, but the few extra features it had over the S3 really didn't seem to justify the cost (economic argument wins again). So far, I've been mostly happy with the S3.
(N900 purists: don't despair! While it may no longer be my primary phone, my N900 shall not go to waste, as it is a truly wonderful device. I'm already working on plans to re-purpose it as a dedicated media player and/or web server).
In any case, this blog post is not about comparing the merits of the N900 vs the Galaxy S3. Instead, it's about a possibly disturbing trend I've noticed since switching over to the S3.
The nature of "apps"
One of the biggest selling points of mobile devices is the size of the "App Store", i.e. how many 3rd-party applications can be added to the device to extend its features.
Apps are, of course, nothing new. Ever since the early days of computing, people have bought computers not just for the software that comes included, but for the software that can be added after the fact. Back in the day, we simply called them "programs" or "software". That became synonymous with "application", which was eventually shortened to just "app".
The distribution of 3rd-party applications has also changed with the introduction of mobile operating systems. Originally, software was distributed on physical media (CDs, floppy disks, etc.) bought at brick-and-mortar stores, which the user inserted into their computer to install the software. With the rise of the Internet, it's become much easier to simply transfer the software electronically and cut out the middleman. Even in the early days of the Internet, there were many sites dedicated to downloadable software. The idea of an app store basically builds this into the operating system itself (Linux distributions, of course, introduced this long ago as the software repository).
Why apps are good
That's all fine and dandy. App stores make it a lot easier for the application developers to get their applications into the hands of customers, while making it easier for customers to get the applications.
Apps also tend to be more tailored to the specific hardware, or platform. This can (although does not necessarily) mean that the software is better tested before being released, and thus less buggy. If a company writes both the software serving the information and the client interpreting it, it can do a better job of making sure the two ends of the protocol work together, and its application will work better since it won't have to rely on potentially buggy third-party clients that detract from its service.
Why apps are bad
In the early days of the Internet, it was well established that the protocols which distributed information over the Internet (HTTP, FTP, POP, etc.) were publicly published and well understood. That meant there existed a common language spoken by both the client and the server. The server used a specific protocol to provide the information, and anybody could read the protocol specification and write a client to interpret and display that information. For example, a web server is written which speaks HTTP, and a client is written which also speaks HTTP. This had two benefits: 1) anybody could write a client to interpret the protocol, and develop it as they saw fit; 2) a single client could interpret many different types of information (e.g. over HTTP) from many sources, without the need for thousands of protocols to be developed. After all, every protocol needs a client. Imagine if every single website on the Internet required a separate web browser, or if a single web browser was thousands of times bigger because it had to support thousands of different protocols. Chaos, I say, chaos.
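To make that concrete, here's a minimal sketch (in Python, using nothing but raw sockets, since the published spec is all you need) of a client that speaks HTTP by hand. The host name "example.com" is just a placeholder, and a real client would of course do far more, but it illustrates the point: the protocol is open, so anybody can implement it.

    # A minimal, hand-rolled HTTP/1.1 client. No HTTP library involved;
    # just the published protocol over a TCP socket.
    import socket

    host = "example.com"  # placeholder host for illustration

    # Open a plain TCP connection to port 80 (the standard HTTP port).
    with socket.create_connection((host, 80)) as sock:
        # Speak the protocol: a request line, a Host header, and a blank line.
        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))

        # Read the server's reply until it closes the connection.
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # The status line and headers are plain text, exactly as the spec promises.
    print(response.split(b"\r\n\r\n", 1)[0].decode("ascii", errors="replace"))

Any server that follows the spec will understand this client, and any client that follows the spec will understand the server. That interchangeability is exactly what a proprietary, app-only protocol gives up.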
And yet, many apps seem to be taking this approach. Even organizations which are served perfectly well by nothing more than a website are creating tailored applications, instead of expecting users to access the site through a web browser.
In other words, apps are encouraging proprietary protocols, which can only be read by their specific clients, rather than protocols for which anybody can write a client. This, in and of itself, isn't a bad thing, for the reasons I mentioned above.
The concerning thing is what happens if a prominent or well-used service decides to drop support for the public protocol (e.g. their website) and only support the proprietary one. Then, in order to access the service, you depend on being able to run their client, which in turn depends on having the platform their client runs on. For people who like to be able to develop their own custom clients, or run custom platforms, this wouldn't be acceptable.
While this trend started with mobile devices, it seems to also be migrating over to more traditional computers. For example, even Windows 8 encourages software to be obtained via an app store rather than accessing the information through a common protocol via a web browser.
It also means that you have to have more software on your machine, which consumes more resources. I do understand creating custom clients (apps) for things which need highly customized protocols (especially ones optimized for speed, e.g. gaming protocols), but there are a lot of organizations out there developing apps for information which, in my opinion, simply doesn't need them and would be just as well served via a web browser. That said, of the ones I'm aware of, none have discontinued the public interface; they've simply added an app-based one to enhance access to the service.
Conclusion
So as long as the app isn't the only way to access the information, we shouldn't have an issue. But maintaining two separate protocols (a public and a proprietary one) is costly and resource consuming, so one can see the argument for switching to just one. And given the benefits of a proprietary protocol and client I mentioned above, it's easy to see why it would be tempting to go that route.
In any case, it's mostly food for thought, but something that I'll continue to be wary of in the future. Hopefully there is room for both private and public protocols to exist side by side. If not, there are ways we can deal with the lack of public protocols, such as virtual machines. I'm also encouraged by the fact that things like Android are based on open source principles, and so tend to be easier to virtualize if necessary, unlike other platforms.