Friday, January 5, 2018

Thoughts on Nintendo Switch

Hi all!

Are you digging the Nintendo Switch yet? Or still just a curious observer?

I got my Switch back in August of 2017, starting out with Breath of the Wild, Mario + Rabbids Kingdom Battle and Sonic Mania :). With a recent influx of games received over the holidays, my collection now includes Super Mario Odyssey, Rime, Just Dance 2018, Cave Story+ and Axiom Verge.

Since I've always been a Nintendo fan, the Switch is unsurprisingly my favorite console to date. It's very close to what I imagined the successor to the Wii U would be: a smaller version of the Wii U GamePad that could also act as a dedicated console.

The Joy-Cons were a wonderful surprise, as I didn't expect them at all. The Wii Remote redefined the gaming controller, and the Joy-Cons have refined the concept to near perfection with a slimmed-down design, analog sticks instead of D-pads, and amazing haptic feedback. Playing Just Dance on the Switch with the Joy-Cons is much nicer than with the clunkier Wii Remotes, thanks to excellent motion tracking (though I do also long for a controller-less design, a la the now-discontinued Kinect) and the ability to use both Joy-Cons at once (even if not for all songs).

Likewise, the Pro Controller is great to use when you want a more classic feel.

Multi-cartridge support?

While the eShop is great, I'm always a fan of physical cartridges over digital versions of a game. I bought Sonic Mania digitally, since I didn't see any evidence of a physical release, but I specifically waited for Axiom Verge on cartridge since I knew one was coming. I may also end up buying Blossom Tales digitally, though I'll wait for a bit in the hopes of a physical release.

It would be nice to have the convenience of eShop games while retaining the cartridge form factor. This would have been prohibitive with optical media, but the Switch's cartridge format presents a unique opportunity. A single cartridge slot makes sense for the Switch in portable mode, but I'd love to see an attachment that connects to the dock and lets you load multiple cartridges at once, so you can easily switch between games on different cartridges. This makes perfect sense for a home console, and the USB ports on the Switch dock would make the connection easy. A separate dock could also be sold with the multiple cartridge ports built in.

A 2DS Player?

I'm disappointed that we have not yet seen any official way to play 3DS games on a TV (in 2D mode), short of resorting to very expensive hacks. Not only do I long to play several 3DS games on my TV (Luigi's Mansion, A Link Between Worlds, Mario Kart 7, etc.), it just feels "right" as a follow-up to Nintendo's previous attachments for playing portable games on your TV (the Super Game Boy for the SNES, and the Game Boy Player for the GameCube).

The Wii U would have seemed the perfect console for such an attachment, since it already had a separate touch screen that allowed asymmetric gameplay. While I've seen some arguments that the resolution of a 3DS game (400x240) would make for a poor experience on a TV, even a simple 4.5x linear scaling (or less, depending on your TV's resolution) should still be playable. And the fact that DS games are available on the Wii U eShop (at 256x192 resolution, no less) rather nullifies that argument (though I have not personally played any). It's entirely possible the resolutions could be improved for the TV in some way.

I suspect one reason a 3DS adapter never materialized for the Wii U was simply the poor sales of the console. If it had seen wider success, maybe I could finally play New Super Mario Bros 2 on my TV.

Therefore, given the strong sales of the Switch, the natural question is whether we could see some way to play 3DS games on a TV via the Switch. It would be a bit more difficult for the Switch than for the Wii U, but certainly possible. I expect we could see some form of "2DS Player" as an accessory for the Switch. I imagine it would take the form of the "lower" section of the 2DS XL, which the cartridge could be inserted into, supplying the Circle Pads and controls as well as the lower touch screen. This could then connect (maybe even wirelessly) to an attachment plugged into the Switch itself, which would then render as the "top" screen in either handheld or docked mode. I can only imagine a case allowing you to use it in handheld mode as the "2DS XXL" :).

There is also the question of whether there is a business case for such a peripheral. A Wii U/3DS adapter may simply have been nixed for fear of it cannibalizing 3DS/2DS sales. There is also the factor of the 3DS eShop potentially being missing from the adapter. But I personally think that at a decent price point (under $100, less than the cost of an original 2DS), such an adapter would be a huge hit and generate a decent profit margin. Even the Game Boy Player was somewhat limited compared to the Game Boy Advance, and the same could be true of a 2DS Player - for example, perhaps the eShop wouldn't be available, requiring games on a physical cartridge only. Some people suspect the 3DS may be near the end of its lifetime, so it would not be crazy to wait until then to release such an adapter.

Or perhaps it's all just a crazy pipe dream, and I'll have to replace my 3DS with a 2DS XL so I can play Samus Returns on a larger screen :). 

What might we see in a Switch 2?

While the Switch is still flying off shelves, rumors are flying that Nintendo is already working on a successor. Given the rapid pace of technology, this would not surprise me in the slightest.

I doubt that the Switch successor will be a brand new concept. Just as the Wii U built on the Wii (adding a touch screen), I suspect the Switch 2 will take the Switch concept and refine it further.

When I first heard about the Switch, one thing that really surprised me was that the dock was wired. Given the precedent set by the Wii U for wireless video transfer, I expected the Switch to connect to the TV wirelessly as well (via a "Chromecast-like" dongle). The physical dock works fine, but it is a bit cumbersome. I suspect the Switch 2 will see some sort of wireless dock feature. A wireless dock would also solve the missing-screen issue for a 2DS Player, using the Switch itself as the lower screen, but there I go dreaming again :).

The Joy-Cons could also use a bit of improvement. I find constantly sliding them on and off their respective rails, depending on which mode you want to use, a bit tiresome. Perhaps we will see Joy-Cons that are a bit more self-contained but still usable in a similar way, maybe with NFC syncing and wireless charging.

4K Gaming? Maybe, but I wouldn't hold my breath. We don't even have Netflix for the Switch yet, so I won't guess on any multimedia features it may have. But for now, I'll happily keep rocking the Switch and am excited for all of the new games coming up.

Cheers!

Friday, December 19, 2014

Random acts of math!

Hi all,

Hope you are having a splendid Friday!

Just a quick note to let you know about a new blog started by yours truly.

Random Acts of Math is a fun little place to share some peculiar or interesting things from the world of math. I wanted some practice writing with the digitizer on my Galaxy Note Pro and thought posting some writings to a blog would be constructive and might help a few people.

Disclaimer: I'm not a math expert, nor do I have a math degree (I did take some math courses in university). Given my amateur status, always consult a professional before using anything you read on this blog for something important!

What about Jay's Desktop?

It hasn't been forgotten! Jay's Desktop will still be a place for me to write about things I find interesting ("a place for my stuff") when the mood strikes me. Think of Random Acts of Math as a "subset" of Jay's Desktop - specifically for math-related items and writing from my tablet :)

Cheers!

Tuesday, March 18, 2014

Ubuntu 12.04 Tips: Clearing out old kernels & SSD Trim

Greetings!

I've recently come across two Ubuntu/Linux tips that I wanted to share (and document). They are particularly important if you run Ubuntu on a machine without much extra hard disk space. In my case, I have a hybrid hard drive with a 24 GB SSD partition and a 750 GB data partition (Ubuntu is installed on the SSD partition for obvious reasons, while most of /home uses symbolic links to the data partition). Over the last year or so, I've noticed my SSD partition steadily filling up, from about 40% to 73%. Fearing I would run out of room soon, I did some research into whether this was merely from system updates and installed software, or something else. I also noticed my machine seemed to be responding much more slowly than it had when I first set it up a year ago, and I tried several things to improve performance without much result. I feared this might be related to the lack of free space on the SSD partition as well, and I was sort of right.
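As an aside, the symbolic-link trick mentioned above can be sketched roughly like this (the paths are illustrative examples, not the ones from my actual setup):

```shell
# Move a large directory off the cramped SSD partition and leave a
# symlink behind, so applications still see the old path.
# SRC and DEST are hypothetical example paths - adjust for your layout.
SRC="$HOME/Videos"     # lives on the small SSD partition
DEST="/data/Videos"    # the big 750 GB data partition
mv "$SRC" "$DEST"
ln -s "$DEST" "$SRC"   # old path now points at the data partition
```

Anything written to the old path afterwards actually lands on the data partition, keeping the SSD lean.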

These tips might be applicable to other Linux distributions as well. As always, use at your own risk!

1) Clearing out old kernel versions

Ubuntu keeps old kernels hanging around after you install new ones via auto-update. They can take up quite a bit of room. There are some good reasons for keeping old kernels around (e.g. reverting if a kernel update breaks something). But it's unlikely you'll need all of them.

Here's a simple little command line to clear out the old ones.

sudo apt-get purge $(dpkg -l linux-{image,headers}-"[0-9]*" | awk '/ii/{print $2}' | grep -ve "$(uname -r | sed -r 's/-[a-z]+//')")

It worked for me, and cleared out a good 6 GB or so of old kernel files, which made a big difference on my 24 GB SSD (I went from 73% full to 45% full). I'd recommend only doing this after you've confirmed that the newest kernel works, and even then, you might want to modify it slightly to keep the second-to-last kernel just in case.
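If you'd like to see what the one-liner would remove before committing, apt-get's simulate flag gives you a dry run. This is just a cautious sketch of the same command:

```shell
# Show the currently running kernel; the purge below keeps anything
# matching this version string.
uname -r

# Dry run (-s): list the old kernel packages apt-get WOULD purge,
# without actually removing anything. Drop the -s once you're happy.
sudo apt-get -s purge $(dpkg -l linux-{image,headers}-"[0-9]*" \
    | awk '/ii/{print $2}' \
    | grep -ve "$(uname -r | sed -r 's/-[a-z]+//')")
```

The sed bit strips the flavor suffix (e.g. "-generic") from the running kernel version, so every package for that version survives the grep filter.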

This also made a big difference on my /boot partition, which is about 500 MB. In fact, this is what led me to find this tip, as I'd been getting error messages on boot about /boot being nearly full and went to investigate. After removing the extra kernels, I'm back down to only 29 MB used on /boot. Nice!

Reference: http://askubuntu.com/questions/89710/how-do-i-free-up-more-space-in-boot

2) SSD Trim

If you use Ubuntu on an SSD drive (as I do), your performance will slow over time unless you periodically run the 'fstrim' command (e.g. in a daily cron job) to send the SSD delete commands for removed files. I definitely noticed a drop in performance over the last year, and had been trying to diagnose why when I came across this little gem.

Since I've started running fstrim, I've definitely noticed an improvement. The first time I ran it, it sent about 7 GB worth of "deletes". Automatic TRIM is supposed to be added in 14.04, but it's not there as of 13.10 (or 12.04, which is what I use).
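For reference, a daily cron job can be as simple as a small script dropped into /etc/cron.daily. This is a minimal sketch; the mount points and log path are examples, so adjust them to your own SSD-backed filesystems:

```shell
#!/bin/sh
# /etc/cron.daily/fstrim - a minimal daily TRIM job (run as root by cron).
# The mount points below are examples; list your SSD-backed ones.
LOG=/var/log/fstrim.log
echo "*** $(date -R) ***" >> "$LOG"
for mount in / /boot; do
    fstrim -v "$mount" >> "$LOG" 2>&1
done
```

Make it executable (chmod +x) and cron will pick it up; the -v flag logs how many bytes were trimmed each day.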

Full instructions and information are in the link below. Enjoy!

Reference: http://www.howtogeek.com/176978/ubuntu-doesnt-trim-ssds-by-default-why-not-and-how-to-enable-it-yourself/

Friday, March 14, 2014

Converting Desktop to Dedicated HTPC

Hello everybody, and good day to you :)

Recently, I converted my Ubuntu 12.04 Desktop/HTPC/DVR into a dedicated media center. It had previously served a dual purpose as both my HTPC and my general-use "day to day" computer. This approach had both pros and cons: for one thing, it was easier to configure and work on the HTPC portions, but I also had to run unsightly cables from the machine to the TV. Having moved recently as well, I found there wasn't enough room in the new living room to use the desktop and HTPC simultaneously.

So, I decided it was time to convert the desktop/HTPC into a dedicated media center!

This was a fun project which I thought I'd share my experiences on for anybody interested.

Requirements

Like any worthwhile project, it's important to set a series of requirements and guidelines. This helps you move towards your goal using milestones and measure your success.

In this case, my requirements were as follows:

  1. Use a horizontal ("desktop") form factor case to house a full-size ATX motherboard, power supply and PCI cards, while being small enough to fit in my entertainment center with sufficient air flow.
  2. The case should be aesthetically pleasing and match the other elements in the center.
  3. Fit a 5.25 inch optical drive, media card reader and front USB ports.
  4. Integrate the IR blaster and receiver.
  5. Set up most of the media center/MythTV controls to be run from a remote control.
  6. Find a wireless keyboard with a built-in touch pad for more fine-tuned control when necessary.
  7. Set up remote access from other computers on the home network.

Equipment

I was able to reuse most of the hardware from my previous tower, so I was able to keep the cost of the project pretty low. The only things I had to purchase new were the case, the optical drive, and the keyboard with a built-in touch pad.

I looked carefully at quite a few websites, but in the end, I turned to my old friend NewEgg.

The case I purchased was an APEVIA black SECC case. It seemed to have the best features overall for a reasonable price. I measured out the spot on my media center to make sure it would fit with a few inches of clearance for air flow. It does stick out the back end of the media center slightly, but since the media center is "kitty-cornered", this isn't noticeable unless you are looking directly from behind.

Overall, I'm very pleased with the APEVIA case. Major pros are the size, form factor, and heat dissipation. Minor cons include a difficult-to-remove front bezel, the memory card reader not sitting flush with the front of the case, a bright power LED, and the included power supply only being 20-pin instead of 24-pin (no deal breakers).

I didn't bother with the built-in power supply, since my motherboard recommends a 24-pin connector. However, it was extremely easy to remove the old power supply and reuse the one from the existing tower (note that if you do use the internal power supply, it's switched off by default, so you'll want to remove the front bezel to turn it on).

The front bezel requires some muscle to take off (then again, I'm not exactly The Hulk). Get your fingers under the lip (coming from the narrow side), brace your other hand against the case, and pull firmly. It should snap and come off.

The power LED is extremely bright. This was easily fixed with a piece of electrical tape; a small part of the LED is still visible for functionality purposes.

At first, I was a little worried about heat. My old tower had a funnel directly over the processor for air flow, while the APEVIA does not. You'll notice two fans on the back which plug into the power supply, as well as side vents and a power supply vent. My processor (an AMD) gets a fair amount of load when playing back or encoding media, but I have yet to see the internal temperature sensors get much above 35-40C, which is great.

The case fans (and my power supply) are quiet enough for an HTPC setting - at least, I don't personally notice them. Obviously it's not as quiet as a fully fan-less system would be.

I use my memory card reader in the external 3.5 inch slot. Strangely, there are two plastic "lips" on either side of the bay which prevent the reader from coming all the way to the front. But it's only recessed about 1/8 of an inch, so it's really no big deal and works fine overall. You might even be able to file the lips down if you have lots of patience.

It took me about an afternoon to transfer all the internal guts (motherboard, processor, power supply, etc.) to the new case. While I've swapped PCI cards, memory and drives many times, this was my first motherboard install from scratch. I found a wonderful guide here that you might like to read if it is your first time as well. A less detailed, but still informative, guide is here. One tip: install the memory card reader and hard drive before the motherboard.

Sure, I could have gone even smaller (micro-ATX), but since I wanted to keep the project cheap, I'm extremely happy with the result overall.


For the optical drive, I debated whether or not I even needed one. My old tower did not have one, and I used an external USB DVD drive when required. I could simply have kept using that, but it detracted from the integrated media center effect, and I was also hoping to upgrade to a Blu-ray burner so I could back up many of the home movies and photographs I've acquired over the years onto a larger media format (I'm also working on transferring stacks of home movie VHS tapes I recently acquired from my mom).

I again purchased the Blu-ray burner from Newegg. So far I've had no problem reading and burning DVDs, but I have yet to try any BD-R or BD-RE discs. I purchased quite a few for a low cost at a local store, so hopefully I'll have a chance to try them soon.

The keyboard with an integrated touch pad was quite interesting; I wasn't even 100% sure they actually existed. But I happened to come across one at a good deal while out on a shopping trip, and couldn't pass it up. It's a Logitech K400, and it works great as an HTPC keyboard. The keys are a little small if you were doing a lot of serious typing, but for an HTPC, it's perfect. The integrated touch pad works great. The receiver is very small; it actually includes a USB extension in case you find it too small! I use one of the front USB ports for it, so I can easily move it to another machine if necessary.

One neat feature of the K400 is its on/off switch, which lets you conserve battery life when it's not in use. Great stuff.

Finally, what media center is complete without a Logitech Harmony? The particular model I chose was the 700, which is pretty fantastic. Of course, I've had the Harmony for a while now (ergo I didn't include it in the cost of the project), but wanted to mention it. It works great with the HVR-1600, if you are wondering. I was able to get it for significantly cheaper than retail price in a "Boxing Day" sale here in Canada, so keep an eye out!

Software

Thankfully, much of the software was already configured and in place from when I used the machine as a tower. I did make a few tweaks though.

The main OS is Ubuntu 12.04 and the "DVR" and Media Center software is MythTV. The IR Blaster works as before. MythTV allows me to play back any recordings, or other media I might have.

People who build MythTV boxes tend to keep them running 24/7. Personally, this isn't my style, since I don't like to waste energy needlessly. But, if the box isn't on, it can't record anything.

If your BIOS supports it, there is a really nifty feature called RTC wake. Basically, you write a Unix timestamp to a special file (under /sys or /proc, depending on your kernel), and the system will power itself on at that time from a completely powered-off state.
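As a rough sketch (the wakealarm path below is what modern kernels expose under /sys; older systems used /proc/acpi/alarm, so check your own machine, and note the wake time here is a made-up example):

```shell
# Schedule the machine to wake itself at a given time, then power off.
# Run as root. The target time is a hypothetical example: 5 minutes
# before an 8pm recording tomorrow.
WAKE_AT=$(date -d 'tomorrow 19:55' +%s)   # Unix time for the wake-up

echo 0 > /sys/class/rtc/rtc0/wakealarm    # clear any previous alarm
echo "$WAKE_AT" > /sys/class/rtc/rtc0/wakealarm
shutdown -h now                           # BIOS powers the box back on at WAKE_AT
```

A MythTV shutdown/wakeup hook can compute the timestamp from the next scheduled recording instead of a fixed time.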

Thursday, March 13, 2014

Linux Conversion for ASUS S56C (Part 2)

Welcome back!

In Part 1, you saw how to create recovery media for Windows 8.

Sadly, booting to that recovery media (or the install disc for an alternate operating system), isn't trivial.

You might be familiar with older-style systems where you could bring up a boot menu by holding down a hot key on boot, as well as a hot key for booting into the BIOS.

For the ASUS S56C (and possibly other machines as well), that key is Esc. Hold it down as the computer is rebooting. (I'm having trouble getting into the boot menu from a cold boot, i.e. from a powered-off state; it only seems to work on a reboot. Not sure why yet.)

However, when the menu comes up, you'll notice that you have only two options: the Windows boot volume, and "Enter Setup" (the BIOS). There are no options to boot from LAN, a USB key, or an optical drive. Very disappointing.

The reason for this is a new form of BIOS called UEFI, and a feature of UEFI known as Secure Boot, which prevents you from starting any unsigned boot loaders when the system loads. Its introduction has led to a lot of controversy, though it has both good and bad aspects (think malware which attacks the boot sector). In my opinion, the most important thing is that it's possible to disable it, or add additional signing keys, so that you can boot a custom operating system. I'll spare you the nitty-gritty details (but I encourage you to read them here), and will hop into how to get this thing to boot from something other than the hard drive.

Once you enter the setup, there are four options we need to be concerned with. Sadly, these options are anything but clearly labeled or explained. It took me some time to find the right combination to get it working, which was one of my motivations for writing this post.

Security Tab / Disable Secure Boot

1) Much of the documentation you'll read on UEFI/Secure Boot will tell you the first step is to disable the "Secure Boot" option in the BIOS. On this system, Secure Boot is called "Secure Boot Control". It's enabled by default, so switch it to "Disabled".

Boot Tab / Enable Legacy BIOS

2) Next, go to the "Boot" tab. The first option you are concerned with here is "Fast Boot". Disable it.
3) The next option is called "Launch CSM". CSM stands for "Compatibility Support Module", and is part of the EFI framework that supports legacy BIOS booting. Change it to "Enabled".
4) When you enable Launch CSM, you'll notice the "Fast Boot" option disappears, and a new option called "Launch PXE OpRom" appears. Enable it.

Now, save your changes and reboot. When holding down Esc this time, you should see new boot options for the optical drive, LAN and USB. Hooray! From here, you should be able to boot to the newly created recovery media.

Friday, March 7, 2014

Linux Conversion for ASUS S56C (Part 1 - Windows 8 Backup)

Hello everyone!

This post was a long time in coming. I started it last April, but got distracted with life and happiness (you know, those non-computer-related things). Anyway, last April I picked up an "Ultrabook-style" laptop to serve as my daily machine (a snazzy ASUS S56C). This blog series chronicles converting the machine to a Linux-based one: the steps I had to go through to do so, as well as any tips and tricks I discovered along the way. Note that I was using Windows 8.0 at the time, so the newer 8.1 update might have fixed some of the issues I ran into.

Backing up Windows 8

I knew from the beginning that this would be a dedicated Linux machine. My first impression on trying to use Windows 8 was basically that it was a pile of insanity served in a bowl of nonsense (I'm not biased or anything, I swear :)). I'm sure I could get used to Windows 8 eventually, if I had to, but I would probably do some tweaks to get a more traditional desktop feel.

But since I knew Windows wouldn't be staying, I didn't invest too much time in that. However, I felt it was important to protect my investment by making sure I had a backup of the pre-installed Windows, in case I ever needed to restore it.

Sadly, even this was more complicated than I'm used to, so I thought it prudent to start the conversion blog with some helpful tips on backing up the original system.

Recovery Discs

I'm very used to making recovery discs on older systems; in fact, I encourage everybody using computers to have some sort of disaster-recovery mechanism in place (including, but not limited to, recovery discs).

Normally, these recovery tools are provided by the computer manufacturer. They often take the form of a "recovery partition" on your hard drive, although in my experience, you can also make recovery discs in case the partition gets removed (or corrupted). These discs usually serve to restore the partition in that case.

That's all fine and dandy, but I had a hard time finding the mechanism to create recovery discs. There didn't seem to be a manufacturer-provided tool, and indeed there wasn't. However, with some Googling, I found how to create a recovery drive from within Windows 8. That's right - you don't seem to be able to use discs any more; instead, you create a recovery flash drive.

To get to the tool, open your "Charms" bar (move the cursor to one of the screen's four corners), and select the "Search" option (it looks like a magnifying glass). In the search bar, type "recovery".

You might be surprised, as I was, to find zero results. That's because the search results are partitioned into categories, and by default you are only searching the "Apps" category. To find the recovery drive tool, you need to search the "Settings" category - do that by clicking the "Settings" button. Personally, I would consider the tool an "App" and not a "Setting", but what do I know? :)

Once you search for "recovery" in Settings, you should find a link called "Create a recovery drive". You'll need a flash drive of at least 16 GB to use it. Thankfully, these aren't too expensive nowadays (around $9.99 CDN here).

After that, follow the instructions to create the recovery drive. NOTE: Sadly, you won't be able to boot the recovery drive without some additional steps, but we'll get to those in Part 2.

Create a System Image

If you are familiar with Windows 7, you know that you can create a complete Windows 7 system image to restore to an alternate drive (for example, if you suffer a hard drive malfunction).

This tool still exists in Windows 8, but it's hidden in an even more obscure location. Like before, go to the search menu and type "recovery" (in the Settings category). Look for an option called "Windows 7 File Recovery". "Windows 7??", I hear you asking. Yes, Windows 7. To me, a tool called "Windows 7 File Recovery" would be some sort of tool for recovering files from Windows 7 (perhaps a backup). And, indeed, it is. But it's also where they decided to hide the System Image tool (used for Windows 8). Strange, but let's continue...

In the Windows 7 File Recovery menu, there are two links on the side:
1) Create a System Image
2) Create a System Repair Disc

We're going to use the first one.

Broken out of the box

Sadly, it turns out the System Image tool is broken out of the box. If you try to run the tool and following the instructions to put the image on DVD, you may get a rather cryptic and unhelpful error message:

"The backup failed. The drive cannot find the sector requested. (0x8007001b)."

The error is actually referring to the fact that the disc isn't formatted. You could probably manually format the disc and it would work fine, but you'd think the tool would do that for you, no?

Well, it does, as long as you install the update to fix it. Installing all the recent updates (which is a good idea before doing a System Image anyway) should repair it, or if you are the impatient type, here's a link to the Knowledge Base article which will help you download the specific patch: http://support.microsoft.com/kb/2779795

After you are up to date, click the link to create the system image and follow the instructions. You can choose an external hard drive, or "one or more DVDs". I opted for the DVD option (on DVD-RWs, which is what led to the formatting error above). On a fresh out-of-the-box system, it took me four DVDs. Don't forget to label them!

Create a system repair disc

Last but not least, you should probably create a system repair disc. The system image you made above is useless without a tool to load it back onto your hard drive, and the repair disc might also come in handy for other reasons. You'll find that one of the options on the system repair disc is to restore the machine from a system image. However, I won't go into those details here.

Note: TechRepublic posted a great article here which goes into far more detail on the steps than what I mentioned above. It suggests that creating a system repair disc and a recovery drive are effectively the same thing, and you probably don't need to do both - although I find it a little strange that the recovery drive takes up most of a 16 GB flash drive while the system repair disc fits on a single DVD. Regardless, I felt more comfortable having both, so I created both.

A few other tips:
1) Feel free to check for (and install) any BIOS updates before removing Windows. There's a handy Windows-based tool for updating your BIOS, though my machine was up to date out of the box. You can probably also do the flash from within the BIOS, but I'm not 100% sure.

2) You might get bugged after a few boots to register your system. This is probably a good idea if you want your manufacturer warranty.

3) You can get to the recovery partition by holding down "F9" on boot. As far as I can tell, the interface is very similar to the Repair Disc and the Recovery drive interface.

That's it for now! In Part 2, I'll show you which BIOS settings you need to change to boot from USB or DVD, either for restoring one of your backups or for installing Linux. All the best!

Tuesday, July 23, 2013

Be wary of a solely app-centric ecosystem

Happy Tuesday everybody!

I recently switched my trusty Nokia N900 for the more mainstream (but not as geeky) Samsung Galaxy S3.

Don't get me wrong, I really loved my N900. I used it as my primary cell phone for over three years. But sadly, it was supported by only one carrier where I live, a carrier which was gouging me for a very basic plan (I didn't even have data). So when a new carrier launched locally offering everything I already had on my plan (plus data) for a little over half the cost, I could no longer ignore the economic argument for switching.

I'd done a fair bit of research on the S3 (and Android in general) before switching, as I wanted to make an informed decision. The S3 was appealing since it was on sale for $99 (with a two-year agreement - however, recent changes in the law allow people to quit contracts early simply by paying off the device balance, which I think is fair). I was also lucky enough to get the $99 purchase fee waived as a special opening-day offer, so I effectively got the phone for free. I also considered the S4, but the few extra features it had over the S3 really didn't seem to justify the cost (the economic argument wins again). So far, I've been mostly happy with the S3.

(N900 purists: don't despair! While it may no longer be my primary phone, my N900 shall not go to waste, as it is a truly wonderful device. I'm already working on plans to re-purpose it as a dedicated media player and/or web server).

In any case, this blog post is not about comparing the merits of the N900 vs the Galaxy S3. Instead, it's about a possibly disturbing trend I've noticed since switching over to the S3.

The nature of "apps"

One of the biggest selling points of mobile devices is the size of the "App Store", i.e. what kind of 3rd-party applications can be added to the device to add more features.

Apps are, of course, nothing new. Ever since the early days of computers, people have been buying them not just for the software that comes included, but for the software which can be added after the fact. Back in the day, we simply called them "programs" or "software". This became synonymous with "application", which was eventually just shortened to "app".

The distribution of 3rd-party applications has changed as well since the introduction of mobile operating systems. Originally, software was produced on physical media (CDs, floppy disks, etc.) and bought at brick-and-mortar stores, and the user put the media into their computer and installed the software. With the rise of the Internet, it became much easier to simply transfer the software electronically and cut out the middleman; even in the early Internet, there were many sites dedicated to downloadable software. The idea of an app store basically builds this into the operating system itself (Linux distributions, of course, long ago introduced this as the software repository).

Why apps are good

That's all fine and dandy. App stores make it a lot easier for the application developers to get their applications into the hands of customers, while making it easier for customers to get the applications.

Apps also tend to be more tailored to the specific hardware or platform. This can (although does not necessarily) mean that the software is better tested before being released, and thus less buggy. If a company writes both the server providing the information and the client interpreting it, they can do a better job of making sure the two sides of the protocol work together, and their application will work better since they won't have to rely on potentially buggy third-party clients that detract from their service.

Why apps are bad

In the early days of the Internet, it was well established that the protocols which distributed information over the Internet (HTTP, FTP, POP, etc.) were publicly published and well understood. That meant there existed a common language spoken by both the client and the server: the server used a specific protocol to provide the information, and anybody could read the protocol specification and write a client to interpret and display that information. For example, a web server is written which speaks HTTP, and a client is written which also speaks HTTP. This had two benefits: 1) anybody could write a client to interpret the protocol, and develop it as they see fit; 2) a single client could interpret many different types of information (e.g. over HTTP) from many sources, without the need for thousands of protocols to be developed. After all, every protocol needs a client. Imagine if every single website on the Internet required a separate web browser, or if a single web browser was thousands of times bigger because it had to support thousands of different protocols. Chaos, I say, chaos.
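To make the point concrete: because HTTP is a published, public protocol, anyone can write a working client from scratch with nothing but the spec and a socket. Here's a minimal sketch in Python (the local test server and the `fetch` helper are my own illustration, not anything from a specific vendor):

```python
# A hand-rolled HTTP client: no browser, no app, just the public protocol.
import socket
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

def fetch(host, port, path="/"):
    """Speak raw HTTP/1.0 over a socket and return the full response text."""
    with socket.create_connection((host, port)) as sock:
        # The request format comes straight from the published HTTP spec.
        request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    return response.decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Spin up a throwaway local server so the example is self-contained.
    server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    reply = fetch("127.0.0.1", server.server_address[1])
    print(reply.splitlines()[0])  # the HTTP status line
    server.shutdown()
```

The client above knows nothing about the server's implementation; both sides just agree on the published protocol, which is exactly the property a proprietary app-only service takes away.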

And yet, many apps seem to be taking this approach. Even organizations which are served well by nothing more than a website are instead creating tailored applications, rather than expecting users to access the site through the web browser.

In other words, apps are encouraging proprietary protocols, which are read by specific clients, instead of clients which can be written by anybody. This, in and of itself, isn't a bad thing, for the reasons I mentioned above.

The concerning thing is what happens if a prominent or well-used service decides to drop support for the public protocol (e.g. their website) and only support the proprietary one. Then, in order to access the service, you have a dependency on being able to access their client, which further depends on having the platform that their client runs on. For people who like to be able to develop their own custom clients, or run custom platforms, this wouldn't be acceptable.

While this trend started with mobile devices, it seems to be migrating over to more traditional computers as well. For example, even Windows 8 encourages software to be obtained via an app store rather than accessing the information through a common protocol via a web browser.

It also means that you have to have more software on your machine, which means you consume more resources. I do understand creating custom clients (apps) for things which need highly customized protocols (especially ones optimized for speed, e.g. gaming protocols), but there are a lot of organizations out there developing apps for information which, in my opinion, simply doesn't need them and would be just as well served via a web browser. However, of the ones that I'm aware of, it's not like they have discontinued the public interface; rather, they've simply added an app-based one to enhance access to the service.

Conclusion

So as long as the app isn't the only way to access the information, we shouldn't have an issue. But maintaining two separate protocols (a public and a proprietary one) is costly and resource-consuming, so one could see the argument for switching to only one. And given the benefits of a proprietary protocol and client I mentioned above, it's easy to see why it would be tempting to go that route.

In any case, it's mostly food for thought, but something that I'll continue to be wary of in the future. Hopefully there is room for both private and public protocols to exist side by side. If not, there are ways we can deal with the lack of public protocols, such as virtual machines. I'm also encouraged by the fact that things like Android are based on open source principles, which tend to make them easier to virtualize if necessary, unlike other platforms.