Thursday, October 6, 2011

How not to solve a Rubik's Cube

The Rubik's Cube is one of my favorite puzzles. Upon first glance at a sufficiently scrambled cube, one might feel an instant sense of dismay. With so many combinations, how could anyone hope to figure out how to unscramble it in a reasonable amount of time?

Many people would give up right then and there. But Rubik's Cubes have been solved countless times by countless people. It's far from an unsolvable problem, and in fact many of us find our fun in the thrill of the challenge.

I have successfully solved a Rubik's Cube on a number of occasions. I'm far from an expert solver, let alone a speed solver; generally it'll take me a couple of hours to solve an arbitrarily scrambled cube, not because it takes that long, but because I can only do it in stages, and spending too long on a single stage gets my brain a little wonky :)

I remember that the first time I solved the cube was right around the time of my thirteenth birthday. Being proud, I continued to play with the solved cube until I accidentally scrambled it again. To my initial frustration, I then had to solve it a second time, which I did. It was promptly buried in the backyard after that.

(Aside: It was buried not because it frustrated me to the point of wanting to get rid of it; we also happened to be making a Time Capsule at the time, and I thought it would make a good addition. That Time Capsule, I believe, is actually due to be unearthed next summer.)

Nevertheless, puzzle solved, I promptly forgot about the device for a number of years. Fast forward to several months ago, when I happened to see one at a local department store and decided to buy it.

I've solved this cube a couple of times since then, but this time things are a little different. Instead of just solving the cube once and forgetting about it, I'm practicing on it over time, trying to become better at solving it so that I can solve any arbitrarily scrambled cube without help.

To be fair, I've never actually solved a cube, from scratch, without some sort of assistance. Even when I solved the first cube all those years ago, I had a sort of guide that came with the cube. Now, this guide was far from a de facto "here are step by step solving instructions", but rather just a general set of "helper" steps to get you where you wanted to go.

These helper steps are called algorithms, and are basically a short (or long) set of instructions for taking the cube from one configuration to another. For example, how to move a block from the bottom to the top without changing a certain set of surrounding cubes (but not all of them). I've always used these to help me, though even with this set of helper steps, solving the cube is still fairly difficult. You still need to get the cube into the right stage to perform the algorithm, and you can make other unwanted changes along the way. Assembling the algorithms together to solve the whole cube is still the hard part.

Some might label using the algorithms as "cheating", but to me, cheating at a cube would be more akin to rearranging the stickers or taking the cube apart (as many, including my original, are capable of being dismantled) and reassembling it correctly. Even expert speed solvers use these algorithms, likely taught to them by a book or by someone else, or perhaps worked out from scratch in some cases. Using algorithms is just a natural part of learning how to solve the cube. I suppose you could invest many hours, in addition to actual solving time, in developing the algorithms yourself, but maybe it's just the software engineer in me, who has learned to use what's already been developed and not waste time reinventing the wheel :)

But in fact, this blog post is not about how to solve a Rubik's Cube. There are many other sites for that, one of my favorites is linked here: http://www.chessandpoker.com/rubiks-cube-solution.html

No, rather this post is about how *not* to solve a Rubik's Cube. But wait, you cry, why would anyone care about how not to solve the cube?

The answer is that there is an oft-repeated, but misguided, approach people use when trying to solve the cube. When someone first starts, they might randomly twist sides in the hope that they'll get somewhere. That procedure is doomed to fail (unless maybe you happen to be a computer program with dozens of gigabytes of memory). The deeper misconception among novice cubers, though, is that there exists a fixed sequence of moves (moves, which I'll define below) that will solve the cube regardless of its starting configuration (we'll call this the Magic Sequence). A cuber with this mindset will spend all of his or her time searching for the Magic Sequence, without really paying attention to the mechanics of the cube.

In fact, searching for the Magic Sequence is at least as doomed to fail as twisting the cube randomly (in fact, more so). Random twists could conceivably solve the cube eventually, even if it takes eons. But a single fixed sequence that solves the cube regardless of its starting configuration is simply impossible, as I will demonstrate here. And so it's important to explain why this method can't work, and why a cuber trying to use it will ultimately never succeed.

First, let's consider what I mean by a "move". Among cubers, "moves" can have a variety of meanings, anything from a single turn to an entire algorithm. However, a common definition for a move often used by newer cubers and people searching for the Magic Sequence is a single turn, either clockwise or counter clockwise, of one of the cube's six sides (sometimes called a Quarter Turn by cubers).

Now, if you'll consider it, what is the cube actually made of? In fact, a Rubik's Cube is not a single cube, but rather 26 smaller cubes (9 on each of the two outer layers, plus 8 forming a ring around the middle layer; there is no dead-center cube, as that's where the rotation mechanism lives).

We can easily enumerate each of these smaller cubes from 1 to 26 and keep track of their orientation as we work the cube. Keep in mind that this enumeration would exist regardless of the color of each "face" of the cube.



Thus, from any starting configuration, there are only 12 possible unique moves, by this definition, that we can make to get the cube into a new configuration (listed below, with a small code sketch after the list):

1. Top layer goes clockwise or counter clockwise
2. Bottom layer goes clockwise or counter clockwise
3. Right layer goes clockwise or counter clockwise
4. Left layer goes clockwise or counter clockwise
5. Front layer goes clockwise or counter clockwise
6. Back layer goes clockwise or counter clockwise
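For the programmers in the audience, here's a tiny illustrative sketch in Python that enumerates these twelve quarter turns using the common cubing notation, where U, D, R, L, F and B name the six faces and an apostrophe marks a counter-clockwise turn.

```python
# The six faces in common cube notation, and the two turn directions.
FACES = ["U", "D", "R", "L", "F", "B"]   # Up, Down, Right, Left, Front, Back
DIRECTIONS = ["", "'"]                   # clockwise, counter-clockwise (prime)

# Every possible quarter-turn "move" by the definition above.
MOVES = [face + direction for face in FACES for direction in DIRECTIONS]

print(MOVES)       # ['U', "U'", 'D', "D'", 'R', "R'", 'L', "L'", 'F', "F'", 'B', "B'"]
print(len(MOVES))  # 12
```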

Here is an illustration of 6 of the 12 possible moves. The other 6 are just mirrors on the non-visible sides.







Some people might try to argue that there are more moves than this. For example, turning the bottom two layers clockwise. But this is really equivalent to turning the top layer counter-clockwise. Likewise, turning the right layer clockwise twice is really just two applications of the same unique move. Each move will have a fixed, predetermined effect on the cube that can't be altered.

A solved cube, with all sides showing the same color, corresponds to one, and only one, possible configuration of the individual cubes. The cube can only exist in a single configuration at a time; it cannot exist in two different states simultaneously. Agreed? This is Rubik's cube, not Schrödinger's cube, after all.

Let's say that I start with the cube in a given configuration, called Configuration A. I then perform a sequence of moves on the cube (where a sequence is an ordered list of moves drawn from the twelve listed above). We'll call this Sequence A. Upon applying Sequence A to Configuration A, we end up in another Configuration B (which could possibly be equal to Configuration A). The important thing to remember is that every time I start the cube in Configuration A and apply Sequence A, I'll *always* end up in Configuration B. Likewise, if I start the cube in Configuration B and perform Sequence A backwards, I'll always end up with Configuration A. This is true for all possible configurations and sequences of the cube. There is no sequence I can perform on a configuration such that applying the same sequence to the same configuration will result in multiple different configurations. Make sense? The diagram below shows the symmetric nature of applying a sequence to a configuration.



Given that, to disprove the "Magic Sequence", we can use what's called a proof by contradiction. The basic premise is that if we can show the existence of a Magic Sequence would lead to a contradiction, given the rules of the cube we've described (the cube cannot exist in more than one state at once, and configuration + sequence = a single new configuration), then it can't exist.

Let's call a cube's solved state Configuration Z.
First, let's imagine that the "Magic Sequence" did exist. This would mean there exists a sequence such that I can apply it to Configuration A and end up with Configuration Z, and apply the identical sequence to Configuration B and also end up with Configuration Z (where A and B are *not* the same configuration, and stand in for any two configurations the cube could be in).

But then, what would happen to Configuration Z if I performed the "Magic Sequence" backwards? Remember that we stated the cube could only exist in one configuration at any one time.

If the "Magic Sequence" existed, then I'd be able to perform it backwards from the solved state (Configuration Z) and end up with *both* Configuration A AND Configuration B, which is impossible, since the cube can only be in one configuration at a time. In other words, I can't apply the "Magic Sequence" to Configuration Z, and wind up with A, then later apply the same sequence to Configuration Z, and wind up with B, since a fixed sequence of moves will always have the same result on the cube. Therefore, the "Magic Sequence" does not exist.



Make sense?

Now, there are plenty of other ways to solve a cube. Many of them involve the application of multiple algorithms in order, which is sort of like a Magic Sequence. But the important distinction is that they involve decision-making, e.g. applying a different algorithm next based on the result of the last. If a "Magic Sequence" like the one disproved above existed, you'd be able to solve the cube without making any decisions at all.

Feel free to leave any questions or comments you might have below and I'll do my best to answer them!

Happy cubing!

Tuesday, July 12, 2011

Why the Wii U should support GameCube

Recently announced was Nintendo's next generation console: the Wii U.

A lot of buzz has been generated around the new console, and with good reason.

Being a long-time fan of Nintendo, I am highly intrigued by the new console. It sports an array of new features, including an innovative touch screen controller and high definition game support.

The funny thing is, I remember reading an article a number of years back that suggested a touch screen controller was among the first prototypes for the Wii, but that they decided against it for fear of imitating the DS too much. I have no idea if that's true or not, but if so, this certainly seems an interesting turn of events!

I'm glad to hear that the Wii U will support Wii games and all current Wii-based controllers (classic controller, balance board, etc.). Although the rumored concept of only one Wii U Tablet controller per console sounds limiting, Nintendo might just figure out a way to make it work (they could also change their mind on this, as it's already been suggested that you could bring your controller over to a friend's house).

But what I really want to talk about is legacy game support on the Wii U. My understanding is that the Wii's support for GameCube games comes from having an onboard GameCube processor, to which the GC controller ports and GC Memory Card slots are wired (they also have an interrupt to the Wii processor so they can be used by Wii software as well).

My experience when playing a GC game on the Wii has been just that: it basically acts just like a GameCube and is indistinguishable from the original console. It bypasses the Wii processor altogether.

It's easy to see why it's tempting to remove GC support: removing the processor leaves more room in the chassis for newer, more powerful processors, and makes the console cheaper by not having to manufacture a second processor. Plus, the GameCube console is over 10 years old and approaching obsolescence.

But I think there's a very good case for providing GC support on the Wii U, in some form or another.

When the Wii came out with GameCube support, most people were pretty happy that they could ditch their GC with ease; why have two consoles when you can have just one? But then came the real kicker: there was also going to be a Virtual Console which would allow you to play games from the NES, SNES and N64 consoles. I think this, more than GameCube support, really set our imaginations aflame: any Nintendo fan will tell you how much they love the charm and nostalgia of the classic Nintendo games. Playing our favorite classics on a modern, updated console, without having to blow on the cartridge 50 times? Cue the drool in 3....2....1....

But Nintendo did more than they realized when they announced the Virtual Console: they set a precedent. For once, a company was saying "We stand by our old work; we want you to enjoy it as you always did, now and for years to come." It wasn't just that you could play Nintendo Wii games; you could play nearly the entire Nintendo home console library: NES+SNES+N64+GC+Wii. From that point on, I was left with a potentially dangerous expectation: consoles should only add functionality onto their predecessors and never remove functionality (or at least, remove the minimum possible, only where it would significantly conflict with the design of the next console: e.g. the GameBoy Player on GC).

As a result, I imagined the Wii as less of a gaming console and more of a gaming hub: a device that could support many different types of games from many different types of systems. As a fan of removing redundancy, I found this a very exciting prospect, and a trend I personally expected to continue into future consoles such as the Wii U.

So then another question presents itself: Will the Wii U support Virtual Console titles?

Although no official word on this has been given, I'll be fairly shocked if the answer is anything other than "Yes."

If my assumption is correct, then let's look at the list of consoles the Wii U would support:

NES
SNES
N64
Wii
Wii U

with one glaring omission: the GameCube!

Why would you develop a console that A) supports the 3 oldest consoles, B) supports the 2 newest consoles, but C) simply ignores the 4th console in the middle?

It doesn't make a heck of a lot of sense to me. Thus, I think the Wii U should absolutely add some level of support for GameCube games, if nothing else, so that it doesn't feel like they are pointlessly skipping a console in their gaming library (supporting all consoles except one).

Now, I'm not saying the Wii U has to support GC games in the same way the Wii does (using a GC chip, with onboard GC controller and Memory Card slots).

But there are lots of other ways they could add GC support. One of the most obvious is to include GameCube games on the Virtual Console. As for controllers, perhaps sell an optional adapter to connect a GC controller via USB, and run the GameCube games under emulation (I'm going to assume the Wii U's processor would be powerful enough to run GC emulation software).

Another option is to sell some sort of "mini-GameCube" device for a cheap price that includes GC controller ports, a disc reader and memory card slots and connects to the Wii U, but is otherwise an empty shell that relies on the Wii U for processing: a "GameCube Player" of sorts, if you will.

There are hybrid approaches as well: allow the Wii U to read GC discs natively, but offer the controller ports only via an optional add-on, and run the games through emulation. Heck, I'm sure you could figure out a way to connect a GC controller to a Wii U controller or even a Wii Remote!

Point is, there are lots of different ways to do it, and no serious excuse for not doing it. The absolute minimum I would like to see is GC games on the Virtual Console and support for playing them via the Wii's Classic Controller (which we know to be supported). But this presents a problem: as much as I am a fan of eliminating the need for multiple devices (see my previous post on the subject), I am not a fan of needing to purchase the same things multiple times. In fact, I fully expect a migration tool for porting your Virtual Console purchases from the Wii to the Wii U (again, Nintendo set a precedent for this by announcing you could transfer DSi games to a 3DS).

What I think would be a great option for Nintendo and consumers is a "trade-in service" where people could send in older games and receive a credit to get the same game (or an equivalent in Nintendo Points) on the Wii (U) Shop Channel. You could do this for GC discs, N64 cartridges, etc. Nintendo could recycle the components to make new games and equipment, while the consumer would not have to purchase the same thing twice. I really wanted to see this materialize with the Wii Virtual Console, but it never did. However, if more of Nintendo's legacy games continue to wind up on the Virtual Console, I feel it's becoming more of a necessity.

So, not saying any of this is going to happen, but it's a possibility. Would lack of GC support altogether stop me from purchasing a Wii U? Probably not, but it gives me less of an incentive knowing that I'd need to hold onto my Wii (or GameCube) as well. Obviously Nintendo is a company first, with a revenue line to think about, without which they can't produce new consoles and games. But I seriously hope Nintendo continues to put their customers first, which I feel to date they have been doing a good job of. In the end, it will be more profitable for them since it will make many of us, including myself, more likely to purchase their games and equipment.

It makes no sense for the Wii's successor to not support Virtual Console games and it makes even less sense for Nintendo to simply ignore GameCube in the list of consoles the Wii U supports.

Let's hope Nintendo sees it that way too.

Sunday, June 19, 2011

Project: Starry Expanse

Happy Sunday everyone!

When I was a kid, I was very interested in the "Myst" franchise. It was a very challenging game: you needed to figure out what to do at each step of the game with no real direction. My friend and I played it for about two months before finally getting to the end.

It was followed by a sequel, called Riven, which if anything was 10 times more challenging.

There were three more sequels to the original, made by different companies. They were fun games nonetheless, but never really had the charm and appeal of the original Myst and Riven.

Myst and Riven were "semi-3D": basically a series of still images that you clicked on to go to the next still. You could click to move forwards, up, down, left or right, giving you an artificial feeling of being in a 3D environment. Given the limited technology at the time (circa 1993), this was pretty groundbreaking.

Later, a remake of the original was created called "realMyst". Gameplay was nearly identical to the first, but instead of still images, the game was recreated with a brilliant 3D engine allowing a full range of movement across the environment, day/night effects, weather effects and more.

It was very exciting, and playing realMyst gave me a wonderful feeling of nostalgia and intrigue. "This was the way Myst was meant to be played..." I said to myself. The idea had simply been ahead of the technology at the time.

But my thoughts immediately jumped to the next most logical question: Would there ever be a realRiven?

Myst was later re-released to multiple platforms including the DS, PSP and iPhone.

Unfortunately, they seemed to only like releasing the good old-fashioned "still-based" navigation system, rather than an immersive 3D environment. Personally, I never understood this choice, as I felt the immersive environment would have a better chance of bringing in new fans.

Logically, if they did that, realRiven could follow (and possibly games 3 and 4 as well; 5 was already built on a full 3D environment).

But 3D remakes of the other games never surfaced. Why? I cannot say, but that hasn't stopped a group of fans from recreating the sequel in 3D themselves!

The project is called Starry Expanse. If you're a Myst fan you'll get the reference. If not, go get yourself a copy of realMyst (I believe it's on Steam), beat it, and then you'll get the reference.

This is a very neat project! Recreating a game in real time 3D is no small feat, so I wish them the best of luck. You can even make a donation on their site (I did) and get your name in the Credits!

Check out their site for more info. It's a really great project and will hopefully help open Myst & Riven to a new world of fans.

Take care!

Saturday, February 19, 2011

The Length of a Calendar Day

Happy 2011! Hope you all have a great year.

Just for the record, I don't have any New Years Resolutions. I have minor modifications to my behavior that I intended to implement coincidentally on January 1st. That's totally different. Right? :)

But just how long *is* January 1st? While down visiting my friends and family over the holidays, this question was posed to me.

Now of course, the answer seems completely obvious. 24 hours. Right? Some might argue the day isn't quite 24 hours, given various rotation periods, or only considering hours of "daylight" etc. But in general, the length of day being defined by the Earth's rotational period is 24 hours. But that wasn't the topic of discussion, rather, this one referred specifically to timezones.

That is, given that different parts of the planet can register the calendar being different days at the same time, how long does *somewhere* on the planet register a specific day?

Take January 1st for instance. I celebrated New Years at 12:00 AM January 1st, Local time. My particular time zone is UTC-4 (aka GMT-4), so by the time it was January 1st for me, it was already January 1st for more than half the planet. So how long, in hours, did January 1st last from the first moment somewhere on Earth registered it, until the last place on Earth clicked over to January 2nd?

The Simple Answer

Let's imagine we break the Earth symmetrically by time zone:

In this simplified model, the day would first "dawn" in UTC+12. It would turn 12:00 AM January 1st, 2011 first in UTC+12, then one hour later in each successive zone across the planet. The interesting thing is what happens when you go the *other* direction, that is, directly from UTC+12 to UTC-12. This boundary is called the International Date Line and works like so: if you cross it traveling west, the time remains the same but the date increments by a day. If you cross it going east, the time still remains the same, but the date decrements by a day. Thus, if it's 12:00 AM on January 1st in UTC+12, then it's 12:00 AM on December 31st in UTC-12. Using this, we can extrapolate a simple chart like so:


With this model, UTC+12 would be the first place it becomes January 1st, and the day would last there for 24 hours, during which entire time it would still be December 31st in UTC-12. Once UTC+12 clicks over to January 2nd, the International Date Line tells us that UTC-12 is just beginning January 1st, so it has another 24 hours as January 1st. The various time zones in between would have their day somewhere between the two extremes, but we don't need to consider the overlapping periods to answer this question. Thus, in this sense, it would be January 1st somewhere on the planet for a total time range of 48 hours. Kinda neat, huh?
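If you prefer to see that arithmetic spelled out, here's a minimal sketch in Python, assuming the simplified model above: the day begins at 12:00 AM local time in the earliest zone and ends 24 hours after it begins in the latest zone, so in UTC terms the span is 24 hours plus the gap between the two offsets.

```python
# Simplified model: one-hour zones running from UTC+12 down to UTC-12.
earliest_offset = +12   # the first zone to reach January 1st
latest_offset = -12     # the last zone to leave January 1st

# Total span during which it is January 1st *somewhere*:
# 24 hours for the day itself, plus the gap between the two offsets.
span_hours = 24 + (earliest_offset - latest_offset)
print(span_hours)  # 48
```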

But not quite...

The simplified model postulated above helps us to think about the problem, but isn't quite the solution. It implies that there are 24 equally distributed time zones (not including UTC), which isn't quite the case.

Granted, there are timezones that are not offset on the hour, UTC -3:30 (Newfoundland Time) for example. But these don't matter for this analysis, since they are still within the UTC+12 and UTC-12 extremes.

The problem is that UTC+/-12 aren't necessarily the extremes. In fact, there are two additional time zones that need to be considered: UTC+13 and UTC+14. So indeed, UTC+14, rather than UTC+12, is the first place the day clicks over.

What does this imply for our analysis? Well, think about it this way: we've established that there's a total 48-hour window between the moment the day dawns in UTC+12 and the moment it ticks over to the next day in UTC-12.

But by the time the day clicks over in UTC+12, it's already been that day in UTC+14 for two hours. And so, we must add these two hours to the 48-hour window, for a grand total of 50 hours as the length of a calendar day.
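Here's the same calculation sketched with Python's datetime module, using the real extremes (fixed offsets only, ignoring DST for now): the first instant of January 1st anywhere is midnight in UTC+14, and the last instant is when UTC-12 finally rolls over to January 2nd.

```python
from datetime import datetime, timedelta, timezone

# Real extremes: UTC+14 is the first zone to reach January 1st,
# UTC-12 is the last one to finish it.
first_zone = timezone(timedelta(hours=+14))
last_zone = timezone(timedelta(hours=-12))

# The first instant of January 1st anywhere, and the instant the last zone
# rolls over to January 2nd, both converted to UTC.
first_moment = datetime(2011, 1, 1, 0, 0, tzinfo=first_zone).astimezone(timezone.utc)
last_moment = datetime(2011, 1, 2, 0, 0, tzinfo=last_zone).astimezone(timezone.utc)

print(first_moment)   # 2010-12-31 10:00:00+00:00
print(last_moment)    # 2011-01-02 12:00:00+00:00
print((last_moment - first_moment) / timedelta(hours=1))   # 50.0
```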

Wow, 50 hours!

Indeed! Quite a bit more than the usual 24. However, there's still one other aspect we haven't considered yet: Daylight savings time.

The analysis presented above assumes that all timezones (by which I mean UTC offsets) remain the same throughout the year. This is of course not true. I'm currently in my local timezone of UTC-4; however, for a number of months throughout the year while DST is in effect, I'll in fact be at UTC-3 instead.

Daylight savings generally follows a simple rule: fall back, spring ahead. Thus, during the fall/winter months you are in your "normal" timezone, but during DST you are one hour further ahead than usual. This means that if you are west of the Prime Meridian, you get one hour closer to UTC (-4 becomes -3), and if you are east of the Prime Meridian, you get one hour farther away (+4 becomes +5).

How does this affect our analysis? Let's consider what this would do at the extremes we've established:

Standard | DST Offset
---------|-----------
UTC + 14 | UTC + 15
UTC - 12 | UTC - 11

This might seem not to affect our count at all, and in this case you'd be correct. We might be gaining an hour by moving to UTC+15, but we're losing an hour off the other end, keeping the total at our previous count of 50 hours.

The interesting thing is that, while DST offsets us by an hour, not all places observe it. This creates a few interesting scenarios.

For example, presume that there exists a place normally in UTC+14 that does observe daylight savings time (putting it at UTC+15), while the places in UTC-12 do not observe it. The resulting effect would be that the UTC+15 and UTC-12 offsets would be in effect simultaneously, resulting in a 51-hour day.

Likewise, the opposite could be true. Imagine that the places in UTC+14 do not observe daylight savings time, while every place in UTC-12 does (moving them to UTC-11). The extremes would then run from UTC+14 down to UTC-11 simultaneously, resulting in a 49-hour day.

My research so far, however, indicates that the only place I'm aware of using UTC+14, a country called Kiribati, uses UTC+14 all year round, and so never offsets to UTC+15. So unless I'm incorrect on that, or it changes in the future, we can rule out a 51-hour day.

In addition, according to Wikipedia with regards to UTC-12, there are in fact no human habitations in this timezone. Instead, the timezone is nautical only, observed by ocean ships which happen to be crossing through it. And I highly doubt that they bother to observe DST, or even if some did, that all would.

Thus, year round we likely have a UTC-12 zone that is the last part of the planet to observe a given calendar day.

So even though DST could affect our calendar-day length by an hour, due to the decisions of local (or non-existent) jurisdictions, it does not. At least for now, the length of calendar day observance remains at 50 hours, year round, regardless of local DST offsets.

Why Timezones?

To wrap up this post, a short discussion on why we use timezones at all. If you are Canadian (or even just interested in the subject) you are likely familiar with the Sir Sandford Fleming Heritage Minute. Fleming was a railway engineer who was fed up with the ridiculous "minute" offsets between the local times of various cities. This was because each location liked to have noon be the time when the sun was "overhead", which made keeping track of time as one traveled by rail very inconvenient.

So Fleming came up with a different idea: standardized time. By dividing the world into roughly 24 equally sized areas, it became far, far easier to communicate times across the world and have them be relevant and make sense.

Although it took him some time to get it widely accepted and adopted, Fleming's invention of Standard time was nothing short of genius. It was likely as important as the railway and telegraph themselves in modernizing the industrialized world.

But Fleming only reduced the number of timezones. He shrank the number by a pretty significant amount, but didn't eliminate them completely. Why?

To me, the elimination of timezones would seem to be the next logical step. As a computer programmer, I can tell you that writing and dealing with software that needs to operate in different timezones can be challenging. You always need to be conscious of what time you are working with, is it local or UTC, how much does it need to be offset by, is it daylight savings time or not, etc. Doing comparisons can also be tricky, and since various programmers do things in different ways, sometimes cooperating between different programs and programmers just complicates things more.

Imagine if, instead of having timezones, everyone on the planet simply used a single timezone, say UTC. The benefit would be that there would no longer be any ambiguity when communicating times across the planet. March 1st at 12:00 pm would be March 1st at 12:00 pm everywhere.

Locally, things might seem a little odd at first. For example, people in Greenwich might go to work at 9 am and get home at 5 pm, while people in Halifax might go to work at 1 pm and get home at 9 pm.

But what real difference would that make? Sunrise would just "happen" to be at 10 am instead of 6 am, but so what? I dare say that if such a system were to be adopted, it would probably only take a generation or so, perhaps less, for everyone to become accustomed to it. I'm sure our biological clocks would adjust, same as they did for Standard time. You would still have to do some mental calculation offsets on occasion for specific things, but probably not as many.

Just look at the transition of most countries (US notwithstanding) to adopt Metric over Imperial measurements. I, for one, certainly can't think or estimate in miles or quarts. I'm not too bad with inches and feet, but only because when I used to help my Dad with upholstery or construction, he made sure I read the measuring tape in inches. I do know my own height in centimeters and mass in kilograms, and generally get pretty confused dealing with Fahrenheit.

But those are just my personal preferences. Getting international cooperation on such a scheme would prove very difficult; just look at the opposition Mr. Fleming ran into. And countries are far less willing to adopt such things even today.

Nevertheless, I think it's a neat idea with a number of benefits, even if it's never actually adopted. What about you? What might be some other pros to such an approach? What might be the cons and downsides of it?

Leave your thoughts and opinions in the comments below, and feel free to correct me if you feel I made a mistake in my calculations on the length of a calendar day.

Best wishes, and take care!

Thursday, February 17, 2011

The future of computational devices?

Imagine, for a moment, the computer you're reading this post on.

What type of computer is it? Is it a traditional desktop? A notebook or a netbook? What about a tablet or a smart phone?

Your options for what you use to access information are continually growing; even now they are several times greater than they were just a few years ago.

If you are on a traditional computer, say a desktop, what kind of specifications might it have?

A modern 2010-era computer, sold for a reasonable price, might have a set of specifications like this:

* Dual-core processor
* 500 GB Hard drive Storage
* 4 GB of System Memory
* 512 MB dedicated graphics card with 3D acceleration
* Multi-channel sound system

What sized box is your tower? Is it a larger, standard ATX-sized unit, or maybe one of the small form factors?

Whatever the size, I want you to imagine taking that desktop and shrinking it... smaller and smaller, until you have a computer with similar specifications but a form factor the size of your phone.

Sound crazy? Well, consider my own smart phone, a Nokia N900, with the following specifications:

* 600 MHz ARM Cortex-A8 CPU
* 256 MB System Memory
* 32 GB Storage
* PowerVR SGX 530 GPU supporting OpenGL ES 2.0
* Stereo sound system

Not too bad. In fact, as little as a decade ago, those specs would probably have been fairly impressive in that desktop you're on right now, wouldn't they?

Is it really that crazy that the technology in smart phones could approach the level of desktops? I don't think so.

Consider laptops. Not that long ago, people who chose laptops for the portability advantages they offered were forced to sacrifice the performance of a desktop. This is no longer true, as laptops have reached complete parity with desktops in terms of specifications and abilities.

Those of us who continue to choose desktops today mostly do it for form factor reasons: for example, my high definition 22 inch display, full keyboard with number pad, and mouse. Of course, these things can also be added to a laptop. Other reasons to choose a desktop over a laptop might include, as in my case, use as a DVR (it's more easily left permanently connected to my TV and cable box) or the ability to have multiple disc drives and the like.

Nevertheless, choosing a desktop today is more about form factor and preference than specifications.

In fact, I dare say that while smart phones, netbooks and tablets continue to make leaps and bounds each year in the amount of power they offer, the traditional computing paradigm of desktops and laptops seems to have plateaued.

For example, why don't we commonly go to our local computer stores and see 8 GHz processors and computers with 48 gigabytes of memory? Are we finally seeing a plateau in Moore's law? Or is the slowdown more for marketing and business purposes?

In fact, one of the problems with sticking more and more transistors on a chip is that the damn things get too bloody hot. Who needs an Infinity-GHZ processor when you need to burn thousands of watts of power just to keep it cool?

Why, even the modest Athlon chips in my two previous laptops could get into the very uncomfortable (and dangerous) 80-90 degrees centigrade range. Had they kept with the numbering convention, I'm sure the slogan for the Pentium 5 would have been, "Now, you can cook toast on it too!". On the other hand, the Athlon X2 250 processor in my desktop rarely gets above 30C, nor does the Intel Core Duo in my laptop.

But the fact of the matter is that we don't need ever-increasing clock rates and memory to be happy. In fact, I remember reading an article several years back (that I unfortunately can't source) suggesting that the major chip manufacturers such as Intel and AMD would soon stop trying to increase their clock speeds and instead focus on the chips they've got: basically, trying to shrink them down and make them more power efficient. This is a good thing, not just for your power bill, but for the environment too.

It seems that we are living this reality: processors aren't getting faster, but they are getting cheaper, smaller, more efficient and multi-cored. We need this more than we need more gigahertz, because there is clearly a limit of diminishing returns. We don't need faster computers because we don't have applications (unless you are in the server or HPC market) that can use them. At least, not yet. Even my desktop with a modest 2 GB of RAM runs circles around many computers with better specifications, DVR'ing, web browsing and playing games at the same time. Of course, I use a far superior operating system than most :)

So what does that mean for the future of such devices? If laptops and desktops continue their plateau, and the smaller form factor devices such as smart phones continue their rise, will we eventually reach a point where they are all at parity?

It wouldn't surprise me. Likewise, it also wouldn't surprise me if the day comes when your entire computer system fits in your hands, and that's the only computer you need.

For example, imagine a smart phone 10 years from now. We'll consider this our speculative "super-device". It can be connected to a GSM or CDMA network, likely has wi-fi and cellular data capabilities, camera and GPS, plus also a large touch screen and optionally a physical keyboard. It can make calls, play the newest high-end games, browse the web, has storage in the hundreds of gigabytes, extremely fast data transfer and processing rates, and more.

What are the disadvantages of this device? Well, nobody wants to stare at web pages on a small screen forever, nor do they want to type up their reports on a keyboard only a few centimeters big.

But wait! Picture another device, in the form factor of a laptop, with a large screen, full keyboard, optical drive and card reader, perhaps a larger battery, and so on. Except that this device is just a "shell": it has beauty but no brains. No processor, motherboard or memory of its own. Instead, slide your smart phone into a receptacle and suddenly you have an entire computer system ready to rock, able to type reports, watch movies and view web pages on a larger screen, even play the latest visually stunning computer games.

But why stop there? Don't need a keyboard? Just provide a large touch screen dock, sans keyboard, with a receptacle for your smart phone, and suddenly you've got a fully functional tablet (or e-reader). Add a keyboard but no optical drive and you've got a netbook.

Need a larger screen for those high definition movies/games, or want to use a printer? Just provide a small dock which is nothing but ports, for monitors, printers, keyboards, even DVR connections if you want, and there is your desktop.


The receptacle could also be integrated into cars, essentially taking over as the entire entertainment and communication system of the vehicle.

I fully feel as though this is the natural evolution of where technology is heading. But is it a good idea? What are some of the pros and cons of such a design?

Right now, I have three "computers" that I use on a daily basis: my desktop, my laptop and my smart phone. Each has its own place in my technological arsenal. My desktop of course serves as my main "home" PC: it does my DVR'ing, plays games, lives as my music and media server, browses websites, checks my personal email, Skype conferences with my family and more. My laptop is mostly work oriented; it has all my work schedules on it, current projects, contact information, work email, etc. But I also occasionally use it when I travel for web browsing, watching movies, and so on. My smart phone, while of course admirably fulfilling its capacity as my only phone, also handles all my personal schedule, memos and todos, plays games and browses the web, doubles at 5 MP as my primary picture and video camera, and is a full Sat-Nav GPS device with voice guided directions.

I'd be lying if I said the thought of all those devices being combined into one, but each with its own profile for what I wanted to do at the time, wasn't appealing to me. It's easy to get into a state of 'digital fatigue' when you are surrounded by too much technology and want to simplify things, only to feel your current technology is unable to fulfill your needs in some form or another. Even I find myself wanting a tablet, netbook, or second laptop, even though I can pretty easily convince myself that I don't really need them. And on top of that, I still have game consoles, several televisions, DVD devices, and so on.

But there is danger as well. Phones are of course designed to be robust; they have to be, being jostled around all day, after all. Still, there are significant dangers in putting all your eggs in one digital basket: what happens when your phone gets destroyed, damaged or even just lost?

This could have some pretty bad consequences. But there are other problems as well, for example, Vendor lock-in. Just because you buy your device from Vendor A, you shouldn't have to buy your shells from Vendor A. For such a system to work, the dock and protocols should be entirely open and implementable by all.

The idea of an "all in one" device capable of doubling as any computing device we have today excites me a great deal, though there are pitfalls that I seriously hope we can avoid in order to realize such a device.

There is one pitfall we might not be able to overcome: upgradeability. A properly built desktop can be upgraded endlessly, to the point where it becomes an entirely new computer. Laptops are also upgradeable, but to a significantly lesser degree: the hard drive, memory, battery and optical drive are often changeable, but good luck trying to upgrade the screen, motherboard or video card. Unfortunately, as the form factor gets smaller, the ability to upgrade decreases proportionally. Good luck trying to change the memory in that smart phone, or add an optical drive to that netbook.

To make our speculative super-device work, we want to keep two principles in the back of our minds at all times: longevity and recyclability. We've already made the assumption that the specifications of all device types would largely plateau and become equal. But I'm not saying that technology growth would stop at that point, merely that the three major form factors (desktop, laptop and smart phone) would all grow at the same rate. There will still be advances as people develop new technologies and find uses for them. So technology *will* advance, albeit, hopefully, at a more sustainable pace.

I think these devices would need to have a long life span, with technology sufficient to last as long as possible. And, when you are finally ready to get a new device, we need programs in place to reuse, resell or recycle the old ones, possibly even taking something off the price of the new device.

Could a device/system like this ever become mainstream? Companies such as Motorola are already taking the first step with their Atrix phone (though I've heard rumors the laptop dock is only available with certain plans...which doesn't bode well). Just imagine if a company, say Apple, announced tomorrow that they had a new iPhone that, with the right dock, could also be your iPad, MacBook and iMac? Would not flocks of people swarm out to buy it? I think so. And the other major vendors, Dell, HP, etc would all follow while Microsoft would probably try to slap Windows on everything. Unfortunately, it might not be in the best interest of these companies to work together, which would create a hell for consumers.

Ideally, I would like to see everything left as open as possible. I could go on for a good length of time on how I believe in the decoupling of hardware and software, but we shall save that for another post.

The only way I would like to see this happen is if people are in control of their own devices. For example, as a strong proponent of free and open source software, I'd want to be able to run my own operating system on my device, and still have my hardware work and interact with other devices. We can place extra security and encryption on the devices (biometrics, perhaps), to help prevent the devices from being compromised if lost.

The phone component needs to be optional. We can add a SIM card slot to the device, and hopefully carriers and manufacturers will allow you to hook up to their networks seamlessly. The phone itself would be little more than an optionally installable application on the device. Hopefully carriers would also remove those ridiculous data caps on their networks... but I know that is likely little more than a dream.

What about dedicated uses of the technology? Like I said, my desktop doubles as my DVR, and my ultimate device that I envision will hardly be able to record television shows for me if it's in my pocket on the other side of town.

This could be where device "reuse" comes in. In any case, there are likely to be varying types of devices with different hardware capabilities. So it's not that crazy that I could use an older, or cheaper, one properly configured for DVR use while my main device stays with me.

We may still end up with multiple devices, but the flexibility and configurability of these devices would allow each to act as any other device, which would ultimately reduce the number of simultaneous devices we need at once. And with things such as longevity built in, they would need to be replaced less often once the form factor can no longer improve much.

I think such a technology has great potential. It's reasonable to implement, and could revolutionize the way we interact with our devices. But it has pitfalls as well, aspects we need to carefully avoid and implement properly if we want it to be successful. Nevertheless, I believe it is likely where we are headed; hopefully it'll be more of a blessing than a curse.

Do you agree? Feel free to share your thoughts and feelings in the comments, and have a great day!

Sunday, January 30, 2011

Gandalf Vs Dumbledore: The Great Debate (Warning: Strong Language!)

Hello folks!

Read this the other day on the wonderfully funny Failbook and just thought it was too funny not to share. I paraphrased it slightly, but you can find the original post here. Special thanks to the fellow with the Batman(?) avatar.

Warning: Strong language!


































You have been warned.


Gandalf Vs Dumbledore

Lemme break it down for you. Dumbledore is pretty sweet. He runs a school where all sorts of crazy shit goes down. He has a bird that spontaneously combusts and a pretty sweet office. Oh, and he dies helping to save the world. No doubting it: Dumbledore is pretty badass.

But then there's Gandalf

First, he finds the root of all evil, and lays out a plan to save the world. When he gets shit on by his buddy Saruman, he escapes by TALKING TO A MOTH, so that the moth can go get his buddy A GIANT FUCKING EAGLE to fly him off the roof of Saruman's tower. Then he hooks back up with Frodo and the gang. But wait, HE DIES. It is important to note however that he dies FIGHTING A GIANT FLAME DAEMON with a MOTHERFUCKING WHIP. Now, normally, dying would be a problem for most people.

FUCK THAT.

Gandalf just shrugs it off LIKE A BOSS and comes back to finish what he started. He also decided to update his wardrobe with some pimpin' white robes. Now fully pimped out, he tells everybody the plan then dips for a minute to handle some shit elsewhere, cause that's how Gandalf motherfucking rolls.

Then right when shit starts hitting the fan at Helm's Deep, he shows up WITH A GIANT FUCKING ARMY. Oh, and did I mention he shows up on the KING OF HORSES....RIDING BAREBACK?!?! So, not only does Gandalf have figurative balls of steel, he undoubtedly has ACTUAL BALLS OF STEEL.

Finally, after cleaning shit up at Minas Tirith, he peaces out and lets all the hobbits and humans enjoy a world PURGED OF ALL EVIL.

So, to recap,

Dumbledore: mentors the younger generation, sacrifices his life for the greater good.

Gandalf: Talks to animals, gives death the middle finger, constantly saves everybody else's ass, and then when it's all said and done, just leaves everyone else with all the spoils of war.

Gandalf WINS.

Saturday, January 29, 2011

LIRC module disabled by update

Good evening everyone,

The other day I ran my system updates and one of the packages that got updated was a package called "lirc-modules-source".

Unfortunately, as a result of this update, my remote control and IR blaster were disabled.

Apparently, the kernel module I used for my infrared equipment, lirc_zilog, was removed in the update. The source file for the module was still available, but the compiled *.ko was nowhere to be found.

In addition, I could no longer build the kernel module due to some changes in the newest Ubuntu version. I still intend to do some more research on *why* the update removed the module, but fortunately, thanks to a very helpful blogger, I found a very simple set of instructions to repair it:

* sudo bash
* apt-get remove lirc-modules-source
* rm -rf /usr/src/lirc-0.8.6/
* apt-get install lirc-modules-source
* cd /usr/src/lirc-0.8.6
* wget http://bobkmertz.com/blog-files/zilog-for-lucid.diff
* patch -p0 < zilog-for-lucid.diff
* dpkg-reconfigure lirc-modules-source

The last step will build the source and install the *.ko files. The original blog post can be found here with some more information, or if you are still having trouble: http://notepad.bobkmertz.com/2010/06/pvr-150-ir-blaster-on-mythbuntu-1004.html

I did this successfully on my HVR-1600 running regular Ubuntu 10.04. Full LIRC support returned after I modprobe'd the driver.

Otherwise, my MythTV-based DVR continues to work wonderfully. I'm making great progress on the complementary auto startup and shutdown scripts I'm developing, and will hopefully post them online when they are complete. In the meantime, I just wanted to pass along this info in case someone runs into similar issues! Also, it's a good idea to do a quick review of your system updates before you install them.

Protip: If you do see wonky behavior from any of your software after an update, you can review which packages were recently updated with Synaptic. Just launch Synaptic and go to File->History and check the logs. This is how I ultimately deduced the culprit behind the missing LIRC module.

All the best!