Happy 2011! Hope you all have a great year.
Just for the record, I don't have any New Year's resolutions. I have minor modifications to my behavior that I intended to implement, coincidentally, on January 1st. That's totally different. Right? :)
But just how long *is* January 1st? This question was posed to me while I was down visiting friends and family over the holidays.
Now of course, the answer seems completely obvious: 24 hours. Right? Some might argue the day isn't quite 24 hours, given various rotational periods, or by only counting hours of "daylight", etc. But in general, the length of a day, as defined by the Earth's rotational period, is 24 hours. That wasn't the topic of discussion, though; this question referred specifically to timezones.
That is, given that different parts of the planet can register different calendar days at the same time, for how long does *somewhere* on the planet register a specific day?
Take January 1st for instance. I celebrated New Year's at 12:00 AM January 1st, local time. My particular time zone is UTC-4 (aka GMT-4), so by the time it was January 1st for me, it was already January 1st for more than half the planet. So how long, in hours, did January 1st last, from the first moment somewhere on Earth registered it until the last place on Earth clicked over to January 2nd?
The Simple Answer
Let's imagine we break the Earth symmetrically by time zone:
In this simplified model, the day would first "dawn" in UTC+12. It would turn 12:00 AM January 1st, 2011 first in UTC+12, then one hour later progressively across the planet. The interesting thing is what happens when you go the *other* direction, that is, directly from UTC+12 to UTC-12. This boundary is the International Date Line, and it works like so: if you cross it traveling west, the time remains the same but the date increments a day; if you cross it traveling east, the time remains the same but the date decrements a day. Thus, if it's 12:00 AM on January 1st in UTC+12, then it's 12:00 AM on December 31st in UTC-12. Using this, we can extrapolate a simple chart of when each zone enters and leaves the day.
With this model, UTC+12 would be the first place it becomes January 1st, and it would last there for 24 hours, the entire time it is still December 31st in UTC-12. Once January 1st in UTC+12 clicks over to January 2nd, the International Date Line tells us that UTC-12 is only now starting January 1st, which gives another 24 hours of January 1st. The various time zones in between have their day somewhere between the two extremes, but we don't need to consider the overlapping periods to answer this question. Thus, in this sense, it would be January 1st somewhere on the planet for a total time range of 48 hours. Kinda neat, huh?
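(If you like to sanity-check this sort of arithmetic with code, here's a minimal Python sketch of the simplified model above, using only the standard library. The zone objects are just fixed hour offsets, not real-world timezone data.)

```python
from datetime import datetime, timedelta, timezone

# Simplified model: January 1st begins the moment it hits midnight in
# UTC+12, and ends the moment UTC-12 rolls over to January 2nd.
first_zone = timezone(timedelta(hours=+12))
last_zone = timezone(timedelta(hours=-12))

start = datetime(2011, 1, 1, tzinfo=first_zone)  # midnight, Jan 1, in UTC+12
end = datetime(2011, 1, 2, tzinfo=last_zone)     # midnight, Jan 2, in UTC-12

print((end - start).total_seconds() / 3600)  # 48.0
```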
But not quite...
The simplified model postulated above helps us to think about the problem, but isn't quite the solution. It implies that there are 24 equally distributed time zones (not including UTC), which isn't quite the case.
Granted, there are timezones that are not offset on a whole hour, UTC-3:30 (Newfoundland Time) for example. But these don't matter for this analysis, since they are still within the UTC+12 and UTC-12 extremes.
The problem is that UTC+12 and UTC-12 aren't necessarily the extremes. In fact, there are two additional time zones that need to be considered: UTC+13 and UTC+14. So indeed, UTC+14, rather than UTC+12, is the first place the day clicks over.
What does this imply for our analysis? Well, think about it this way: we've established that there's a total 48-hour window between the moment the day dawns in UTC+12 and the moment it ticks over to the next day in UTC-12.
But by the time the day clicks over in UTC+12, it has already been that day in UTC+14 for two hours. And so, we must add those two hours onto the 48-hour window, for a grand total of 50 hours as the length of a calendar day.
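Here's the same sort of sketch again, this time with the real extremes of UTC+14 and UTC-12 plugged in, which confirms the figure:

```python
from datetime import datetime, timedelta, timezone

# Real-world extremes: the day first dawns in UTC+14 and last ends in UTC-12.
start = datetime(2011, 1, 1, tzinfo=timezone(timedelta(hours=+14)))
end = datetime(2011, 1, 2, tzinfo=timezone(timedelta(hours=-12)))

print((end - start).total_seconds() / 3600)  # 50.0
```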
Wow, 50 hours!
Indeed! Quite a bit more than the usual 24. However, there's still one other aspect we haven't considered yet: daylight saving time.
The analysis presented above assumes that all timezones (by which I mean UTC offsets) remain the same throughout the year. This is of course not true. I'm currently in my local timezone of UTC-4; however, for a number of months throughout the year while DST is in effect, I'll in fact be at UTC-3 instead.
Daylight saving generally follows a simple rule: spring ahead, fall back. Thus, during the fall and winter months you are in your "normal" timezone, but during DST you are one more hour ahead than usual. This means that if you are west of the Prime Meridian, you get one hour closer to UTC (-4 becomes -3), and if you are east of the Prime Meridian, you get one hour farther away (+4 becomes +5).
How does this affect our analysis? Let's consider what this would do at the extremes we've established:
Standard Offset | DST Offset
----------------|-----------
UTC+14          | UTC+15
UTC-12          | UTC-11
This might not seem to affect our count at all, and you'd be correct: we might be gaining an hour by moving up to UTC+15, but we're losing an hour off the other end, keeping us at our previous count of 50 hours.
The interesting thing is that, while DST offsets us by an hour, not all places observe it. This creates a few interesting scenarios.
For example, presume that there exists a place in UTC+14 that does observe daylight saving time (shifting it to UTC+15), but a place in UTC-12 that does not. The resulting effect would be that the UTC+15 and UTC-12 offsets are in effect simultaneously, resulting in a 51-hour day.
Likewise, the opposite could be true. Imagine that there is a place in UTC+14 which does not observe daylight saving time, while every place in UTC-12 does (shifting them to UTC-11). This would leave us running from UTC-11 to UTC+14, resulting in a 49-hour day.
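All of these scenarios boil down to the same bit of arithmetic: a calendar day lasts 24 hours plus the gap between the largest and smallest UTC offsets in use. Here's a quick back-of-the-envelope Python helper (my own formulation, nothing official) covering each case:

```python
def calendar_day_hours(easternmost_offset, westernmost_offset):
    """Hours for which a given date is in effect somewhere on Earth,
    given the largest and smallest UTC offsets (in hours) in use."""
    return 24 + easternmost_offset - westernmost_offset

print(calendar_day_hours(+14, -12))  # 50: the normal, year-round case
print(calendar_day_hours(+15, -12))  # 51: if some place in UTC+14 observed DST
print(calendar_day_hours(+14, -11))  # 49: if every place in UTC-12 observed DST
```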
My research so far, however, indicates that the only place I'm aware of using UTC+14, a country called Kiribati, uses UTC+14 all year round, so it never offsets to UTC+15. So unless I'm incorrect on that, or it changes in the future, we can rule out a 51-hour day.
In addition, according to Wikipedia, there are in fact no permanent human habitations in UTC-12. Instead, the timezone is nautical only, observed by ocean ships that happen to be passing through it. And I highly doubt that they bother to observe DST, or even if some did, that all would.
Thus, year-round, we likely have a UTC-12, and it remains the last part of the planet to observe a given calendar day.
So even though DST could affect our analysis of calendar-day length by an hour, due to the decisions of local (or non-existent) jurisdictions, it does not. At least for now, the length of calendar-day observance remains at 50 hours, year-round, regardless of local DST offsets.
Why Timezones?
To wrap up this post, a short discussion on why we use timezones at all. If you are Canadian (or even just interested in the subject) you are likely familiar with the Sir Sandford Fleming Heritage Minute. Fleming was a railway engineer who was fed up with the ridiculous minute-level offsets between the local times of various cities. Each location liked to set noon to the moment the sun was "overhead" there, making the setting of time as one traveled by rail very inconvenient.
So Fleming came up with a different idea: standardized time. By dividing the world into roughly 24 equally sized areas, it became far, far easier to communicate times across the world and have them be relevant and make sense.
Although it took him some time to get it widely accepted and adopted, Fleming's invention of Standard time was nothing short of genius. It was likely as important as the railway and telegraph themselves in modernizing the industrialized world.
But Fleming only reduced the number of different local times. He shrank that number by a pretty significant amount, but didn't eliminate timezones completely. Why?
To me, the elimination of timezones would seem to be the next logical step. As a computer programmer, I can tell you that writing and dealing with software that needs to operate in different timezones can be challenging. You always need to be conscious of what time you are working with: is it local or UTC, how much does it need to be offset by, is daylight saving time in effect or not, and so on. Doing comparisons can also be tricky, and since various programmers do things in different ways, cooperation between different programs and programmers just complicates things more.
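As a small illustration of the bookkeeping I mean, here's a Python sketch (with offsets hard-coded and DST ignored for simplicity) of why you always have to know which zone a timestamp belongs to before comparing or doing arithmetic with it:

```python
from datetime import datetime, timedelta, timezone

# The same wall-clock reading, noon on March 1st, in two different zones.
halifax = timezone(timedelta(hours=-4))  # standard offset, ignoring DST
tokyo = timezone(timedelta(hours=+9))

noon_halifax = datetime(2011, 3, 1, 12, 0, tzinfo=halifax)
noon_tokyo = datetime(2011, 3, 1, 12, 0, tzinfo=tokyo)

# They look identical on the wall, but they're 13 hours apart in reality.
print(noon_halifax == noon_tokyo)              # False
print(noon_halifax - noon_tokyo)               # 13:00:00
print(noon_halifax.astimezone(timezone.utc))   # 2011-03-01 16:00:00+00:00
```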
Imagine if, instead of having timezones, everyone on the planet simply used a single timezone, say UTC. The benefit would be that there would no longer be any ambiguity when communicating times across the planet. March 1st at 12:00 PM would be March 1st at 12:00 PM everywhere.
Locally, things might seem a little odd at first. For example, people in Greenwich might go to work at 9 AM and get home at 5 PM, while people in Halifax might go to work at 1 PM and get home at 9 PM.
But what real difference would that make? Sunrise would just "happen" to be at 10 AM instead of 6 AM, but so what? I dare say that if such a system were adopted, it would probably only take a generation or so, perhaps less, for everyone to become accustomed to it. I'm sure our biological clocks would adjust, just as they did for standard time. You would still have to do some mental offset calculations on occasion for specific things, but probably not as many.
Just look at the transition of most countries (the US notwithstanding) to metric over imperial measurements. I, for one, certainly can't think or estimate in miles or quarts. I'm not too bad with inches and feet, but only because when I used to help my Dad with upholstery or construction, he made sure I read the measuring tape in inches. I know my own height in centimeters and my mass in kilograms, and I generally get pretty confused dealing with Fahrenheit.
But those are just my personal preferences. Getting international cooperation on such a scheme would prove very difficult; just look at the opposition Mr. Fleming ran into. And countries are far less willing to adopt such things even today.
Nevertheless, I think it's a neat idea with a number of benefits, even if it's never actually adopted. What about you? What might be some other pros to such an approach? What might be the cons and downsides?
Leave your thoughts and opinions in the comments below, and feel free to correct me if you feel I made a mistake in my calculations on the length of a calendar day.
Best wishes, and take care!
Thursday, February 17, 2011
The future of computational devices?
Imagine, for a moment, the computer you're reading this post on.
What type of computer is it? Is it a traditional desktop? A notebook or a netbook? What about a tablet or a smart phone?
Your options for what you use to access information are continually growing; even now they are several times greater than they were just a few years ago.
If you are on a traditional computer, say a desktop, what kind of specifications might it have?
A modern 2010-era computer, sold for a reasonable price, might have a set of specifications like this:
* Dual-core processor
* 500 GB hard drive storage
* 4 GB of system memory
* 512 MB dedicated graphics card with 3D acceleration
* Multi-channel sound system
What sized box is your tower? Is it a larger, standard ATX-sized unit, or maybe one of the small form factors?
Whatever the size, I want you to imagine taking that desktop and shrinking it, continually smaller, until you have a computer with similar specifications but a form factor the size of your phone.
Sound crazy? Well, consider my own smart phone, a Nokia N900, with the following specifications:
* 600 MHz ARM Cortex-A8 CPU
* 256 MB System Memory
* 32 GB Storage
* PowerVR SGX 530 GPU supporting OpenGL ES 2.0
* Stereo sound system
Not too bad. In fact, as little as a decade ago, those specs would probably have been fairly impressive in that desktop you're on right now, wouldn't they?
Is it really that crazy that the technology in smart phones could approach the level of desktops? I don't think so.
Consider laptops. Not that long ago, people who chose laptops for the portability advantages they offered were forced to sacrifice the performance of a desktop. This is no longer really true, as laptops have largely reached parity with desktops in terms of specifications and capabilities.
Those of us today who continue to choose desktops mostly do it for form-factor reasons, for example my high-definition 22-inch display, full keyboard with number pad, and mouse. Of course, these things can also be added to a laptop. Other reasons to prefer a desktop over a laptop might include, as in my case, use as a DVR (it's more easily left permanently connected to my TV and cable box) or the ability to have multiple disc drives and the like.
Nevertheless, choosing a desktop today is more about form factor and preference than specifications.
In fact, I dare say that while smart phones, netbooks and tablets continue to make leaps and bounds each year in the amount of power they offer, the traditional computing paradigm of desktops and laptops seems to have plateaued.
For example, why don't we commonly go to our local computer stores and see 8 GHz processors and computers with 48 gigabytes of memory? Are we finally seeing a plateau of Moore's law? Or is the slowdown more about marketing and business purposes?
In fact, one of the problems with sticking more and more transistors on a chip is that the damn things get too bloody hot. Who needs an infinity-GHz processor when you need to burn thousands of watts of power just to keep it cool?
Why, even the modest Athlon chips in my two previous laptops could get into the very uncomfortable (and dangerous) 80-90 degrees centigrade range. Had they kept with the numbering convention, I'm sure the slogan for the Pentium 5 would have been, "Now you can cook toast on it too!" On the other hand, the Athlon X2 250 processor in my desktop rarely gets above 30 C, nor does the Intel Core Duo in my laptop.
But the fact of the matter is that we don't need ever-increasing clock rates and memory to be happy. In fact, I remember reading an article several years back (that I unfortunately can't source) suggesting that the major chip manufacturers such as Intel and AMD would soon stop trying to increase their clock speeds and instead focus on the chips they've got: basically, shrinking them down and making them more power efficient. This is a good thing, not just for your power bill, but for the environment too.
It seems that we are living this reality: processors aren't getting faster, but they are getting cheaper, smaller, more efficient and multi-cored. We need this more than we need more gigahertz, because there is clearly a point of diminishing returns. We don't need faster computers because we don't have applications (unless you are in the server or HPC market) that can use them. At least, not yet. Even my desktop with a modest 2 GB of RAM runs circles around many computers with better specifications, DVR'ing, browsing the web and playing games at the same time. Of course, I use a far superior operating system to most people's :).
So what does that mean for the future of such devices? If laptops and desktops continue their plateau, and the smaller form factor devices such as smart phones continue their rise, will we eventually reach a point where they are all at parity?
It wouldn't surprise me. Likewise, it also wouldn't surprise me if the day comes when your entire computer system fits in your hands, and that's the only computer you need.
For example, imagine a smart phone 10 years from now. We'll consider this our speculative "super-device". It can connect to a GSM or CDMA network, likely has Wi-Fi and cellular data capabilities, a camera and GPS, plus a large touch screen and optionally a physical keyboard. It can make calls, play the newest high-end games, browse the web, has storage in the hundreds of gigabytes, extremely fast data transfer and processing rates, and more.
What are the disadvantages of this device? Well, nobody wants to stare at web pages on a small screen forever, nor do they want to type up their reports on a keyboard only a few centimeters wide.
But wait! Picture another device, in the form factor of a laptop, with a large screen, full keyboard, optical drive and card reader, perhaps a larger battery, etc. Except that this device is just a "shell": it has beauty but no brains. No processor, motherboard or memory of its own. Instead, slide your smart phone into a receptacle and suddenly you have an entire computer system ready to rock, able to type reports, watch movies and view web pages on a larger screen, even play the latest visually stunning computer games.
But why stop there? Don't need a keyboard? Just provide a large touch-screen dock, sans keyboard, with a receptacle for your smart phone, and suddenly you've got a fully functional tablet (or e-reader). Add a keyboard but no optical drive and you've got a netbook.
Need a larger screen for those high-definition movies and games, or want to use a printer? Just provide a small dock that is nothing but ports, for monitors, printers, keyboards, even DVR connections if you want, and there is your desktop.
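To make the idea a little more concrete, here's a purely hypothetical toy model in Python; none of these classes correspond to any real product or API, they just sketch the "brains plus interchangeable shells" split:

```python
from dataclasses import dataclass

@dataclass
class Shell:
    """A dumb dock: peripherals only, no processor or memory of its own."""
    name: str
    peripherals: tuple

@dataclass
class SuperPhone:
    """The pocketable 'brains' that every shell plugs into."""
    cores: int
    storage_gb: int

    def dock_into(self, shell: Shell) -> str:
        return (f"{shell.name}: {self.cores}-core phone driving "
                + ", ".join(shell.peripherals))

phone = SuperPhone(cores=4, storage_gb=256)
laptop_shell = Shell("laptop shell", ("15-inch screen", "full keyboard", "battery"))
tablet_shell = Shell("tablet shell", ("10-inch touch screen",))
desktop_dock = Shell("desktop dock", ("monitor port", "printer port", "DVR connection"))

for shell in (laptop_shell, tablet_shell, desktop_dock):
    print(phone.dock_into(shell))
```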
The receptacle could also be integrated into cars, essentially taking over as the entire entertainment and communication system of the vehicle.
I fully feel as though this is the natural evolution of where technology is heading. But is it a good idea? What are some of the pros and cons of such a design?
Right now, I have three "computers" that I use on a daily basis: my desktop, my laptop and my smart phone. Each has its own place in my technological arsenal. My desktop of course serves as my main "home" PC: it does my DVR'ing, plays games, lives as my music and media server, browses websites, checks my personal email, Skype-conferences with my family and more. My laptop is mostly work oriented: it has all my work schedules on it, current projects, contact information, work email, etc. But I also occasionally use it when I travel for web browsing, watching movies and so on. My smart phone, while admirably fulfilling its role as my only phone, also handles all my personal schedule, memos and to-dos, plays games and browses the web, doubles as my primary picture and video camera at 5 MP, and is a full sat-nav GPS device with voice-guided directions.
I'd be lying if I said the thought of all those devices being combined into one, each with its own profile for what I wanted to do at the time, wasn't appealing to me. It's easy to get into a state of 'digital fatigue' when you are surrounded by too much technology and want to simplify things, only to feel your current technology is unable to fulfill your needs in some form or another. Even I find myself wanting a tablet, netbook, or second laptop, even though I can pretty easily convince myself that I don't really need them. And on top of that, I still have game consoles, several televisions, DVD devices, and so on.
But there is danger as well. Phones are of course designed to be robust; they have to be, being jostled around all day after all. Still, there are significant dangers in putting all your eggs in one digital basket: what happens when your phone gets destroyed, damaged or even just lost?
This could have some pretty bad consequences. But there are other problems as well, for example vendor lock-in. Just because you buy your device from Vendor A shouldn't mean you have to buy your shells from Vendor A. For such a system to work, the docks and protocols should be entirely open and implementable by all.
The idea of an "all in one" device capable of doubling as any computing device we have today excites me a great deal, though there are pitfalls that I seriously hope we can avoid in order to realize such a device.
There is one pitfall we might not be able to overcome: upgradeability. A properly built desktop can be upgraded endlessly, to the point where it is an entirely new computer. Laptops are also upgradeable, but to a significantly lesser degree: the hard drive, memory, battery and optical drive are often changeable, but good luck trying to upgrade the screen, motherboard or video card. Unfortunately, as the form factor gets smaller, the ability to upgrade decreases proportionally. Good luck trying to change the memory in that smart phone, or adding an optical drive to that netbook.
To make our speculative super-device work, we want to keep two principles in the back of our minds at all times: longevity and recyclability. We've already made the assumption that the specifications of all device types would largely plateau out and become equal. But I'm not saying that at this point technology growth would stop, merely that the three major form factors (desktop, laptop and smart phone) would all grow at the same rate. There will still be advances as people develop new technologies and find uses for them. So technology *will* advance, albeit, hopefully, at a more sustainable pace.
I think these devices would need to have a long life span, with technology built to last as long as possible. And when you are finally ready to get a new device, we need programs in place to reuse, resell or recycle the old one, possibly even taking something off the price of a new device.
Could a device/system like this ever become mainstream? Companies such as Motorola are already taking the first step with their Atrix phone (though I've heard rumors the laptop dock is only available with certain plans...which doesn't bode well). Just imagine if a company, say Apple, announced tomorrow that they had a new iPhone that, with the right dock, could also be your iPad, MacBook and iMac? Would not flocks of people swarm out to buy it? I think so. And the other major vendors, Dell, HP, etc would all follow while Microsoft would probably try to slap Windows on everything. Unfortunately, it might not be in the best interest of these companies to work together, which would create a hell for consumers.
Ideally, I would like to see everything left as open as possible. I could go on for a good length of time on how I believe in the decoupling of hardware and software, but we shall save that for another post.
The only way I would like to see this happen is if people are in control of their own devices. For example, as a strong proponent of free and open source software, I'd want to be able to run my own operating system on my device, and still have my hardware work and interact with other devices. We can place extra security and encryption on the devices (biometrics, perhaps), to help prevent the devices from being compromised if lost.
The phone component needs to be optional. We can add a SIM card slot onto the device, and hopefully, carriers and manufacturers will allow you to hook up to their networks seamlessly. The phone itself would be little more than an optionally installable application on the device. Hopefully carriers would remove those ridiculous data caps on their networks... but I know that is likely little more than a dream.
What about dedicated uses of the technology? Like I said, my desktop doubles as my DVR, and my ultimate device that I envision will hardly be able to record television shows for me if it's in my pocket on the other side of town.
This could be where device "reuse" comes in. In any case, there are likely to be varying types of devices with different hardware capabilities. So it's not that crazy that I could use an older or cheaper one, properly configured for DVR use, while my main device stays with me.
We may still end up with multiple devices, but the flexibility and configurability of these devices would allow each to act as any other, which would ultimately reduce the number of devices we need at once. And with longevity built into the devices, they would need to be replaced less often once the form factor can no longer improve.
I think such a technology has great potential. It's reasonable to implement, and could revolutionize the way we interact with our devices. But it has pitfalls as well, aspects we need to carefully avoid and implement properly if we want it to be successful. Nevertheless, I believe it is likely where we are headed; hopefully it'll be more of a blessing than a curse.
Do you agree? Feel free to share your thoughts and feelings in the comments, and have a great day!