
eReader/Tablet Review: Barnes & Noble Nook HD 7"

Nook HD 7" and a piece of toast, courtesy of barnesandnoble.com. I don't know why.

For the paperless academic on a budget


Virtually every article about improving one's academic workflow (note-taking, reading, writing and general office/mobile productivity) begins and ends with the iPad. In some respects, this makes a lot of sense. On top of design, branding and marketing elements, Apple also has the advantage of a well-stocked app store. An important trade-off here is a hefty price tag. Meanwhile, there are plenty of devices for well under $200 that offer promising features and competitive app stores as well as the ability to sideload content and apps. Android devices, for example, fit the needs of students and academics extremely well, if not better than their Apple counterparts. First and foremost, many Android or even Windows phones and tablets are more likely to have expandable memory, which is great if you don't keep all of your files in cloud storage and/or if you are not likely to have a regular data or WiFi connection (such as in the field).

Which brings me to my next point: academics actually have a particular and sometimes peculiar set of needs for their devices. These admittedly vary from person to person, by discipline, age, work environment and place on or off the departmental food chain. For me, the focus is mainly on three major tasks, in order of importance: 1. Reading. A lot of reading. 2. Taking and making notes. 3. Saving and organizing files. Everything else from photos and video to social media is secondary for the purposes of this review, but certainly not altogether unimportant.

There are a couple more caveats. I'm a strong believer that you should only ever invest in the technology you need rather than overspend on technology you'll never make full use of. So it's best to read this review keeping in mind how you like to work and what you find necessary or unnecessary, intuitive or counter-intuitive. A lot of people come to me for advice about buying tablets or other gadgets. I wish I could say it surprised me how often they fail to consider actual needs - what they'll be trying to do with the device - rather than its looks, brand name or quirky functions (that phone doesn't have the app where I can remote start my car's ignition from Mars, I'd better spend the additional $200 and upgrade my service plan ...). If you were taking a tablet to the field for actual fieldwork tasks (interviews, etc.), I would likely recommend something designed for that kind of work, like the Galaxy Note 10.1 or potentially an iPad, but that is a separate review.

Even though I'm a lover of gadgets, for day-to-day use, I actually find tablets rather fiddly for things like social media or anything that involves a lot of typing. I don't play games, listen to music, Skype chat or even watch videos on mine. I have a laptop with internet access, a phone and a media player, so I don't need a tablet to recreate all of the functions that these individual devices do very well. Instead, I want it to supplement the repetitive and/or arduous tasks that my other devices do rather poorly. That is, I need a more effective portable library to store, read and annotate PDFs and eBooks. As it happens, the Nook HD 7" is excellent as both an eReader and a tablet and is therefore worth reviewing as a device to help academics go paperless on a budget.

That said, it's far from perfect.


About the Nook HD


B&N mostly makes eReaders like the highly rated Nook Simple Touch. Indeed, after B&N's earlier stumbles with the Nook Tablet and Nook Color, the Nook HD itself started its life as more of a glorified eReader than a full-fledged Android tablet. Upon its release, the Nook HD series was burdened by lackluster software, including a useless web browser and a very limited app store. However, the recent Google Play update bumped the Nook HD/HD+ into fully fledged Android tablet terrain – albeit still somewhat constrained by B&N's restricted version of the operating system – and returned this once unassuming device to my radar. Those with the technical know-how can root it fairly easily to free themselves from B&N's walled garden, but most general users won't need to. (Rooting voids your warranty, which you might not want to risk given a software glitch that affects these devices within the return period; see below. Also, regular software updates from Nook will break your root.) Just run the update out of the box to get full access to the Google suite of apps, including the Play Store, Play Books, Music, Magazines, the Chrome web browser, etc.

I chose the 7" HD because it's a good crossover device and therefore the best value for my needs. It allows me to read and mark up my thousands of academic PDFs while also being lightweight and comfortable enough for reading novels. I've added something like 2,000 books and articles to my Nook HD in the three months or so that I've owned it. It also comes in a 9" (HD+) option, which is probably best suited for purely academic work because you get more screen area to work with, so those three-column academic papers require less zooming/scrolling. Socio-cultural anthropology doesn't involve many of these multicolumn pages, but my recent foray into cognitive anthropology has been another story. The 7" version is optimal for all other types of books. The lower profile and weight reduce wrist strain, which is preferable for those with smaller hands like myself. I tried both out in my local B&N, where they were the same price ($149 for the 16GB models; there's also an 8GB 7" for $129, while the 9" 32GB is $179) and went for the 7" for portability even though the 9" screen was appealing. They've been on sale for as low as $79.


Specifications


Rating: 4.5 out of 5

The full technical specs for the Nook HD are available here. I'll go over what I see as the key points, especially those specs that stand out from the competition.

Speed: the device is fairly nippy with a dual-core 1.3GHz processor and 1GB of RAM. It runs Android 4.0.4 Ice Cream Sandwich (modified by B&N).

Storage capacity: Internal storage varies by price (see above), but you get virtually unlimited storage space because it takes microSD cards up to 64GB for expansion. Not only is this great to have and a definite advantage over other devices that lack expandable memory, but in the case of the Nook HD, its software makes storing your files on an external card highly recommended (again, more on this below). Expandable memory is notably absent from the Google Nexus 7, iPad mini and Kindle Fire HD/HDX.

Connectivity: On one hand, the Nook HD/HD+ require proprietary charging/data cables, which are pretty expensive to replace in the US. On the other, the device charges really quickly compared to, say, the Kindle Fire HD. I can get a full charge from empty in around 1.5 hours. It's WiFi-only for downloads, with no NFC.

Battery life: Out of the box, I estimate about 9 hours for reading, but just going through the menus and flipping between apps can have a noticeable effect. Since I mostly read, I've turned down the screen brightness for comfort and get up to 14 hours on a single charge. It hardly uses any battery on standby: I can pick it up two days later and the battery will have drained only 1-2%.

Pixel Density: Booklovers are already in on the Nook's secret. The screen resolution of 1440x900 at 243 pixels per inch is ideal for – you guessed it – reading. This means much less eye strain if you spend hours poring over eBooks and documents. Text and images render beautifully crisp and clear. For comparison, roughly 250ppi is the benchmark for the "retina" display that you pay for with Apple, beyond which the eye can't make out more detail. At 243ppi, the Nook bests the comparable tablets in its price range like the Samsung Galaxy Tab 7", Google Nexus 7 (2012) and even the Kindle Fire HD 7", its most direct competitor as an eReader turned tablet. This alone was a major reason why I chose the Nook HD and it does not disappoint.


Look and Feel


Rating: 4 out of 5

Nook HD 7" in Smoke (photo: techradar.com)

The Nook HD measures 7.7" x 5.0" x 0.43". The screen is the same size as other comparable 7-inch tablets, but it can give the impression of being smaller/narrower due to the noticeably wider bezel. The aesthetic impact is debatable, but I actually prefer a wide, grippable bezel so you can hold the device comfortably without getting your fingers all over the screen. The fact that the frame is plastic gives it a somewhat cheaper look, but it also allows the tablet to weigh in at a mere 300g, which is a fair trade-off for me. Plus, the plastic is reminiscent of the portable devices of yesteryear, which were markedly more durable against everyday wear.

Overall, the device resists fingerprints fairly well. It also has a really comfortable soft-touch rubber backing that feels very stable in the hand and is completely the opposite of the cheaper appearance of the front. More bothersome is the build quality of the buttons and the SD card slot. The volume and power buttons and the SD card slot protector are a cheap, clicky plastic, shiny unlike the matte finish of the rest of the body, almost like an afterthought. The card slot protector feels extremely flimsy. Most of these superficial flaws can largely be mitigated by the various protective accessories (cases, skins, folio covers, etc.) available, but if the card slot protector breaks off, you're pretty much stuck with dust getting inside.

Nook HD power button and SD card slot. Photos: techradar.com

Another very obvious shortcoming for the Nook HD when compared with, say, the Kindle Fire HD, is that there is no front- or rear-facing camera. No camera at all. I would have liked to see a front camera at least for the occasional Skype chat, but it's not that big a deal for me personally. However, because there is no camera, not all Android apps are supported, including Skype and anything that requires photos/videos or scanning.


Quality Control/Unboxing


Rating: 2 out of 5

If slight build issues were the only thing to contend with, this would be a near-perfect device at such a low price point. Unfortunately, one major flaw with the Nook HD is that there appears to be a real problem with Barnes & Noble's quality control. I had store credit available, so I purchased my Nook from my local Best Buy. When I brought it home and opened the sealed box, the device was in terrible shape. Although it shipped factory-packed in protective plastic, this plastic sleeve and the tablet inside were both covered in heavy smudges and clearly marked with fingerprints. That shouldn't happen with a new device.

Unboxing. Photo by author.

I assumed that I had been given a refurbished or open-box item and went back to Best Buy for an exchange. This time, I opened Box #2 in front of a sales associate. They were surprised to find that the device inside was also covered in oily smudges. They insisted that they never open products from the factory or re-shrink-wrap them, so the source of the problem had to be Barnes & Noble. I was skeptical, but called B&N to find out more. They had never heard of devices shipping in bad condition. I doubt that; but regardless, I did what most people on the Internet do and blamed Best Buy. With my return partially in store credit, I then travelled to 2 more Best Buy locations and opened 3 more Nook HD boxes in front of store managers. All had some kind of damage. In one store, the manager even tried to clean the smudges off the screen, but they wouldn't come off. In another, the bezel was broken and loose from the screen. It became clear that something was seriously wrong with the quality of devices coming from B&N. Before giving up, I made a last-ditch effort and ordered from the Best Buy website. This one arrived without any trace of smudges or marks on the screen. Success! However, the inner plastic tray that holds the device was cracked at the corner, and sure enough there was a matching tiny gouge in the plastic bezel near the headphone jack.

Best Buy couldn't get away without the blame this time, as the geniuses packed the item like this:

Quality packaging skills from the geniuses at Best Buy. Photo by author.

The accordion-shaped crush pattern on the box is, I assume, courtesy of UPS. It's nice when companies work together to give you great service.

The screen was flawless and it booted up, so I kept it instead of making a fourth return, which would have cost me more in gas than the device itself. This is pretty abysmal quality control and prevents me from rating the Nook HD higher than 3.5 stars overall. That's before even turning the device on, which negates many of its admittedly positive attributes. As an aside, Best Buy was extremely accommodating with my request to keep opening and discarding Nooks free of charge, whereas B&N customer service was pretty useless. When I reported the units damaged, they could only offer a refurbished device as a replacement and proceeded to deny that their products leave the factory in bad shape. A quick check of YouTube unboxing videos shows that these smudges are common.



Functionality and Usability


Rating: 3 out of 5


Although the software might seem a little clunky (sometimes a lot of actions are required to accomplish a simple task), I actually find it reasonably intuitive. The fact that it's aimed at a wide audience, with Nook's pared-down focus on reading, means that it's simple to use and learn to navigate. Opening files is extremely fast, scrolling is smooth and reading is very comfortable. As mentioned above, access to the Google Play store means that the old complaints about the restricted operating system are rather moot for most users. Out of the box, the interface is fairly customizable, with personalized lock screens and wallpapers, widgets, a recent-documents carousel and sliding desktop screens for categorizing icons. It's certainly nice to look at, if not the most functional if you're in a hurry. There are interface apps you can download to tweak the appearance without rooting. The modified Nook version of Android includes a Nook Today screen, which basically tries to sell you eBooks based on your interests (okay concept, but I wish I could add other retailers to that screen. Sorry, B&N, you're too expensive). It also supports multiple user accounts and parental controls, neither of which I make any use of.

The library menu is pre-organized by Nook into categories or shelves that you can't change, including Books, Documents, Magazines, Catalogs, etc. Anything you put into these folders on the device or the matching folders on your SD card will show up there, with the exception of the Documents folder (I can't fathom why). As a result, all books and journal articles stored on the SD card must go into Books if they are going to show up on the device. But then they show up in one massive list that is cumbersome to sort. That's irritating. You can create new "shelves" (also cumbersome because they get hidden under "My Shelves"), but you can't get rid of the default ones. If you buy a magazine from the Nook store that B&N misclassifies as a "Book", you can't move it between shelves to rectify the problem, and customer support couldn't care less.

Similarly, you can install new apps, but you can never get rid of the ones that come pre-installed (like Hulu Plus, Facebook and Pandora). Even if you click "uninstall" or use a file manager app to force an uninstall, the next time you turn your WiFi on, they'll download again. A workaround is to hide the offending apps from your user profile. The same applies to books you've purchased from Nook: if you want to archive them to the cloud, you have to do so from the B&N website or they'll keep coming back even after you try to hide them. The software is full of little annoyances such as these, but, in general, nothing too major. A custom launcher or file manager app can be an instant remedy for most organizational issues, so it's not worth getting too hung up on the interface.


On the plus side, Nook's native Reader application is actually very good. It's light, fast and sleek, supports a wide range of file types, and is easy to use. For regular eBooks (ePubs), the Nook Reader app is as good as any alternative from the Play store. For catalogs, magazines and graphic novels, the Zoom View feature automatically adjusts the page turns to take you to the next relevant section or block of the page while making the best use of the screen real estate. You can also read PDF files in the native Reader app, but I recommend a third-party PDF reader for the kind of intensive reading, annotating, editing and highlighting that students and researchers are often engaged in. I'll soon write up another review of the best Android apps for academics where I will go through the various PDF readers, but the app market has quite a few to choose from. In the ones I've tried on the Nook thus far, note-taking, highlighting and annotating all function really well, attesting to adequate screen sensitivity. In short, with a PDF reader installed, the Nook HD becomes an ideal device for reading and downloading academic content.

Another B&N perk is that you get free WiFi at any B&N location. Plus, while you're in store, you can read any books you want for up to an hour. If you have a nice B&N nearby or on campus, that can be useful, even if only for reading new bestsellers or novels for free over a series of visits. The Overdrive app also works on the Nook, giving you access to eBook loans from your local public library and any participating university libraries.

99% Error


The software is easy enough to get used to, with next to no learning curve. Sadly, I encountered a serious glitch that is impossible to overlook and has long-term effects on usability. I'm referring to it here as the "99% error", and together with B&N's failed quality control, it forces me to cap my overall rating at no more than 3.5 stars. Randomly and for unknown reasons, the Nook freezes during boot-up. Reports from user forum discussions indicate that the freeze can follow a software update or a modification of user accounts; in my case, it happened after the unlock screen froze and I had to reboot the device. Instead of fully booting, the screen locks at 99% loading and never loads the interface. The battery simply drains if you let it keep "loading", and the next time you charge it and turn it on, it again only gets to 99%. There are various suggested remedies for this, but a hard reset to factory settings is the only thing that really works.

Having to hard reset a device happens from time to time and is not that big a deal. What is a big deal is when you can't actually access the device to back up your files before doing so and there's really no way to tell if or when it's going to happen. This is where I have trouble recommending the Nook if you tend to gather a lot of local files like notes, images and annotations. Apart from the SD card, anything stored to the device memory and not in some cloud service (apps and most app data in Google Play are safe) will be wiped in the event your Nook freezes and needs to be factory reset. This makes regular backups of any data on the drive an absolute must.
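For anyone comfortable with a command line, even a tiny copy script run whenever the Nook is connected to a computer takes the sting out of this. The sketch below is purely illustrative: the folder names are my own assumptions, and the "device" folder is simulated locally so the script runs end to end. In practice you would point SRC at wherever your computer exposes the Nook's internal storage.

```shell
#!/bin/sh
# Illustrative backup sketch (folder names are assumptions, not Nook paths).
# SRC stands in for the Nook's internal Documents folder as seen from your
# computer; a sample file is created here so the script runs end to end.
SRC="${NOOK_MOUNT:-./nook-documents}"
DEST="${BACKUP_DIR:-./nook-backup}"

mkdir -p "$SRC" "$DEST"
printf 'reading notes\n' > "$SRC/article-notes.txt"   # simulated device file

# Copy everything, preserving timestamps. Deletions are never propagated,
# so a factory-reset (emptied) device can't wipe out the backup copy.
cp -pR "$SRC"/. "$DEST"/
```

Run something like this before any software update or account change (the situations where the 99% error reportedly strikes) and the worst a forced factory reset can cost you is a few days of changes.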

However, even more annoying is that if you have made any notes or highlights in your eBooks using Nook's native Reader app, this data cannot be backed up anywhere. You can't even export highlights/notes at regular intervals for your own data preservation. Unlike with Amazon Kindle's Whispersync service, even your Nook books purchased from B&N will lose their bookmarks, notes, highlights and annotations. The workaround is using a third-party app that allows exports (I have not found one that will do this automatically). Even then, all you will have is a separate file with a list of highlights and annotations; you can't import them back into the book's text again. You'll also need a file manager to move any exported files from the Nook's internal memory to your SD card, which is something novices will probably struggle with. Some apps put files in places on the Nook that appear to fall haphazardly into the My Files category. Making sense of where everything is stored to get your backups right will get more confusing as time goes on and more of the space is used up. It's kind of a mess.

The potential to lose all data stored to the internal memory means that it's safer to keep all documents on the SD card except for app data. This should irritate anyone who, like myself, paid extra for 16GB onboard storage ... but at least I have plenty of space for apps?


Conclusion


Overall Rating: 3.5 out of 5

I would absolutely recommend this device as an eReader (with the caveat that even though the Nook Reader app is great, you'll want to install one that allows some kind of backup if you like to keep your reading notes). I am also confident in recommending it for academic users, because of the availability of quality productivity apps from Google Play and the overall comfort of reading and making notes even with the limited screen size. The battery lasts for up to 14 hours of reading with the screen reasonably bright (the high pixel density means little to no eye fatigue). The screen is also highly responsive. The build makes it lightweight, portable and comfortable to hold. If the software were more stable and the quality control more reliable, this could be a 5 star device. That said, updates are still being released, so there may be a patch for any lingering issues in the future (fingers crossed).


Back to Anthropology

Product Reviews?

Regular readers of this blog may have been wondering what my brief foray into eyeglass reviews had to do with anthropology or academia or ethnography or any of the other usual content I post here. In fact, I have written product reviews on this blog before (see the 'Product Reviews' tab above), mostly on hardware and software. There are two main reasons why I write online consumer reviews and how-tos. Firstly, I like being able to produce something useful that will draw in a wider audience, especially if I have had trouble finding something suitable or comprehensive on a topic myself.

Back when I was a PhD student, I often lamented the lack of practical hardware and software reviews for stuff I could actually afford (which wasn't much), so I gravitated towards reviewing free and open-source software or hacks and workarounds to make basic computers/browsers more productive. My own field kit was a mixed bag of old technology put to new uses. Rather than buying a bunch of premium and proprietary software, I worked from the belief that there is almost certainly a free/low-cost way to do most tasks using one or a combination of open-source or gratis software/web-based applications. The learning curve is steep, but worth it when you can't afford more. That's more or less how I got on to tech reviews and how-tos in the first place.

In a similar vein, I still notice a lack of academic-oriented reviews for products and services, especially crossover consumer items like tablets, digital recording devices, clothing or field gear. I had trouble finding a decent academic review of the Kindle DX Graphite, for instance. Most of my reading is qualitative, where extensive note-taking and highlighting are imperative, but other academic styles of working are very different. Plus, anthropologists need to know what's going to work for them in the field as well as the office (or lack thereof).

I was sure that I had bored readers to death with eyewear reviews, but actually my 5-post series on glasses has become the most popular on this blog to date. I'm pretty confident that they've helped people to save a lot of time, energy and money. I intend future reviews to be of more direct interest to academics, anthropologists, students, geeks or social researchers, but not exclusively. My next planned review will also be of an optical nature, but with fieldworkers in mind.

Secondly, I am working on some new research to do with media and consumerism, so consider the product reviews that appear here as a minor form of participant observation. Details will follow in the future, but there are more pressing things on my agenda at the moment. Just to be clear: I will never post pre-written "sponsored" reviews (read: robot spam) to get ad revenues and won't ever post anything that I haven't written myself and don't honestly believe. I'll also clearly state when I've been given a complimentary product sample to review.


A brief Urban Firewalls update (finally)

I designated October as my month to return to my PhD thesis to prepare it for publication. Given my highly unstable personal circumstances at present (not to mention ending the month with Hurricane Sandy and a prolonged blackout), I am actually impressed that I managed to start getting down to work. I am currently drafting a plan for the new book version which includes re-working the chapter layout and refining the ethnographic contributions, potentially adding some comparative case studies from outside of Spain, and more original material that did not appear in the PhD version. The PhD manuscript as it stands presents a detailed story about a small Catalan town and its highly localized responses to technological and urban change. By re-organizing the contents, I hope to enable the local data to interweave with a more universal story of humans and technology and contribute to a more comprehensive anthropology of the digital age. I have a new website where I'll post updates of the progress of Urban Firewalls.


New at the OAC

There have also been quite a few items of interest over at the Open Anthropology Cooperative recently. We started shaking off the back-to-school malaise with a new e-seminar and some great blog posts. In case you missed it, catch up on the seminar for "In and Out of the State" by Patience Kabamba. In his featured blog post, John McCreery asks what has changed about society and culture to make being a dick the road to failure instead of the key to success. I am surprised that no one has yet provided any ethnographic studies of bullying in the forum, but this is a question I will be returning to shortly in an upcoming blog post. The US presidential elections inspired this post about language and politics and this follow-up blog on election lessons learned. Speaking of openness, why don't anthropologists share what they know about households with economists?

Despite this fairly steady stream of new and interesting additions to the site, the subject of "stagnation" in our forums has surfaced yet again, leading us to re-question the state of affairs over at the OAC under the header The Rise and Fall of Social Networks. If you are interested in the politics of making a site like the OAC work and some of the ongoing obstacles we are facing, please join in the discussion. My response to that thread will give you an idea of where I stand on a number of issues as well as a hint at what I'm working on for the future of the OAC:

Some good points in this article, at least for thinking about a historiography of social networking sites. But then there are significant differences between social networks and academic networks, much of which has to do with return on time investment, volunteer labor and long-term objectives, not to mention power relations and status hierarchies that carry over from the academic world. Much of the activity on the social web need not concern itself with aims, intentions or long-term goals. It's easy. It can keep ticking over until boredom or newness - whichever comes first - forces change. Academic networks don't work exactly the same way. The OAC mixes both together, which may contribute to an identity crisis of sorts.

I don't agree with all the points made in the article about Facebook vs. Twitter. I actually think that Twitter is, on the whole, more active and powerful than Facebook. Facebook's modus operandi is outdated, the layout and structure muddled, its features are restrictive and its policies are confusing. Sure, for most users, a lot of this is irrelevant. Even Apple can convince people its products are inherently usable, which is patently untrue. Yet both of these companies are successful by closing off their markets and thereby normalizing clumsy technology and unintuitive interfaces. Twitter not so much. But I digress ...

There are probably more dead blogs on the internet than active ones. There are at least 83 million fake, unused or inactive Facebook accounts. I have emails that lapsed into oblivion over the years, websites that expired, and domains I never renewed. Is there any technology online that is not subject to simply running its course? This post, Why Are There So Many Dead Blogs, does a pretty good job of noting all the simple human factors involved. It's not only the technology that determines what network lives or dies.

Playing around on Twitter and/or keeping in touch with family on Facebook are not analogous to activity at the OAC. The first is fleeting and impermanent; the second is personal and intimate. The OAC, by contrast, takes more time commitment, at least some critical thought, and the expectation of some kind of pointed exchange or response over time. We've tried to add site features that lower the barrier to participation (share buttons, a Twitter tab, RSS), but the returns on this are also quite low. The content that is uploaded without the requirement of reciprocity or response (e.g. "sharing a video", "liking" something, "listing an event") is really incidental to any wider successes here, or so it would seem.

The more significant products of the OAC's concerted efforts - namely the Press - require investments of time and energy. They attract participants because they fit longstanding academic value models. Academics change slowly, even if we'd like to think that new modes of communication make a qualitative difference to how we live and work. That is why email has not imploded as the means for transmitting academic information. Mailing lists are still popular because they are semi-closed/private and simple. They do one useful thing well enough to stick around. In the early OAC days, Twitter was a big deal for us: a real paradigm shift that led to the OAC's development in the first place. Today, no one seems that bothered to engage on Twitter. Perhaps that is a failure of implementation on our part, but it is more likely that Twitter no longer fills a communicative need for the OAC since circumstances have changed. The OAC Facebook page is now a bit more active, but still pretty separate from the main network.

We have had continual debates about what the site hopes to achieve or "do" - a mission statement - that would attract participants and be meaningful. Yet no one seems willing to take on a more permanent role in shaping the site. If the OAC is imploding, what's the precise cause and remedy other than lack of dedicated interest?

I have concentrated a lot on technical development at the OAC and I still believe that a deluge of content is preventing more adequate use and navigation of the site. I do agree with John that we need to streamline access to the most interesting content and like the idea of running a "best of" series that resurrects old posts to keep them alive. Instead of pushing for some "new" spark, we are likely not making best use of what we already have. I wish Ning made it easier to index and display old posts. I have sketches/ideas for site changes, but I am scrambling to keep on top of things at the moment. We don't have as strong a development team as we once did among the admins, and it really can't be done without wider interest.


We have been talking about these issues at the OAC in some form or another since the site's speedy launch in 2009. I am now committed to making more drastic efforts to put an end to pervasive content-navigation woes, in the hope that related participation woes will also disappear. A few weeks ago, I began experimenting with site improvements for revamping the OAC's appearance, perhaps better termed its "image". The OAC homepage hosted on Ning has been both a source of the OAC's successes as an academic/social network and a frustrating infrastructural barrier to expansion. I am working on some bold ideas that would involve making more dramatic changes beyond Ning. If the experimentation starts to look like an actual possibility, I will float the new ideas on-site for feedback. As I mentioned in the post above, any lasting effort cannot really be forged without wider community interest. If you can help in any way to make the Open Anthropology Cooperative a more effective, active and useful site for anthropologists to accomplish meaningful things, please volunteer your skills.


New to anthropology: PopAnth

The launch of PopAnth in September marks an exciting move forward for anthropology online. PopAnth presents snapshots of anthropological knowledge for popular audiences in online magazine format. It was formed out of a discussion about public anthropology over at the OAC. The team, including some OAC veterans, has really embraced the idea of opening anthropology and making it more publicly engaging. The articles are fun to read and really distill worthwhile talking points about what anthropology is and what it hopes to discover about people. Greg Downey over at Neuroanthropology sums up the motivation and intentions behind PopAnth, including samples of recently submitted articles and how to get involved.




Image from hongkiat.com

Reflections on Hurricane Sandy, media and disconnection

Hurricane Sandy: Parked

Last week, Hurricane Sandy pummeled the east coast of the US. Power outages due to extreme coastal flooding, high winds and fallen trees have caused food and gas shortages in my area that are only now beginning to be resolved. As I write, many fellow Long Islanders and New Yorkers are still without electricity, heating and hot or clean water. You have by now – and likely before myself, since I have been in a blackout for some days – seen the ghostly images of dark and empty NYC streets, subway lines under 10 feet of water, cars thrown upon sand banks on Long Island beaches, and rubble where houses once stood in New Jersey and throughout the region. I have documented some of the devastation in photos such as the ones in this post and more that I will share on Flickr over the coming days.

Power is still being restored in my neighborhood. Our electricity has just returned, but only a block away the lights are still out. 15 minutes away in all directions, coastal residents on Long Island and in Queens are still faring much worse. Many have lost their homes entirely. If you are in the area and want to volunteer your assistance to those struggling in the aftermath of the storm, visit nycservice.org to offer your help. Be sure to check your local transit schedules and road warnings wherever you're heading and keep roads clear for emergency vehicles. As of November 3rd, most LIRR lines into the city are up and running on an hourly basis. Full fares are now being charged.

***

I talked a bit about media representation(s) in the aftermath of Hurricane Irene last year. This post is along the same lines. There are plenty of commentaries on the political and economic impact of "frankenstorm" (ugh) Sandy pretty much everywhere. I don't have dramatic stories of severe damage to tell because we were largely spared (Hurricane Irene took our old roof, so we were inadvertently well-prepared for the heavier winds this time). The damage from Sandy has been harsh, but the immediate area in which I live was free from flooding and major destruction, with the exception of some fallen trees. Our major issue has been exploded transformers and downed power lines, which hang perilously overhead. On cue, there has been some renewed political discussion about why the entire region does not have subterranean power lines, but I suspect as little will come of this as of asking why coastal NY has no protection from floodwater as is the case in, say, the Netherlands. It was only a Category 1 storm and the devastation has been extensive.

Before the storm, people rolled their eyes at the suggestion that this was going to be "the big one". Many stayed in their homes on the barrier islands (flimsy strips of sand sticking out in the Atlantic) to wait out the storm instead of moving inland. After the storm, rescue teams had to go out to areas in Zone A that should have evacuated in order to save the residents who had refused to do so. It was too late for some. Was it a massive media failure that more people panicked prior to Hurricane Irene, which caused comparatively minimal damage last year, than prior to Sandy, whose effects have been devastating? In the aftermath of Sandy, especially with the upcoming presidential elections, the country will ruminate for some time on why and how our infrastructure buckled so easily under the pressure of the storm. But will we learn from it?

Hurricane Sandy: Inside Out

Blackout

As an anthropologist of technology and digital lifestyles, it was certainly an experience to be forced into a low-tech existence for a few days. Most people, myself included, like to think that they are prepared for "the worst" when they probably are not. In light of recent events, it might even be hasty to mock those doomsday shut-in weirdos on the Discovery Channel. For example, many people here stood in line for hours to buy expensive emergency generators from the hardware store before the storm, only to be hit with a gas shortage and left unable to fuel them in the days after. They stocked up on food and drinking water, but when the power lines went down, fridges had to be emptied. Some household water supplies were tainted by sewage from rising flood waters, and others lost gas for cooking and heating. We don't think of these basic necessities as "high-tech" modern luxuries, but it took days for local authorities to get their systems up and running to supply energy and even to figure out where the broken transformers and downed lines were.

Since I spend a majority of my life in some state of digital connectivity or other, immediately after the power cut I was struck by an imprecise feeling of emptiness at not being able to get online. All landline telephone, internet service and wifi networks died instantly. My cell phone service also went down or turned spotty for 4 or 5 days until AT&T shared its lines with T-Mobile. Ironically, AT&T service got my phone working for calls but blocked data connections, so I could not browse any websites on my phone. By day 2 without power, I was irritated that I couldn't turn my TV on to see some news about the storm. I found an old transistor radio powered by a 9V battery (yes, both of those things still exist. Who knew?), which became my only channel of information during the storm, and it was frustratingly non-interactive. The hosts kept saying things like "look at these images coming in", "here is a chart" and "have you ever seen anything like this?"

I spent the cold nights (no heat) trapped inside reading or listening to music in between radio reports. Both of these are activities that, I now realize, have become habitually bundled up with other tasks that I'm normally doing online. Sitting in the dark doing either on its own felt irritatingly monotonous without the obsessive clicking between windows that normally accompanies my work habits. "Just" reading or "just" listening to music is no longer a full activity. It feels unproductive, like I'm not doing anything at all. The false sense of productivity I otherwise get from being connected to the web is, I'm pretty certain, a product of the induced ADD of a life permanently online, and it was probably good to disconnect for a while to become more aware of it.


By the evening of day 3, I was surprised to find that my need to be connected had all but dissipated, except for a nagging feeling that I "should" want to get online to get information. But information about what? Other than checking my mail and sending a few emails, nothing was too pressing. With the power out and everything locally at a standstill, it was hard to imagine anything substantial going on in the rest of the world that I had any real urgency to know about. I kept thinking of things I would normally have Googled immediately, but instead had to keep a short list to catch up on later. Yet there was a palpable sense of desperation from people I encountered everywhere: always staring at their phones, refreshing their Facebook apps and longingly charging their laptops, as if we were all missing out on something essential. As the days passed, that seemed to gradually fade, replaced by more pressing concerns of immediate survival as supermarket shelves emptied, gas lines got longer and nights grew colder.

Hurricane Sandy: No Gas For You

Disconnect

What I found intriguing about the media coverage is that in the multitude of speeches from NJ Governor Christie and NYC Mayor Bloomberg, listeners were urged to log on to city, utility and charitable websites for vital information affecting their survival: where to get gas/food/clothes/water; when power would be restored; how to survive harsh winter conditions with no heat; in which areas the water was no longer safe to drink; what bridges and tunnels were closed. And yet with mobile web and internet down, it was impossible for nearly 3 million people to log on and get this information. I have to wonder how many other T-Mobile users lost web access when AT&T blocked it.

Have we lost our ability to successfully go low-tech to transmit information? LIPA (Long Island Power Authority) is notorious for its inability to move at a suitable pace in restoring power. Throughout the disaster, it neglected to even provide service estimates of when electricity would be restored on the Island, whereas all other power authorities in the region were doing so on at least a daily basis. The LIPA emergency hotline went down for days and most people were unaware that you could text message in reports about fallen trees and power lines.

Still, news outlets proclaimed: go online for information. The NYC Mayor's office Twitter account was updated regularly, but since I had no web access for almost 6 days, I was none the wiser apart from radio news voices insisting that this was the case. Calls for volunteers gave web addresses instead of phone numbers. It was truly frustrating to feel disconnected in my own town even when everyone else was without power, too. The frustration of disconnection – whether web-based or just a generalized feeling of being lost – seemed to affect the other residents of my town as well.


Shops and supermarkets with generators stayed open throughout the disaster. Some offered "charging stations" with multiple outlets for people to top up their phones, laptops, tablets and devices. These became popular congregating areas for commiserating locals as well as essential power sources. Information transmitted between phone-charging patrons was just as significant as the public broadcasts on the radio: Which stations still have gas? How long are the lines? Does the supermarket have bread? Water? Candles? Stories of price-gouging, exploded generators and fridges full of rotten food united small groups of previously unknown neighbors tethered to the wall via their charging cables.

While Google was putting together a fairly useful interactive crisis map, only a privileged few with functioning smart phones and apps could make use of it. The majority with no power and those with cheap dumb phones had to utilize street-based networks and word-of-mouth to get by. Did temporarily disconnecting from the web lead people to reconnect with their offline "communities"? I doubt it. Actually, a lot of people were acting like apocalyptic asshats. But I venture that those recovering from the storm will agree that honing our low-tech information gathering skills so that we can function in future states of disconnection is not an altogether bad idea.

Why newer is not better: notes from laptop shopping

(NB: This tech rant was written 9 months ago. It's a little dusty.)



After years of mulling it over, I finally managed to purchase a new laptop. The sad fate of a tech-loving anthropologist in the concluding months of a PhD program is that cash flow is at an all-time low, while new gadgetry is at its shiniest. Even on my spartan budget, however, such is the diversity of the general consumer marketplace that I was able to maximize my hardware options at competitive prices. In the end, I managed to procure a brilliantly performing Win7 laptop for under $500. It isn't the MacBook Pro I had so hoped to acquire, but it has proven reliable. Even Windows 7 is not too shabby.

The process of buying a new laptop or computer today is complicated given the range of products, brands and features available. I am often asked for advice on this subject, which inevitably becomes a very personalized process of narrowing down needs and wants against a set budget. Novices who are unsure about computers in general are invariably overwhelmed by the choices and frustrated when they end up with something they don't want (or that turns out to be "too complicated"). It's even more difficult when you know exactly what you want and need it to come out of a reasonably priced box. It therefore took me about 4 months to narrow down a system and specifications within my price range (a target that crept toward obsolescence with each month that passed), and I remained undecided until the last moment.

I had never purchased a laptop in a physical store before, having previously ordered online. I realize now how lucky I was that the product which arrived at my door in 2002 was a quality machine with a decent build and intuitive design that suited my needs. I now fastidiously check local shops before purchasing products online to test their build quality and suitability. I spend more waking hours interacting with computers than any other activity, so I am very picky when the planets miraculously align and I can afford a new purchase. In short, design is everything. Drives and memory can be updated, but you're stuck with the shell. This is where tactile perception counts. It has to feel right.

Although I have been in constant contact with innovative technologies, brand new hardware and cutting-edge devices in the course of my research, this was my first major, personal acquisition in over 8 years. I am happy with the speed and flexibility of my new system, but I miss some design features of the old one. In my shopping experience, I must have literally tested hundreds of laptops of different specs and models. Here are my impressions:


I'm about to say something nice about Windows

(This post is about 9 months old and has been sitting in draft. It made me laugh to read it again, so I’m posting it now).




Turn off all the excessive babysitter prompts ("are you sure you want to allow this?"), forgive the hefty footprint (standard HDD sizes on new machines can take it), and you're left with the one feature of Windows 7 that makes it my new best friend: improved file search.

Finally coming out of the dark ages and catching up with Spotlight for file indexing and searching, the lightning-fast full-text search means that regardless of whatever quirky way I've decided to organize my multitude of academic files, articles, books, notes, fieldnotes, web junk and miscellaneous unsorted mess, I can find it in a few keystrokes. I can even get it to trawl my Zotero database along with my other files. Academics, rejoice. And whatever you do, don't listen to the advice telling you to disable the file indexing system to save resources and speed up the OS. Slim down everything else, but not this.

What makes Win 7 file search worth blogging about is that all I'm really dependent upon to get my PhD done efficiently is a word processor and a browser. I didn't have any fieldnote software like NVivo when I was in the field, so all my files are in separate Word and Excel docs, in Zotero, zip archives or simply photo and video files. My folder tree is fairly logical, but my PDFs, articles, books and resources are scattered across folders corresponding to 8 years of Higher Education and hundreds of subject headers. More file creation and management software never helped. In fact, I discovered a long time ago that I don’t need or want more software to manage it all. That just produces an even more fragmented mess plus ties me in to costly proprietary software. All I need is a good search system.
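Desktop search tools like Windows Search and Spotlight keep a pre-built index so lookups are near-instant, but the underlying idea is just full-text matching over a folder tree. As a rough illustration of why a single search box can replace a stack of file-management software, here is a minimal brute-force sketch in Python (no index, hypothetical function name, and it only reads plain-text formats):

```python
import os

def search_files(root, query, extensions=(".txt", ".md")):
    """Naive full-text search: walk a folder tree and return paths of
    files whose contents contain the query (case-insensitive).
    Real desktop search consults a prebuilt index instead of scanning."""
    query = query.lower()
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith(extensions):
                continue  # only attempt plain-text formats
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    if query in f.read().lower():
                        matches.append(path)
            except OSError:
                continue  # unreadable file; skip it
    return matches
```

An indexed search scales far better than this linear scan, and real tools also parse PDFs, Office documents and metadata; the point of the sketch is only that the search layer, not the folder hierarchy, does the organizational work.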

Congrats, Microsoft. You've made your first improvement since Windows 3.1 (I still have a copy; too bad it won’t dual boot). I'm impressed.

Windows 3.1
There are many other aspects of Win 7 to rate. Not all would receive such a glowing review, but I work faster and more efficiently than I did before, which is better than a poke in the eye (read: Vista).





Welcome to Analog/Digital (http://analogdigital.us)

The decision to move to a new domain and upgrade the appearance of this site has been pending for over a year, but I have consistently been sidetracked by other commitments. Continued time pressures at present will inevitably result in a staggered launch with new content to follow soon.

The technical stuff

The design changes are mainly aesthetic (a fresh coat of paint), including improved navigation (new menu bar, dual sidebars). I also provide more links to my other content from around the web (Del.icio.us, Twitter, Flickr, Collected.info, OAC) and new page elements featuring recent comments from readers like you. Under the hood, the code is better composed than the previous, ancient (in internet terms) site layout, which means faster page loads.

The re-design was therefore motivated by practical and infrastructural as well as aesthetic concerns. This page should now be compliant with all browsers and resolutions (although it is optimized for resolutions 1024x768 and higher).

More good news is that old page links should still work. You will, however, need to update your RSS feed readers to be sure that you’re receiving the latest content. Click here for the new posts feed, or follow the links at the top of the page next to the RSS icon.

While I am still tweaking, papering the walls, laying the carpets, etc, please feel free to let me know if anything looks forgotten or irreparably broken.

Why Analog/Digital?

Analog/Digital reflects my perception of new technologies as continuous with existing lifestyles. We are all perpetually skirting the line between now and then, past and future, analog and digital. My efforts at contributing to an anthropology of the internet and web-based media take this on board as the starting point for understanding the ongoing human engagement with technology.

I hope that you will find these site changes to be an overall improvement. As my personal and thesis-writing schedules calm down, I will be adding new content that has been waiting half-finished in a queue of unpublished posts for far too long.

[Image: steampunk synthesizer. Source.]

Unlocking Digital Cities

The November issue of Wired Magazine (UK) features "Unlocking the Digital City", a series of articles exploring how new technologies have transformed - and are continually reinventing - urban life and urban landscapes. The entire issue is worth reading. Below are excerpts from three perspectives on the promises and realities of the digital age in urban environments. (This blog post has been cross-posted on the OAC. Discuss it here).

'Sense-able' urban design
Scholars back in 1995 speculated about the impact of the ongoing digital revolution on the viability of cities. Only 14 years ago, the mainstream view was that, as digital media and the internet had killed distance, they would also kill cities. Technology writer George Gilder proclaimed that "cities are leftover baggage from the industrial era" and concluded that "we are headed for the death of cities", due to the continued growth of personal computing, telecommunications and distributed production. At the same time, MIT Media Lab's Nicholas Negroponte wrote in Being Digital that "the post-information age will remove the limitations of geography. Digital living will include less and less dependence upon being in a specific place at a specific time, and the transmission of place itself will start to become possible."

In fact, cities have never prospered as much as they have over the past couple of decades. China is currently building more urban fabric than has ever been built by humanity. And a particularly noteworthy moment occurred last year: for the first time in history more than half the world's population - 3.3 billion people - lived in urban areas.

The digital revolution did not end up killing our cities, but neither did it leave them unaffected. A layer of networked digital elements has blanketed our environment, blending bits and atoms together in a seamless way. Sensors, cameras and microcontrollers are used ever more extensively to manage city infrastructure, optimise transportation, monitor the environment and run security applications. Advances in microelectronics now make it possible to spread "smart dust" networks of tiny, wireless, microelectromechanical system (MEMS) sensors, robots or devices. [Read more ...]

Words on the Street
Over the last decade a great number of people on Earth have embraced the digital mediation of everyday life. Without considering the matter with any particular care, as individuals or societies, we have installed devices in our clothing, our buildings, our vehicles and our tools which register, collect and transmit extraordinary volumes of data, and which share this data with the global network in real time.

Under such circumstances, it is only natural that a great many of these systems will be used in the planning and management of cities. In the interest of managing traffic and, ostensibly, enhancing public safety, our streets are ringed with networked cameras, salted with embedded sensor grids. We traverse urban space in networked vehicles that are GPS-tracked and leased to us as hourly services like Vélib' and Bicing and City CarShare, or tap our way on to mass transit with RFID-enabled payment cards like London's Oyster. [Read more ...]

Your Neighborhood is Now Facebook Live
... Miriam "went to the Flea" (the flea market, I presumed). Out on the street a few minutes later, Eva herself appeared, violin case slung over her shoulder. It wasn't until we bumped into Miriam a few blocks later, bags full of second-hand trinkets, that it hit me: my Brooklyn neighbourhood had become Facebook Live.

Conventional wisdom says that technology is bad for real-world communities, that we are often alone at home in front of blue screens. This is no doubt true. But we are also out on the street stealing glances at smaller screens, and interacting in more meaningful ways because of it. When it comes to technology and cities, today's thrilling development - "thrilling", that is, if you like real cities and corporeal people - is that social networking is enhancing urban places. I may have been only affirming face-to-face the interactions I just had in cyberspace, but that act was significant for the future of our cities.

The bandwidth of urban experience has increased. The ancient ways are still there: the way a place looks, the neighbours we wave at and the hands we shake. But now, there is an electronic conversation overlaid on top of all that: tweets and status updates, neighbourhood online message boards, detailed mobile electronic maps, and nascent applications that broadcast your location to your friends. This is far more interesting than what we were promised a decade ago: the proverbial coupon blinking on your mobile as you walk past Starbucks. (I have yet to experience this.) [Read more ...]

Discuss at Urban Anthropology.



The fun part of past projections of miraculous household devices of the future is how they are always cloaked in technologies of the time, like the stunning circuit board feature in this video. The computers here are even discussed as sentient beings that can inform their owners and be informed of transactions. Particularly endearing is how the wife's machine is designed for child monitoring and home-shopping, while the husband's machine is designed for finance, record-keeping and worldwide communication. Still, this is a pretty accurate (for its time) rendition of what the Internet would be like, only without any real notion of cyberspace or a concept of digital storage. I think you will agree that the acting is exceptional (especially the simulated print scene).

I had to add this second video of the technology as it was realized a couple of decades later, when computers had finally matured to "tools of the human spirit". The electronic screen means you're connected ... to a computer network called "Internet".



For more insight into the true evolution of the web (read: hilarity will ensue), I recommend reading the comments on YT for each of these videos.

The End of an Era?

Insider technology news - with its detailed web-trend tracking and analysis - can be responsible for somewhat misleading predictions about fundamental changes in the very fabric of Internet space and time, such as the designation of Web 2.0 and the social web as "new eras" in communication. Such grandiose terminology suggests a drastic break with the past, rather than emphasizing continuity or overlap in the shape or form of new media and related trends. At the same time, however, insider tech blogs are great venues for putting new web start-ups into perspective, at times paradoxically free of the idealism inherent in both popular media (Twitter, case in point, ad nauseam), and social research.

In the end, the so-called "social web", web-of-the-people, etc., is still subject to industry swings and financial roundabouts, which inevitably outweigh both permanence and innovation in equal measure. User-generated content and social networking are popular buzzwords that highlight egocentricity in everyday Internet use, thus enticing swarms of anthropologists, but user behavior is only one side of the Web 2.0 coin. I venture to say that techie reality changes at a much faster pace than the social sciences do, and by the time we get a handle on new media as they blow up, the industry has moved on. For instance, while MySpace has become a prolific stomping ground for Internet anthropologists, the techies seem ready to wipe their hands and turn up their noses: "Like an ’80s rock band, MySpace’s time has come and gone" (see below) and "MySpace is dead - the Internet is growing up" (RWW).

For anthropologists, new media is about socialization, communication, bonding, age-relative culture or sub-cultures, networking and interpersonal relationships. For insiders, the social web is a marketplace. The new prediction therein is that the only market left for massive social networking sites is divided into niches or designated corners in which they can sustain a profit:
"Folks, what we are seeing is an end of general purpose, broad social networking. Finally, after nearly two years of us saying so, social is now simply part of the web fabric".
The funny thing is, anthropologists came to the same conclusion, but by mostly different means ... and over a decade ago.


Full article from GigaOM:

With MySpace Changes, a Social Networking Era Ends

The legendary New York Yankees catcher Yogi Berra is rumored to have said about a restaurant: “Nobody goes there anymore because it’s too crowded.” That is precisely how I feel about MySpace, which apparently has a lot of visitors, especially in the U.S., where it is marginally ahead of Facebook, but no one I know actually uses it.

Things are only going to get tougher — Google’s deal with News Corp is going to end soon, and with it a steady spigot of cash will be turned off for a service that is struggling to grow revenues. Like an ’80s rock band, MySpace’s time has come and gone. And nothing reflects that more than the exits of MySpace CEO Chris DeWolfe and his long-time cohort, President Tom Anderson. DeWolfe ran the company from 2003, helped sell it to News Corp for $580 million in 2005 and later helped negotiate a $900 million advertising deal with Google. Since then, MySpace has lost its buzz to Facebook (which is in turn losing that buzz to Twitter). It attempted to become an app platform, but that hasn’t worked out as well. Being a media entrepreneur, I have religiously studied Rupert Murdoch’s career. At the first sign of diminishing returns, Murdoch puts a media entity up for sale, and tries to swap his tin mine for one producing gold. He tried to do that when he attempted to pawn off MySpace to Yahoo.

The clock has been ticking on MySpace and its executives. Earlier this year, COO Amit Kapur and two other long-time MySpace employees left the company because they couldn’t get the contracts they wanted. Their exit was spun by News Corp. After reading various accounts of DeWolfe’s exit, you can see they left Chris out to dry — something I find particularly distasteful.

Regardless of his exit, there is a strategy in place that could turn MySpace into a decent enough money maker: MySpace Music. By looking to the social network’s musical roots, MySpace executives realized that they could build the MTV of the broadband generation. Combining text, audio, video, and social abilities with its audience, MySpace can thrive as a niche yet lucrative musical destination. A lot has to go right for that to happen, though. I have outlined a long list of reservations about MySpace Music.

Back in November 2008, Kevin Kelleher noted, “Social networks spent too much time trying to build audiences without building a solid business model.” With a recession raging and the advertising market in a slump, the social networks have to figure out business models — fast. For MySpace it could mean capturing music industry dollars. MySpace wouldn’t be the first social network looking for niche riches. Hi5, a San Francisco-based social network that’s popular outside of the U.S., recently cut half of its workforce and is said to be pivoting into becoming a social gaming destination. Others are going to soon follow. Folks, what we are seeing is an end of general purpose, broad social networking.

Finally, after nearly two years of us saying so, social is now simply part of the web fabric. Facebook founder Mark Zuckerberg recognized that and since then has been pushing hard on Facebook Connect, which is a simple authentication method that also allows granular social interactions to be embedded in non-Facebook services. With over 200 million Facebookers, Mark has somewhat of a future.

DeWolfe should take this unceremonious exit as a blessing in disguise. Or as Yogi would say, “It gets late early around here…”

[emphasis added]

[Edit: Yet another end of an era ...]

Web 3.0: because Web 2.0 is so 2008 ...

Faster than you can say "social networking sites", Web 2.0 has been rebuilt and we have now arrived/are about to arrive/may soon arrive at Web 3.0. And may I say, it's about time.

According to Jason Calacanis, "Web 3.0 is defined as the creation of high-quality content and services produced by gifted individuals using Web 2.0 technology as an enabling platform".

Of course, there are other definitions (Wikipedia is a mess with them).

I like Calacanis' attention to the oft-overlooked quality of content as a turning point, rather than the type and appearance of website interfaces, but the "next generation" of the web will by necessity incorporate changes in both. This fits in well with some of my earlier complaints about Wikipedia's empty calories. Incidentally, I've been somewhat vindicated, as even Wikipedia has recognized its own faults in this way. So:

Web 3.0 throttles the “wisdom of the crowds” from turning into the “madness of the mobs” we’ve seen all too often, by balancing it with a respect of experts. Web 3.0 leaves behind the cowardly anonymous contributors and the selfish blackhat SEOs that have polluted and diminished so many communities.
The idea, then, is that Web 3.0 is the bastard child of Web 2.0 and The Age of Reason: a rebuild which will try to rectify rampant (or at least superfluous) idiocy by reverting to the expert-layman divide, which, way back when, used to make up something called, oh, what was it ... knowledge? He is optimistic in suggesting that, given the present state of the Internet as we know it, such a transformation can happen at all, but I'll keep my eye on it just in case. Still, in a counterintuitive sort of way, it makes sense that some manner of control will naturally begin to exert itself over the dissociative entanglement of content and context propagated by the fragmentary and participatory nature of Web 2.0. It is almost inevitable in light of the enhanced networking capabilities and reliance on expert information, recommendation and collaboration.

I enjoy the idea that a natural outcome of the insurmountable chaos that is Web 2.0-level Internet will be simplicity, hierarchy and order. But if that were true, then such would have been the case with the move from the oversaturated Web 1.9.9.9, when the floodgates burst open and Web 2.0 took over to make content more participatory, allowing us all to index the web the way we want it. The history of the Internet is fascinating because, like human history, the major points are recurring. The ideals of the original free-thought network (geek playground) that was the Internet, back in its earliest days, had, by the late 90s, been overshadowed by corporate mining/raping of the web for advertising. Free services and information became a thing of the past around 2000-2002, but now the open source movement takes us full circle, nouveau-geeks taking on board the desire to re-democratize the web to fulfil every user's specialized needs.

So that is an obvious second feature of a predicted Web 3.0: open source; freedom of information. Everyone will presumably soon have the basic skillset for a little re-programming and fine-tuning to make things suit themselves. There will always be more free downloads, updates, plug-ins, customizations and add-ons.

At the Technet Summit way back in 2006, Jerry Yang, Yahoo founder, stated:

You don't have to be a computer scientist to create a program. We are seeing that manifest in Web 2.0 and 3.0 will be a great extension of that, a true communal medium...the distinction between professional, semi-professional and consumers will get blurred, creating a network effect of business and applications.
In my opinion, the "true communal medium" may be achievable in theory, but so far it fails in execution. Like the idea of "ubiquitous computing", which I agree we are inching towards, at least conceptually, it is somewhat of a gratuitous term for a more blurred reality. This is mostly because there are still gaps in user knowledge and desires at all ages and levels. Not everyone feels the need for ubiquity, particularly those who do not understand the tools, or those who find them more time-consuming than analog alternatives (using Remember The Milk, say, instead of sticking a Post-it to the fridge). If your life isn't already digitally integrated, there is still a threshold of change which might be easier to surmount now rather than in the future, when Web 3.0 integration becomes watertight. Ironically, some of my students who own iPods can work iTunes but can't figure out an email account.

A side effect of the growing intensity of specialist interests within Web 2.0 has therefore been the strengthening of the 1337/noob divide. No longer solely the purview of gifted cybernerds working out of their basement, the tools to create content on the web are more readily available to us all. Still, the content creator:viewer ratio remains skewed in favor of the latter. For instance, very few people upload videos to YouTube in comparison to the numbers who watch them. In the same vein, there are many people who download music or go on Facebook but don't consider themselves "Internet users" at all. So, ironically, as the means to access the tools to create content on the web become increasingly democratized, niche specializations in whatever realm - gaming, web design, hacking, SNS, chat - lead to increasing accusations of noobism and inferiority. A self-ascribed, elite, expert class of web users has never disappeared, just been overshadowed by the potentiality of user-generated content (UGC).

Similarly, I think there is a saturation point somewhere towards or after ubiquity becomes a reality. I'm not sure where the threshold is on either side, but I'm pretty sure that it's more like a continuum, anyway. There are only so many blogs and social networking sites (SNS) and microblogs and social bookmarking sites and indexes and RSS feeds and streams and music downloads and transfers and content management tools that we can keep up with, even if we're connected at every minute of the day. We still need to assimilate all the information. I remember that, back in the day, I had over a hundred people on my messenger buddy lists (ICQ, MSN, Yahoo, AIM). One day I realized that upon logging on I had 20 to 30 instant messages, and the input became too great to keep up with, combined with emails and online groups (the precursors to SNS). I started logging on as "invisible", and the symbolism there is self-evident. Today I have three people on one messenger.

Ubiquitous computing sounds so ideal: wherever you go, from your PC to your phone, kitchen to car, bathroom to classroom, your content is there waiting for you. But I still believe that we all have our individual saturation points. So the Web 3.0 version of ubiquity will need to bring about a greater level of organization and one-touch prioritization, reworking superfluous information overload in favor of sleek, streamlined content management. Then I'll buy the whole Web 3.0 thing. Only I won't buy it, of course, I'll download it for free!

Other previewed features of Web 3.0 are enhanced interoperability, mass-networking, interconnected services, and universal, cross-platform accounts. This concept isn't new - the Microsoft/.NET Passport failed to achieve this around 2001, if I recall correctly, and now seems likely to cooperate with the OpenID framework, a step in the right direction. A more unified user authentication system seems unavoidable now because of the sheer number of networking websites, blogging tools, personal organizers, mobile content disseminators, and other overlapping programs which, by sheer volume, are easier to manage from a single portal. What will that portal be? The information "cloud" is so expansive, so cumulonimbus, that the only revolutionary change I can foresee being truly useful will be structurally and architecturally determined - not in a hardware or networked sense, but in an organizational sense. Peer filtering of UGC notwithstanding, something's got to give. If there won't be a single data organizer for cross-platform mining of user-specific information, then at least each individual service should become more interactive and "open" so that the information is easily and quickly interchangeable between platforms. This will not only make each service stronger and more versatile, but better able to withstand the many fickle web generations yet to come.
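A unified sign-on of the kind OpenID promises can be sketched in miniature: one identity provider signs a claim about a user, and any cooperating site verifies the signature instead of maintaining its own password database. To be clear, the function names and the pre-shared-secret scheme below are my own illustration, not OpenID's actual protocol (which works via browser redirects and discovery):

```python
import hashlib
import hmac
import json
import time

# Hypothetical secret shared between the identity provider and a relying site.
SECRET = b"provider-relying-party-secret"

def issue_token(user_id):
    """Identity provider signs a claim so cooperating sites can verify it."""
    claim = json.dumps({"user": user_id, "issued": int(time.time())})
    sig = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "|" + sig

def verify_token(token):
    """Relying party checks the signature; no local password database needed."""
    claim, sig = token.rsplit("|", 1)
    expected = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(claim)
    return None  # tampered or forged token
```

The point of the sketch is the architectural shift: each service only needs to trust one verification step, rather than running its own account system.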

Next stop, more dynamic web content. Despite my love of a good browser, and probably because of my inability to find one, the real future of rich Internet content is outside the 2D box, with cross-platform, third-party software. I believe that we'll see considerable expansion of web-based services through tools like MS Silverlight and Adobe AIR. Instead of browser add-ons, we'll overstep and work beyond the browser to achieve greater interactivity. As a run-of-the-mill Internet user, I'm somewhat averse to heightened desktop functionality because I find it so invasive, but as a web enthusiast, it is clearly the way things are going: it enables the kind of power that new sites and services will require to integrate smoothly and seamlessly with users' machines and lifestyles.

This blurring of the line between the desktop, browser and web has been happening since Java, Flash and Shockwave. If anything will make a single-step jump in revolutionizing Web 3.0, it will be, ironically, shifting off the web to a web-desktop hybrid of interconnectivity. Enhanced service toolbars in our browser are just the beginning of the bridge-building between online and offline content. The oft-cited eBay auction desktop utility, which allows users to edit auctions on their PC and upload them to eBay, is one example. There is something so Juno Version 1.0 (1996) about that!

I'm skeptical as to how effective attempts at Semantic Web and AI will be in the Web 3.0 world. Perhaps in Web 5.0. This is always predicted as the element of the future, but it never gets realized or fulfilled, and frankly it's getting a little sad (sorry, Tim Berners-Lee). AI, machine learning, intelligent agents, I don't know. I'll believe it when I see it - not only in action - but when I see it making a practical difference to my everyday life. Let's admit it, the only way to make computers truly intelligent is to find an intelligent being to program them. That seems to be the sticking point. Besides, I don't want my computer to become self-aware. It's bad enough that it taunts me with those red and green squiggly lines under my text in MS Word. It's called poetic license, stupid machine.

Joking aside, I'm having trouble envisioning the potential application that achieves machine intelligence at a level which is truly useful and doesn't need babysitting. Besides proximity monitors in mobile devices and dictionaries/definition generators, the knowledge base of an AI system still needs to be monitored so it can 'learn'. Where is the big jump, then, that makes this 3.0-worthy? Enhanced data-mining techniques are not enough to make intelligent programming predictions, because they are still based on the same platform - mining the existing web for content. Any predictions based on this would have to be done by some sophisticated algorithm that I can't even begin to fathom. Even "collaborative filtering", which might be more effective, still requires a human intervention factor that makes the whole thing seem counterproductive. Recommendation engines based on mining content are one application that might be expandable, but they are far from perfect, though some (music recommendations, for example) are more impressive than others.
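To make the collaborative-filtering point concrete, here is a minimal user-based recommender: it scores items a user hasn't rated using the ratings of similar users, weighted by cosine similarity. The data and names are invented for illustration; real engines add normalization, implicit feedback, and far better scaling:

```python
import math

# Toy user -> item rating data (entirely hypothetical).
ratings = {
    "ann":  {"jazz": 5, "rock": 1, "folk": 4},
    "bob":  {"jazz": 4, "rock": 2, "folk": 5, "pop": 1},
    "carl": {"jazz": 1, "rock": 5, "pop": 4},
}

def similarity(a, b):
    """Cosine similarity over the items both users have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = math.sqrt(sum(a[i] ** 2 for i in shared))
    norm_b = math.sqrt(sum(b[i] ** 2 for i in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Predict ratings for unseen items, weighted by neighbour similarity."""
    scores, weights = {}, {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], theirs)
        for item, r in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
                weights[item] = weights.get(item, 0.0) + sim
    return sorted(((s / weights[i], i) for i, s in scores.items()), reverse=True)
```

Note where the human-intervention factor sits: the entire prediction rests on people having bothered to fill in `ratings` in the first place.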

Despite my skepticism, I hope that Web 3.0 will see more universal compatibility and database generalization, so that cross-platform and cross-browser compliant import/export facilities exist and function with ease, rendering all forms of content truly fluid.

I think the most accurate definition of what we can expect from the elusive Web 3.0 in contrast to Web 2.0 comes from Eric Schmidt, Chairman and CEO of Google:

Web 2.0 is a marketing term, and I think you've just invented Web 3.0. But if I were to guess what Web 3.0 is, I would tell you that it's a different way of building applications... My prediction would be that Web 3.0 will ultimately be seen as applications which are pieced together. There are a number of characteristics: the applications are relatively small, the data is in the cloud, the applications can run on any device, PC or mobile phone, the applications are very fast and they're very customizable. Furthermore, the applications are distributed virally: literally by social networks, by email. You won't go to the store and purchase them... That's a very different application model than we've ever seen in computing.

This is the clearest extension to what already exists, with a likely development in practical terms. The revolutionary aspects are implied and subtle in their continuity with the current state of the web. In the end, I should admit, I find web "versioning" to be facile and arbitrary. In dividing the web into generations of growth, we are missing the true beauty of it all - the open-ended continuum that is free knowledge sharing, and the various subsets and offshoots of information diffusion. The "transformations" between 1.0, 2.0 and 3.0 are really a matter of degree more than substance; shape and form more than actuality; purpose and intention more than content.

Indeed, even with changes in appearance, design and visualizations throughout the years, the content on the web is mostly the same: photos, video, sound and text. What marked the distinction between 1.0 and 2.0 was self-obsessed egocentricity more than anything else. As we may inch towards the free information highway of ideals that so inspired early Internet enthusiasts, the egocentricity of the social networking generation of the web is all-pervading in our time and not easily effaced.

If Web Beta was marked by newness and optimism, Web 1.0 was a click-happy marketing expansion and .com bubble. That would make Web 2.0 the collaborative and tag-happy ADHD-ridden web. Let's hope 3.0 is a desire to implement sensibility and open source aspirations to fulfil the high hopes of the information society of prehistory - or, in web years, roundabout 1995.

And let's not forget revolution.




WiTricity

Why is this taking so long to achieve?

Wireless energy promise powers up
A clean-cut vision of a future freed from the rat's nest of cables needed to power today's electronic gadgets has come one step closer to reality. US researchers have successfully tested an experimental system to deliver power to devices without the need for wires. The setup, reported in the journal Science, made a 60W light bulb glow from a distance of 2m (7ft).

WiTricity, as it is called, exploits simple physics and could be adapted to charge other devices such as laptops. "There is nothing in this that would have prevented them inventing this 10 or even 20 years ago," commented Professor Sir John Pendry of Imperial College London who has seen the experiments. "But I think there is an issue of time. In the last few years we have seen an exponential growth of mobile devices that need power. The power cable is the last wire to be cut in a wireless connection." Professor Moti Segev of the Israel Institute of Technology described the work as "truly pioneering".

Energy gap
The researchers from the Massachusetts Institute of Technology (MIT) who carried out the work outlined a similar theoretical setup in 2006, but this is the first time that it has been shown to work. "We had a strong faith in our theory but experiments are the ultimate test," said team member Assistant Professor Marin Soljacic. "So we went ahead and sure enough we were successful, the experiments behave very much like the theory."
...
"These results are encouraging. The numbers are not far from where you would want for this to be useful," said Professor Soljacic. The system exploits "resonance", a phenomenon that causes an object to vibrate when energy of a certain frequency is applied. When two objects have the same resonance they exchange energy strongly without having an effect on other surrounding objects. ... Instead of using acoustic resonance, WiTricity exploits the resonance of low frequency electromagnetic waves.

HOW WIRELESS POWER COULD WORK
1) Power from mains to antenna, which is made of copper
2) Antenna resonates at a frequency of about 10MHz, producing electromagnetic waves
3) 'Tails' of energy from antenna 'tunnel' up to 2m (6.5ft)
4) Electricity picked up by laptop's antenna, which must also be resonating at 10MHz. Energy used to re-charge device
5) Energy not transferred to laptop re-absorbed by source antenna. People/other objects not affected as not resonating at 10MHz [source]
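As a back-of-the-envelope check on the quoted 10MHz figure, the resonant frequency of a simple LC circuit follows f = 1/(2π√(LC)). The coil and capacitor values below are purely illustrative guesses chosen to land near 10MHz, not the MIT team's actual components:

```python
import math

def resonant_frequency(inductance, capacitance):
    """f = 1 / (2*pi*sqrt(L*C)) for a simple LC circuit, in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance * capacitance))

# Illustrative (hypothetical) component values:
L = 2.5e-6   # inductance in henries (2.5 uH)
C = 100e-12  # capacitance in farads (100 pF)
f = resonant_frequency(L, C)  # roughly 10 MHz
```

Two coils tuned to the same f exchange energy strongly, which is why, per step 5, untuned bystanders absorb almost nothing.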


