Everynow

by Thomas Vander Wal


We are living in a time where there are not only many concurrent realities existing at once, but our understanding of “Now” is perhaps broader and more broken than at most any time since the Middle Ages started sprouting into the Renaissance. This is nowhere more prevalent than in our understanding of the future, particularly the near future. Future technologies and future living have been part of the reality in and around us for decades. But the time gap between the few edge cases who are living with what most consider future technology and life, and the point when it hits the mainstream, is ever increasing.

The Future is Here…

This stretch between those living with future technologies as a regular part of their lives and those who are not there yet, or who are even living a generation of reality behind, all while living in the same culture, is something I’ve called the everynow. Everynow started about 2004 as a tongue-in-cheek riff on Adam Greenfield’s use of the term everyware. Everynow is the breadth of the reality in William Gibson’s “The future is already here - it’s just not evenly distributed” statement from 2003, an idea many had discussed for years prior, but without such nice phrasing.

Breadth of Adoption Reality

It seems the everynow is about 20 years in breadth.

For years it seemed it was about 10 to 12 years, and it was nice to see this in Steven Berlin Johnson’s wonderful book “Where Good Ideas Come From”, where he talks about the reality of ideas taking about 10 years from inception to relatively broad public use. When you think of internet-based email in the 90s and the use of corporate email internally and out through internet gateways to connect more freely and unencumbered, it took about 5 years for email to reach roughly 99% inside the organization (many organizations were much closer to that 10 year mark). But, in the last couple of years in particular, that 10 years has stretched.

Internet of Things

This past week, with Tom Coates’ Twitter account for his house, @houseofcoates, getting some mainstream media press, what Tom is doing (and has been thinking about and playing with for a very long time) rather echoes things like MisterHouse, which started in the late 1990s using X10 devices and services and internet-enabling them with Perl. The demo site for MisterHouse allowed those on the web to see the live status of lights, messaging, the home music service, and things like whether the window shades were open, and for quite a while it also allowed any of us to modify them right from the web. Tom’s long interest and work with his House of Coates is the latest iteration and extension of this and of his long work on the web of data and internet of things. (By the way, Tom’s work is quite good and worth tracking down.)

The chatter around the Internet of Things, which is still far from mainstream exposure and even partial understanding, goes back nearly 15 years to its first usage by Kevin Ashton, and really took off around 2002 and 2003. Bruce Sterling’s still incredibly valuable framing of the Internet of Things in his Shaping Things book from 2005 added the incredibly helpful concept of Spimes to the conversations, thinking, and development many of us had been wading in for a few years (he actually seeded this in 2004 in a SIGGRAPH presentation, “When Blobjects Rule the Earth”).

Information for Use and Reuse on Mobile

Another example is around mobile… When I think about this mobile explosion that has “taken place recently” there is very little that is different from the thousands of handfuls of us living with smartphones in the early 2000s, thinking through the capabilities and potentials, building them, and living them, all while the many many thousands of us were swimming in the same pool of life with the billions of others around us. Many of us in the U.S. and Western Europe felt we were deeply behind those living in Japan and Korea in their understanding and living of the realities of a life with mobiles that augmented their reality, as the devices and services enhanced the lives lived with them. We were already developing the use of our web-based information on internet-connected Palm and other similar devices back in the 90s.

But, from a consumer and early adopter framing, the stretch from the reality of what is potentially doable in the future, and having that in place and in use for some time now, trailing all the way back to those living in prior realities and their frustrations (which they think are manageable, not realizing how poorly the tools and services are working for them), is pushing that everynow to a very confounding nearly 20 years.


Pieces of Time, Place, Things, and Personal Connections Loosely Joined

by Thomas Vander Wal


There are a lot of people wondering what to do with all the data that is being generated by social tools/sites around the web and the social tools/services inside organizations. Well, the answer is to watch the flows, but the pay-off value is not in the flow, it is in contextualizing the data into usable information. Sadly, few systems have had the metadata available to provide context for location, conversation flow, relevant objects (nouns), or the ability to deal with the granular social network.

How many times have you walked past a book store and thought, “Hmm, what was that book I was told I should check out?” Or, “my favorite restaurant is fully booked, what was the name of the one recommended near here a month or so ago?” When the conversations are digitized in services like Twitter, in Facebook, or the hundreds of other shared services, they should be able to come back to you easily. Add in Skype or IM, which are often captured by the tools, and could be pulled into a global context around you, your social connections, the contexts of interest for those relationships, and the context around the object/subject discussed, and you should have the capability to search your way back to this within relatively easy reach.

Latency from Heavy Computational Requirements

What? I am hearing screaming from the engineers about the computational power needed to do this, as well as the latency in such a system. At Design Engaged 2005 I brought up a similar scenario, within the context of my Personal InfoCloud and Local InfoCloud frameworks, in a presentation called Clouds, Space & Black Boxes (a 500KB PDF). The key then, as it still is, is using location and people to build potential context and preprocess likely queries.

When my phone is sharing my location with the social contextual memory parser service, it sees I am quite near a book store (queue up the parsing for shared books, favorited conversations about books, recent wish list additions as well as older ones, etc.). But it is also the time I usually eat or pick up food for a meal, so restaurant and food conversations get parsed, along with favorited food blogs (bookmarked in delicious, rated on the blogs, copied into Evernote, or stored in Together or DevonThink on my desktop, etc.) to bring up new options or remind me of forgotten favorites.
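As a rough sketch of that preprocessing idea (the place data, topic tags, time windows, and thresholds below are invented for illustration, not from any real service), the service could use location and time of day to queue up likely queries before I ever ask:

from datetime import datetime
from math import radians, sin, cos, asin, sqrt

# Hypothetical store of items I have already saved or talked about, tagged by topic.
SAVED_ITEMS = [
    {"title": "Book recommended by a friend", "topics": {"books"}},
    {"title": "Ritual Coffee Roasters", "topics": {"coffee", "food"}},
]

# Hypothetical nearby places of interest, each cueing a topic to pre-parse.
PLACES = [
    {"name": "Borderlands Books", "topic": "books", "lat": 37.7587, "lon": -122.4211},
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def preprocess(lat, lon, now=None):
    """Build the queries worth running before the person asks for anything."""
    now = now or datetime.now()
    topics = set()
    for place in PLACES:
        if distance_km(lat, lon, place["lat"], place["lon"]) < 0.2:   # within roughly 200m
            topics.add(place["topic"])
    if 11 <= now.hour <= 14 or 18 <= now.hour <= 21:                  # usual meal times
        topics.update({"food", "coffee"})
    return [item for item in SAVED_ITEMS if item["topics"] & topics]

if __name__ == "__main__":
    for item in preprocess(37.7580, -122.4214):
        print(item["title"])

The point is not the math; it is that proximity and routine are enough signal to decide which slices of the captured conversations and favorites to parse ahead of time.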

Now, if we pull this contextual relevance into play with augmented reality applications, we get something that starts bringing Amazon-type recommendations and suggestions into our lives, as well as surfacing information “we knew” at some point to our fingertips when we want it and need it.

Inside the Firewall

I have been helping many companies think through this inside the firewall, to “have what we collectively know brought before us to help us work smarter and more efficiently”, as one client said recently. The biggest problem is poor metadata and a lack of even semi-structured data from RDFa or microformats. One of the most important metadata pieces is identity: who said what, who shared it, who annotated it, who commented on it, who pointed to it, and what is that person’s relationship to me. Most organizations have not thought to ensure that tiny slice of information is available or captured in their tools or services. Once this tiny bit of information is captured and contextualized, the results are dramatic. Services like Connectbeam did this years ago with tags in their social bookmarking tool, and kept it when they extended the ability to add tagging in any service and add context.
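As a minimal sketch of that tiny slice of identity metadata (the field names and relationship weights here are mine, purely illustrative), even a simple record of who shared, annotated, and commented, weighted by closeness to me, changes what surfaces first:

from dataclasses import dataclass, field

@dataclass
class SharedItem:
    """One piece of content plus the identity metadata captured around it."""
    url: str
    title: str
    shared_by: str
    annotated_by: list = field(default_factory=list)
    commented_by: list = field(default_factory=list)

# Hypothetical relationship strengths from me to colleagues (0 = stranger, 1 = close).
RELATIONSHIP = {"alice": 0.9, "bob": 0.4, "carol": 0.1}

def relevance(item: SharedItem) -> float:
    """Weight an item by how close the people around it are to me."""
    people = [item.shared_by] + item.annotated_by + item.commented_by
    return sum(RELATIONSHIP.get(person, 0.0) for person in people)

items = [
    SharedItem("http://example.com/spec", "Integration spec", "alice", commented_by=["bob"]),
    SharedItem("http://example.com/memo", "Planning memo", "carol"),
]
for item in sorted(items, key=relevance, reverse=True):
    print(f"{relevance(item):.1f}  {item.title}")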



Inline Messaging

by Thomas Vander Wal


Many of the social web services (Facebook, Pownce, MySpace, Twitter, etc.) have messaging services so you can communicate with your "friends". Most of the services will only ping you on communication channels outside their website (e-mail, SMS/text messaging, feeds (RSS), etc.) and require the person to go back to the website to see the message, with the exception of Twitter, which does this properly.

Inline Messaging

Here is where things are horribly broken. The closed services (except Twitter) will let you know you have a message on their service on your choice of communication channel (e-mail, SMS, or RSS), but not all offer all options. When a message arrives for you in the service, the service pings you in the communication channel to let you know you have a message. But, rather than give you the message, it points you back to the message on the website (Facebook does provide SMS-chunked messages, but not e-mail). This means they are sending a message to a platform that works really well for messaging, just to let you know you have a message, but not delivering that message. This adds extra steps for the people using the service, rather than making a simple streamlined service that truly connects people.

Part of this broken interaction is driven by Americans building these services with desktop-centric and web views, forgetting mobile is not only a viable platform for messaging, but the most widely used platform around the globe. I do not think the iPhone, which has been purchased by the owners and developers of these services, will help, as the iPhone is an elite tool that is not like the messaging experience of the hundreds of millions of mobile users around the globe. Developers not building or considering services for people to use on the devices or applications of their choice is rather broken development these days. Google gets it with Google Gears and their mobile efforts, as does Yahoo with its Yahoo Mobile services and other cross-platform efforts.

Broken Interaction Means More Money?

I understand the reasoning behind the services adding steps and making the experience painful: it is seen as money in their pockets through pushing ads. The web is a relatively easy means of tracking and delivering ads, which translates into money. But inflicting unneeded pain on customers cannot be justified by money. Pain will only push customers away and leave the services with fewer people to look at the ads. I am not advocating giving up advertising, but moving ads into the other channels, or building solutions that deliver the messages to people who want the messages and not just a notification that they have a message.

These services were somewhat annoying, but they have enough value to keep somebody going back. When Pownce arrived on the scene a month or so ago, it included the broken messaging, but did not include mobile or RSS feeds. Pownce only provides e-mail notifications, and they only point you back to the site. That is about as broken as it gets for a messaging and status service. Pownce has a beautiful interface, with some lightweight sharing options and the ability to build groups, and it has a lightweight desktop application built on Adobe AIR. The AIR version of Pownce is not robust enough with messaging to be fully useful. Pownce is still relatively early in its development, but they have a lot of fixing to do on things that are made much harder than they should be for consuming information. They include Microformats on their pages, where they make sense, but they are missing the ease-of-use step for regular people of dropping that content into their related applications (putting a small button on the item with the microformat that converts the content is drastically needed for ease of use). Pownce has some of the checkboxes checked and some good ideas, but the execution is far from there at the moment. They really need to focus on ease of use. If this is done, maybe people will come back and use it.

Good Examples

So who does this well? Twitter has been doing this really well, and Jaiku does this really well on Nokia Series 60 phones (after the first version of Series 60). Real cross-platform and cross-channel communication is the wave of right now for those thinking of developing tools with great adoption. The great adoption is viable as this starts solving technology pain points that real people are experiencing, and that more will be experiencing in the near future. (Providing a solution to refindability is the technology pain point that del.icio.us solved.) The telecoms really need to be paying attention to this, as do the players in all messaging services. From work conversations and attendees at the Personal InfoCloud presentation, they are beginning to get that the person wants and needs to be in control of their information across devices and services.

Twitter is a great bridge between web and mobile messaging. It also has some killer features that add to this ease of use and adoption like favorites, friends only, direct messaging, and feeds. Twitter gets messaging more than any other service at the moment. There are things Twitter needs, such as groups (selective messaging) and an easier means of finding friends, or as they are now appropriately calling it, people to follow.

Can we not all catch up to today's messaging needs?


Stitching Conversation Threads Fractured Across Channels

by Thomas Vander Wal


Communicating is simple. Well, it is simple at its core: one person talking with another person face-to-face. When we communicate and add technology into the mix (phone, video chat, text message, etc.) it becomes more difficult. Technology becomes noise in the pure flow of communication.

Now With More Complexity

But what we have today is even more complex and difficult, as we are often holding conversations across many of these technologies. The communication streams (the back and forth communication between two or more people) are now often not contained in one communication channel (a channel is the flavor or medium used to communicate, such as AIM, SMS, Twitter, e-mail, mobile phone, etc.).

We are seeing our communications move across channels, which can be good, as this is fluid and in keeping with our digital presence. More often than not, though, we are seeing our communication streams fracture across channels. This fracturing becomes really apparent when we are trying to reconstruct our communication stream. I am finding this fracturing, and the attempt to stitch the stream back together, becoming more and more common for those who are moving into and across many applications and devices with their own messaging systems.

The communication streams fracture as we pick up an idea or need from Twitter, then a direct response in Twitter moves it to SMS, the SMS text message is responded to in regular SMS outside of Twitter, a few volleys go back and forth in SMS text, then one person leaves a voicemail, it is responded to in an e-mail, there are two responses back and forth in e-mail, an hour later both people are on Skype and chat there, and in Skype chat they decide to meet in person.

Why Do We Want to Stitch the Communication Stream Together?

When they meet there is a little confusion over there being no written overview and guide. Both parties are sure they talked about it, but have different understandings of what was agreed upon. Having the communication fractured across channels makes reconstruction of the conversation problematic today. The conversation needs to be stitched back together using time stamps to reconstruct everything [the misunderstanding revolved around recommendations, as one person understands that to mean a written document and to the other it does not mean that].

Increasingly, the reality of our personal and professional lives is this cross-channel communication stream. Some want to limit the problem by keeping to just one channel through the process. While this is well intentioned, it does not meet the reality of today. Increasingly, informal networking leads to meaningful conversations, but the conversation drifts across channels and mediums. Pushing against this natural flow, as it currently stands, does not seem to be the best solution in the long run.

Why Does Conversation Drift Across Channels?

There are a few reasons conversations drift across channels and mediums. One reason is presence: when two people notice proximity on a channel they will use that channel to communicate. When a person is seen as present, by availability or by recently posting a message in the service, it can be a prompt to communicate. Many times when the conversation starts in a presence channel it will move to another channel or medium. This shift can be driven by personal preference, or by putting the conversation in a medium or channel that is more conducive to the conversation style between the people involved. Some people have a preferred medium for all their conversations, such as text messaging (SMS), e-mail, voice on the phone, video chat, IM, etc. Other people have a preferred medium for certain types of conversation, like quick and short questions on SMS, long single responses in e-mail, and extended conversations in IM. Some people prefer to keep their short messages in the channel where they begin, so conversations that start in Facebook may stay there. Other people do not pay attention to message or conversation length and simply prefer conversations in one channel over others.

Solving the Fractured Communication Across Channels

Since there are more than a few reasons for the fractured communications to occur, it is something that needs resolution. One solution is making all conversations open and using public APIs for tools to pull the conversations together. This may be the quickest means to get to capturing and stitching the conversation thread back together today. While viable, there are many conversations in our lives that we do not want public, for one reason or many.

Another solution is to try to keep our conversations in channels that we can capture for our own use (optimally this should be easily sharable with the person we had the conversation with, while still remaining private). This may be where we should be heading in the near future. Tools like Twitter have become a bridge between web and SMS, which allows us to capture SMS conversations in an interface that can be easily pointed to and stitched back together with other parts of a conversation. E-mail is relatively easy to thread, if done in a web interface and/or with some tagging to pull pieces in from across different e-mail addresses. Skype chat also allows for SMS interactions and allows them to be captured, searched, and pulled back together. IM conversations can easily be saved out, and often each item is time stamped for easy stitching. VoIP conversations are often easily recorded (we are asking permission first, right?) and can be transcribed by hand accurately, or relatively accurately via speech-to-text tools. Voicemail can now be captured and threaded using speech-to-text services, or even pushed as an attachment into e-mail by services such as (and similar to) JConnect.
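A minimal sketch of the stitching itself, assuming each channel can be exported or captured as time-stamped messages (the channel names, sample exports, and fields below are invented for illustration):

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Message:
    channel: str      # "twitter", "sms", "email", "skype", ...
    sender: str
    sent_at: datetime
    body: str

# Hypothetical captures pulled from each channel's archive.
twitter = [Message("twitter", "tom", datetime(2007, 8, 1, 9, 2), "Can you look at the draft?")]
sms = [Message("sms", "me", datetime(2007, 8, 1, 9, 10), "Yes, will send notes later.")]
email = [Message("email", "tom", datetime(2007, 8, 1, 11, 30), "Notes received, two questions...")]

def stitch(*streams):
    """Merge per-channel streams back into one conversation thread by time stamp."""
    return sorted((m for stream in streams for m in stream), key=lambda m: m.sent_at)

for m in stitch(twitter, sms, email):
    print(f"{m.sent_at:%H:%M} [{m.channel}] {m.sender}: {m.body}")

Time stamps alone will not resolve who meant what, but they give the reconstructed thread the order that memory of the conversation has lost.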

Who Will Make This Effortless?

There are three types of services that are, or should be, building this stitching of the fractured communications across channels into one threaded stream. I see tools that are already stitching our public (or partially public) lifestreams into one flow as one player in this pre-emergent market (Facebook, Jaiku, etc.). The other public player would be the telecom (or network provider) companies providing this as a service, as they currently provide some of these services; but as their markets get lost to VoIP, e-mail, on-line community messaging, Second Life, etc., they need to provide a service that keeps them viable (regulation is not a viable solution in the long run). Lastly, for those that do not trust or want their conversation streams in others' hands, the personally controlled application will become a solution; it seems that Skype could be on its way to providing this.

Is There Demand Yet?

I am regularly fielding questions along these lines from enterprises, as they are trying to deal with these issues for employees who have lost, or can not put their hands on, vital customer conversations or essential bits of information that can make the difference in delivering what their customers expect from them. Many have been using Cisco networking solutions that have some of these capabilities, but are still not providing a catch-all. I am getting queries from various telecom companies as they see reflections of where they would like to be providing tools in a Come to Me Web or facilitating bits of the Personal InfoCloud. I am getting requests from many professionals that want this type of solution for their lives. I am also getting queries from many who are considering building these tools, or pieces of them.

Some of us need these solutions now. Nearly all of us will need these solutions in the very near future.


The Future is Now for Information Access

by Thomas Vander Wal


An interview with Microsoft's Steve Ballmer in the San Francisco Chronicle regarding Steve's thoughts about the future of technology, information, and Microsoft (including their competition) sparked a few things regarding the Personal InfoCloud and Local InfoCloud. It could be the people I hang out with and the stay-at-home parents I run across during the day, but the future Ballmer talks about is happening now! The future will be more widely distributed in 10 years, but the desire and the devices are in place now. The thing holding everything back is content management systems that are built for the "I Go Get Web" and the people implementing those systems who see technology and not a web of data.

Let's begin with Ballmer's response to the question, "Ten years from now, what is the digital world going to look like?" To which Ballmer responds:

People are going to have access to intelligence in multiple ways. I'm going to want to have intelligence in my pocket. I'm going to want to have intelligence in my TV. I'm going to want to have intelligence in my den and in my office. And what I may want in terms of size, of screen size, of input techniques, keyboard, handwriting, voice, may vary.

I think what we'll see is, we have intelligence everywhere. We have multiple input techniques, meaning in some sense you may have some bit of storage which travels with you everywhere, effectively. Today, people carry around these USB storage devices, but you'll carry around some mobile device.

The problem is that people have the devices in their pockets today, in the form of Blackberries, Treos, Nokia 770s, and just regular mobile phones with browsing and syncing. The access to the information is in people's pockets. The software to make it simple, with few clicks, is where the battle lies. My Palm OS-based Treo 650 is decent, as it takes only a few clicks to get me to my information. My friends with the Windows version of the same device have six or more clicks for basic things like the calendar and address book. Going through menus is not simplicity. Going directly to the information that is desired is simplicity. A mobile device needs simplicity, as it is putting information in our hands in new contexts and amid other tasks we are trying to handle (driving, walking, meeting, getting in a taxi, getting on a bus, etc.).

The Information

Not only does the software have to make it simple to access information in our Personal InfoCloud (the information that we have stated we want and need near us, and have structured in our personal framework of understanding); we also interact with the Local InfoCloud, which is the information sources that are familiar to us and with which we have set a means of easing interaction (cognitively, physically, or mechanically).

This "intelligence" that Ballmer refers to is information in the form of data. It needs to be structured to make solid use of that information in our lives. This structure needs to descend below the page level to at least the object level. The object level can be a photo with its associated metadata (caption, photographer, rights, permanent source, size, etc.), event information (event name, location, date and time, permanent location of the information, organizer, etc.), or full-text and partial-text access (title, author, contact info, version, date published, rights, headers, paragraphs, etc.).
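A rough sketch of what object-level structure could look like (the field names are illustrative, not any particular standard), where each object carries its own metadata instead of being flattened into the page:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class PhotoObject:
    """A photo as a discrete object with metadata attached, not just page decoration."""
    caption: str
    photographer: str
    rights: str
    permanent_source: str   # a stable URL for the object itself
    width: int
    height: int

@dataclass
class EventObject:
    """An event as a reusable object a calendar application could ingest directly."""
    name: str
    location: str
    start: datetime
    permanent_location: str
    organizer: str

page_objects = [
    PhotoObject("Keynote stage", "A. Photographer", "CC BY", "http://example.com/photos/123", 1024, 768),
    EventObject("Personal InfoCloud talk", "Portland, OR", datetime(2006, 7, 21, 10, 0),
                "http://example.com/events/42", "WebVisions"),
]

# Any object on the page can now be pulled out and reused on its own.
for obj in page_objects:
    print(type(obj).__name__, "->", obj)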

These objects may comprise a page or document on the web, but they not only have value as a whole, they have value as discrete objects. The web is a transient information store for data and media; it is a place for this information and these objects to rest on their journey of use and reuse. People use, and want (if not need) to use, these objects in their lives. Their lives are comprised of various devices with various pieces of software that work best in their life. They want to track events, dates, people, ideas, media, memes, experts, friends, industries, finances, workspaces, competition, collaborators, entertainment, etc. as part of their regular lives. This gets very difficult when there is an ever growing flood of information and data bombarding us daily, hourly, consistently.

This is not a future problem. This is a problem right now! The information pollution is getting worse every moment we sit here. How do we dig through the information? How do we make sense of the information? How do we hold on to the information?

The solution is using the resources we have at our fingertips. We need access to the object-level data and the means to attach hooks to this data. One solution that is rising up is Microformats, which Ray Ozzie of Microsoft embraces and has been extending with his Live Clipboard, which is open for all (yes, all operating systems and all applications) to use, develop, and extend. The web, as a transient information store, must be open to all comers (not walled off for those with a certain operating system, media player, browser, certain paid software, etc.) if the information is intended for free usage (I am seeing Microsoft actually understand this and seemingly embrace it).

Once we have the information and media we can use it and reuse it as we need. But, as we all know, information and media are volatile, as they change (for corrections, updates, expansion, etc.), and we need to know that what we are using and reusing is the best and most accurate information. We need the means to aggregate the information and sync the information when it changes. In our daily lives, if we are doing research on something we want to buy and we bookmark it, should we not have the capability to get updates on the price of the item? We made an explicit connection to that item, which at least conveys interest. Is it not in the interest of those selling the information to make sure we have the latest price, if not changes to that product? People want and need this. It needs to be made simple. Those that get this right will win in the marketplace.

What is Standing in the Way?

So, the big question is, "What is standing in the way?" To some degree it is the tools with which we create the information, and some of it is people not caring about the information, data, and media they expose.

The tools many of the large information providers are using are not up to the task. Many of the large content management systems (CMS) do not provide simple data structures. The CMS focusses on the end points (the devices, software, tools, etc.), not the simple data structures that permit simple, efficient use and reuse of the objects. I have witnessed far too many times a simple, well structured, relatively small web page (under 40KB) get turned into an utter mess that is unstructured and large (over 200KB). Usable, parseable, and grabbable information is broken by the tools. The tools focus on what looks good and not what is good. Not only is the structure of the data and objects broken, they are no longer addressable. There are very few CMS that get it right, or let the developers get it right (one that gets it right is Axiom [open disclosure: I have done work with Siteworx, the developer of Axiom]).

The other part of the problem is the people problem, which is often driven by not understanding the medium they are working within. They focus on the tools, which are far from perfect, and do not care enough to extend the tools to do what they should. Knowing the proper format for information, data, media, etc. on the web is a requirement for working on the web, not something that would be nice to learn someday. Implementing, building, and/or creating tools or content for the web requires understanding the medium and the structures that are inherent to building it well. I have had far too many discussions with people who do not understand the basics of the web or the browser, which makes it nearly impossible to explain why their implementation fails. Content on the web is required to be structured well and the pages efficiently built. The pages need to degrade gracefully by default (not with an $80,000 plug-in). Media on the web that is for open consumption must work across all modern systems (this should be a 3 year window, if not longer, for the "modern" definition).

Summary

So what is the take away from this? Content needs to be built with proper structure to the sub-object level (objects need the metadata attached and in standard formats). The content needs to be open and easily accessed. Portability of the information into the tools people use that put information in our pockets and lives must be done now. We have the technology now to do this, but often it is the poorly structured or formatted information, data, media, etc. that stands in the way. We know better and for those that don't know yet the hurdle is quite low and easy to cross.


The Come To Me Web

by Thomas Vander Wal


Until May of 2005 I had trouble with one element in my work around the Model of Attraction and Personal InfoCloud (including the Local and Global InfoClouds as well) to build a framework for cross-platform design and development of information and media systems and services. The problem was the lack of an easy explanation of what changes have taken place in the last few years on the web and in other means of accessing digital information. In preparing for a presentation I realized this change is manifest in how people get and interact with digital information and media.

This change is easily framed as the "Come to Me" web. The "Come to Me" web is not interchangeable with the push/pull ideas and terms used in the late 90s (I will get to this distinction shortly). It is a little closer to the idea of the current "beyond the page" examinations, which most of us who were working with digital information pre-web have always had in mind in our metaphors and ideologies, like the Model of Attraction and InfoClouds.

The I Go Get Web

Before we look at the "Come to Me" web we should look at what preceded it. The "I Go Get" metaphor for the web was the precursor. In this incarnation we sought out the providers' information. The focus was on the providers of the content, and the people consuming the information (or users) were targeted and lured in; in the extreme, people were drawn in regardless of their interest in the information or topic covered. The content was that of the organization or site that provided the information.

This incarnation focussed on people accessing the information on one device, usually the desktop computer. Early on, the information was developed for proprietary formats. Each browser variant had its own proprietary way of doing things, based around a few central markup tags. People had to put up with the "best viewed with X browser" messages. Information was also distributed in various other proprietary formats that required software on the device just so the person could get to the information.

The focus of providing information was to serve one goal (or use): reading. Some of this was driven by software limitations. But it was also an extension of information distribution in the analog physical space (as opposed to the digital space). In the physical space the written word was distributed on paper and consumed by reading (reuse of it meant copying it for reading), and it took physical effort to reconstruct those words to repurpose the information (quoting sources, showing examples, etc.).

The focus was on information creation, and the struggle was making it findable. On the web there were only limited central resources for finding information, as many of the search engines were not robust enough and did not have friendly interfaces. Findability was a huge undertaking, whether to get people what they desired/needed or to "get eyeballs".

Just as the use of the information was an extension of the physical realm that predated the digital information environment, the dominant metaphor in the "I Go Get" web was based in the physical realm. We all designed and developed for findability around the navigation/wayfinding metaphor. This directly correlates to going somewhere. The cues we used to get to information were patterned on and developed from practices in the physical world.

Physical? Digital? Does it Matter?

You ask, "So what if we used ideas from the physical world to develop our metaphors and methodologies for web design and development?" We know that metaphors guide our practices. This is a very good thing. But metaphors also constrain our practices and can limit our exploration for solutions to those that fit within the boundaries of the metaphor. In the physical realm we have many constraints that do not exist in the digital realm. Objects are not constrained by the resources they are made from (other than the energy to drive the digital realm - no power, no digital realm). Once an object exists in the digital realm, replicating it is relatively insignificant (just copy it).

Paths and connections between information and objects are not constrained by much, other than humans choosing to block their free flow (firewalls, filtering, limiting access to devices, etc.). This is much like Peter Merholz's desire lines, where people wear the path between two places in a manner that works best for them (the shortest distance between two points is a straight line). Now think of the physical limitations between two points: I need to go from my classroom on the fourth floor of building "X" to the sixth floor office of my professor across campus and up the hill. Draw a straight line and walk directly. This does not work in physical space because of gravity and physical impediments.

Now we are ready to understand what really happens on the web. We go from the classroom to our professor's office, but we don't move. The connection brings what we desire to us and our screen. In this case we may just chat (text or video - it does not matter) with the professor from our seat in the classroom (if we even need to be in the classroom). Connections draw objects to our screens through the manifestation of links. As differently as people's minds work to connect ideas together, there can be as many paths between two objects. Use of physical space is bound by the limitations outlined in physics; the limitations in digital space are vastly different, and the limitations on using the same information and media are vastly different as well.

It is through breaking the constraints of old metaphors and letting the digital realm exist that we get to a new understanding of digital information on the networks of the digital realm, which include the web.

The Come to Me Web

The improved understanding of the digital realm and its possibilities beyond our metaphors of the physical environment allows us to focus on a "Come to Me" web. What many people are doing today with current technologies is quite different from what was done four or five years ago. This is today for some and will be the future for many.

When you talk to people about information and media today they frame it in terms of "my information", "my media", and "my collection". This label is applied not only to information they created, but to information they have found and read/used. The information is with them in their mind, and more often than not it is on one or more of their devices' drives, either explicitly saved or in cache.

Many of us as designers and developers have embraced "user-centered" or "user experience" design as part of our practice. These mantras place the focus on the people using our tools and information, as we have moved to making what we produce "usable". The "use" in "usable" goes beyond the person just reading the information, to meeting people's desires and needs for reusing information. Microformats and Structured Blogging are two recent projects (among many) that focus on and provide for reuse of information. People can not only read the information, but can easily drop the information into the appropriate application (date-related information gets put in the person's calendar, names and contact information are easily dropped into the address book, etc.). These tools also ease the finding and aggregating of these content types.
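As a sketch of that reuse step (the markup is a simplified hCalendar-style snippet and the parsing is deliberately naive; real microformat parsers handle far more cases), the date-related information marked up in a page can be lifted straight into a calendar entry:

import re

# A simplified hCalendar-style snippet as it might appear in a blog post.
html = """
<div class="vevent">
  <span class="summary">Personal InfoCloud talk</span> at
  <span class="location">Portland, OR</span> on
  <abbr class="dtstart" title="2006-07-21T10:00:00">July 21, 10am</abbr>
</div>
"""

def grab(cls):
    # Pull either the title attribute (for abbr elements) or the element text.
    m = re.search(r'class="%s"(?: title="([^"]+)")?>([^<]*)<' % cls, html)
    return (m.group(1) or m.group(2)).strip()

# Emit a minimal iCalendar event a calendar application can import.
dtstart = grab("dtstart").replace("-", "").replace(":", "")
print("BEGIN:VEVENT")
print("SUMMARY:" + grab("summary"))
print("LOCATION:" + grab("location"))
print("DTSTART:" + dtstart)
print("END:VEVENT")

The same pattern applies to hCard contact information dropping into the address book: the structure travels with the content, so the person's tools can pick it up without retyping.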

As people get more accustomed to reusing information and media as they want and need, they find they are not focussed on just one device (the desktop/laptop), but many devices across their life. They have devices at work, at home, mobile, in their living space, and they want the information they desire to remain attracted to them no matter where they are. We see the proliferation of web-based bookmarking sites providing people access to their bookmarks/favorites from any web browser on any capable device. We see people working to sync their address books and calendars between devices and using web-based tools to help ensure the information is on the devices near them. People send e-mail and other text/media messages to their various devices and services so information and files are near them. We are seeing people use their web-based or web-connected calendars to program settings on their personal digital video recorders in their living room (or wherever it is located).

Keeping information attracted to one's self, or within easy reach, not only requires that the information and media be available across devices, but that they be in common or open formats. We have moved away from a world where all of our information and media distribution required developing for a proprietary format to one where standards and open formats prevail. Even most current proprietary formats have non-proprietary means of accessing or creating the content. We can do this because application programming interfaces (APIs) are made available to developers, or tools based on the APIs can be used to quickly and easily create, recreate, or consume the information or media.

People have moved from finding information and media as being their biggest hurdle, to refinding things in "my collection" being the biggest problem. Managing what people come across and have access to (or had access to) again when they want it and need it is a large problem. In the "come to me" web there is a lot of filtering of information, as we have more avenues to receive information and media.

The metaphor and model in the "I go get" web was navigation and wayfinding. In the "come to me" web the model is based on attraction. This is not the push and pull metaphor from the late 1990s (as that was mostly focussed on single devices and applications). Today's usage is truly focussed on the person and how they set their personal information workflow for digital information. The focus is slightly different: push and pull focussed on technology; today the focus is on the person, and technology is just the conduit, which could (and should) fade into the background. The conduits can be used to filter out information that is not desired, so what is of interest is more easily identified.


Location? Location? Where am I?

by Thomas Vander Wal


I have been traveling more than usual this year to places in the United States and Europe. Some I have been to before and others I have not. Many of the trips are to places for only a few days and are set around meetings, conferences, or speaking engagements. I am often making plans at the last minute or having to make arrangements on the fly as ancillary meetings (not the prime reason I am there) get moved or cancelled. I am often looking for food, coffee, wifi, electronics stores, hardware stores, etc. in a location I am not completely familiar with. I need the services of local businesses, but I am not local.

The "Local Services"

You say, "there are many local services". Yes, there are Yahoo Local, Google Local, A9 Yellow Pages search, and other more local guides. But none of them work on a mobile. There are Google SMS search and Mobile Yahoo, which has search that can tie to your local info, but if I am traveling I most likely have not saved the location where I am looking for options.

Most modern phones know your location; they have to by law in the United States for emergency service calls. The phones do not provide easy access to that location software because the carriers providing the service do not want you to have it for free; they want somebody to pay for that information. If I call information they are not going to tell me where I am, nor the type of service or store I am seeking.

A Hack Finds "Where"

My current hack is to stand in front of a store whose street name I know, send a request for information about the place to Google SMS (ritual coffee. san francisco, ca), and get one important piece of information back: the zip code. The zip code in the United States is the key to getting location information. There is nothing when driving (or actually riding as a passenger, because one never text messages while driving) or walking around that tells you the zip code (I have given up asking strangers on the street for the zip code, as the answer is more often than not incorrect). Once I have the zip code I can ask the mobile services for "coffee 94110" and get another place to get coffee and sit down, because Ritual Coffee Roasters is utterly packed and already has seat vultures hovering.

Ministry of Silly Steps

Doing this little dance I get options, but it takes a few steps that I should never have to take. The information most needed in a local search when mobile is location.

Zip It, Zip, Z..

With the zip code I can dump that into my Mobile Yahoo! "new location" and get results. But even though Yahoo! Mobile knows it is me (it offered me my stored locations, such as Home and Work), it does not use that information to give me things I have reviewed and stored in Yahoo! Local. In the online version of Yahoo! Local I get reviews from people in my "community" (which really, really needs to get a firm understanding of the granular social network), which is often helpful (if I know the person and can adjust my perception because I know how close that person's preferences are to mine on that subject). Sometimes I need an extension cord or an Apple Store (or a good substitute).

Elsewhere: Missing Even Partial Solutions

Additionally, this only works in the United States. The global local versions of Yahoo don't have fleshed-out local services anything close to what is available in the United States. My "community" (as imperfect an approach as it is at the moment) is still more helpful at filtering than nothing, and I know I have many people in my "community" who have not only been to the same locations I am in, but have reviewed restaurants, local stores, etc. on the web, and I want to be able to pull that information back in. Yes, this means the services need to grasp and embrace digital identity to make this work (or just build a social-network-capable address book that knows what my friends' identities are on the various other services and social networking tools where this information may be sitting - not rocket science by any means). I heard some native language services were around, but those would not be fully helpful to me (I think I could get through them, however), and if I tried a service that did not work, it is not pointing me to one that does (now that would be insanely helpful, and I would likely go to such kind service people for everything first, as they would point me to just the right place every time).

Ya Beats Goo

Well, at least Yahoo! understands there are places outside the United States. Google's services are not there, or anywhere on the mobile front, it seems. On my last trip to Europe nobody knew that Google offered these services (which it seems they do not) in one of the most mobile-use-intensive cultures in the Western Hemisphere.

Enough

I know, enough. I agree. We need mobile information that works. WiFi is not everywhere. Even if it were, I am not foolish enough to pull out my laptop to try to get a signal and then get the information I need. I have a mobile device with the perfect capability to do just this. Actually, there are more than double (if not triple - I can not put my fingers on this info) the number of users with this capability on their mobiles as there are laptop users in the United States (foolishly, most laptops do not have locative hardware in them to ease this possibility, were it your last option). The technologies are here. Why are we not using them?


Europe Presentations from October

by Thomas Vander Wal


I am late in posting the links to my two presentations given in Europe. I presented Personal Digital Convergence as the opening keynote to the SIGCHI.NL - HCI Close to You conference. I have also posted the final presentation, IA for the Personal InfoCloud, given at the Euro IA Summit 2005.


WebVisions Designing for the Personal InfoCloud Presentation

by Thomas Vander Wal


The presentation of Designing for the Personal InfoCloud at WebVisions went quite well yesterday. There is ever-growing interest in the Personal InfoCloud, as there are many people working to design digital information for use across devices, for reuse, and for constant access to the information each individual person desires, and to build applications around these interests.

In an always-on world, people's desires and expectations for access to information are changing. The tools that will help meet this desire are now being built, and some great starts have been made in this direction. I will be writing about some of these tools in the near future.

I am more excited today than I have been in quite some time by what I see as great progress toward the reality of a Personal InfoCloud. The Personal InfoCloud is ever closer to being more automated and beginning to function in ways that really help people find efficient ways to use the information they have found or created in their lives, when they want it or need it.

Not only does the Personal InfoCloud need devices, but it needs people designing information for the realities of Web 2.0, which is not the old web of "I Go Get", but the new web of "Come to Me". This change in focus demands better understanding of sharing digital assets, designing across platforms and devices, and information being reused and organized externally.


Social Machines

by Thomas Vander Wal


Those of you that follow this site will also likely enjoy Wade Roush's continuousblog. Wade is the West Coast editor for the MIT Technology Review magazine. He has the August cover story in the TechReview titled, "Social Machines" and has posted Social Machines on his site as of today. Please be sure to pick up a print copy and/or read the article on the TechReview site when it is published there.

[I should mention I am quoted in the "Social Machines" article]


Two Sides of the Local InfoCloud

by Thomas Vander Wal


The Local InfoCloud (one of four clouds, Personal, Local, Global, and External) focusses on access to information that is based on location, as well as membership. These two elements are related, but also can stand on their own.

The Local InfoCloud often uses the Local Area Network (LAN) as an example of its properties. The information on the LAN is open to all that are connected to the network and in many cases have rights to access the information. The information is not organized by the user nor categorized by them. The need to be connected to the LAN in an office is not as important now, as folks can access it by Wide Area Network (WAN) or through an Internet portal that will securely provide access. The Internet portal then allows a mobile user to access the information, which breaks the need to be in a certain location.

Conversely, a mobile user may have access to information based on their location. Projects like Urban Tapestries show what is coming with location-based information. Commonly, location-based services have been tied to GPS navigation services in cars that explain where the closest ATM, parking garage, restaurant, etc. are located in relation to your current position. Interacting with digital repositories to get a review of a restaurant while standing in front of the restaurant, or the history of a building while standing in front of the building, is also part of the Local InfoCloud. It too is not user organized or user categorized.

The two most important properties of the Local InfoCloud are location and membership. Location is obvious, but membership has a less obvious relationship to the local moniker. However, membership is often associated with joining with others that have a common interest, bond, or goal. Membership can be framed as a nearness of interests, exclusive of those that do not share the interest. The members have drawn together as they have that common attraction. The Internet connection easily allows this type of grouping to occur.


Tools to Manage Information On Your Personal Hard Drive

by Thomas Vander Wal


I have been battling the management of information on my personal hard drive on my TiBook. This is one element in my Personal Info Cloud (a self-organized information system that is managed by me and is there to assist me when I need information). I have been finding that my organizational structure is lacking on my hard drive as I have cross-purposes for the information.

An example: I am writing an article and I need to track down a journal article I downloaded to my hard drive in the past. I store research info on my hard drive in directories by topic area, such as an IA/UCD directory holding directories on user testing, facets, interaction design, etc. There are times when I am working on an article or essay and have stored helpful resources in a research directory in that project's directory, as I like to have information in close proximity to what I am working on. For each idea I am working on I nearly always have an outline building in OmniOutliner format and at least one graphical representation of the issue at hand, done in OmniGraffle. These two or more data sources are the foundation, along with the research that helps me further formulate the ideas.

I have gone far beyond what folders can offer; even using soft/symbolic links does not help me greatly. These files need metadata so that they can be better stored for searching, but they also need a project home. The project home should allow for note taking and links to files that are on my hard drive as well as external hyperlinks.
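A rough sketch of the structure I am after (the paths, fields, and topics below are made up purely for illustration): a small index that gives each file metadata for searching, and a project home that links to local files and external sources.

from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class FileRecord:
    """Metadata attached to a file so it can be found from more than one angle."""
    path: Path
    topics: set = field(default_factory=set)
    projects: set = field(default_factory=set)

@dataclass
class ProjectHome:
    """A project home: notes plus links to local files and external hyperlinks."""
    name: str
    notes: str = ""
    local_files: list = field(default_factory=list)
    external_links: list = field(default_factory=list)

index = [
    FileRecord(Path("research/ia-ucd/facets/facet-study.pdf"),
               topics={"facets", "user testing"}, projects={"facets article"}),
    FileRecord(Path("projects/facets-article/outline.oo3"),
               topics={"facets"}, projects={"facets article"}),
]

def find(topic=None, project=None):
    """Find files by topic, by project, or both."""
    return [r.path for r in index
            if (topic is None or topic in r.topics)
            and (project is None or project in r.projects)]

home = ProjectHome("facets article",
                   notes="Pull the journal article findings into section two.",
                   local_files=find(project="facets article"),
                   external_links=["http://example.com/facet-classification"])
print(home.local_files)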

I have a handful of candidates that have been suggested over this past week from friends at the IA Summit in Austin or once I returned home. I will be downloading and trying them beginning next week (post Christening).

The Candidates

I have already loaded Curio and have been trying it for a little more than a week. The tool is not as integrated as I would like. I have not had success dropping PDF or OmniGraffle files into the Idea Space. The external files are held in an organizer, but I can not annotate these files in a more direct manner. The Idea Space is much like a lightweight OmniGraffle and OmniOutliner, which I have and which are better tools. I do like the Dossier, which is a questionnaire for each project, but I would like to have more than one available for each project, as one currently seems to be the limit.

James showed me his implementation of iView, which is mostly a digital asset management system. James does most of his thinking in a notebook (possibly a Moleskine) that is filled with text and wonderful drawings to capture his ideas. James in turn scans the contents of his notebook into his laptop and uses iView to annotate and view his ideas. The digital assets can be annotated and then sorted and grouped. This seems like it would work for some of my information, but not everything. I have not had a scanner for about a year and have not yet gotten used to having our new scanner available again, so that I could scan in my graph paper notebook.

Jesse brought up VoodooPad as an option. VoodooPad is built on a Wiki technology. I am not a fan of Wiki technology for group project tracking, but for self annotation and having the ability to link to files on my hard drive by drag-and-drop I can see the value. Tanya mentioned she had a similar system using a personal Wiki that worked very well for herself in new environments. VoodooPad may be my next try as I really like having the ability to cross-link ideas.

I have been trying StickyBrain 2 for a few months now, but I have not been fully dedicated to trying it. The initial idea behind StickyBrain works for me, but the interface and the junk preloaded in it cluttered the interface before I even began. The means of adding info into StickyBrain is very nice, as it is in the mouse-related menu (right mouse click for those with such devices). Content in StickyBrain can be categorized, but that can get out of hand. StickyBrain also has a search tool, but unless I have annotated the information correctly (and I do not always do so) I can not find it.

Bryan suggested AquaMinds NoteTaker, which I have not seen in action, but the site does offer very good movies that explain how information is entered and how the tool can be used. To some degree this is how I use OmniOutliner, but NoteTaker seems to have far more functionality. This will be one I try and compare to OmniOutliner.

Lastly is Tinderbox, a note-taking tool and idea organizer. Tinderbox's strengths seem to be based on getting this information onto the Web, which is not my initial need. I know a couple of people who have been very happy with Tinderbox in the past, but I do not know if they are still using it. I looked at this tool when I was thinking about a change from the vanderwal.net weblogging tool that I built, but I was not thinking in terms of finding a tool to better organize my digital thoughts and artifacts of thought.

Conclusion

I will be downloading these over the next couple of weeks and I will be writing reviews of them as I try them. I have a couple of articles and other items due in the next couple of weeks, so I may be testing by fire and not have too much time to summarize the results of my testing.