I still haven't found what I'm looking for

The first time I was really aware that I was looking for information was probably during my PhD. I had to find the context of my project and spent time in Chemical Abstracts looking for it. I used keywords that seemed appropriate and discovered the first truth of information searching – that the abstracts above and below the one you are looking for are always more interesting. I never questioned how the keywords had been arrived at or how they were allocated to the papers I eventually read. It was how things were done. I got the information I needed and used it.

With every successive evolution of computer power the task became easier. Within a few years I had a local “database” on my computer. I had to define the fields before I started and once, when I forgot one, I had to go back to the beginning to make it all work. More computing power led to more flexible systems. Within 10 years, I could leave all the files on my computer and search them “au naturel”, using either keywords or, via a thesaurus-driven approach, terms that ought to be associated with my target. Next came more logic-driven systems - and now most of the time we don’t think about how close to magic a search engine has become. Now I use Spotlight on my computer, an integral part of the operating system, not a separate programme.
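The difference between the two approaches can be sketched in a few lines of Python. This is only an illustration, not anything I actually ran: the folder, the search terms and the tiny synonym table standing in for a real thesaurus are all invented.

```python
import os

# Hypothetical thesaurus: each search term maps to terms that "ought to be
# associated" with it. A real system would use a curated, domain-specific thesaurus.
THESAURUS = {
    "catalyst": ["catalysis", "catalytic", "enzyme"],
    "polymer": ["macromolecule", "monomer"],
}

def expand(terms):
    """Plain keyword search uses the terms as given; the thesaurus-driven
    approach also looks for their associated terms."""
    expanded = set(terms)
    for term in terms:
        expanded.update(THESAURUS.get(term, []))
    return expanded

def search_files(folder, terms, use_thesaurus=False):
    """Scan every file in a folder and return those containing any wanted term."""
    wanted = expand(terms) if use_thesaurus else set(terms)
    hits = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        with open(path, errors="ignore") as f:
            text = f.read().lower()
        if any(term.lower() in text for term in wanted):
            hits.append(name)
    return hits
```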

I was set on this train of thought by a couple of conversations recently with interesting UK companies. Our Innovation Platform on Intelligent Transport Systems and Services has evolved with the available technology and our current competition is looking to support companies developing “informed personal travel” systems. There are several out there already, but the functionality is still developing and their usability has come on in leaps and bounds. I was visiting Kizoom to understand some of their latest work and their latest challenges. I have been a fan and user of their iPhone application, myRail Lite (now sadly withdrawn), and was interested in their associated plans.

What strikes me about this field is the precision required and the thoroughness of the data assembly, because the information they provide me as a user is a mash-up from several sources.  The application starts with my location, finds the nearest railway station and then allows me to interrogate the departures and arrivals from that station.  In its latest version, it also allows a link to the mapping function on the iPhone and gives me the telephone numbers of the local taxi services. 
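As a rough illustration of that pipeline - and nothing to do with Kizoom’s actual code or data sources - here is a sketch in Python, with stubbed station, departure and taxi data standing in for the live feeds the real application mashes together.

```python
from dataclasses import dataclass
import math

@dataclass
class Station:
    name: str
    lat: float
    lon: float

# Illustrative data only; a real service would pull this from live feeds.
STATIONS = [
    Station("London Paddington", 51.5154, -0.1755),
    Station("Reading", 51.4586, -0.9719),
]

def nearest_station(lat, lon):
    """Step 1: start from the user's location and find the closest station."""
    return min(STATIONS, key=lambda s: math.hypot(s.lat - lat, s.lon - lon))

def departures(station):
    """Step 2: interrogate the departure board for that station (stubbed here)."""
    return [("10:03", "Bristol Temple Meads"), ("10:12", "Oxford")]

def local_taxis(station):
    """Step 3: add local taxi numbers so the journey can be finished door to door."""
    return {"London Paddington": ["020 7946 0000"]}.get(station.name, [])

if __name__ == "__main__":
    here = nearest_station(51.52, -0.17)
    print(here.name, departures(here), local_taxis(here))
```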

The volume of data they have to analyse and assemble to present the information I need in a user-friendly manner is beyond my simple understanding. That they are developing an equivalent tool for buses is even better. The fact that every bus stop in the UK has a defined cyber identity shows that someone was thinking ahead, and there are now several places in the UK where you can interrogate not only the timetable but also the live feed, so you need never miss a bus again. That this volume of data can be accessed and analysed is down to the growth of computing power and the algorithms that exploit it, and suggests that, whilst Moore’s Law still appears to hold, we can expect ever more useful and accessible information to be delivered to our computers and PDAs so that we can travel more efficiently.
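To give a flavour of what “interrogating the live feed” amounts to, here is a hedged sketch in Python. The endpoint, the response fields and the example stop code are all invented for illustration; the one real idea is that each stop is keyed on its unique identifier (in the UK, its NaPTAN code).

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint: real live-bus services expose something broadly similar,
# keyed on the stop's unique identifier.
FEED_URL = "https://example.org/live-buses/{stop_code}.json"

def next_departures(stop_code, limit=3):
    """Fetch the live feed for one stop and return the next few departures.
    Assumes the feed returns {"departures": [{"route": ..., "due_in_minutes": ...}]}."""
    with urlopen(FEED_URL.format(stop_code=stop_code)) as resp:
        data = json.load(resp)
    departures = sorted(data["departures"], key=lambda d: d["due_in_minutes"])
    return departures[:limit]

# Example call (would only work against a real feed):
# print(next_departures("490008660N"))
```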

The same increase in computing capacity has also allowed development in a different direction, and a visit from Polecat gave us an insight into that as well. They are using the tools of the semantic web to analyse the impact an organisation is having in cyber space. I have been learning new words (and their associated ideas) like semantics, semiotics, ontologies, contextual parsing, social web analytics and psychographic profiling as we try out this new approach on ourselves. Rather than predetermining relationships and ordering well-tagged data, their approach is to collect masses of data and analyse it using semantic algorithms to gauge how the subject of the search is perceived by those who reference it. This is getting pretty close to how Tim Berners-Lee envisaged it less than a decade ago and reflects a growing trend we are seeing in the “information and communications technology” space, where computers are more a part of the overall communications system and perform low-level “human” tasks to add leverage to the human users’ knowledge.
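Polecat’s actual methods are far more sophisticated than anything I could reproduce, but the general shape of the approach can be sketched: gather the mentions, then score them against semantic models rather than pre-assigned tags. The toy word lists below are purely illustrative stand-ins for those models.

```python
import re
from collections import Counter

# Toy lexicons standing in for the semantic models a real system would use.
POSITIVE = {"innovative", "reliable", "useful", "impressive"}
NEGATIVE = {"confusing", "slow", "unreliable", "disappointing"}

def tokenise(text):
    return re.findall(r"[a-z']+", text.lower())

def perception(mentions):
    """Gauge how the subject is perceived across a mass of collected mentions:
    count positive versus negative vocabulary and surface the dominant themes."""
    score, themes = 0, Counter()
    for text in mentions:
        words = tokenise(text)
        score += sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        themes.update(w for w in words if len(w) > 6)
    return {"net_sentiment": score, "top_themes": themes.most_common(5)}

mentions = [
    "Their journey planner is genuinely innovative and impressive.",
    "Found the new release confusing and a bit slow on launch day.",
]
print(perception(mentions))
```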

Interestingly, we met Polecat not through one of our normal engagement processes, but through their role in WebMission 09.  Even as I write this I am following tweets from the 20 companies and the “fringe” who landed in San Francisco on Saturday to meet the movers and shakers of Silicon Valley and take part in Web 2.0 Expo.  Although I didn’t get to go to San Francisco, I did go to their UK launch event a few weeks ago in London. The room was full of people with new companies, new ideas for companies and just new ideas.  I was mentally exhausted by the end of the evening.  I can only imagine the energy levels that must be running around the WebMission team in San Francisco.

So, how I use computers and their networks has developed a lot since I started using them, but I suspect there is a lot more I should expect from them in the near future. One day, perhaps, the equivalent of the old aphorism about Chemical Abstracts will mean the computer will answer the question I should have asked.

 
