
Noun qualifiers of other nouns are ambiguous, especially when there are two or more. Newspaper editors know this – they use them extensively in their headlines, as research shows that people interpret them in different ways. The ambiguity helps persuade the headline skimmer to read, or at least start to read, the article.

The one that bugs me most is ‘content management system’ (CMS). Two different noun qualifiers, but which qualifies which of the three? In my experience they are content systems or management systems, but rarely a system for managing the aggregated collection of content. Let me explain.

Many of the larger CMSs come from an enterprise background where the task was to get control of the many documents wandering around an organisation. They have a lot of functionality, including nifty conversions from one format to another and good tracking of where documents are. They also often contain many tools for managing the workflow of production: authoring, approving and publishing. The last is usually done by moving content to a pre-publication area, with a publisher role performing the final act of releasing it to the intended audience. Then the Web came along and functionality was added to send content to a website.
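
To make that workflow concrete, here is a minimal sketch in Python of the kind of staged release these systems enforce. The stage names and roles are invented for illustration; no particular product works exactly this way.

# A toy publishing workflow: draft -> approved -> pre-publication -> published,
# with each transition restricted to a named role. The final release is the
# publisher's act alone, mirroring the pre-publication model described above.
from enum import Enum, auto

class Stage(Enum):
    DRAFT = auto()
    APPROVED = auto()
    PRE_PUBLICATION = auto()
    PUBLISHED = auto()

# Allowed transitions, and the (hypothetical) role permitted to make each one
TRANSITIONS = {
    (Stage.DRAFT, Stage.APPROVED): "approver",
    (Stage.APPROVED, Stage.PRE_PUBLICATION): "editor",
    (Stage.PRE_PUBLICATION, Stage.PUBLISHED): "publisher",
}

class Document:
    def __init__(self, title):
        self.title = title
        self.stage = Stage.DRAFT

    def move_to(self, new_stage, role):
        required = TRANSITIONS.get((self.stage, new_stage))
        if required is None or role != required:
            raise PermissionError(f"{role} cannot move {self.stage.name} to {new_stage.name}")
        self.stage = new_stage

doc = Document("Annual report")
doc.move_to(Stage.APPROVED, "approver")
doc.move_to(Stage.PRE_PUBLICATION, "editor")
doc.move_to(Stage.PUBLISHED, "publisher")  # only the publisher can release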

There are also many content systems that store words, pictures and other media in a structured way in a database, and then publish to the Web by adding (X)HTML coding and style sheets and, if the developer is smart, rules for displaying one or another type of content depending on the situation in the system or the type of viewer. These come from a background of Web publishing and, although they do that well, they usually don’t offer tools that work across the whole database.

What we need in government though is the third – content management:

  • ‘Give me all the content that does not contain an entry in a particular metadata field and list it with the contact emails of the authors.’ (A sketch of this query follows the list.)
  • ‘List all the pages that have been viewed only by internal staff.’
  • ‘Find all the content containing expression X, change it to Y and add it to a new metadata field.’

And, most of all,

  • ‘Extract all the content according to these rules, structure it in this particular way and send it off to another CMS.’
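
The first request above ought to be a one-liner. As a hedged sketch, here is what it could look like in Python against an in-memory SQLite store; the schema and field names are invented purely for illustration, and no real CMS exposes exactly this.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE content (
        id INTEGER PRIMARY KEY,
        title TEXT,
        author_email TEXT,
        review_date TEXT  -- the metadata field we are checking
    );
    INSERT INTO content VALUES
        (1, 'Consultation on X', 'a.author@example.gov.uk', '2010-06-01'),
        (2, 'Press notice Y', 'b.author@example.gov.uk', NULL);
""")

# 'Give me all the content that does not contain an entry in a particular
# metadata field and list it with the contact emails of the authors.'
for title, email in conn.execute(
        "SELECT title, author_email FROM content WHERE review_date IS NULL"):
    print(title, email)  # -> Press notice Y b.author@example.gov.uk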

It’s all doable, but it is so much hard work at present. We should have easy interfaces to manage content. I care because we have many talented digital media staff in government (maybe not enough, but they are certainly there) who waste much of their time and effort struggling to get content into, out of and between CMSs, usually because there is so little functionality to manage content across the whole collection and because exchange between systems is so difficult. There must be a much better way of doing this, one that releases government and public sector expertise to contribute to the public agenda. I’m carrying a banner to all those providers of CMS and web publishing services to say ‘Free our digital media experts!’

By doing more information structuring and using common structures, as we’ve been introducing for jobs and consultations, and as has been done for a long while in press releases and descriptions of documents, we should be able to build systems that exchange information easily when there are machinery of government changes and that make it easy to manage the content. (And, of course, add in semantic web coding in a supported way!) Then we might be able to start describing systems as ones that enable and facilitate content management.


A big thank you and well done to everyone who contributed to a great event last Thursday.

30 people from Government executive agencies and non-departmental public bodies (NDPBs) came to find out about website auditing.

Attendees were given an introduction by Alex Butler, COI’s Transformational Strategy Director. Alex described how this work came about as a result of the National Audit Office (NAO) report Government on the Internet (July 2007). The NAO report found that:

  • a quarter of government organisations did not know the costs of their websites
  • 16% had no data about how much their websites were being used
  • quality standards had only improved slightly since 2002

Alex explained how the audits form part of a package of improvements offered by COI including standard measures for website quality, value for money and usability. These measures form part of the requirement by the Public Accounts Committee for greater accuracy and transparency in how the Government manages its investment in digital media.

Those present then watched a short clip of COI Chief Executive Mark Lund’s presentation at the Internet Advertising Bureau event IAB Engage 2009. Mark describes the work COI is doing with the Cabinet Office and other parts of Government in setting standards and evaluating the cost-effectiveness of government websites.

Video: Mark Lund speaking at IAB Engage 2009

Richard Foan, ABCe’s Managing Director, then talked through the details of website auditing. Richard emphasised the need for standards in an industry where there are so many different methods, metrics and tools to choose from. He explained that ABCe are an independent non-profit organisation, auditing to industry standards. ABCe certify that government websites comply with the standards and measure website usage in a consistent way.

The question and answer session that followed was lively and yielded some useful actions.

Thanks to all those who attended, I hope you found it useful. And thanks particularly to Linda who worked the floor extremely well and made a star entrance!

For those who missed it, we are planning similar events in the future. In the meantime, I hope this blog article gives a flavour of what took place.


Some meanings of words change over time. Take ‘scan’, for example. That used to mean reading something systematically, giving attention to every word in order. Thus when people in the 19th century talked about scanning a newspaper, they meant reading it all, starting at the first word and going through to the last. Yes, really, that’s what people used to do. We still imply that version of the word when referring to ‘scanners’.

But the word deviated into another meaning. Sailors performed the same task when they scanned the horizon, systematically looking around it for shipping or land. This gradually shifted towards the question being asked: is there anything interesting to see? And that’s the meaning we generally attribute to the verb nowadays, a quick glance or skim over something to see if there is anything that grabs our attention.

Well, why is this relevant to the usual subject of this blog? Because we sometimes mistake which kind of scanning people are doing. And that matters in getting people to the information or service quickly.

People quickly scan (in the second sense) search results lists and recognise something familiar. So seeing a result that says it is on Directgov, for example, should signal to readers that they can trust it as authoritative, that it will be easy to read (because it is written for them) and that it will point them to other content relevant to their need.

Building brand is important in enabling people to skim over something quickly and decide what to choose. It is also important for memorability. That’s why the guidance insists on avoiding sub-domains and using directory structure: in plain words, avoiding URLs of the form YAA.website.gov.uk and instead using website.gov.uk/YAA. The human brain does something different in the two cases.

In the former, one registers the YAA (standing for Yet Another Acronym) and then tends to forget the rest.  If that is what we want people to do, then using instructions such as ‘Search for Change4Life’ is a more effective means.  We’re increasingly using that to focus people on major campaigns.

However, if we want people to find it easily on the Web, then saying you’ll find it on Directgov, or wherever, is more effective – and the form direct.gov.uk/YAA does exactly this.  It both uses and builds the brand.  And if it is really current, it will be on the home page.

Sub-domains may be the technical means of delivering services and packaging content into useful and manageable chunks as part of internal processes, but for marketing the directory structure is much more helpful in getting people to what they’re looking for.


COI guidance TG124 Structuring information on the Web for re-usability has now been re-issued as version 1.2. Diligent implementers identified two small errors in version 1.1, and these have now been corrected. The examples on the Google Code website were already correct; the errors appeared only in the abstracted guidance.

The first correction is in paragraph 31.

Old version

<div about="#this"
  typeof="foaf:Document"
  rel="dc:type" resource="[argot:Consultation]"
  >
 ...
</div>

Correct code

<div about="#this" typeof="foaf:Document">
  <span rel="dc:type" resource="[argot:Consultation]"></span>
...
</div>

Notice how the @rel attribute is in a separate span element.

There is a similar correction in paragraph 34.

Old version

<div about="#this"
  typeof="foaf:Document"
  rel="dc:type" resource="[argot:Consultation]"
 >
  <span property="dc:publisher" content="Ministry of Justice"></span>
 ...
</div>

or even merged onto the first element:

<div
 about="#this"
 typeof="foaf:Document"
 rel="dc:type" resource="[argot:Consultation]"
 property="dc:publisher" content="Ministry of Justice"
>
...
</div>

Correct code

<div about="#this" typeof="foaf:Document">
 <span rel="dc:type" resource="[argot:Consultation]"></span>
 <span property="dc:publisher" content="Ministry of Justice"></span>
...
</div>

or even merged onto the first element:

<div about="#this" typeof="foaf:Document"
 property="dc:publisher" content="Ministry of Justice">
 <span rel="dc:type" resource="[argot:Consultation]"></span>
...
</div>

Apologies for the changes. If you have any questions, please contact me at adam.bailin@coi.gsi.gov.uk.


It’s a new year and here at COI the digital policy team have loads of exciting events and projects coming up. I’ll tell you all about those in a minute. My first priority is the government website audit project. A reminder, in case you have forgotten, of what the project is all about: the Public Accounts Committee (PAC), in its Sixteenth Report, recommended that Government develop a single set of reporting metrics for website usage. The Government agreed and made it mandatory for all government websites to be audited. Hence the Government Website Audit project.

We have an event coming up in London on 21 January, aptly named “Auditing Government Websites”. Speaking at the event are our very own Alex Butler, COI Transformational Strategy Director, and ABCe’s Managing Director Richard Foan. The event is aimed at government executive agencies (EAs) and non-departmental public bodies (NDPBs) and will cover the following topics:

  • importance of auditing
  • industry standards for measuring websites
  • how to get ready for the audit
  • the audit process (pre audit and post audit)

The event provides an opportunity for government agencies and NDPBs to have their websites audited for approximately half the cost of procuring an audit independently. What better way to start the year? The event is free, and registrations for the event and the audits are pouring in, so register now and make your websites 2010-compliant.

Please email me at linda.morakinyo@coi.gsi.gov.uk if you would like to attend.

Hope to see you all there.

Invitation to auditing government websites event 21 Jan 2010


What is a URI? This is the question a colleague asked me yesterday. Of course, he knew what it stood for (Uniform Resource Identifier), but he was asking what it is for and why URIs are interesting to government. The initial answer is that the URI is essential to the idea of Linked Data: it is the mechanism through which one bit of information is linked to another. But I wanted to dig a bit deeper and explain the kinds of uses of Linked Data that the government has in mind.

The Web is basically a document standard: a description of what constitutes a Web page, together with a way of describing its location (the URL) and so of linking from one page to another. When you do a Web search, for example using Google or Bing, you get a list of documents in which the information you seek might be found.

A URI provides a unique way to identify a particular bit of data inside a Web page, and so to link one bit to another. Thus it might be useful to distinguish London-the-place from all the other London-the-places, and from the several authors with the surname London. We can get some way towards this by intelligent contextual analysis, the approach that Microsoft, for example, told me they are taking; it involves heavyweight data crunching using search technologies. The URI approach is to identify something distinctive, for example London the place at this particular geospatial location, and then give it a URI that others can use to refer to it, disambiguating it from all other occurrences of the concept or word.

This is the core idea of a URI: that a place, event, person, concept, document or whatever can be given a unique identifier that others can use. Of course, you need to do something more than that, as Sir Tim Berners-Lee describes in his four steps:

  1. Use URIs as names for things
  2. Use http URIs so that people can look up those names
  3. When someone looks up a URI, provide useful information using the W3C standards (RDF, SPARQL)
  4. Include links to other URIs so that they can discover more things

I usually add one more:

  5. The provider of a set of URIs offers a lookup service that takes the object being named and returns its URI (i.e. the converse of step 2)
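
As a hedged illustration of steps 2 and 3, here is what looking up an http URI involves, using only the Python standard library. The URI below is hypothetical, so the request will only succeed against a real Linked Data endpoint.

from urllib.request import Request, urlopen

# A hypothetical URI for a government body (illustrative only)
uri = "http://example.gov.uk/id/department/ministry-of-justice"

# Content negotiation: ask for machine-readable RDF rather than an HTML page
req = Request(uri, headers={"Accept": "application/rdf+xml"})
with urlopen(req) as response:
    rdf = response.read().decode("utf-8")

print(rdf[:200])  # RDF describing the thing, with links to further URIs (step 4)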

So what would it be useful for government to do? One fruitful area to explore is things that come and go, or move around, or change. For example, MPs get appointed to serve in HM Government and then move around. Giving each MP a URI that could be used every time a press release reports their activities would be helpful, particularly as they are often described in different ways. Clicking on a URI link could take you to a page of information about them, for example their biography and the committees they serve on, or, with a little macro on the side, a set of relevant links about them. And then there might be URIs for Departments. They come and go: when were they in existence? What were their responsibilities? Is there archived content about them? What is the current list of Departments? We know that kind of information would be useful to provide, as we get asked for it.

Those are two examples of sets of URIs that government could usefully run: the MP names and the list of Departments. Another might be the roles that comprise HM Government, i.e. the Ministers. Clearly, at local government level, the set of Local Authorities would be a useful one, so that one person referring to a public body would know it was the same one that another called by a different name or abbreviation.

The government has developed a draft standard for designing sets of URIs and we are now exploring which core sets of URIs it would be useful to provide. Let us know which would help you and we’ll see if we can provide them.


How do you evaluate the cost-benefit of the government’s digital engagement?  As a result of the Improving government online review of measurement standards, several interesting discussions were started around extending the work on valuing and evaluating websites to all digital media.

Evaluation is a key priority for Government communicators. Matt Tee, Permanent Secretary and Head of Profession for Government Communications, has prioritised evaluation as a key area of focus, along with skills and behaviour change. This is not surprising given the current economic climate. Government has to account for every pound spent, and that means evaluating our communications activity to demonstrate cost-effectiveness.

Matt Tee has also requested that every government department develop a digital engagement strategy by March 2010, alongside the Public Accounts Committee recommendation that every department have a channel strategy: a sign that digital engagement is being taken seriously.

So, how do we measure it?

As with any marketing communications activity, that depends on what the communications objectives are. However, there are commonalities across different campaigns and across different digital engagement tools and it’s those that I want to explore.

Recently, I’ve been working with colleagues in COI on this problem and we’ve come up with three common measures that appear to work across all digital engagement or social media tools:

  1. Number of relationships
  2. Number of user-generated content items
  3. Number of referrals/recommendations

1. Number of relationships

The number of relationships or connections within a network is a measure of power or influence. For example, it could be the number of followers on Twitter, number of friends in Facebook or the number of subscribers to a blog. In social network analysis, this is the basic measure of centrality within a network, which is called degree centrality.

There are other interesting measures of power within a network. For example, betweenness centrality measures the degree to which a member lies between other members of a network. In the Facebook analogy, a person may have 1,000 friends but have less influence than a person with 50 friends, each of whom has 1,000 friends.
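
As a hedged illustration, both measures can be computed with the open-source networkx library. The graph below is a toy version of the Facebook example: ‘hub’ has few direct friends but sits between two clusters.

import networkx as nx

G = nx.Graph()
# Two tight clusters of friends, joined only through 'hub'
G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("a", "hub")])
G.add_edges_from([("x", "y"), ("x", "z"), ("y", "z"), ("x", "hub")])

print(nx.degree_centrality(G))       # raw connectedness: a and x score highest
print(nx.betweenness_centrality(G))  # hub scores highest: every path between
                                     # the clusters runs through it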

Graph showing betweenness centrality from lowest (red) to highest (blue)

2. Number of user-generated content items

The number of user-generated content items measures participation within the network. For example, it could be the number of comments on a blog or the number of videos uploaded to a YouTube channel. It measures the level of engagement of an audience, suggesting active participation rather than simply passive interest.

3. Number of referrals/recommendations

The number of recommendations is what many seek. This measures virality, advocacy and recommendability. For example, it could be the number of retweets, ‘share this’ actions or pingbacks. It goes beyond mere participation; it means your content or message is valued enough to be recommended to others inside and outside the network.

We would be very interested to hear any thoughts on this. Many people are starting to think through return on investment in this area, and it would be useful to have some level of consensus before applying it to the government’s use of digital media for engagement. Let us know what you think.


The title comes from a recommendation of the Power of Information Taskforce. They commissioned our team at COI to undertake a usability review of government websites, as our mission is to improve the quality and consistency of government and the public sector online.

The usability review found that there were many basic issues with the sites they looked at. Over 150 individual usability problems were identified including:

  • Failure to let users know where they are in the site
  • Inconsistency in page design putting undue stress on the user to recall rather than recognise (increasing cognitive load)
  • Use of language that users couldn’t understand
  • Linking to documents without clues to the content within

The Taskforce asked us to build a ‘usability toolkit’ setting out the basic usability criteria for government websites. Working with usability agency Bunnyfoot, we’ve developed a toolkit containing guidelines on best practice.  The toolkit has an emphasis on visual learning, making extensive use of graphics and video demonstrations and is designed around eight themes including page layout, navigation and writing content. Each theme has an interactive self-assessment exercise which tests knowledge and is intended to initiate discussion.

The toolkit is open to everyone to use, but there are advantages in registering, as this helps users keep track of the content they’ve visited and the assessment tests they’ve completed. So far 85 people have registered from over 20 different departments across government.

The most popular completed test to date is on page layout (123 completed tests).  It has guidelines about:

  • Consistency between pages
  • Devoting page space to content
  • Screen resolution
  • How to un-clutter content
  • Styling text for readability
  • Effective use of colour

It’s basic stuff but essential knowledge for anyone developing content for government, public sector or indeed any other websites. There’s also a great video showing how we read text, demonstrated through eyetracking:

Screenshot of a video showing how we read, demonstrated through eyetracking

So, if you’re in the public sector, don’t waste another minute, register to use the toolkit! It’s easy, you just need to enter an email address and password and, if you work in central government, your department name.

Usability toolkit registration screen


I was in Boston last week.  It was lovely – the sun streaming through the red fall leaves and it was warm enough to walk around in just a shirt.

The event was the 10th anniversary of a publisher service that I had conceived and proposed; others have taken it on to create one of the most significant developments in academic publishing. The idea is simple, but its execution is hard: link each reference at the end of an academic article to the cited article in another publisher’s database. The problem is knowing where that other article is, and coping with the fact that publishers buy and sell journals, shifting them around. The journal reader shouldn’t have to know where the cited article is, only click and (with suitable permissions) get access to it.

We have similar issues in government. We have data and information, distributed across many different places, that the end-user wants to find, and usually the user doesn’t care which bit of government provides it. Moreover, changes occur when Departments are closed and created, moving their online content around.

The two problems are similar: how do you get separate bodies to collaborate, and how do you find and link to relevant information and data in a way that will outlast major changes?

The publishers use a handle technology, on which the Digital Object Identifier (DOI) system is built. Attached to each handle is a searchable metadata store that includes the article’s current location. With each publisher uploading all their bibliographic data to a central store, automatic processes can link citations to the locations of the cited articles. As articles move, their unique handles stay the same and only the locations in the central datastore need updating.
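
The reader-facing half of this is easy to demonstrate: the central resolver at doi.org looks up a handle’s current location and answers with an HTTP redirect. A minimal sketch in Python (the DOI shown is the DOI Handbook’s own identifier):

from urllib.request import Request, urlopen

doi = "10.1000/182"  # the DOI of the DOI Handbook itself
req = Request(f"https://doi.org/{doi}", method="HEAD")

# urlopen follows the resolver's redirect chain; geturl() is the article's
# current location, wherever the publisher now hosts it
with urlopen(req) as response:
    print(response.geturl())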

For government, we considered this but took a different approach to ensuring all links work, because government is essentially a closed system. That is why we have adopted URLs as unique resource identifiers, rather than a handle approach. All websites are archived by The National Archives in such a way that the original URLs can be identified. Each Department then needs to introduce a piece of software that automatically redirects a link to the Department website if the content is still there, or to The National Archives if not. That way, links always work.
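
As a minimal sketch of that redirect behaviour, assuming an invented department site and path list (real deployments would more likely be web-server rewrite rules, and archive URLs include a capture timestamp):

from http.server import BaseHTTPRequestHandler, HTTPServer

LIVE_PATHS = {"/current-policy"}  # stand-in for 'is it still on our site?'
ARCHIVE = "http://webarchive.nationalarchives.gov.uk"
SITE = "http://www.example.gov.uk"  # hypothetical department site

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in LIVE_PATHS:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Current departmental content")
        else:
            # The link still works: send the user to the archived copy,
            # keyed by the original URL (archive URL pattern illustrative)
            self.send_response(301)
            self.send_header("Location", f"{ARCHIVE}/{SITE}{self.path}")
            self.end_headers()

HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()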

Both academic publishers and government share another important value for end-users.  They need to be able to know that the information they reach is authoritative. For publishers this means that it is peer-reviewed and the title of the journal broadly indicates the degree of reliance they can place on the results.  For government, the fact that it is a .gov.uk site means that it is the authoritative source of information.  Trust lies at the heart of both systems.

Likewise, end-users need to know whether information is the most recent. In academic publishing the date is the indicator, along with other information such as whether an article has been retracted (for example, the original MMR vaccine paper was retracted). In government, it is important to replace old information with new, while making sure that the old is still available through the archive, to avoid losing part of the history of the country.

This approach also underlies the Semantic Web applications we’ve been introducing. Different types of information, for example jobs and consultations, are distributed across the public sector. The question is how to find them and create useful aggregated services from them, both for government itself and for others. The solution we’re implementing uses the semantic web, specifically RDFa, because RDFa is indexed and used by Google and Yahoo! and so is findable. Single points of access can then be created that point the user back to the source.
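
The principle is that the data sits in ordinary HTML attributes, where search engines can find it. As a hedged sketch (a real aggregator would use a full RDFa parser), the standard library is enough to pick the attributes out of a page such as the TG124 example above:

from html.parser import HTMLParser

RDFA_ATTRS = {"about", "typeof", "property", "rel", "resource", "content"}

class RDFaScanner(HTMLParser):
    def handle_starttag(self, tag, attrs):
        found = {name: value for name, value in attrs if name in RDFA_ATTRS}
        if found:
            print(tag, found)

page = """
<div about="#this" typeof="foaf:Document">
  <span rel="dc:type" resource="[argot:Consultation]"></span>
  <span property="dc:publisher" content="Ministry of Justice"></span>
</div>
"""

RDFaScanner().feed(page)
# -> div {'about': '#this', 'typeof': 'foaf:Document'}
# -> span {'rel': 'dc:type', 'resource': '[argot:Consultation]'}
# -> span {'property': 'dc:publisher', 'content': 'Ministry of Justice'}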

There are many analogies between academic journal publishing and website publishing in creating a good service for customers and users. It is useful to consider these and see how citizens can be given a better experience. It is also useful to look at other channels, for example news and information services. Websites bring together many different aspects of information and communication, and there is value in looking at precedents and taking the best from them, while exploring how to use the Web most effectively to deliver the services that online users want.

I felt proud to be back in Boston among old friends from around the world, celebrating something so significant. I’m looking forward to what we can achieve by working collaboratively across the public sector to make an equivalently important step change in user experience.


Government Digital Inclusion Champion Martha Lane Fox was the Big Thinker at COI last week. She announced the launch of her Race Online 2012 campaign, which aims to get more people online, particularly from socially excluded groups.

Research by PricewaterhouseCoopers shows that 10 million adults in the UK have never used the Internet. Four million of those are socially excluded, of whom 38% are over 65 and 39% are unemployed, and it is these people that Fox is targeting. Her speech was inspiring. I particularly liked her ‘user-centred’ approach. She told three stories from around the country:

  • A teacher at a computer literacy centre in Lambeth revealed that many of the students couldn’t read.
  • A builder in Birmingham gets the majority of his work by looking online.
  • Children on a rough estate in Bristol learned about growing vegetables from older residents, imparting computer knowledge in return.

Martha stressed that it isn’t about IT training, it’s about using computers for relevant purposes such as looking at photos of your grandchildren living abroad, saving money by shopping online or looking for jobs. People aren’t inherently interested in computers but may want to go online if they see that it can improve their quality of life.

Later in the week, I went to the BBC to hear about their new Accessibility Toolkit 2.0 (ATK 2.0) from Jonathan Hassell, Head of User Experience Design. His aim is to improve online experiences for disabled people.

There are 11 million adults in the UK with a long-standing health problem or disability that affects their daily activities, including their ability to work, and who are therefore covered by the Disability Discrimination Act. Of these, according to research from the Office for Disability Issues, 47% are over 65 and 43% are unemployed. Startlingly, 58% have never used the Internet.

Jonathan talked about the various barriers to disabled people getting online, including lack of interest, lack of means and lack of confidence. These are the same reasons as for the population at large. BBC research into encouraging broadband adoption echoes the experiences of Martha Lane Fox. It focused on the 21% of UK adults who do not have the Internet at home or use it elsewhere. The figures are similar: 10.5 million people aged 15+, with an average age of 61 (over half were 65+), and 67% are C2DE compared with 45% of the UK population.

The BBC’s research found low levels of interest (68%), low intention to acquire (82%) and low knowledge (81% knew little or nothing). The primary barriers were low perceived benefits versus cost, and low confidence and skills. Secondary barriers included basic affordability, literacy and social exclusion. One participant claimed that:

“There’s nothing on there that you couldn’t get from Teletext.”

The challenge of getting people online was acknowledged to be very difficult. Previous messages have failed because they were not targeted and therefore not perceived as relevant or beneficial. Four broad areas were identified to aid adoption:

  • Families and friends
  • Media skills
  • Easier home internet access
  • Affordability and cost

Interviews with recent adopters revealed the power of friends and families, with one man saying:

“I wasn’t confident on the computer until (my son) started showing me bits on his computer.”

They also revealed that people were scared of computer courses and the possibility of public humiliation in front of peers. It’s not enough simply to communicate the availability of courses.

The focus then returned to disabled audiences, and Jonathan developed an interesting argument around web accessibility. The most common approach to date has been to focus on delivering inclusive websites, that is, to try to build websites that work for as wide an audience as possible. (A note on terminology: this isn’t inclusion in the same sense as described above, which is more about getting people online.) Jonathan’s argument is that this doesn’t work, because people have such diverse needs that one design will never suit everyone. Websites do allow personalisation through operating system and browser settings, but most people don’t know how these work. There’s also the AAA approach, which places personalisation controls directly on the web page; most people don’t know what “AAA” means either. So the BBC have developed a prototype solution that brings personalisation controls into the browser through an intuitive user interface.

Screenshot of the BBC's accessibility toolkit 2.0 prototype presenting default templates for different types of disability or health condition e.g. low vision, autism and dyslexia

The idea is that you have preferences for a website, accessible via a link at the top of the site. Clicking on this reveals a set of default options recommended by people with different health problems or disabilities. For example, you can set the colours, text size, font and weight, and so on. This isn’t revolutionary, but what’s different is the user interface that the BBC have designed. It’s the first time it’s been done intuitively. And anything that helps create a positive user experience for people online is a good thing. After all, if we’ve worked hard to convince people of the benefits, the last thing we want is for them to switch off because of a usability or accessibility issue. Let’s make sure it works for people when they arrive.
