Accessibility is not a matter of cyborgs… anymore.

October 19, 2007

(Just had to repost)… DO WAIT until the full page finishes loading before hitting Publish! GRRR.

Earlier today, while looking for some serious reading relating Web 2.0 to accessibility, I found this document (http://www.w3.org/TR/WCAG20/) from the W3C. Even as we’ve been struggling for the last week or so to make our blogs comply with the various accessibility recommendations, the WCAG Working Group has prepared a draft of the new version of its standard.

Reading on from that link, I found that the predecessor (the recommendation in force until this one becomes a standard) is version 1.0, which dates back to 1999. Well, in 1999 nobody really thought about accessibility, because the priorities were Y2K (remember those days?) and building tools that did the required work (build a web page), plus a few others to build an e-commerce site. These tools were basically of two kinds: programming languages (and their interfaces) such as Java, Perl, or VB, and “intelligent” web servers that could generate sites from CSS templates.

Today, eight years later, a lot of interest (and pressure) has entered the game, as governments and industry now require certain levels of compliance in the web projects they commission.

Taking a closer look at the participants (http://www.w3.org/WAI/GL/participants.html) in the WCAG WG (Working Group) that is defining this standard (which was open for review and comment until June 2007), companies such as Microsoft, IBM, Google, and Adobe are making their participation count in the forum, so I would expect that in the very near future all new versions of the tools and software we use to build web pages will be WCAG 2.0 compliant.

An example of this is Microsoft’s recently launched Expression Web (an update to the popular FrontPage, http://www.microsoft.com/expression/expression-web/FPUpgradeFAQ.aspx), which is said to be compliant with WCAG 2.0 and to include validation tools for it. Maybe with this kind of tool, accessible web development will stop being a cyborg thing.

My last post also explained how a cyborg, as the term is used today, is somebody who uses technologically advanced devices so closely in their activities that the devices seem to be part of them. After our group’s post “I, Cyborg”, somebody argued that the tools we’re using today simply parallel the human use of other inventions, and they are right… the TV and the wheel are inventions that humans once lived without, but when people first adopted them they probably used terms other than “cyborg” to describe it.


I, Cyborg

October 17, 2007

A cyborg is someone who uses parts that were not originally their own to get things done.

I’m a cyborg because I use the web to achieve things I would not normally do. Here are some more examples of today’s cyborgs:

  1. A “normal” person who is always using technology, for instance a mobile phone, to send messages, communicate ideas or just talk with other people.
  2. A person with some kind of disability who needs a special device to use a computer, or perhaps no special device at all, just a normal computer with special software that helps them with their daily tasks.
  3. Another cyborg could be the average blogger, wiki editor or social network user: a person who, without these tools, would not socialize in the same way with the people around them.

All these scenarios are enabled by Web 2.0, because the technologies that make them possible open up access to technology for different kinds of people. Web 2.0 enables people: they would never be able to do these things without it, so it is a kind of prosthesis for them.

Web 2.0 empowers them to do what they always wanted but never dared to do.

A video to see cyborgs in action: http://www.youtube.com/watch?v=rSnXE2791yg


Web Conservation

October 17, 2007

An interesting aspect of the Web 2.0 phenomenon is its resemblance to a worldwide library to which anyone can bring a contribution. Especially in universities, but not only there, the web has long been treated as a source of documentation, and nowadays the process is amplified by the scale at which anyone can contribute their own knowledge. This facet of Web 2.0 is often called Library 2.0.

Compared to a traditional library, however, one should consider the issue of archiving and preserving this content. From a library perspective in particular, the preservation of what some would call this “cultural memory” has triggered many discussions, debates and research efforts. Nevertheless, practical ideas on how to achieve it are still in their infancy. Some might recall the Internet Archive, currently the only large-scale preservation effort. Other efforts in this area are either less well known or focus on a particular type of content.

The endeavor is made harder by how difficult it is to predict changes in the web. Its strengths from the content-access point of view (links, tags, blogrolls, chaining and syndication feeds) are also weaknesses when it comes to archiving all this material. Research in this area has found that the average lifetime of a web page is somewhere between 45 and 75 days. In addition, the constant changing and aggregation of content amplifies the problem.

The most important obstacle, though, is technological. First, there is content that becomes obsolete because the format, access protocol or standards used when it was published have evolved in the meantime. It is often difficult to migrate all content to the new technologies, and some of it is always lost. Beyond preserving the content, it is also a matter of preserving the experience, which resides in the presentation and the user’s interaction with the site, given the palette of web client and server platforms deployed. And while the content data is usually easy to access, this is seldom true for the code, scripts and databases that drive a site’s experience.

There will always be parts of a site inaccessible to an external party, and the extra effort required to uncover the inaccessible files may very well involve the site’s owner or administrator. In addition, with the currently available standards, the site itself can ask automated systems trying to archive its content to ignore some of its files. The Robot Exclusion Protocol, for example, is a way of telling search or archiving engines (the so-called crawlers) to skip a set of files that are otherwise publicly accessible. It is estimated that the hidden or protected part of the web is 400 to 550 times larger than the content visible to the end user.
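
To make the Robot Exclusion Protocol concrete, here is a minimal sketch in Python of how a well-behaved crawler consults a site’s robots.txt before fetching a page. The site name, paths and crawler name are hypothetical placeholders, and the directives shown in the comment are the part a site owner would actually publish:

    # Sketch: an archiving crawler checking the Robot Exclusion Protocol
    # before fetching a page. The site publishes a plain-text file at
    # http://example.org/robots.txt containing directives such as:
    #
    #   User-agent: *          (applies to every crawler)
    #   Disallow: /private/    (do not fetch anything under /private/)
    #
    # example.org, the paths and the crawler name are made up.
    from urllib import robotparser

    rules = robotparser.RobotFileParser()
    rules.set_url("http://example.org/robots.txt")
    rules.read()  # download and parse the exclusion rules

    url = "http://example.org/private/report.html"
    if rules.can_fetch("ArchiveBot", url):
        print("Allowed to archive", url)
    else:
        print("The site asks crawlers to skip", url)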

Suggested ways of uncovering this hidden web involve software that can detect and then try to replicate the behavior of different web pages, so that even if the code directing that behavior remains hidden, the experience is preserved. Finally, cooperation with the site owner is the best solution to the data-collection problem, but it is also the least scalable. Deploying the OAI Protocol for Metadata Harvesting (OAI-PMH) could enable automatic retrieval of a site’s entire content for archiving purposes; however, reluctance (for various reasons) toward large-scale adoption of this technology, together with the extra effort and costs involved, has so far kept it a theoretical rather than a practical solution.
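
For a sense of what OAI-PMH harvesting looks like in practice, here is a small sketch in Python. The repository endpoint is a made-up placeholder; ListRecords and oai_dc, on the other hand, are a standard verb and metadata format defined by the protocol itself:

    # Sketch of an OAI-PMH harvesting request. "http://example.org/oai" is a
    # hypothetical repository endpoint.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    base_url = "http://example.org/oai"
    params = {
        "verb": "ListRecords",       # ask for full records, not just headers
        "metadataPrefix": "oai_dc",  # simple Dublin Core metadata
    }

    with urlopen(base_url + "?" + urlencode(params)) as response:
        xml = response.read().decode("utf-8")

    # The response is an XML document; an archiver would parse out each
    # <record> element and follow any resumptionToken to page through the rest.
    print(xml[:200])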

To describe the multiple facets under which a web page can be presented to its users, a new term has been coined: content cardinality. A cardinality greater than one means that the same web page, identified by the same URL, can have slightly (or very) different content when viewed by different users. This is obviously true for sites that customize their content depending on the user account, or that publish different content for every instance of the page.
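
A toy illustration of cardinality greater than one, in Python (a hypothetical page handler, not any real framework): the same URL produces different bodies depending on who asks, so an archiver capturing the page records only one of its variants.

    # One URL, several possible contents: the essence of content cardinality.
    def render_homepage(user):
        # the same URL ("/") for everyone, but the body differs per visitor
        if user is None:
            return "<h1>Welcome, guest!</h1>"
        return "<h1>Welcome back, " + user + "!</h1><p>Your personalised feed…</p>"

    print(render_homepage(None))      # what an anonymous crawler/archiver sees
    print(render_homepage("elena"))   # what a logged-in user sees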

Another issue regarding the preservation of web content is legal. Intellectual property and privacy-related content in particular are sensitive areas that often hinder archiving attempts.

These issues around the long-term preservation of Web 2.0 have triggered arguments both for and against it. Some argue that the dynamics of the web, consisting of numerous daily blog postings, data mash-ups, ever-changing wiki pages and personal data uploaded to social networking sites, is of limited value and not worth the trouble. However, a lot of this material has at least some value, and some believe we should preserve at least that part.

Unfortunately, as I explained before, not only are the tools for reaching the hidden part of the web still in their infancy, but no major project for archiving even the most accessible part of the web has taken shape yet. Temporary solutions could include individual preservation, where the site itself provides some means of archiving old data. This is not only a feature of most wikis; more and more sites offer their users some sort of archiving repository. Nevertheless, it is obvious that to preserve the essence of the web, new research and maybe new technologies are still required.


More about memes…

October 17, 2007

While searching for information about memes on the Internet, I found some advice on how to make a meme interesting and attractive, the result of a competition started by the site “HazRuido.com”.

Summarizing some of the points made on that page: a good meme needs a topic peculiar enough to capture attention, so that the receiver will send it on to friends or colleagues.

Another important thing is to think about the profile of the person who is going to receive the meme (usually young, single men, according to the site). Using the web as well as email can help attract the interest of Internet users, especially if we manage to catch the attention of public opinion leaders. And finally, we should take advantage of traditional media to find topics for our memes.

Nowadays, there are different types of memes according to their objectives:

  • Self-promotion: they are focused on promoting a person, product or company.
  • Advertising and marketing: they are focused on public relations and advertising, and are used to create interest in a product and positive buzz around it among the public.
  • Hoaxes: they spread urban rumours, for example…
  • Others.

In my opinion, memes are becoming more and more important in the social networking area and among big companies, which see them as a source of business.

By Elena P.


Memes

October 16, 2007

Surely you have received an e-mail asking you to fill in a number of fields about yourself (your name, your favourite colour, whether you have ever been in love and so on) and forward it to your contacts, so that they receive your answers and have to do the same. That’s a meme. But, of course, not all memes are like that.

According to the Oxford English Dictionary:

A meme /mi:m/ is an element of a culture that may be considered to be passed on by non-genetic means, esp. imitation.

So any kind of information that is copied from person to person, and is liable to be modified and selected along the way, is a meme: the cultural sibling of a gene. The term was coined by Richard Dawkins in his book The Selfish Gene. Memetics, the study of memes, holds that our cultures are the constantly evolving product of the natural selection of memes. Other types of memes would be, for example, fashions or urban legends.

Susan Blackmore, whose bio and work are worth reading, is interested in this topic and sees the Internet as “a vast realm of memes, growing rapidly by the process of memetic evolution and not under human control”. I cannot disagree with her, the Internet being such a great network for passing information along. Lately it has become very common to see memes travelling through the blogosphere, such as “your five weird habits” or “your 25 favourite films ever”.

Some ‘self-declared’ memes are quite useless, but others can be very interesting. For example, Blog del día started a meme entitled “3 tips to be a good blogger” (in Spanish). A large number of bloggers followed it, and an interesting survey of almost all the reactions can be found at Blogging para ser un buen blogger (also in Spanish). Not that they couldn’t be summarized, of course (here we have Blogger’s tips on the subject, for example), but in any case it is a fascinating process of social contribution to the evolution of information.


WEB ACCESSIBILITY TOOLBAR

October 15, 2007

Just a curiosity…

While looking for information about accessibility, I have found WAT 2.0 (Web Accessibility Toolbar 2.0), a tool developed “to aid manual examination of web pages” for some of the most relevant aspects of website accessibility. With it, web page designers can provide access to alternative views of some of a site’s content or make some online applications easier to use. These are its most important advantages and features:

  • Compatible with IE7 and Windows Vista.
  • New functionality: a log window that lets you save the information to a text file.
  • Buttons to open the same page in Opera and Firefox (only if they are installed).
  • A code-view generator in which you can highlight any element you want.
  • Focus highlighter.

It seems (according to the information I have read) that the tool is still being tested (a beta version is available on the Internet), but in my opinion this toolbar can have a great future, mainly among people who use IE. Perhaps most of you already knew this software (or at least had a general idea of it), but I didn’t, and its features seem very interesting to me…
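
Just to give a flavour of the kind of check such a toolbar assists with, here is my own tiny sketch in Python (it has nothing to do with WAT’s actual implementation): it flags images that are missing a text alternative, one of the most basic accessibility requirements.

    # Illustrative only: flag <img> elements that lack an alt attribute,
    # the kind of manual check an accessibility toolbar helps you perform.
    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        def handle_starttag(self, tag, attrs):
            attributes = dict(attrs)
            if tag == "img" and "alt" not in attributes:
                print("Image without alt text:", attributes.get("src", "?"))

    checker = MissingAltChecker()
    checker.feed('<p><img src="logo.png"><img src="photo.jpg" alt="Our team"></p>')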

By Elena P.


A new concept: the mashups

October 13, 2007

Following the thread of our last post, today I would like to take a general look at an almost unknown concept: mashups, which are closely related to the idea of folksonomy.

At the beginning, the term was used in a musical sense (combining two existing songs to create a new one).

According to Wikipedia, a mashup is “a web application that combines data from more than one source into a single integrated tool”. The content used by this “hybrid application” is sourced from other websites via a public interface or API.

Mashups have become a revolution in web development because they make it easy for anybody (using simple tools such as public APIs) to build and grow new websites with innovative formats by mixing existing content and topics (most of it publicly available).

For example, we can take some photographs from Flickr and combine them with maps from Google, using GPS coordinates as the glue, and we end up with a new website for looking up places and showing them at their exact position. (The API contacts the site that is the source of the content and requests the information the user needs at that moment.)
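
As a rough sketch of how such a mashup wires the two sources together, here is a short Python example. The API key is a placeholder, the search tag is arbitrary, and the exact response fields are my reading of the Flickr API documentation, so treat them as assumptions rather than gospel:

    # Rough mashup sketch: fetch geotagged photos from one service (Flickr's
    # public REST API) and turn each one into a link on another (Google Maps).
    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    API_KEY = "YOUR_FLICKR_API_KEY"   # placeholder: you need a real key

    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "tags": "madrid",        # example search tag
        "has_geo": 1,            # only photos with GPS coordinates
        "extras": "geo",         # include latitude/longitude in the response
        "format": "json",
        "nojsoncallback": 1,
    }

    with urlopen("https://api.flickr.com/services/rest/?" + urlencode(params)) as resp:
        data = json.loads(resp.read().decode("utf-8"))

    # Combine the two sources: each photo becomes a point we can show on a map.
    for photo in data["photos"]["photo"][:5]:
        lat, lon = photo["latitude"], photo["longitude"]
        print(photo["title"], "-> https://maps.google.com/?q=%s,%s" % (lat, lon))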

At the moment, the main content sources for mashups are services like Flickr, eBay, YouTube, Amazon, Yahoo!, Microsoft or Google. (Most of the Spanish ones are based on Google Maps, used to locate universities, schools, restaurants, hotels… ).

Currently there are three important types: consumer mashups, data mashups and business mashups, although the best known are the first kind, which combine a lot of data from different sources behind a very simple graphical interface. Some people say mashups are almost an ecosystem, one that is growing very fast, and that over 2007 and 2008 some 10 new mashups will appear per day. At the moment the only certain thing is that the popularity of these websites keeps increasing among Internet users.