Some statistics

October 31, 2007

The first part of our work on this blog has come to an end. Over these weeks we have researched a lot about several topics related to Web 2.0. Some of that research has gone live in the form of published posts, and some has simply stayed in our memory. With all of it we have learned a great deal and sometimes even opened our minds. Not to mention what we could call the ‘active’ part: not only reading, but also managing our own blog, keeping it accessible and contributing to it, as well as commenting on our classmates’ blogs. Most of us had never written on a blog before, so it has been a new experience.

Looking back at the work we’ve done, these are the statistics for each of us:

                           ELENA   ESTRELLA   ALEX   MAURICIO
READING                     0.25     0.30     0.50     0.30
PUBLISHING/COLLABORATING    0.50     0.40     0.35     0.55
THINKING                    0.25     0.30     0.15     0.15

So that makes these global figures:

                           AVERAGES
READING                      34%
PUBLISHING/COLLABORATING     45%
THINKING                     21%
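For the curious, the global figures are just the plain average of each row in the first table; a quick Python snippet (using only the numbers above) reproduces the rounded percentages:

```python
# The figures from the per-person table above; percentages rounded to whole numbers.
shares = {
    "READING":                  [0.25, 0.30, 0.50, 0.30],
    "PUBLISHING/COLLABORATING": [0.50, 0.40, 0.35, 0.55],
    "THINKING":                 [0.25, 0.30, 0.15, 0.15],
}

for activity, values in shares.items():
    average = sum(values) / len(values)
    print(f"{activity}: {average:.0%}")
```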

Group statistics

We hope our posts are interesting for you, our readers. Don’t forget to leave a comment! 😉


More about Web 2.0 Conservation

October 22, 2007

In my last post I talked about the aspects and issues that arise in the preservation of Web 2.0 content. Now I just want to point out how these preservation issues play out for some components of Web 2.0:

Blogs: these are one of the most common pieces associated with the new version of the web. The conversational content of many of them can sometimes be labelled as non-valuable, that is, not worth the trouble of archiving. For the rest, however, the short update cycle (the speed at which new content is added) and the numerous external references pose some of the difficulties I talked about. Then, what should be preserved from a blog besides the posts: the comments, the embedded resources? Someone noted that blogs tend to be individual rather than organizational, which makes it hard to archive them in a way that keeps the content easily searchable and accessible.

Wikis: most of their content is what we called the hidden web. The text and media are stored in databases that are not directly accessible, while the wiki experience itself lives in the software and metadata on the web server. However, the built-in history function that most of them offer is an acceptable compromise for now (a small sketch of querying such a history follows at the end of this post).

Media sharing: the content is again hidden web, and most of the streaming technologies used for live media are either proprietary or use Digital Rights Management to hinder any attempt to download the content.

Data mash-ups: these assemble live content from various websites that publish their APIs. Most of that content is also hidden web and thus not readily accessible. Since the look & feel is part of the experience, overall preservation is even more difficult to accomplish.

Social networks: these contain their users’ personal space. Some involve look & feel elements that are part of the experience, though this is not always the case. More importantly, most contain their users’ private information, so the major obstacle lies in privacy and intellectual property.
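As a small illustration of the built-in history function mentioned under Wikis, here is a hedged Python sketch that asks a MediaWiki-based site (Wikipedia here, with an arbitrary example page) for the last few revisions of a page through its public API; other wiki engines expose their histories in similar ways:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Ask the MediaWiki API for the last five revisions of one page.
API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Web 2.0",              # just an example page
    "rvprop": "timestamp|user|comment",
    "rvlimit": 5,
    "format": "json",
}
request = Request(API + "?" + urlencode(params),
                  headers={"User-Agent": "wiki-history-sketch/0.1"})

with urlopen(request) as response:
    data = json.load(response)

# Revisions are nested under query -> pages -> <page id> -> revisions.
for page in data["query"]["pages"].values():
    for revision in page.get("revisions", []):
        print(revision["timestamp"], revision["user"], revision.get("comment", ""))
```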


Accessibility is not a matter of cyborgs… anymore.

October 19, 2007

(Just had to repost)… DO WAIT until the full page finishes loading before hitting Publish! GRRR.

Earlier today, while looking for some reading that relates Web 2.0 with accessibility in a serious way, I found this page (http://www.w3.org/TR/WCAG20/) from the WCAG working group. Even as we’ve been struggling for the last week or so to make our blogs comply with the various accessibility recommendations, the WCAG WG has been preparing a draft of the new version of their standard.

Following the links from that page, I found that the predecessor (the recommendation until this becomes a standard) is version 1.0, which dates back to 1999. Well, in 1999 nobody really thought about accessibility, because the priorities were Y2K (remember those days?) and building tools that did the required work (building a web page), plus a few others for building e-commerce sites. These tools were basically of two kinds: programming languages (and their interfaces) such as Java, Perl or VB, and “intelligent” web servers that could generate sites starting from CSS templates.

Today, eight years later, a lot of interest (and pressure) has entered the game, as governments and industry require certain levels of compliance in the web developments they commission.

Taking a closer look at the participants (http://www.w3.org/WAI/GL/participants.html) of the WCAG working group that is defining this standard (which was open for review and comments until June 2007), companies such as Microsoft, IBM, Google, Adobe and others are making their participation count, so I would expect that in the very near future all the new versions of the tools and software we use to build web pages will be WCAG 2.0 compliant.

An example of this is Microsoft’s recently launched Expression Web (an update to the popular FrontPage, http://www.microsoft.com/expression/expression-web/FPUpgradeFAQ.aspx), which claims to be compliant with WCAG 2.0 and includes validation tools for it. Maybe with this kind of tool, developing accessible websites stops being a cyborg thing.

My last post also explained the term cyborg, which today describes somebody who uses technologically advanced devices so closely tied to his or her activities that they appear to be part of him or her. After our group’s post “I, Cyborg”, somebody argued that the tools we’re using today parallel the human use of earlier inventions, and they are right… the TV and the wheel are inventions that humans did not originally have as part of their lives, but when people started using them they probably referred to them with terms other than “cyborg”.


I, Cyborg

October 17, 2007

A cyborg is someone who uses parts that were not originally part of him or her to achieve things.

I’m a cyborg because I use the web to achieve things I would not normally be able to do. I can give more examples of today’s cyborgs:

  1. A “normal” person who is always using technology, for instance a mobile phone, to send messages, communicate ideas or just talk with other people.
  2. A person with some kind of disability who needs a special device to use a computer, or perhaps no special device at all, just a normal computer with special software that helps him or her with daily tasks.
  3. Another cyborg could be the average blogger, wiki editor or social network user: a person who, without these tools, would not socialize in the same way with the people around him or her.

All these scenarios are enabled by Web 2.0, because the technologies behind them open up access to technology for different kinds of people. Web 2.0 enables people; they would never be able to do these things without it, so it is a kind of prosthesis for them.

Web 2.0 empowers them to do what they always wanted but never dared to.

A video to see cyborgs in action: http://www.youtube.com/watch?v=rSnXE2791yg


Web Conservation

October 17, 2007

An interesting aspect of the Web 2.0 phenomenon is its resemblance to a worldwide library to which anyone can bring a contribution. It is a fact that, especially in universities but not only there, the web has long been considered a source of documentation; nowadays the process is amplified by the scale at which anyone can contribute their own knowledge. This facet of Web 2.0 has often been called Library 2.0.

Compared to a traditional library, however, one should consider the issue of archiving and preserving this content. Especially from a library perspective, the preservation of what some would call this “cultural memory” has triggered many discussions, debates and research efforts. Nevertheless, practical ideas on how to achieve it are still in their infancy. Some might recall the Internet Archive, currently the only large-scale preservation effort. Other efforts in this area are either not so well known or focus on some particular type of content.

The endeavor is made harder by how difficult it is to predict changes on the web. Its strengths from the content-access point of view (links, tags, blogrolls, chaining and syndication feeds) are also weaknesses when it comes to archiving all this material. Research in this area revealed that the average lifetime of a web page is somewhere between 45 and 75 days. In addition, the constant changing and aggregation of content amplifies the problem.

The most important obstacle, though, lies in technology issues. First, there is content that becomes obsolete because the format, access protocol or standards used when it was published have evolved in the meantime. It is often difficult to upgrade all content to the new technologies, and some of it is always lost. Besides preserving the content, it is also a matter of preserving the experience, which resides in the presentation and the user’s interaction with the site, given the palette of web client and server platforms deployed. And while the content itself is usually easy to access, this is seldom true for the code, scripts and databases that drive the site’s experience.

There will always be parts of a site that are inaccessible to an external party, and the extra effort required to uncover those files may very well involve the site’s owner or administrator. In addition, with the currently available standards, the site itself can ask automated systems trying to archive its content to ignore some of its files. The Robots Exclusion Protocol, for example, is a way of telling search or archiving engines (the so-called crawlers) to skip a set of files that are otherwise publicly accessible. It is estimated that the hidden web is 400 to 550 times larger than the content visible to the end-user.
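To make the crawler side of this concrete, here is a minimal Python sketch, with purely illustrative URLs and bot name, of how an archiving crawler could honour a site’s robots.txt before fetching a page:

```python
from urllib.robotparser import RobotFileParser

# Download and parse the site's robots.txt (the URLs are purely illustrative).
robots = RobotFileParser()
robots.set_url("http://example.org/robots.txt")
robots.read()

page = "http://example.org/reports/2007/summary.html"
if robots.can_fetch("MyArchiveBot", page):
    print("Allowed to archive:", page)
else:
    print("robots.txt asks crawlers to skip:", page)
```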

Suggested ways of uncovering this hidden web involve software that can detect and then try to replicate the behavior of different web pages, meaning that even if the code directing that behavior remains hidden, the experience is preserved. Finally, cooperation with the site owner is the best solution to the data-collection problem, but it is also the least scalable. The deployment of the OAI Protocol for Metadata Harvesting (OAI-PMH) could enable automatic retrieval of an entire site’s content for archiving purposes; however, the reluctance, for various reasons, to adopt this technology on a large scale, as well as the extra effort and costs involved, has so far kept it a theoretical rather than a practical solution.
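Just to illustrate what such a harvest looks like, here is a small Python sketch against a hypothetical repository endpoint; it assumes the repository implements OAI-PMH and serves standard Dublin Core records:

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

# Hypothetical OAI-PMH endpoint; any repository implementing the protocol
# exposes a similar base URL.
BASE_URL = "http://repository.example.org/oai"
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

with urlopen(BASE_URL + "?" + urlencode(params)) as response:
    tree = ET.parse(response)

# Every <record> carries a header with an identifier and a datestamp,
# plus the Dublin Core metadata itself.
OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}
for header in tree.iterfind(".//oai:record/oai:header", OAI_NS):
    identifier = header.findtext("oai:identifier", namespaces=OAI_NS)
    datestamp = header.findtext("oai:datestamp", namespaces=OAI_NS)
    print(identifier, datestamp)
```

A real harvester would also follow the protocol’s resumptionToken to page through large result sets, but the single request above already shows the idea of pulling a site’s records automatically.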

In relation to the multiple facets under which a web page can be presented to its users, a new term has been coined: content cardinality. A cardinality greater than one means that the same web page, identified by the same URL, can have slightly (or completely) different content when viewed by different users. This is obviously true for sites that customize their content depending on the user account, or that publish different content for every instance of the page.
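A rough way to picture a cardinality greater than one (this is only a sketch with a made-up URL, not part of the research I am describing) is to fetch the same address as two different users and compare what comes back:

```python
import hashlib
from urllib.request import Request, urlopen

URL = "http://example.org/frontpage"   # hypothetical personalised page

def fetch_hash(session_cookie: str) -> str:
    # Fetch the page as a given "user" and hash the bytes that come back.
    request = Request(URL, headers={"Cookie": session_cookie})
    with urlopen(request) as response:
        return hashlib.sha1(response.read()).hexdigest()

hash_user_a = fetch_hash("session=user-a")
hash_user_b = fetch_hash("session=user-b")

# Different hashes for the same URL mean a content cardinality greater than one.
print("Identical content for both users?", hash_user_a == hash_user_b)
```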

Another issue regarding the preservation of web content is the legal one. Intellectual property and privacy-related content in particular is a sensitive area that often hinders archiving attempts.

These issues around the long-term preservation of Web 2.0 have triggered various pros and cons. Some could argue that the dynamics of the web, consisting of countless daily blog posts, data mash-ups, ever-changing wiki pages and personal data uploaded to social networking sites, is of limited value and not worth the trouble. However, there is a lot of material with at least some value, and some believe we should preserve at least that part.

Unfortunately, as I explained before, not only are the tools for reaching the hidden part of the web still in their infancy, but no major project for archiving even the more accessible part of the web has taken shape yet. Temporary solutions could include individual preservation, where the site itself provides some means of archiving old data. This is not only a feature of most wikis; more and more sites offer their users some sort of archiving repository. Nevertheless, it is obvious that, for the goal of preserving the essence of the web, new research and maybe new technologies are still required.


More about memes…

October 17, 2007

While searching for information about memes on the Internet, I found some advice on how to create an interesting and attractive meme, the result of a competition started by the site “HazRuido.com”.

Summarizing some of the points given by this webpage, we can say that a good meme needs a topic peculiar enough to capture attention, so that the receiver will pass it on to friends or colleagues.

Another important thing is to think about the profile of the person who is going to receive the meme (usually young, single men). Using the web as much as e-mail can help attract the interest of Internet users, especially if we manage to catch the attention of opinion leaders. And finally, we should take advantage of traditional media to find topics for our memes.

Nowadays, there are different types of memes according to their objectives:

  • Self-promotion: focused on promoting a person, product or company.
  • Advertising and marketing: focused on public relations and advertising, used to create interest in and positive opinions about a product among the public.
  • Hoaxes: they spread urban rumours, for example.
  • Others.

In my opinion, memes are becoming more and more important in the social networking area and among big companies, which see them as a source of business.

By Elena P.


Memes

October 16, 2007

Surely you have received an e-mail asking you to fill in a number of fields about yourself (your name, your favourite colour, whether you have ever been in love and so on) and forward it to your contacts, so that they receive your answers and have to do the same. That’s a meme. But, of course, not all memes are like that.

According to the Oxford English Dictionary:

A meme /mi:m/ is an element of a culture that may be considered to be passed on by non-genetic means, esp. imitation.

So any kind of information that is copied from person to person, and liable to be modified and selected along the way, is a meme; it is like the cultural sibling of a gene. The term was coined by Richard Dawkins in his book The Selfish Gene. Memetics, the discipline that studies memes, holds that our cultures are the constantly evolving product of the natural selection of memes. Other types of memes, for example, would be fashions or urban legends.

Susan Blackmore, whose biography and work are worth reading, is interested in this topic and sees the Internet as “a vast realm of memes, growing rapidly by the process of memetic evolution and not under human control“. I cannot disagree with her, the Internet being such a great network for passing on information. Lately it is very common to see memes travelling through the blogosphere, such as “your five weird habits” or “your 25 favourite films ever”.

Some ‘self-declared’ memes are quite useless, but others can be very interesting. For example, Blog del día started a meme entitled “3 tips to be a good blogger” (in Spanish). A large number of bloggers followed it, and an interesting survey of almost all the reactions can be found at Blogging para ser un buen blogger (also in Spanish). Not that they couldn’t be summarized, of course (here, for example, are Blogger’s tips on the subject), but in any case it’s a fascinating process of social contribution to the evolution of information.