Gadgets and Mashups in the Live platform

November 18, 2007

Microsoft has traditionally been a leader in desktop development. With the explosion of Web 2.0, the company has been trying to cope with the overwhelming number of web development environments that are appearing everywhere…

One of Microsoft's first attempts to step into the Web 2.0 ring was the MSN Messenger application, which was quickly converted from a client-server architecture to an application built on web services, RSS and XML, without harming its users, and became one of the fastest-growing applications in history.

But that was not enough: fully web-based applications designed for Web 2.0 took over and, at the same time, became part of the average user's portfolio.

Microsoft keeps making efforts to keep up with the plethora of initiatives that a sudden crowd of new competitors (or collaborators?) has been putting on the Internet, specifically in the layer of small web applications, or gadgets. It launched the live.com site, which allows users to publish the gadgets they develop in a “gallery”, so that other users can pick them up to build “custom” pages.

Later, Microsoft launched “Spaces”, another website aimed at a more “personal sharing” audience, where people can start a blog and a photo-sharing space, and can also include the gadgets they choose from the gallery.

The development process, however, was still something of a “geeky” secret, available mainly through MSDN (MSDN is Microsoft’s developer community, which offers free general advice on Microsoft development and, through a paid subscription, additional support and tools).

Recently, in a new effort to help people develop for Web 2.0 before they go to the competitors, Microsoft added specific guidance for building gadgets to the developer center for the gadgets gallery, and enabled the development of other types of gadgets as well: gadgets for the “Sidebar” (a side panel application in Windows Vista), emoticons and winks for Messenger, toolbar buttons, and gadgets for SideShow (a small auxiliary screen included in newer laptops that allows quick information browsing in Windows Vista even when the computer is turned off).
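
To give an idea of what building one of these gadgets involves: a Sidebar gadget is essentially a small HTML/JavaScript page packaged together with an XML manifest (gadget.xml). The sketch below is reconstructed from memory of the Vista Sidebar documentation, so treat it as an approximation; the gadget name, namespace, version and file names are placeholders, not taken from any real gadget.

    <?xml version="1.0" encoding="utf-8" ?>
    <gadget>
      <!-- Display name and namespace for the gadget (placeholder values) -->
      <name>Hello Gadget</name>
      <namespace>example.hello</namespace>
      <version>1.0.0.0</version>
      <author name="Example Author" />
      <copyright>2007</copyright>
      <description>A minimal "hello world" Sidebar gadget.</description>
      <hosts>
        <!-- Tells the Sidebar which HTML page to load as the gadget's UI -->
        <host name="sidebar">
          <base type="HTML" apiVersion="1.0.0" src="hello.html" />
          <permissions>Full</permissions>
          <platform minPlatformVersion="1.0" />
        </host>
      </hosts>
    </gadget>

The manifest plus hello.html (and any scripts or images) go into a folder whose name ends in .gadget, which, as far as I can tell, is all the Sidebar needs to install it.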

For the next step, mashups, Microsoft has developed a new website called Popfly. It is a kind of portal that integrates several new technologies, including Silverlight, to let users quickly build mashups by easily combining pieces of code and information developed by others and published on Popfly… more information about Popfly to come…


Gadgets and the web 2.0

November 18, 2007

The competition to win users’ preference among Web 2.0 tools gets even more serious when it comes to development.

While text and media items are shared with increasing ease, the war is moving to new terrain: development. As Web 2.0 keeps growing, text, image and audiovisual content is no longer enough for users… they now want to share intelligence, expressed in the form of pieces of code that some call gadgets, others widgets, and at least half a dozen other names.

We are now exploring the “???gets” world: small pieces of code that can be shared on the Internet so other users can put them in their pages for personal and/or public use… and here we will have to confront head-on our limited experience in software development, which so far seems to be a must if you want to share code.

We’ll be exploring how to build gadgets… or widgets… or whatever you want to call them, on different platforms. We chose four platforms that, by our criteria, are the most common, so we can find out how it is done… we already discarded Opera, since the gadgets it offers (apparently very easy to build) are restricted to the Opera browser.

Stay tuned! We’ll share our thoughts on these and other topics next!

Mauricio


Some statistics

October 31, 2007

The first part of our work on this blog has come to an end. During these weeks we have researched a lot about several topics related to Web 2.0. Some of our research has gone live in the form of published posts, and some of it has simply remained in our memory. With all of it we have learned a great deal, and sometimes even opened our minds. Not to mention what we could call the ‘active’ part: not only reading, but also managing our own blog, keeping it accessible and contributing to it, as well as commenting on our classmates’ blogs. Most of us had never written on a blog before, and it has been a new experience.

Looking back at the work we’ve done, these are the statistics for each of us:

                            ELENA   ESTRELLA   ALEX   MAURICIO
  READING                    0.25       0.30   0.50       0.30
  PUBLISHING/COLLABORATING   0.50       0.40   0.35       0.55
  THINKING                   0.25       0.30   0.15       0.15

So that makes these global figures:

                            AVERAGES
  READING                        34%
  PUBLISHING/COLLABORATING       45%
  THINKING                       21%

Group statistics

We hope that our writings are interesting for you, the readers of this blog. Don’t forget to leave a comment! 😉


More about Web 2.0 Conservation

October 22, 2007

In my last post I talked about the aspects and issues that arise in the preservation of Web 2.0 content. Now I just want to point out how these preservation issues play out for some components of Web 2.0:

Blogs: one of the most common pieces associated with the new version of the web. The conversational content of many of them can sometimes be judged non-valuable, that is, not worth the trouble of archiving. For the rest, however, the short update cycle (the speed at which new content is added) and the numerous external references pose some of the difficulties I talked about. And what should be preserved from a blog besides the posts: the comments, the embedded resources? Someone noted that blogs tend to be individual rather than organizational, hence it is rather difficult to archive them in a way that makes the content easily searchable and accessible.

Wikis: most of their content is what we called the hidden web. The text and media content are stored in databases that are not directly accessible, while the wiki experience lives as metadata on the web server. However, the built-in history function that most of them offer is an acceptable compromise for now.

Media sharing: the content is again part of the hidden web, and most of the streaming technologies used for live media are either proprietary or use Digital Rights Management to hinder any attempt to download the content.

Data mash-ups: they assemble live content from various websites that publish their APIs. Most of that content is also part of the hidden web and therefore not readily accessible. Since the look & feel is part of the experience, the overall preservation is even more difficult to accomplish.

Social networks: they contain their users’ personal spaces. Though some involve look & feel elements that are part of the experience, this is not always the case. However, most contain their users’ private information, so a major obstacle lies in privacy and intellectual property.


Accessibility is not a matter of cyborgs… anymore.

October 19, 2007

(Just had to repost)… DO WAIT until the full page finishes loading before hitting Publish! GRRR.

Earlier today, while looking for some serious reading relating Web 2.0 to accessibility, I found this document (http://www.w3.org/TR/WCAG20/) from the WCAG working group. Even as we’ve been struggling for the last week or so to make our blogs compliant with the different accessibility recommendations, the WCAG WG has prepared a draft of the new version of its guidelines.

Reading further into that link, I found that its predecessor (the recommendation in force until this one becomes a standard) is version 1.0, which dates back to 1999. Well, in 1999 nobody really thought about accessibility, because the priorities were Y2K (remember those days?) and building tools that did the required work (building a web page), plus others for building e-commerce sites. These tools were basically of two kinds: programming languages (and their interfaces) such as Java, Perl or VB, and “intelligent” web servers that could generate sites starting from CSS templates.

Today, eight years later, a lot of interest (and pressure) has entered the game, as governments and industries require certain levels of compliance in the web development they commission.

Taking a closer look at the participants (http://www.w3.org/WAI/GL/participants.html) of the WCAG WG (working group) defining this standard (which was open for review and comments until June 2007), companies such as Microsoft, IBM, Google and Adobe are making their participation count in the forum, so I would expect that in the very near future all the new versions of the tools and software we use to build web pages will be WCAG 2.0 compliant.

An example of this is Microsoft’s recently launched Expression Web (an update to the popular FrontPage, http://www.microsoft.com/expression/expression-web/FPUpgradeFAQ.aspx), which claims to be WCAG 2.0 compliant and to include validation tools for it. Maybe with this kind of tool, accessible web development will stop being a cyborg thing.
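
To give a flavor of what such validation tools look for, here is a small hand-written HTML sketch (not taken from Expression Web; the markup is purely illustrative) showing two of the most common fixes the guidelines ask for: a text alternative for images and an explicit label for form controls.

    <!-- Flagged by accessibility checkers: image with no text alternative -->
    <img src="logo.gif">

    <!-- Fixed: the alt attribute describes the image for screen readers -->
    <img src="logo.gif" alt="Company logo">

    <!-- Fixed: the label is explicitly tied to the input via for/id -->
    <label for="email">E-mail address</label>
    <input type="text" id="email" name="email">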

My lost post also explained how the term cyborg, as used today, refers to somebody who uses technologically advanced devices so closely tied to his or her activities that they appear to be part of him or her. After our group’s post “I, Cyborg”, somebody argued that the tools we are using today simply parallel the human use of earlier inventions, and they are right… the TV and the wheel are inventions humans once lived without, but when they first appeared, people probably used terms other than cyborg to refer to their users.


Web Conservation

October 17, 2007

An interesting aspect of the Web 2.0 phenomenon is its resemblance to a worldwide library to which anyone can bring a contribution. It is a fact that, especially in universities but not only there, the web has long been considered a source of documentation; nowadays the process is amplified by the scale at which anyone can contribute their own knowledge. This facet of Web 2.0 has often been called Library 2.0.

Compared to a traditional library, however, one should consider the issue of archiving and preserving this content. Especially from a library perspective, the preservation of what some would call this “cultural memory” has triggered many discussions, debates and research efforts. Nevertheless, practical ideas on how we can achieve it are still in their infancy. Some might recall the Internet Archive, currently the only large-scale preservation effort. Other efforts in this area are either not so well known or focus on some particular type of content.

The endeavor is all the more difficult because changes in the web are hard to predict. Its strengths from a content-access point of view (links, tags, blogrolls, chaining and syndication feeds) are also a weakness when it comes to archiving all this material. Research in this area has revealed that the average lifetime of a web page is somewhere between 45 and 75 days. In addition, the constant changing and aggregation of content amplifies the problem.

The most important obstacle, though, lies in the technology itself. First, there is content that becomes obsolete because the format, access protocol or standards in use when it was published have evolved in the meantime. It is often difficult to upgrade all content to the new technologies, and some of it is always lost. Besides preserving the content, it is also a matter of preserving the experience, which resides in the presentation and the user’s interaction with the site, given the palette of web client and server platforms deployed. And while the content itself is usually easy to access, this is seldom true for the code, scripts and databases that drive the site’s experience.

There will always be parts of a site that are inaccessible to an external party, and the extra effort required to uncover the inaccessible files may very well involve the site’s owner or administrator. In addition, with the currently available standards, the site itself can ask automated systems trying to archive its content to ignore some of the files. The Robot Exclusion Protocol, for example, is a way of telling search or archiving engines (the so-called crawlers) to skip a set of files that are otherwise publicly accessible. It is estimated that the hidden or protected part of the web is 400 to 550 times larger than the content visible to the end user.
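
As an illustration, the Robot Exclusion Protocol works through a plain-text file called robots.txt placed at the root of the site; a file like the one below (the paths are made up) asks every crawler, archiving engines included, to skip whole directories even though their files are publicly reachable.

    # Illustrative robots.txt - the paths are hypothetical
    User-agent: *
    Disallow: /private-reports/
    Disallow: /drafts/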

Suggested ways of uncovering this hidden web involve software that can detect and then try to replicate the behavior of different web pages, meaning that even if the code that drove that behavior remains hidden, the experience is preserved. Finally, cooperation with the site owner is the best solution to the data-collection problem, but it is also the least scalable. Deployment of the OAI Protocol for Metadata Harvesting (OAI-PMH) could enable automatic retrieval of an entire site’s content for archiving purposes; however, the reluctance, for various reasons, to adopt this technology on a large scale, as well as the extra effort and costs involved, have so far kept it a theoretical rather than a practical solution.
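
For reference, OAI-PMH itself is quite simple: it is plain HTTP with a handful of query parameters, and the repository answers with XML metadata records. A harvester would typically issue requests along these lines (the base URL and the record identifier are hypothetical; the verbs and the oai_dc metadata prefix are part of the protocol):

    http://example.org/oai?verb=Identify
    http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
    http://example.org/oai?verb=GetRecord&identifier=oai:example.org:1234&metadataPrefix=oai_dc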

In relation to the multiple facets in which a web page can be presented to its users, a new term has been coined: content cardinality. A cardinality greater than one means that the same web page, identified by the same URL, can have slightly or substantially different content when viewed by different users. This is obviously true for sites that customize their content depending on the user account, or that publish different content for every instance of the page.

Another issue regarding the preservation of web content is the legal one. Intellectual property and privacy-related content in particular is a sensitive area that often hinders archiving attempts.

These issues regarding the long-term preservation of Web 2.0 have triggered various pros and cons. Some would argue that the dynamics of the web, consisting of numerous daily blog postings, data mash-ups, ever-changing wiki pages and personal data uploaded to social networking sites, is of limited value and not worth the trouble. However, there is a lot of material with at least some value, and some believe we should preserve at least that part.

Unfortunately, as I explained before, not only are the tools available to reach the hidden part of the web still in their infancy, but no major project for archiving even the most accessible part of the web has fully taken shape yet. Temporary solutions could include individual protection, where the site itself provides some means of archiving old data. This is not only a feature of most wikis; more and more sites provide their users with some sort of archiving repository. Nevertheless, it is obvious that, in order to preserve the essence of the web, new research and perhaps new technologies are still required.


WEB ACCESSIBILITY TOOLBAR

October 15, 2007

Just a curiosity…

While looking for information about accessibility, I have found WAT 2.0 (Web Accessibility Toolbar 2.0), a tool developed “to aid manual examination of web pages” for some of the most relevant aspects of website access. With it, web page designers can provide access to alternative views of some of a site’s contents or facilitate the use of some online applications. These are its most important advantages and features:

  • Compatible with IE7 and Windows Vista.
  • New functionality: a log window that allows saving the information to a text file.
  • Buttons to open the same page in Opera and Firefox (only if they are installed).
  • A source-code view generator where you can highlight any element you want.
  • Focus highlighter.

It seems (according to the information I have read) that the tool is still being tested (the beta version is available on the Internet), but in my opinion this toolbar can have a great future, mainly among people who use IE. Perhaps most of you already knew about this software (or at least had a general idea of it), but I didn’t, and its features seem very interesting to me…

By Elena P.