Archive for the ‘Web’ Category

Big Bazaar’s ‘your money is gone in 15-day’ return policy

Saturday, November 8th, 2008

I was looking for a multi-region DVD player and hopped into Big Bazaar, a multi-brand chain store that sells everything from vegetables to LCD TVs. The sales guy pitched me a Philips DVD player, claiming, “it will play everything!” To my surprise, there was no region code mentioned on the box packaging of the DVD player (Philips’ own deceptive packaging). Being a skeptic, I asked him whether the item was returnable. Yes, he said.

Once unpacked, the DVD player only played Region 5 (India/Asia) DVDs, and not the Region 1 (US/Canada) discs, a cache of which I had lugged from the US.

I went back to Big Bazaar after 10 days and was able to return the item with a little play of words, but to my surprise I was given a credit note instead of the Rs. 3,500 cash I had paid. The credit note itself was okay, as that may be a way to deter fraud. The catch was that the credit note expires 15 days from the issue date. The money evaporates if I don’t go back to the same store and spend it all. This is unfair. I’m planning to file a complaint with the Ministry of Consumer Affairs about this practice. It may not be illegal from Big Bazaar’s point of view, as in India returning an item is itself a new concept!

There are a few things Big Bazaar should fix to avoid consumer complaints:
1. Issue a non-expiring credit note, or make it valid for at least 6 to 12 months
2. Allow the credit note to be encashed at any Big Bazaar store, not just the specific store where it was issued.

Does Salesforce really lock your data?

Tuesday, November 4th, 2008

I love Zoho and what they have delivered as a product suite. Salesforce and Zoho have a bitter relationship after their failed merger talks (Benioff made the offer, which Zoho rejected). However, I disagree with Sridhar on one count in his latest post:

Since then, Salesforce has repeatedly tried to block customers from migrating to Zoho CRM, by telling them (falsely) that they cannot take their data out of Salesforce until their contract duration is over. We have emails from customers recounting this.

Isn’t that the natural tactic any sales guy plays to prevent customer migration? Migrating data out of Salesforce is one email away. Even if you are in the trial period, they give you a complete dump of all the objects as Excel spreadsheets without a word.

WordPress inching towards full CMS capabilities

Monday, July 14th, 2008

Matt announced WordPress 2.6. Features include:

  • Version control: wiki-like tracking of edits
  • Google Gears compatibility
  • Theme previews — was much needed for experimentation!
  • Plugin update notification bubble
  • SSL Support and other security enhancements
  • Word count
  • Easter egg (Matt has quashed the rumours)


Tiny urls: Taking WWW towards a single point of failure

Saturday, November 17th, 2007

Tinyurl, urltea, and several other URL reducers provide an excellent service: they reduce sometimes very long URLs to a fraction of their original size. The short version of the URL provides relief to people on the phone (I can’t really think of anything else that benefits from the service). Thanks to the growth of Twitter, URL-abbreviating services have gained a lot of popularity recently, so much so that people have started replacing regular URLs on the web (e.g., look at the comments on this post). Charlene Li even considered having tiny URLs in her book. David Pogue captures the ecstatic side of finding a new service without evaluating the potential pitfalls.

I don’t understand why people want to mask URLs on the normal WWW. A lot of people click a URL only after hovering over it and figuring out the actual target.

Five minutes ago, I clicked on a urltea link and got an HTTP 503 error:

A 503: service unavailable from urltea. Tinyurl claims to have abbreviated a billion URLs. Imagine the impact of such downtime.

The URL abbreviation services pose the following problems:

1. A single point of failure for billions of web URLs. This totally defeats the distributed architecture of the WWW.
2. Masked URLs are prone to abuse by spammers and XSS exploiters. Quoting the Wired blog, “your audience has no clue where it will lead — could be a porn link, could be a virus laden site from Russia.”
3. A lot of browser security features work on the domain name and its associated attributes stored locally. A different URL masks the true domain.
4. It creates even more problems for the text-mining community, where a single domain pollutes the corpus of links while hiding the actual target. Any link analyzer has to first resolve the actual target of the tiny URL by performing an HTTP HEAD request.
5. What if tinyurl gets bought by a get-rich-quick advertising company and they start serving a pop-up along with the actual URL? That would be an idea for someone to make a lot of money from billions of tiny URLs!

The value provided by these services for mobile is great, but it’s a big problem when tiny URLs start popping up on everybody’s webpages! I’m not alone in thinking there is something wrong with the service in the WWW context. Here’s Tom, and here’s Scott Rosenberg of Salon.

The rise of Ajax and the death of HTTP 404

Wednesday, January 3rd, 2007

In the classic web application model, the user-agent sits between the user and the webserver; the user-agent applies no business logic other than rendering the pages. With the rise of Ajax, server-side logic is moving to the client, so much so that the conventional 3-tier web model is being challenged: the whole presentation layer and the controller are being touted to move to the browser.
So, what happens to the veritable 404 (and other related HTTP error codes and pages)? In classic web applications, if you hit a “Page Not Found” situation, you as a user “see” the associated 404 page. With Ajax, however, it’s the Ajax engine which is supposed to capture the 404. The user sees a “pretty message” while the Ajax engine (or the library running the engine) captures the 404. For example, in the script.aculo.us JavaScript library, on404 is the callback handler for HTTP 404s returned from the server.
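The engine-side capture can be sketched like this (the handler names here are illustrative, not taken from any particular library):

```javascript
// Sketch: an Ajax engine trapping a 404 itself, so the user sees the
// application's "pretty message" instead of the server's error page.
function handleReadyStateChange(xhr, handlers) {
    if (xhr.readyState !== 4) return;       // request not finished yet
    if (xhr.status === 404) {
        handlers.on404();                   // e.g. show "Sorry, not found"
    } else if (xhr.status === 200) {
        handlers.onSuccess(xhr.responseText);
    } else {
        handlers.onFailure(xhr.status);     // other HTTP errors
    }
}
```

It would be wired up as xhr.onreadystatechange = function () { handleReadyStateChange(xhr, handlers); }, so the raw status code never reaches the user.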
Reproducing and annotating Jesse James Garrett’s diagram from the original Ajax article, comparing the classic and Ajax application models, further crystallizes the thought that we need a redefined set of HTTP codes to support the Ajax application model.
Image modified without explicit permission of Adaptive Path. Are they cool?

Ajax: Cleaner, Simpler, and Interactive User Interfaces

Monday, June 13th, 2005

I never thought geeks loved fancy four-letter words until Ajax happened. Since the term Ajax was coined (more than the arrival of Google Maps, GMail, etc.), there has been renewed interest in XML, JavaScript, and DHTML, from the perspective of binding XML as native JavaScript objects (the DOM) and the ability to make ad-hoc HTTP requests after a page load (without refreshing the page). However, the technique of refreshing partial content on a page is not new; websites have been doing it with IFRAMEs and the like to achieve the desired effect. But it was more of a hack than clean programming.
Before Ajax was born, Microsoft engineers were cranking (1, 2) on the XMLHttpRequest object on their MSDN website and in the Outlook web client for Exchange. Now (since there is a lot of hype), I remember reading an article in MSDN Magazine, which I finally found on the web. Also, to my surprise, I found the following piece of JavaScript code (thanks to Google Desktop) lurking on the hard drive of an old desktop. I have no clue which website I copied it from! Anyway, this was my first working Ajax code:

var req;
function processReqChange() {
    if (req.readyState == 4) {
        if (req.status == 200) {
            // success: use req.responseText or req.responseXML here
        } else {
            alert("Request Borked: " + req.statusText);
        }
    }
}
function loadXMLDoc(url) {
    if (window.XMLHttpRequest) {
        req = new XMLHttpRequest();
    } else if (window.ActiveXObject) {
        req = new ActiveXObject("Microsoft.XMLHTTP"); // IE 5/6 fallback
    }
    if (req) {
        req.onreadystatechange = processReqChange;
        req.open("GET", url, true);
        req.send(null);
    }
}

The above is very rudimentary code. I don’t even know if it’s going to work on “all” the browsers.
Doing things the Ajax way was JavaScript’s original goal, but the movement got muffled by usability pundits and the reluctance of companies to piss off customers who were using old browsers.
Anyway, there is good pickup on Ajax (it has been slashdotted, plus there are dedicated blogs). However, there is a lot to be solved with the rest of the web. As Adam Bosworth points out, we still need to solve three fundamental problems, viz. fixing the printing of web pages, making the browser listen for external events, and having a web application run offline. I think we may be able to get a handle on the last one with Greasemonkey user scripts.
Next part: my Greasemonkey endeavours (I actually wrote some working code mixing Ajax and Greasemonkey with some offline content).

Scary Dot-con saga of Infospace

Sunday, March 20th, 2005

Ravneet Grewal reports little-known facts about Infospace: its founders, options that were never given to employees, lawsuits, the push to IPO, founders cashing out before the bust, employees selling their mansions to founders, lies, SEC lobbying; the list is endless. The original story is part of an investigation done by the Seattle Times.

Is HTML a Legacy? The rise of Rich Internet Applications

Sunday, August 22nd, 2004

c. 1995. During the SunWorld conference, there was a lot of activity around Java. Microsoft and, notably, Netscape announced their intention to license Java. It was the magic of executing applets in the browser that made browser makers circle like bees around the Java platform. With applets, the seeds of Rich Internet Applications were sown. Macromedia was around too, but applets provided not only animation: they offered a complete ability to build rich GUI applications using object-oriented programming.
During the dot-com boom, Internet applications were predominantly of the “Browser<-->Application Server<-->Database” type. The application server was where all the logic, interaction, and caching happened. The browser was just the rendering engine for the HTML output of the application server. Almost every click on an HTML page resulted in a server roundtrip. The Model-View-Controller (MVC) framework was the chief design pattern governing complex websites.
Around 2002-2003, the logic started moving to the client: an initial download of screens and rendering logic, with subsequent server trips only to fetch data. A case in point is GMail: a big download of JavaScript, followed by a DHTML-driven browser UI. If you use GMail, you might have noticed the speed with which you can move between messages that have already been viewed. Oddpost is another RIA example, total DHTML magic. It was the RIA-ness of its e-mail that made Oddpost a good proposition for Yahoo.
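That speed comes from simple client-side caching, which can be sketched as follows (the function names are mine, not GMail’s):

```javascript
// Sketch: cache each message on first view, so revisiting it never
// touches the server again; only the first view pays a server trip.
function makeMessageCache(fetchFromServer) {
    var cache = {};
    return function getMessage(id) {
        if (!(id in cache)) {
            cache[id] = fetchFromServer(id); // first view: one server trip
        }
        return cache[id];                    // repeat views: served locally
    };
}
```

The closure keeps the cache private; the server-fetch function is passed in, so the caching logic stays independent of the transport.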
A growing RIA framework is Macromedia’s Flex. Quoting from ColdFusion Developer’s Journal, “Flex offers a standards based, declarative programming methodology and server runtime services for delivering rich, intelligent user interfaces with the ubiquitous cross platform, cross-device Macromedia Flash client.”
Other top contenders for RIA framework:

Check out a real-world RIA example here. (No, I didn’t make a $259/day reservation. I booked mine at a different hotel offering $54.95/day, using plain old HTML!)

The Vision of Semantic Web: Part I (Search Engines and Web content)

Sunday, August 1st, 2004

Semantic 1 : of or relating to meaning in language
That’s the dictionary definition of “semantic.” Applied to the Web, it means markup that captures the meaning of the content. Let us take the example of a keyword search on Google. I type in Blog, take a snapshot of the results, and then key in Weblog. Only one result in the top 10 appears in both samples.
Blog and Weblog: don’t we use these interchangeably? Don’t they mean the same? Semantically, to a human, YES; to the search engine indexing the web content, NO. That is exactly the vision of the Semantic Web: search engines, and information retrieval in general, extracting meaning the way humans do.
Well, in the above example of “Blog” vs. “Weblog,” it’s not the search engine’s fault for failing to index the content in a desirable manner. To some extent the problem lies in the HTML page, which expresses the terms “Blog” and “Weblog.” What if the HTML page header said that all the terms in the page conform to a certain taxonomy? This is not uncommon; it’s exactly what we do in a DTD or an XML Schema document. Take, for example, the <P> tag. The tag is defined in the HTML DTD and well understood by the browser’s parsing and rendering engine. A browser semantically understands this tag as: “the text which comes after this tag is a paragraph and should be rendered as such.” In the case of HTML the vocabulary is limited; a P tag is always a P tag. In the case of the English language, however, a “Blog” is a “Weblog” which is an “Online Journal” which is… the list continues.
Establishing relationships is not trivial. A well-defined set of terms related through peers, parent-child nodes, and attributes: this, essentially, is an ontology, a way of representing and conceptualizing knowledge.
One very good example where this association works: a robot programmed to identify and recognize fruits. The robot’s master writes the word “Mango” on the whiteboard. The robot quickly scans its ontology (assuming the robot in our example uses an ontology for knowledge representation) for a match. It finds an exact match for the word M-A-N-G-O. Then it traverses: Mango --> Mangifera Indica (attribute type: Scientific Name) --> Fruit (parent node). The robot then thinks, “Mango is a Fruit.” But how does it find out whether the fruit is sweet or sour, grown in a tropical climate, has a large seed, grows on trees, and is rich in Vitamin C, Folate, Selenium, and Pantothenic Acid? The answer lies within the ontology, which could represent this extended knowledge as well.
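The robot’s traversal can be sketched as a parent-chain lookup over a toy ontology (the graph below is illustrative, not from any real knowledge base):

```javascript
// Toy ontology: each term names its parent node and its attributes, so
// "is a Mango a Fruit?" becomes a walk up the parent chain.
var ontology = {
    'Mango': { parent: 'Fruit', attrs: { scientificName: 'Mangifera indica' } },
    'Fruit': { parent: 'Food',  attrs: {} },
    'Food':  { parent: null,    attrs: {} }
};

// Walk from term up through its ancestors looking for category.
function isA(term, category) {
    var node = term;
    while (node) {
        if (node === category) return true;
        node = ontology[node] ? ontology[node].parent : null;
    }
    return false;
}
```

The extended knowledge (sweet/sour, tropical climate, vitamins) would live in the attrs maps or in further ancestor nodes, reachable by the same walk.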
Going back to the search example, there are a couple of ways to solve this problem:

  1. While indexing the page, instead of indexing the terms, index a generic id as retrieved from a “super” ontology. The hard part is locating that ontology.
  2. Let the web page authors expose the terms with some metadata around it. For (a hypothetical) example:
    <p>This is my <so:onto id="757893" contextid="222">Weblog</so:onto>

  3. Convert the search term itself. For example, if I search for Weblog, two queries are made, for “Blog” and for “Weblog”, and the search results are de-duped and presented.

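Option 3 can be sketched as a synonym lookup plus a de-duping merge (the synonym table below is a toy stand-in for a real ontology lookup):

```javascript
// Expand the query via a synonym map, run one query per synonym,
// then merge the result lists while dropping duplicates.
var synonyms = {
    'weblog': ['weblog', 'blog'],
    'blog':   ['blog', 'weblog']
};

function expandQuery(term) {
    return synonyms[term.toLowerCase()] || [term];
}

function mergeResults(resultLists) {
    var seen = {};
    var merged = [];
    for (var i = 0; i < resultLists.length; i++) {
        for (var j = 0; j < resultLists[i].length; j++) {
            var url = resultLists[i][j];
            if (!seen[url]) { seen[url] = true; merged.push(url); }
        }
    }
    return merged;
}
```

A search for “Weblog” would then fan out to both queries and present one merged list, which is exactly the de-duped behaviour described above.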
Some work is already being done in the TAP project. TAP is a successor to Alpiri, which was founded by R.V. Guha and Rob McCool.

Tim Berners-Lee Awarded Millennium Technology Prize

Sunday, May 2nd, 2004

The Finnish Technology Award Foundation describes the selection of Tim Berners-Lee by unanimous vote of the International Award Selection Committee as recipient of the first Millennium Technology Prize.
The Finnish Millennium Technology Prize is awarded every other year for innovation based on scientific research in any of four disciplines: Health Care and Life Sciences, Communications and Information, New Materials and Processes, and Energy and the Environment. It is a technology award granted “for outstanding technological achievements that directly promote people’s quality of life, are based on humane values, and encourage sustainable economic development.”
Tim Berners-Lee, a graduate of Oxford University, England, “holds the 3Com Founders chair at the Laboratory for Computer Science and Artificial Intelligence Lab (CSAIL) at the Massachusetts Institute of Technology (MIT).” Berners-Lee created the first server, browser, and protocols central to the operation of the Web: the URL address, the HTTP transmission protocol, and HTML code. Currently Berners-Lee directs the World Wide Web Consortium (W3C) at MIT in Cambridge, Massachusetts.
He was born in London, UK, in 1955. In 2003, Berners-Lee was named a Knight Commander of the Order of the British Empire for his pioneering work.