by Henry Farrell on August 31, 2010
“Scott Rosenberg”:http://www.wordyard.com/2010/08/30/in-defense-of-links-part-one-nick-carr-hypertext-and-delinkification/ has a good go at Nick Carr’s claims about what the Internets is Still Doing to our Brains. BRRRAINNNZZZ ! ! !
bq. Carr’s “delinkification” critique is part of a larger argument contained in his book The Shallows. I read the book this summer and plan to write about it more. But for now let’s zero in on Carr’s case against links, on pages 126-129 of his book as well as in his “delinkification” “post”:http://www.roughtype.com/archives/2010/05/experiments_in.php. … The nub of Carr’s argument is that every link in a text imposes “a little cognitive load” that makes reading less efficient. Each link forces us to ask, “Should I click?” As a result, Carr wrote in the “delinkification” post, “People who read hypertext comprehend and learn less, studies show, than those who read the same material in printed form.” … [The] original conception of hypertext fathered two lines of descent. One adopted hypertext as a practical tool for organizing and cross-associating information; the other embraced it as an experimental art form, which might transform the essentially linear nature of our reading into a branching game, puzzle or poem, in which the reader collaborates with the author. … The pragmatic linkers have thrived in the Web era; the literary linkers have so far largely failed to reach anyone outside the academy. The Web has given us a hypertext world in which links providing useful pointers outnumber links with artistic intent a million to one. If we are going to study the impact of hypertext on our brains and our culture, surely we should look at the reality of the Web, not the dream of the hypertext artists and theorists.
[click to continue…]
by Maria on February 24, 2010
Three Google executives have been convicted of violating Italian privacy law over a bullying video briefly hosted on Google Video in 2006. Although Google took down the offending video, which showed several children in Turin cruelly taunting a mentally disabled boy, and subsequently helped the authorities identify and convict the person who posted it, three of its executives were convicted today. A fourth employee, who has since left the company, had his charges dropped, which suggests that a political point is being made. The executives in question are outraged, and former UK Information Commissioner Richard Thomas is quoted as saying the episode makes a mockery of privacy laws.
For years I’ve observed that Italy always pushes for the most extreme EU version of laws about privacy and security and then domestically gold-plates them into laws that would seem more at home in Turkmenistan. It makes other Europeans scratch their heads as the Italians generally aren’t willing or able to enforce their draconian laws. Several years ago over a pint in Brussels, an exasperated UK official told me ‘the Italians have no intention of ever implementing this stuff, but we’re a common law country and if it’s on the books, we actually have to do it’.
Update: Milton Mueller has an interesting take on the decision and makes the point that the E-Commerce Directive has not aged well in an era of user-generated content.
[click to continue…]
by Eszter Hargittai on February 25, 2008
I should be prepping for class, but I want to add an alternative perspective to a question raised about Google’s popularity. The Freakonomics blog features an interesting Q&A with Hal Varian today; I recommend heading over to check out how Google’s chief economist answers some questions submitted by readers last week.
The Official Google Blog takes one of the questions and posts an expanded response to it. The question:
How can we explain the fairly entrenched position of Google, even though the differences in search algorithms are now only recognizable at the margins?
Varian addresses three possible explanations: supply-side economies of scale, lock-in, and network effects. He dismisses all of these (see the post for details) and argues instead that Google’s superior search quality is what makes it as popular as it is.
I don’t buy it, especially the dismissal of the lock-in factor. [click to continue…]
by Eszter Hargittai on December 14, 2007
Henry points us to a new Google initiative and wonders what I might think about it. I started writing a comment, but figuring that a comment shouldn’t be three times as long as the original post (and because I can), I decided to post my response as a separate entry.
First, I think Kieran is right: knol is way too close to troll, and I would’ve picked a different name. (That said, most people out there probably have no idea what a troll is, so in that sense it’s just as well, although I still don’t like the name.)
I address three issues concerning this new service, which tries to create something Wikipedia-like within Google’s domain: First, will it gain popularity? Second, what might we expect in terms of quality? Third, what’s in it for Google beyond the potential to showcase more ads? [click to continue…]
by Henry Farrell on December 14, 2007
The Wikimedia folk have been muttering for a while about taking on Internet search companies such as Google, but I suspect that Google is more likely to be able to “displace them”:http://googleblog.blogspot.com/2007/12/encouraging-people-to-contribute.html than vice-versa.
Earlier this week, we started inviting a selected group of people to try a new, free tool that we are calling “knol”, which stands for a unit of knowledge. Our goal is to encourage people who know a particular subject to write an authoritative article about it. The tool is still in development and this is just the first phase of testing. For now, using it is by invitation only. … A knol on a particular topic is meant to be the first thing someone who searches for this topic for the first time will want to read. The goal is for knols to cover all topics, from scientific concepts, to medical information, from geographical and historical, to entertainment, from product information, to how-to-fix-it instructions. Google will not serve as an editor in any way, and will not bless any content … For many topics, there will likely be competing knols on the same subject. … People will be able to submit comments, questions, edits, additional content, and so on. Anyone will be able to rate a knol or write a review of it. … Once testing is completed, participation in knols will be completely open, and we cannot expect that all of them will be of high quality. Our job in Search Quality will be to rank the knols appropriately when they appear in Google search results. We are quite experienced with ranking web pages, and we feel confident that we will be up to the challenge.
I’m waiting to see what Eszter and “Siva”:http://www.googlizationofeverything.com/ have to say before I start to think about this in earnest, but given Google’s clout and resources I imagine that this project is much more likely to have legs than, say, Citizendium.
Update: See also “Nicholas Carr”:http://www.roughtype.com/archives/2007/12/google_knol_tak.php.
by Eszter Hargittai on May 18, 2007
For your weekend reading pleasure: the special theme section of the Journal of Computer-Mediated Communication I edited on The Social, Political, Economic, and Cultural Dimensions of Search Engines is out. The Introduction gives you the motivation for this collection and a summary of the pieces. From the Abstract:
Search engines are some of the most popular destinations on the Web—understandably so, given the vast amounts of information available to users and the need for help in sifting through online content. While the results of significant technical achievements, search engines are also embedded in social processes and institutions that influence how they function and how they are used. This special theme section of the Journal of Computer-Mediated Communication explores these non-technical aspects of search engines and their uses.
Enjoy!
by Eszter Hargittai on January 15, 2007
(Despite the pathetically boring title of this post, I hope you will consider reading on; the plot concerns Web search, racism, and teaching.)
Today’s Google doodle is in honor of Martin Luther King, Jr. Day in the U.S. These doodles always link to something relevant to the focus of the drawing. I was especially curious to see what the target link would be in this case, given some peculiarities of the results of a search on martin luther king jr. Not surprisingly (to me at least), the doodle links to the search results of a somewhat different query: martin luther king jr. day, which yields a sufficiently different set of links.
Why was I not surprised, and why do I take such interest in this particular case? It dates back to exactly two years ago, when I was teaching my Internet and Society class to undergraduate students. At that time, Northwestern didn’t excuse students from classes for the entire day (it does now), but my class conflicted with several campus events so I decided to cancel class. However, I did want students to do some course-related work, so I had them blog about something related to the holiday that they found online. It was a very open assignment, but focused enough to get some of the spirit of the holiday on their minds.
One of the students wrote an entry pointing to the Web site martinlutherking.org and discussed how she had found the site’s critical approach to the holiday and the man behind it intriguing. She cited the sources featured on the site, prominent media outlets such as Newsweek and The New York Times. I found her discussion interesting but was a bit skeptical, so I went to look at the site. I quickly realized that it was hosted by an organization called Stormfront, which prominently describes itself as “White Pride World Wide” on its logo.
[click to continue…]
by Eszter Hargittai on August 7, 2006
Not surprisingly this is the kind of topic that spreads like wildfire across blogland.
AOL Research released (link to Google cache page) the search queries of hundreds of thousands of its users over a three-month period. While user IDs are not included in the data set, all the search terms have been left untouched. Needless to say, lots of searches could include all sorts of private information that could identify a user.
The problems in the realm of privacy are obvious and have been discussed by many others, so I won’t dwell on that part. (See the blog posts linked above.) By not focusing on that aspect I do not mean to diminish its importance; I think it’s very grave. But many others are talking about it, so I’ll focus on another aspect of this fiasco.
As someone who has research interests in this area and has been trying to get search companies to release some data for purely academic purposes, I find an incident like this extremely unfortunate. Not that search companies have been particularly cooperative so far (unsurprisingly, given cases like this), but chances for future cooperation in this realm have just taken a nosedive.
[click to continue…]
by Chris Bertram on December 15, 2005
“Pootergeek has a post”:http://www.pootergeek.com/?p=1908 on using “Google’s blogsearch”:http://blogsearch.google.com/blogsearch as an alternative to Technorati. For full instructions follow the link to his site, but meanwhile “here’s the search set up for Crooked Timber”:http://blogsearch.google.com/blogsearch?hl=en&scoring=d&filter=0&q=%22crooked+timber%22+-site%3Ahttp%3A%2F%2Fcrookedtimber.org%2F&btnG=Search+Blogs
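For the curious, the query URL above is just a handful of parameters strung together, so the equivalent search can be generated for any blog. Here is a minimal sketch in Python: the parameter names (hl, scoring, filter, q) are taken straight from the URL above, while the helper function itself is my own illustration rather than anything Pootergeek provides.
bc. from urllib.parse import urlencode
# Build a Google Blog Search query for mentions of a blog, excluding the blog itself.
# The parameter names come from the Crooked Timber search URL linked above.
def blogsearch_url(blog_name, blog_url):
    params = {
        "hl": "en",        # interface language
        "scoring": "d",    # sort results by date rather than relevance
        "filter": "0",     # do not collapse near-duplicate results
        "q": '"%s" -site:%s' % (blog_name, blog_url),  # exact phrase, minus self-links
    }
    return "http://blogsearch.google.com/blogsearch?" + urlencode(params)
print(blogsearch_url("crooked timber", "http://crookedtimber.org/"))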
by Eszter Hargittai on December 9, 2005
IDG News Service has an article with results from a study conducted by S.G. Cowen and Co. about search-engine use by socio-economic status and Internet experience. The findings suggest that Google users are more likely to come from higher-income households and to be veteran users than those turning to other services for search. Finally, some data on this! I have had this hypothesis for several years, but had no data to test it. I am usually frustrated when people make generalizations about Web users based on data about Google users (worse yet, Google users who were referred to their Web sites through particular searches), and this is precisely why. I did not think Google users (not to mention ones performing particular searches on certain topics) were necessarily representative of the average Internet user. (The report says very little about the methodology of the study, so it is hard to know the level of rigor concerning sampling and thus the generalizability of the findings.)
[click to continue…]
by Henry Farrell on November 2, 2005
While we’re on the subject of Google Maps and Google Earth overlays, “Kathryn Cramer”:http://www.kathryncramer.com/kathryn_cramer/2005/10/google_earth_dy.html and her friends have been doing some interesting and important work on importing satellite data as overlays, and using this as a means to disseminate information about, and focus attention on, natural disasters. This information can be used to discover hill carvings of knights and dragons, but it can also (and this is Kathryn’s main point) bring home what’s happening in disaster zones such as the earthquake region in Pakistan.
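For those who haven’t played with overlays: a Google Earth overlay is typically just a small KML file that tells the program to drape an image (a satellite tile, say) over a box of latitude/longitude coordinates. The sketch below writes a minimal KML GroundOverlay; the image URL and coordinates are placeholders for illustration, not Kathryn’s actual data.
bc. # Minimal sketch: write a KML GroundOverlay that drapes an image over a lat/lon box.
# The image URL and coordinates below are placeholders, not real overlay data.
KML_TEMPLATE = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    '  <GroundOverlay>\n'
    '    <name>{name}</name>\n'
    '    <Icon><href>{image_url}</href></Icon>\n'
    '    <LatLonBox>\n'
    '      <north>{north}</north><south>{south}</south>\n'
    '      <east>{east}</east><west>{west}</west>\n'
    '    </LatLonBox>\n'
    '  </GroundOverlay>\n'
    '</kml>\n'
)
def write_overlay(path, name, image_url, north, south, east, west):
    with open(path, "w", encoding="utf-8") as f:
        f.write(KML_TEMPLATE.format(name=name, image_url=image_url,
                                    north=north, south=south, east=east, west=west))
# Hypothetical example: an imagery tile over part of the 2005 earthquake region in Pakistan.
write_overlay("quake_overlay.kml", "Earthquake imagery (example)",
              "http://example.org/imagery/tile_001.png",
              north=34.6, south=34.2, east=73.8, west=73.4)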
by Eszter Hargittai on October 21, 2005
It’s been too long since we’ve had some geeky goodness around here. But wait! You don’t have to be a geek to appreciate and benefit from the following, so read on regardless of your geek quotient.
I’ve been a big fan of Firefox since last fall, and given its wonderful features (better security [update: see comments for suggestions as to why this may not be the case], all sorts of functionality), I try to do my best to encourage others to use it as well.
In that vein, I have put together a page with a list of my favorite extensions. Firefox extensions are little programs that add features to the browser. Some of my favorites include being able to search for a street address without having to retype the address or pull up a map first, tabbed browsing, better use of browser space, etc. I know some of these features are available in other programs as well, but it’s great to have it all come together so nicely in one program. Feel free to list additional favorites in the comments to this post.
I have also put together a detailed tutorial on how to install the program (on Windows) for those who do not feel comfortable downloading programs. Feel free to pass these pages along to your parents, cousins, friends, etc.
This Webuse.Info page contains some additional information.
Enjoy!
UPDATE: Since the comments have gone in all sorts of directions, I have highlighted in green sections of posts that refer to additional extensions for those who want quick access to that info.
by Eszter Hargittai on September 26, 2005
Inspired by this post on Digg, I started running searches on Google to see what would yield a really high number of results. A search on “www” yields results “of about 9,160,000,000”. This is curious given that, according to Google’s homepage, the engine is “Searching 8,168,684,336 web pages”. Perhaps they are extrapolating to sites that they are not searching. Or perhaps those “of about” figures are not very accurate. In general, those numbers are hard to verify, since Google won’t display more than 1,000 results for any query. The figures may be helpful in establishing relative popularity, although it’s unclear whether the system can be trusted to be reliable even to that extent.
by Eszter Hargittai on September 15, 2005
A serious problem with content filters – whether add-on software or the “safe” search mode of search engines – is that they often block legitimate content that should not be filtered out. These false positives can include important information that most people would be hard pressed to call harmful. Paul Resnick and colleagues have done some interesting work on this regarding filtered health information.
Now comes to us a helpful little tool (found through ResearchBuzz) that lets you run searches to see what content is blocked in the safe-search modes of Google and Yahoo!. Type in a search term and see what sites would be excluded from the results when running the safe mode on the two engines.
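In principle, what such a tool does is simple: run the same query with and without safe search and report the sites that disappear. Here is a rough sketch of that comparison; the fetch functions are hypothetical stand-ins (the post says nothing about the tool’s internals), and the stubbed results exist only to illustrate the set difference.
bc. # Rough sketch of the comparison: which sites appear in normal results
# but vanish when safe search is turned on? The fetch functions are
# hypothetical stand-ins, not a real search API.
def filtered_out(query, fetch_unfiltered, fetch_safe):
    normal = set(fetch_unfiltered(query))  # e.g. domains from an ordinary search
    safe = set(fetch_safe(query))          # the same query in safe-search mode
    return normal - safe
# Hypothetical usage with stubbed result lists:
unfiltered = lambda q: ["thebreastcancersite.com", "cancer.org", "nih.gov"]
safe_mode = lambda q: ["cancer.org", "nih.gov"]
print(filtered_out("breast cancer", unfiltered, safe_mode))  # -> {'thebreastcancersite.com'}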
Curiously, Google blocks TheBreastCancerSite.com when you turn on safe mode for a search on “breast cancer”, while Yahoo! doesn’t. (The Breast Cancer Site does not seem to have objectionable material; its stated mission is to raise funds for free mammograms.)
By the way, Google’s and Yahoo!’s results can be quite different regardless of what gets filtered. Dogpile has a nifty little tool that visualizes some of the differences. I discussed it here while guest-blogging over at Lifehacker a few weeks ago.
by Eszter Hargittai on August 29, 2005
Anecdotally, I still often hear people say (as I did this weekend, and as I’ve read in CT comments) that it wouldn’t take much for a new company to enter the search-engine market. But we are no longer in the late 1990s, and it would take tremendous resources to enter this market today.
The major players at this point are AOL, Ask Jeeves, Google, Microsoft and Yahoo!. (Note that in contrast to much anecdotal evidence in the press and among other commentators, Google does not have nearly the market share that many people suggest. I’ve discussed this on CT before.)
Among these, AOL, Google, MSN, and Yahoo! represent much more than just search engines. They are vast empires of Internet-related products that continue to innovate and introduce new services.
This does not mean that there is no room for innovation. In fact, we seem to be undergoing a second boom these days (somewhat reminiscent of the late 90s, but in a much more realistic manner). Numerous interesting and innovative services have sprung up in the last few years. However, you will notice that many of these are eventually acquired by one of the companies above. Examples: Google’s acquisition of Blogger and Yahoo!’s acquisition of Flickr.
And to be sure, we have even seen new entrants in niche markets of search, for example, the searching of recently added content. Here, Technorati and Feedster come to mind. While offering valuable services – an almost immediate inclusion of blog content in search results – these engines focus on a very small segment of Web content.
It would take a tremendous amount of resources in this day and age to even come close to the computational and labor resources that drive the above-mentioned companies and allow them to index Web content at a more general level. It is unlikely that we will see independent new entrants in the near future, and if we do, they will likely be acquired by one of the companies above.