My most recent research article looks at predictors of social network site (SNS) usage among a group of first-year college students. First, I look at whether respondents use any social network sites, and then I examine predictors of specific site usage (focusing on Facebook, MySpace, Xanga and Friendster, based on their popularity). Before asking about usage, I asked whether respondents had heard of these sites; all but one person reported knowledge of at least one SNS, so lack of familiarity with these services does not explain non-adoption. The analyses are based on a representative sample of 1,060 first-year students at the University of Illinois at Chicago surveyed earlier this year. The campus is especially ethnically diverse. (See the paper for more details about the data and methods.)
Methodologically speaking, I find that it is worth disaggregating the general concept of social network site usage, because analyses of usage in the aggregate mask predictors of specific site use.
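To give a concrete sense of what I mean by disaggregating, here is a minimal sketch, not the actual code from the paper: the variable names, the data format and the use of the statsmodels library are all illustrative assumptions. The idea is simply to fit one model for use of any SNS alongside separate models for each site, so that predictors pointing in different directions for different sites are not averaged away.

```python
# Illustrative sketch only -- not the analyses reported in the paper.
import statsmodels.formula.api as smf

PREDICTORS = "race + ethnicity + parental_education + lives_with_parents"

def fit_models(df):
    """df: one row per respondent, with 0/1 usage indicators for each site."""
    # One aggregate model: does the respondent use any SNS at all?
    aggregate = smf.logit(f"uses_any_sns ~ {PREDICTORS}", data=df).fit()
    # Separate models per site, so site-specific predictors can surface.
    per_site = {
        site: smf.logit(f"uses_{site} ~ {PREDICTORS}", data=df).fit()
        for site in ("facebook", "myspace", "xanga", "friendster")
    }
    return aggregate, per_site
```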
Of particular interest are Facebook and MySpace, since they are the most popular with this group: about three quarters of the students in the sample use the former and over half use the latter.
I find statistically significant differences by race, ethnicity, parental education (a proxy for socioeconomic status) and living situation (whether a student lives with his or her parents or not) concerning the adoption of Facebook and MySpace. [click to continue…]
I’m writing a piece (in the form of a debate with Jason Potts) on the Internet and non-market innovation (open source, blogs, wikis and Web 2.0 more generally) and the editors asked us to say something about digital literacy. I’ve never paid much attention to this metaphor, maybe because of excessive exposure to its predecessor, computer literacy.
It strikes me, though, that discussion of digital literacy focuses almost entirely on reading (how to navigate the Web, find reliable information and so on). The things I’m talking about are forms of writing.
When we think about the rise of text literacy, the distinction tends to get blurred a bit, because most (though not all) people who learn to read also learn to write. Still, there’s plenty of discussion of the importance of writing to groups (women, working people) traditionally excluded from written culture.
So, I’m surprised at the neglect of this point in relation to digital literacy, especially because the Internet has done so much to break down the asymmetry between a small group of writers and a large group of readers that characterises most communications media. Having said this, I’m sure this point has been made many times before, and I invite readers to write in with good references.
As an aside, “computer literacy” programs in the late 70s and early 80s had, if anything, the opposite problem. Lots of emphasis on how to code in BASIC and very little appreciation of the potential for computers as tools for general use.
As you may recall, the economy was supposed to have collapsed as of two weeks ago today. Right now, you should not be able to afford a loaf of bread with a wheelbarrow full of $1000 bills.
I understand that bread baskets have been sent to headquarters in Virginia by ex-members. The sarcasm is tinged with philanthropy: LaRouche’s true believers are in serious trouble; their economy, at any rate, is collapsing. The group is being forced to come up with money for the IRS, and is facing renewed investigation by the FEC, in the wake of events described by Avi Klein in a major article appearing in the new issue of Washington Monthly. [click to continue…]
A few weeks ago the Berkman Center for Internet and Society posted an interesting contest: create a short informative video about Web cookies and have the chance to win up to $5,000 and a trip to DC, where the video would then be shown at the FTC’s Town Hall workshop on “Ehavioral Advertising”.
I’m afraid we’re past the deadline for submissions and I apologize for posting about this so late (life intervened and I got behind on a bunch of things). I wanted to post about it nonetheless, because I think it’s an interesting initiative and the resulting videos are available for viewing.
I was very intrigued by this contest, given my interest in improving people’s Internet user skills. What would be a good way to communicate the concept of a Web cookie to folks with little technical background? I haven’t looked at all of the submissions, but the ones I’ve seen are either still too technical, and so likely comprehensible only to those who already know at least a few things about Internet cookies, or too vague to be of much use. I was a bit surprised and disappointed that people didn’t do more with the cookie analogy. Some of the videos have cute or amusing touches, but these aren’t incorporated in a particularly effective way. That said, I have not watched all of the submitted videos, so I may have missed some gems; feel free to post links to ones you think are especially informative. I also think the timeline for submissions was a bit short (I know there were particular logistical reasons for this), which may have kept more people from getting involved and limited the amount of effort that could go into the entries.
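For what it’s worth, here is the core idea I would want any of the videos to get across, shown as a code sketch rather than as video. It uses only Python’s standard library, and the cookie name and value are made up for illustration.

```python
from http.cookies import SimpleCookie

# A cookie is a small piece of text a site asks your browser to store and
# send back on later visits. The server's response carries a header like
# the one printed below (the name "visitor_id" and its value are invented).
outgoing = SimpleCookie()
outgoing["visitor_id"] = "abc123"
outgoing["visitor_id"]["max-age"] = 86400   # keep it for one day
print(outgoing.output())                    # Set-Cookie: visitor_id=abc123; Max-Age=86400

# On the next visit the browser sends the value back verbatim, which is how
# the site (or an advertiser whose content the site embeds) recognises you.
incoming = SimpleCookie("visitor_id=abc123")
print(incoming["visitor_id"].value)         # abc123
```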
An interesting aside about how YouTube posts videos (assuming I understand this correctly, but I haven’t explored this aspect in depth so feel free to correct me): it seems that the creator of the video has little say over what becomes the thumbnail image for the clip. As far as I can tell, the frame is taken from the middle of the video, which is not always ideal as it’s not necessarily the most informative segment.
Siva Vaidhyanathan’s work in progress is a book that will address “three key questions: What does the world look like through the lens of Google? How is Google’s ubiquity affecting the production and dissemination of knowledge? and, How has the corporation altered the rules and practices that govern other companies, institutions, and states?” It seems likely this will add more to the sum of human knowledge than, say, Jacques-Alain Miller’s papal bull a while back.
With support from the Institute for the Future of the Book, Siva has started blogging the project as he goes. And he doesn’t sound entirely comfortable doing so, which if anything makes the experiment more interesting: [click to continue…]
My column today is a very basic introduction to Zotero. As noted there, the release of Zotero 2.0 is a thing to look forward to — it will, among other things, allow you to store your searches, annotations, etc. on a server, rather than your computer, which will have all sorts of benefits. But it’s not clear when that will happen.
People have pointed out that the enhanced version faces two potential problems: storage space and intellectual-property issues (regarding ownership and control of stored material, mainly). I asked one of the directors of the project, Dan Cohen, about that. Unfortunately he only got back to me after the column was done. But here’s his response: [click to continue…]
Radioshift from Rogue Amoeba. Because I am addicted to listening to BBC Radio 4 and Radio 7 on my iPod before I go to sleep, I already use their Audio Hijack Pro application to do, rather more cumbersomely, what this does. With Radioshift you can subscribe to live radio broadcasts and treat them as if they were podcasts. Fantastic. Harry Brighouse, take note.
“Chicken Yoghurt has the details”:http://www.chickyog.net/2007/09/20/public-service-announcement/ on the counterproductive attempts by lawyers retained by oligarch (and would-be Arsenal owner) Alisher Usmanov to prevent the dissemination of allegations made by Craig Murray (the UK’s former ambassador to Uzbekistan). From what I can gather, Murray is just begging for Usmanov to sue him in a British court.
Google is staking a claim on the moral high ground of Internet privacy. The company has called for new international rules, ostensibly to protect privacy online. Little of Google’s search information is strictly ‘personal data’, i.e. data directly concerning named individuals. But search data, potentially tied to individuals’ IP addresses, is dynamite, something it’s taken Google a long time to face up to publicly. Google got its fingers badly burnt by the incredulous reaction to its ‘trust us, we’re the good guys’ privacy policy a couple of years back. They hired Peter Fleischer, a well-respected Microsoft lawyer and data protection expert, to put their case more seriously. And now Fleischer is showing willing on the global-citizenship front by suggesting to UNESCO that an international body create a new set of rules on Internet privacy. But would this improve individuals’ privacy?
Part of the argument for a new instrument – at least as summarized in reports on the speech – is that the existing ones are too old, crafted before the Internet really took off. The OECD Guidelines date from 1980 and the EU data protection directive from 1995, so they’re said to be out of date. Fleischer is said to argue for new rules based on the APEC privacy framework, and says Google is in favour of individuals’ privacy. The trouble is that the ‘past their sell-by date’ argument doesn’t hold up, and the APEC principles are a weak model for anyone who cares about privacy. [click to continue…]
This “ZDNet article”:http://news.zdnet.com/2100-9588_22-6205716.html on the data-scraping firm Rapleaf is both interesting and disturbing.
In the cozy Facebook social network, it’s easy to have a sense of privacy among friends and business acquaintances. But sites like Rapleaf will quickly jar you awake: Everything you say or do on a social network could be fair game to sell to marketers. … By collecting these e-mail addresses, Rapleaf has already amassed a database of 50 million profiles, which might include a person’s age, birth date, physical address, alma mater, friends, favorite books and music, political affiliations, as well as how long that person has been online, which social networks he frequents, and what applications he’s downloaded. … All of this information could come in handy for Rapleaf’s third business, TrustFuse, which sells data (but not e-mail addresses) to marketers so they can better target customers, according to TrustFuse’s Web site. As of Friday afternoon, the sites of Rapleaf and Upscoop had no visible link to TrustFuse, but TrustFuse’s privacy policy mentions that the two companies are wholly owned subsidiaries of TrustFuse.
… In other words, Rapleaf sweeps up all the publicly available but sometimes hard-to-get information it can find about you on the Web, via social networks, other sites and, soon to be added, blogs. … Apart from the unusual TrustFuse business, Rapleaf is among a new generation of people search engines that take advantage of the troves of public data on the Net–much of which consumers happily post for public perusal on social-networking sites and personal blogs. The search engines trace a person’s digital tracks across these social networks, blogs, photo collections, news and e-commerce sites, to create a composite profile. … There doesn’t appear to be anything illegal about what these companies are doing. No one’s sifting through garbage cans or peeking through windows. They’ve merely found a clever way to aggregate the heaps of personal information that can be found on the Internet. … Just ask Dana Todd … “It’s my growing horror that everyone can see my Amazon Wish List. At least I didn’t have a book like ‘How to get rid of herpes’ on there, but now I have to go through and seriously clean my wish list,” she said.
This raises all sorts of interesting issues for privacy, going way beyond the dumb-teenager-spliff-smoking-photo-on-MySpace kind of story that gets most public attention. If I’m understanding the article correctly, Rapleaf have figured out ways to get at information from social networking sites that the users of these sites mightn’t have wanted to share with the outside world. This isn’t illegal, but it is fishy. Also, by aggregating information about people’s networks and tastes across a variety of different websites and networking sites, the firm can likely draw non-obvious connections that people would prefer not to be drawn. US privacy law is notoriously patchy (your video rental records are heavily protected, thanks to efforts to embarrass conservative Supreme Court nominees; your sensitive financial information … not so much), but I’m not sure what kinds of policies would effectively protect people who want to be protected from this kind of wide-scale data trawling, even in more privacy-friendly jurisdictions like the EU. That said, I’m personally quite creeped out by this kind of thing (albeit not creeped out enough to stop blogging or to withdraw my profile from social networking sites, for whatever good that would do me at this stage).
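To make the mechanism concrete, here is a toy sketch of the kind of aggregation the article describes: join whatever public records share an e-mail address into a single composite profile. Everything in it (the name, the fields, the records) is made up; I have no idea what Rapleaf’s actual systems look like.

```python
from collections import defaultdict

# Three hypothetical public records about the same (made-up) person,
# scraped from different kinds of sites.
public_records = [
    {"email": "jane@example.com", "age": 29, "friends": 212},
    {"email": "jane@example.com", "political_affiliation": "independent"},
    {"email": "jane@example.com", "favorite_books": ["Middlemarch"]},
]

profiles = defaultdict(dict)
for record in public_records:
    email = record["email"]
    # Each source fills in fields the others lacked; none is secret on its
    # own, but the merged profile says far more than any single piece.
    profiles[email].update({k: v for k, v in record.items() if k != "email"})

print(profiles["jane@example.com"])
# {'age': 29, 'friends': 212, 'political_affiliation': 'independent',
#  'favorite_books': ['Middlemarch']}
```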
Sometime around next Sunday, Wikipedia will reach 2 million articles. It’s about eighteen months since the millionth article was added, and the number of new articles has stabilized at around 2000 per day. So the shift from exponential to linear growth (in article numbers at least) has taken place a bit sooner than I expected. Some disorganised thoughts follow.
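A quick back-of-the-envelope check on that, using only the round numbers above (so treat it as approximate):

```python
# Rough check on the linear-growth claim, using the figures quoted above.
articles_added = 2_000_000 - 1_000_000   # from the 1 millionth to the 2 millionth article
days_elapsed = 18 * 30                   # "about eighteen months"
print(articles_added / days_elapsed)     # ~1850 per day, close to the quoted
                                         # steady rate of ~2000 per day
```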
My ICANN colleague, Kieren McCarthy, has written an interesting piece on the ICANN Blog about types of new top-level domains (TLDs; existing examples are .com and .info). He dusted off a 1997 proposal to put .firm, .store, .web, .arts, .rec, .info and .nom in the domain name system (DNS).
What strikes me is the taxonomic approach of what we now think of as Web 1.0. The TLDs considered ten years ago were attempts to organise the Internet from the top down by category and generic activity type. If and when a process for approving new TLDs begins next year (it’s subject to a vote by the ICANN Board, probably in October), it won’t yield anything like this organised and thematic approach.
Rather than creating a hierarchy of meaning, we’ll see an explosion of ideas pushing up from below. About the only new TLD proposal we know we’ll get is .berlin, which has put a glint in the eyes of city managers and tourist authorities all over the world. We don’t know which new TLDs will be created, but as Kieren says they’ll probably be things like .blog, .news, .coffee, .google and the like, i.e. services in search of a market and branding efforts by companies, cities and pretty much anything you can think of.
The predominantly English-speaking technical cadre that looked at this issue ten years ago came up with only one non-English TLD (.nom), and even that was pure ASCII text. Today, the global technical community is working hard to smooth the way for internationalised domain names, i.e. names in non-Roman characters.
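For the technically curious, internationalised names are carried in the existing DNS by encoding each non-ASCII label into plain ASCII (the xn-- ‘punycode’ form). Here is a quick illustration using Python’s standard library; the domain below is just an example, not a real registration.

```python
# Non-ASCII labels are translated into an ASCII-compatible encoding so the
# existing DNS infrastructure can resolve them unchanged.
name = "bücher.example"
ascii_form = name.encode("idna")     # what actually goes into the DNS
print(ascii_form)                    # b'xn--bcher-kva.example'
print(ascii_form.decode("idna"))     # bücher.example (what the user sees)
```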
It’s clear that the Internet will start changing as soon as the new TLDs begin to appear. What’s not as obvious is how ICANN may change. Just as the European Economic Community was fundamentally altered by conceiving and administering the Common Agricultural Policy, ICANN may itself be changed by the new gTLDs programme. (Substantively, the CAP is a bad example, since it was designed to shut competition out.) The DNS isn’t a way to organise the world’s information; it is a tool people can use to organise and express themselves. I hope the new gTLDs will give expression and form to communities and interests around the world that use the Internet but don’t yet see themselves in it.
Tyler Cowen is somewhat “suspicious”:http://www.marginalrevolution.com/marginalrevolution/2007/07/how-far-behind-.html of an FCC Commissioner’s statistical claims about broadband penetration. Given the FCC’s past form, a general suspicion of any statistics it trots out on broadband penetration is entirely warranted. The FCC has generated copious statistics to support their claims that there is a thriving competitive market among broadband providers. However, as the Government Accountability Office “points out”:http://www.gao.gov/new.items/d06426.pdf (pdf) in polite governmental administratese, the FCC’s numbers are a crock. They pump up the number of competitors in a given local market by including satellite (not a significant option for most consumers), lumping together data on specialized business services and consumer broadband, and failing to consider whether the fact that two cable companies operate in the same zipcode means that they actually compete with each other (their coverage areas may not in fact overlap). When these biases are corrected for, the GAO finds that the median number of providers for a given respondent is two, and that 9% of respondents have no access to broadband at all. Given the near-total lack of resemblance between these figures and the reality that American consumers have to deal with, it’s hard to avoid the suspicion that they were generated with the purpose of muddying the debate.
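To see how the zipcode-level counting inflates the numbers, consider a toy example; the providers and coverage areas below are entirely invented.

```python
# Hypothetical providers in one zipcode and the parts of it they actually serve.
zipcode_providers = {
    "CableCo A": {"north side"},
    "CableCo B": {"south side"},
    "DSL Co":    {"north side", "south side"},
}

household = "north side"
counted_by_zipcode = len(zipcode_providers)             # 3 "competitors" on paper
actually_available = sum(
    household in areas for areas in zipcode_providers.values()
)                                                       # only 2 real choices
print(counted_by_zipcode, actually_available)
```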
I added a link to it on my daily links list, where Liz Losh saw it and then included it in a blog post, “Just Say Know”, discussing all sorts of parody videos and sites related to drug use, including the artist-created fictional-drug Web site Havidol, and this video:
These are some great parodies. Work in the field of health communication looks at the effects of health campaigns, but it tends to focus on serious ones. I wonder what work may be going on that looks at online parody viral videos being used for similar purposes.
1. “Read”:https://crookedtimber.org/2007/07/11/trying-not-to-lose-face/ Henry’s post on Facebook. Signed up out of curiosity and masochistic desire to have smallness of social network confirmed.
2. Joined the University of Arizona network. Noodling around, saw the profile for Joe Grad Student from my department. Looked at his list of friends.
3. Noticed that one of Joe Grad Student’s friends looked familiar. Realized I knew him. He had been a year ahead of me in Secondary School in Ireland in the late 1980s. Jaysus.