
Freedom to Think

MOM'03 / Exposure Book

This follows on from my posting about the Deja View head cam...

Exposure: From Friction to Freedom Book

In 2003 Aula invited a group of artists, pundits and entrepreneurs to Helsinki to discuss the future of media, copyright, and privacy in a world where life is routinely captured and published in digital form. Collected in this volume are the discussions along with selected essays.

At the heart of the book is the idea that there is one and the same phenomenon at work as different areas of culture start flowing in digital networks, be they music or gaming, journalism or personal media. We have called this phenomenon exposure. It is a broad shift from a world of friction and control of information to a world of freedom to create and distribute information.

Cultural creations want to be free, they want to change, they have to keep moving to stay alive.

Thanks to Preoccupations for the steer on this one.

Follow me on Twitter: @IanYorston

Networks may deliver Social currencies

David Smith writes in Preoccupations

Mediachest is attracting interest. MetaFilter calls it 'actually useful social-software' ('a sharing community for books, DVDs, and CDs. You can borrow your friends' books and music and movies, and they can borrow yours.').

Boing Boing has a post from Michael Edwards (one of the team which developed Mediachest), part of which runs, 'Mediachest is a social software site that allows users to inventory their collection of physical media items and search the collections of their friends and friends-of-friends for items such as DVDs or books that they would like to borrow.

The site facilitates the borrowing and loaning of these items in a similar way to how eBay facilitates online auctions - there are user profiles, feedback pages, and rankings. In addition to searching the collections of friends you are able to see the items of people that are geographically close to you, or that are members of groups that you associate with (such as a student organization, gym, or work place group).'

David also points to The Aula Point of View "which has a particularly striking post by Jyri Engeström":

As the cost of publishing the things you have and the things you want decreases linearly, the volume of non-monetary exchange (lending, sharing, donating things) will increase exponentially. ... In the coming years, blogging the products you own will be further simplified when barcode- and RFID-readers become embedded in cheap everyday handheld devices such as cell phones. I believe the resulting change in social behavior will not be just quantitative ... The shift to a blog-driven Web can set in motion a new, lively circulation of pre-owned products among networks of friends who play with the dynamics of social capital, not financial capital. Where Amazon pioneered the Web's retail layer, and eBay pioneered the bargaining layer, a service like Mediachest could pioneer a new lending layer in product circulation.

... Some distinct characteristics of the emerging online lending layer: First, it is about retail products, but unlike Amazon, it is not about retail transactions. Rather, it's about recycling, swapping, donating and borrowing (mostly) pre-owned products. Second, it's about moving material goods, but unlike eBay, it doesn't require national or global logistics. It's about very local logistics - not the suburban neighborhoods as much as the trust-based interpersonal networks that inhabit every institution in our society: the workplace, the school, the sports team, the hospital, the university dorm.

And third, it may not be about PC users as much as it is about mobile users (although that is contingent on the trend of more hackable mobile terminals). And finally, the emerging online small worlds oriented around non-money-based circulation of material objects might not at first reach many of the more affluent 30 to 40-somethings. But they are much more likely to reach their kids, or be originated by them.

Hindsight or Foresight 2.

gladwell.com

This New Yorker article by Malcolm Gladwell dates back to 2001, but it has timely relevance to the UK Higher Education debate...

The SAT is now seventy-five years old, and it is in trouble. Earlier this year, the University of California - the nation's largest public-university system - stunned the educational world by proposing a move toward a "holistic" admissions system, which would mean abandoning its heavy reliance on standardized-test scores. The school backed up its proposal with a devastating statistical analysis, arguing that the SAT is virtually useless as a tool for making admissions decisions.

The report focussed on what is called predictive validity, a statistical measure of how well a high-school student's performance in any given test or program predicts his or her performance as a college freshman.

If you wanted to, for instance, you could calculate the predictive validity of prowess at Scrabble, or the number of books a student reads in his senior year, or, more obviously, high-school grades.

What the Educational Testing Service (which creates the SAT) and the College Board (which oversees it) have always argued is that most performance measures are so subjective and unreliable that only by adding aptitude-test scores into the admissions equation can a college be sure it is picking the right students.

This is what the UC study disputed. It compared the predictive validity of three numbers: a student's high-school Grade Point Average (GPA), his or her score on the SAT (or, as it is formally known, the SAT-I), and his or her score on what is known as the SAT-II, which is a so-called achievement test, aimed at gauging mastery of specific areas of the high-school curriculum.

Drawing on the transcripts of seventy-eight thousand University of California freshmen from 1996 through 1999, the report found that, over all, the most useful statistic in predicting freshman grades was the SAT-II, which explained sixteen per cent of the "variance" (which is another measure of predictive validity). The second most useful was high-school GPA, at 15.4 per cent. The SAT was the least useful, at 13.3 per cent.

Combining high-school GPA and the SAT-II explained 22.2 per cent of the variance in freshman grades. Adding in SAT-I scores increased that number by only 0.1 per cent.
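The "variance explained" figures above are just R-squared values: the squared correlation between a predictor (say, an SAT score) and freshman grades. A minimal sketch of the calculation, using made-up illustrative numbers rather than the UC data:

```python
def r_squared(xs, ys):
    """Fraction of variance in ys explained by a linear fit on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return (sxy * sxy) / (sxx * syy)

# Hypothetical predictor scores and freshman GPAs (not the UC data)
sat = [1100, 1250, 1320, 1400, 1480]
gpa = [2.8, 3.0, 3.3, 3.1, 3.6]
print(round(r_squared(sat, gpa), 3))
```

The UC finding is then easy to restate: adding the SAT-I as a second predictor moved the combined R-squared from 22.2% to only 22.3%.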

Nor was the SAT better at what one would have thought was its strong suit: identifying high-potential students from bad schools.

In fact, the study found that achievement tests were ten times more useful than the SAT in predicting the success of students from similar backgrounds. "Achievement tests are fairer to students because they measure accomplishment rather than promise," Richard Atkinson, the president of the University of California, told a conference on college admissions last month.

"[Achievement tests] can be used to improve performance; they are less vulnerable to charges of cultural or socioeconomic bias; and they are more appropriate for schools because they set clear curricular guidelines and clarify what is important for students to learn. Most important, they tell students that a college education is within the reach of anyone with the talent and determination to succeed."


Hindsight or Foresight 1.

gladwell.com

This incisive piece of journalism from Malcolm Gladwell was published in the New Yorker in early March of 2003. But it has a useful resonance one year on.

Military intelligence is charged with looking forward. Inquiries after the event have the obvious advantage of hindsight...

"Connecting the Dots: The paradoxes of intelligence reform."

In the fall of 1973, the Syrian Army began to gather a large number of tanks, artillery batteries, and infantry along its border with Israel. Simultaneously, to the south, the Egyptian Army cancelled all leaves, called up thousands of reservists, and launched a massive military exercise, building roads and preparing anti-aircraft and artillery positions along the Suez Canal. On October 4th, an Israeli aerial reconnaissance mission showed that the Egyptians had moved artillery into offensive positions. That evening, AMAN, the Israeli military intelligence agency, learned that portions of the Soviet fleet near Port Said and Alexandria had set sail, and that the Soviet government had begun airlifting the families of Soviet advisers out of Cairo and Damascus. Then, at four o'clock in the morning on October 6th, Israel's director of military intelligence received an urgent telephone call from one of the country's most trusted intelligence sources. Egypt and Syria, the source said, would attack later that day.

Top Israeli officials immediately called a meeting. Was war imminent? The head of AMAN, Major General Eli Zeira, looked over the evidence and said he didn't think so.

He was wrong. That afternoon, Syria attacked from the east, overwhelming the thin Israeli defenses in the Golan Heights, and Egypt attacked from the south, bombing Israeli positions and sending eight thousand infantry streaming across the Suez. Despite all the warnings of the previous weeks, Israeli officials were caught by surprise. Why couldn't they connect the dots?

If you start on the afternoon of October 6th and work backward, the trail of clues pointing to an attack seems obvious; you'd have to conclude that something was badly wrong with the Israeli intelligence service.

On the other hand, if you start several years before the Yom Kippur War and work forward, re-creating what people in Israeli intelligence knew in the same order that they knew it, a very different picture emerges.


More Australians needed...

Web site statistics by Time Zone...

eMail your friends in Australia, New Zealand, Hong Kong, Japan, China, India... heck, ANYWHERE in a time zone that is GMT positive... and point them to www.unreasonableman.net

Update:

Thank you...

Any late readers of this post may not appreciate that when originally posted I had no readers further east than Paris (as in Paris, France...)


eOrganise

PublicTechnology.net

Public administrations that combine substantial reorganisation of the way they work with the use of information and communication technologies to deliver new eGovernment services get higher appreciation ratings from business and citizens, says the EU.

This finding emerges from a recently published survey: "Reorganisation of Government Back Offices for Better Electronic Public Services - European Good Practices".

Better results are due to the fact that reorganisation reduces costs, increases productivity, and provides flexibility and simpler organisational structures. This also helps to improve how systems work together across the administration and can improve the working environment for staff.

The practical results for the public and for businesses are fewer visits to administrations, together with faster, cheaper, more accessible and efficient services. Benefits are also reflected in fewer errors, more openness, easier to use systems and greater user control.

The European Commission's Erkki Liikanen, Commissioner for Enterprise and the Information Society, said: "The success of eGovernment depends on the right combination of ICT, re-organisation and training. This survey provides a helpful guide for public administrations wanting to improve the quality and the take up of their own on-line public services through back-office re-organisation".

Report


Deja vu 2.

DEJA VIEW, Inc.

Deja View's Camwear Model 100 captures everything you see and records it in memory.

If you've seen something interesting (note the past tense...) you simply press the record button and the previous 30 seconds of video with audio will be copied out of memory onto a removable storage card.
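The trick behind recording something after it has happened is a rolling buffer: the camera writes frames continuously, and the oldest ones are overwritten until the button press freezes a copy. A rough sketch (the frame rate is my assumption, not a Deja View spec):

```python
from collections import deque

FPS = 15           # assumed frame rate, not the published spec
CLIP_SECONDS = 30  # the "previous 30 seconds"

ring = deque(maxlen=FPS * CLIP_SECONDS)  # oldest frames drop off automatically

def on_frame(frame):
    ring.append(frame)       # runs continuously while the camera is on

def on_record_button():
    return list(ring)        # snapshot the last 30 s for the storage card

for i in range(1000):        # simulate about 67 s of capture
    on_frame(i)
clip = on_record_button()    # holds only the most recent 450 frames
```

The `deque(maxlen=...)` does all the work: appending to a full buffer silently discards the oldest frame, which is exactly the "always recording, keep the last 30 seconds" behaviour.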

The video can then easily be stored, transferred by e-mail or uploaded to the Internet.

You can record a minimum of sixteen 30-second videos, with a 60-degree field of view, even with only 64MB of SD memory.
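Back of the envelope, that capacity claim implies about 4MB per clip, or roughly 1 Mbit/s of video plus audio (my arithmetic, not a published bit rate):

```python
card_mb = 64
clips = 16

mb_per_clip = card_mb / clips        # 4.0 MB per 30-second clip
mbit_per_sec = mb_per_clip * 8 / 30  # ~1.07 Mbit/s combined video+audio rate

print(mb_per_clip, round(mbit_per_sec, 2))
```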

Count slowly to 30 and remember how long that really is. Long enough to capture an important conversation at work, a throwaway remark, a reckless incident, a genuine mistake...

Now throw in wireless, better cameras, smart image processing and you have the "always on" version of your life... and everyone else's.

We will have to live our lives, and hold all conversations, on the assumption that every moment is being captured and, potentially, broadcast.


Deja vu 1.

A wonderfully imaginative suggestion from Halfbakery for the "deja vu home - remembrance of times past"

Hidden microphones and speakers in the walls loop sounds from a week ago superimposed on the current day. (And maybe the week before, at 1/2 the volume, and so on.)
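The "week before at 1/2 the volume, and so on" rule is a classic feedback delay line: each output sample is the live input plus half of the output from one delay period earlier. A toy sketch, using a tiny delay so the effect is visible:

```python
def deja_vu(samples, delay=5, gain=0.5):
    """Mix in the output from `delay` samples ago at `gain` - recursively,
    so each older echo arrives at half the volume of the one before."""
    out = []
    for t, s in enumerate(samples):
        echo = gain * out[t - delay] if t >= delay else 0.0
        out.append(s + echo)
    return out

pulse = [1.0] + [0.0] * 14
print(deja_vu(pulse))  # echoes at t=5 (0.5) and t=10 (0.25)
```

Because the echo is taken from the *output*, a single feedback tap gives the whole infinite series of ever-quieter past weeks for free.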

Even if you're living alone, there would be a stampede of your old selves trudging into the kitchen around 7:30 am, turning on one more in an ocean of softly sloshing coffee machines...

Go to the site and read the feedback...

e.g. from st3f: "I can't hear myself think. Would that I had turned my music down, please."

On the basis of the feedback I'm categorising this as humour, but I'm guessing that in two years' time the original idea won't seem anything of the sort.


RSS: A Big Success In Danger of Failure

The Weekly Read

Bill Burnham's full article originally appeared on Burnham’s Beat. Mr. Burnham is a Managing Partner at Softbank Capital Partners.

One measure of the success of RSS is the number of RSS-compliant feeds, or channels, available on the web. At Syndic8.com, a large aggregator of RSS feeds, the total number of feeds listed has grown over 2,000% in just 2.5 years: from about 2,500 in the middle of 2001 to almost 53,000 in February of 2004.

The growth rate also appears to be accelerating: a record 7,326 feeds were added in January of 2004, twice the previous monthly record.
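Those figures hang together - roughly 2,000% growth, and an implied previous monthly record of about 3,700 feeds (the "implied record" is my inference from the doubling claim):

```python
start, end = 2_500, 53_000
pct_growth = (end - start) / start * 100  # ~2020% over roughly 2.5 years

jan_added = 7_326
prev_record_implied = jan_added / 2       # from "twice the previous record"

print(round(pct_growth), prev_record_implied)
```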

The irony of this success, though, is that it may ultimately contribute to its failure. RSS now has tens of thousands of channels and will probably have hundreds of thousands of channels by the end of the year. While some of the channels are branded, most are little-known blogs and websites. You will have to tune into hundreds, if not thousands, of channels and then try to filter out all the “noise”. That’s a lot of channel surfing!

The problem is only going to get worse. Each day as the number of RSS channels grows, the “noise” created by these different channels (especially by individual blogs which often have lots of small posts on widely disparate topics) also grows, making it more and more difficult for users to actually realize the “personalized” promise of RSS. After all, what’s the point of sifting through thousands of articles with your reader just to find the ten that interest you? You might as well just go back to visiting individual web sites.

What RSS desperately needs are enhancements that will allow users to take advantage of the breadth of RSS feeds without being buried in irrelevant information. One potential solution is to apply search technologies, such as key word filters, to incoming articles (such as pubsub.com is doing).
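A keyword filter of the kind described is straightforward. This sketch is my own illustration, not pubsub.com's implementation; it matches incoming items against a subscriber's terms:

```python
def keyword_filter(items, keywords):
    """Keep items whose title or summary mentions any keyword."""
    kws = [k.lower() for k in keywords]
    matched = []
    for item in items:
        text = (item.get("title", "") + " " + item.get("summary", "")).lower()
        if any(k in text for k in kws):
            matched.append(item)
    return matched

# Two incoming items; only the first mentions a subscribed keyword
items = [
    {"title": "RSS growth accelerates", "summary": "feed counts double"},
    {"title": "Cooking with basil"},
]
print(keyword_filter(items, ["RSS", "syndication"]))
```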

This approach has two main problems: 1) The majority of RSS feeds include just short summaries, not the entire article, which means that 95% of the content can’t even be indexed. 2) While key-word filters can reduce the number of irrelevant articles, they will still become overwhelmed given a sufficiently large number of feeds. This “information overload” problem is not unique to RSS but is one of the primary problems of the search industry, where the dirty secret is that the quality of search results generally declines the more documents you have to search.

While search technology may not solve the “information overload” problem, its closely related cousins, classification and taxonomies, may have just what it takes. Classification technology uses advanced statistical models to automatically assign categories to content. These categories can be stored as meta-data with the article. Taxonomy technology creates detailed tree structures that establish the hierarchical relationships between different categories.

It’s easy to see how RSS could benefit from the same technology. Assigning articles to categories and associating them with taxonomies will allow users to subscribe to “Meta-feeds” that are based on categories of interest, not specific sites. With such a system in place, users will be able to have their cake and eat it too, as they will effectively be subscribing to all RSS channels at once, but due to the use of categories they will only see those pieces of information that are personally relevant.
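In code, a meta-feed is just a regrouping of items by category instead of by source. A sketch assuming each item carries a single category label (an assumption for illustration - real RSS items may carry several categories, or none):

```python
from collections import defaultdict

def build_meta_feeds(feeds):
    """Regroup items from many source feeds into one feed per category."""
    meta = defaultdict(list)
    for feed in feeds:
        for item in feed:
            meta[item["category"]].append(item)
    return meta

def subscribe(meta, categories):
    """A user's view: every item in the categories they care about,
    regardless of which site originally published it."""
    return [item for c in categories for item in meta.get(c, [])]

# Three items from two hypothetical source feeds
feeds = [
    [{"title": "New phone hack", "category": "mobile"}],
    [{"title": "Feed reader tips", "category": "rss"},
     {"title": "Gym schedule", "category": "local"}],
]
meta = build_meta_feeds(feeds)
print([i["title"] for i in subscribe(meta, ["rss", "mobile"])])
```

The catch the article goes on to describe lives in that `item["category"]` lookup: it only works if every publisher labels similar articles the same way.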

But there’s a catch. Even though RSS supports the inclusion of categories and taxonomies, there’s no standard for how to determine what category an article should be in or which taxonomy to use. Thus there’s no guarantee that two sites with very similar articles will categorize them the same way or use the same taxonomy. The theoretical solution to this problem is to get everyone in a room and agree on a common way to establish categories and on a universal taxonomy. Unfortunately, despite the best efforts of academics around the world, this has so far proven impossible.
