I found this comment (see below) posted by Evan at dronamraju.com on the subject of aggregation. This may, or may not, be Evan Williams of Blogger fame...
Looking at aggregation more generally, there is another aggregation component that is not quite a solved problem: effective aggregation of user profile/preference information. So on one side you aggregate content, and on the other side you aggregate information about the user, and then relevance is matching between the two.
For some applications (you mention Personals), the content is the user information. For others - web search - this information tends to be more overlooked because understanding and drawing unambiguous conclusions from it can be hard. Another way to think about it is that the text portion of a search engine query is only one of potentially many inputs to finding the best matches - perhaps even the least important!
If we know everything about a user, shouldn't we be able to anticipate what they're searching for? Maybe there is no need for the query page, only a results page :-)
Aggregating content, as you point out, is becoming easy. With the now-ubiquitous privacy policies most reputable sites have adopted, aggregating user information is hard if you don't already have a lot of users coming to your site and doing things/telling you things about themselves (passively or actively). So there's a bootstrap problem, and in my opinion this is why the most successful aggregators today might continue to be some of the most successful aggregators tomorrow: they already know a lot about what their users are doing, they just need to relate it to the mass of content they aggregate (as well as to other users' information/preferences).
Amazon is the perennial example of an aggregator doing this reasonably effectively. Their next move is to bring communities into the picture (sound familiar?). "Amazon book clubs" anyone?
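The "relevance is matching" idea above can be sketched with a toy model: represent both a user's aggregated preferences and each piece of aggregated content as term-weight vectors, then rank content by cosine similarity against the profile. Everything here (the terms, the weights, the item names) is invented purely for illustration, not a description of how any real aggregator works.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical user profile, aggregated from past behaviour (term -> weight).
user = {"python": 0.9, "aggregation": 0.7, "search": 0.4}

# Hypothetical aggregated content items, described the same way.
items = {
    "intro-to-search": {"search": 0.8, "ranking": 0.6},
    "python-news":     {"python": 0.9, "release": 0.5},
    "cooking-tips":    {"recipes": 1.0},
}

# Rank content by how well it matches the user profile, best first.
ranked = sorted(items, key=lambda k: cosine(user, items[k]), reverse=True)
print(ranked)  # → ['python-news', 'intro-to-search', 'cooking-tips']
```

The same matching runs the other way, of course: score one content item against many user profiles to decide who to show it to.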
Hard hitting stuff from Rupert Murdoch talking to the American Society of Newspaper Editors. Webcasting, BlogCasting, Podcasting, VideoCasting...
Like many of you in this room, I’m a digital immigrant. I wasn’t weaned on the web, nor coddled on a computer. Instead, I grew up in a highly centralized world where news and information were tightly controlled by a few editors, who deigned to tell us what we could and should know. My two young daughters, on the other hand, will be digital natives. They’ll never know a world without ubiquitous broadband internet access.
The peculiar challenge then, is for us digital immigrants – many of whom are in positions to determine how news is assembled and disseminated -- to apply a digital mindset to a new set of challenges.
We need to realize that the next generation of people accessing news and information, whether from newspapers or any other source, have a different set of expectations about the kind of news they will get, including when and how they will get it, where they will get it from, and who they will get it from.
Anyone who doubts this should read a recent report by the Carnegie Corporation about young people’s changing habits of news consumption and what they mean for the future of the news industry.
According to this report, and I quote, “There’s a dramatic revolution taking place in the news business today, and it isn’t about TV anchor changes, scandals at storied newspapers or embedded reporters.” The future course of news, says the study’s author, Merrill Brown, is being altered by technology-savvy young people no longer wedded to traditional news outlets or even accessing news in traditional ways.
Instead, as the study illustrates, consumers between the ages of 18 and 34 are increasingly using the web as their medium of choice for news consumption. While local TV news remains the most accessed source of news, the internet, and more specifically, internet portals, are quickly becoming the favored destination for news among young consumers.
And their attitudes towards newspapers are especially alarming. Only 9 percent describe us as trustworthy, a scant 8 percent find us useful, and only 4 percent of respondents think we’re entertaining. Among major news sources, our beloved newspaper is the least likely to be the preferred choice for local, national or international news going forward.
What is happening is, in short, a revolution in the way young people are accessing news. They don’t want to rely on the morning paper for their up-to-date information. They don’t want to rely on a god-like figure from above to tell them what’s important. And to carry the religion analogy a bit further, they certainly don’t want news presented as gospel.
Instead, they want their news on demand, when it works for them. They want control over their media, instead of being controlled by it. One commentator, Jeff Jarvis, puts it this way: give the people control of media, they will use it. Don’t give people control of media, and you will lose them.
In the face of this revolution, however, we’ve been slow to react. We’ve sat by and watched while our newspapers have gradually lost circulation. We all know of great and expensive exceptions to this – but the technology is now moving much faster than in the past.
Where four out of every five Americans in 1964 read a paper every day, today only half do. Among younger readers, the numbers are even worse, as I’ve just shown.
One writer, Philip Meyer, has even suggested in his book The Vanishing Newspaper that if today’s decline in newspaper readership continues along the same line, the last reader will recycle the last printed paper in 2040 – April 2040, to be exact.
Just watch our teenage kids. What do they want to know, and where will they go to get it?
They want news on demand, continuously updated. They want a point of view about not just what happened, but why it happened.
They want news that speaks to them personally, that affects their lives. They don’t just want to know how events in the Mid-east will affect the presidential election; they want to know what it will mean at the gas-pump. They don’t just want to know about terrorism, but what it means about the safety of their subway line, or whether they’ll be sent to Iraq. And they want the option to go out and get more information, or to seek a contrary point of view.
And finally, they want to be able to use the information in a larger community – to talk about, to debate, to question, and even to meet the people who think about the world in similar or different ways.
I just saw a report that showed Google News’s traffic increased 90 percent over the past year while traffic to the New York Times’s excellent website decreased 23 percent. The challenge for us – for each of us in this room – is to create an internet presence that is compelling enough for users to make us their home page. Just as people traditionally started their day with coffee and the newspaper, in the future, our hope should be that for those who start their day online, it will be with coffee and our website.
To do this, though, we have to refashion what our web presence is. It can’t just be what it too often is today: a bland repurposing of our print content. Instead, it will need to offer compelling and relevant content. Deep, deep local news. Relevant national and international news. Commentary and debate. Gossip and humor.
At the same time, we may want to experiment with the concept of using bloggers to supplement our daily coverage of news on the net. There are of course inherent risks in this strategy -- chief among them maintaining our standards for accuracy and reliability. Plainly, we can’t vouch for the quality of people who aren’t regularly employed by us – and bloggers could only add to the work done by our reporters, not replace them. But they may still serve a valuable purpose: broadening our coverage of the news; giving us new and fresh perspectives on issues; deepening our relationship with the communities we serve, so long as our readers understand the clear distinction between bloggers and our journalists.
To carry this one step further, some digital natives do even more than blog with text – they are blogging with audio, specifically through the rise of podcasting – and to remain fully competitive, some may want to consider providing a place for that as well.
And with the growing proliferation of broadband, the emphasis online is shifting from text only to text with video. The future is soon upon us in this regard. Google and Yahoo already are testing video search while other established cable brands, including FOX News, are accompanying their text news stories with video clips.
There is a lot of junk orbiting the Earth and the problem will worsen unless spacecraft operators change the way they work. But it is not all doom and gloom. The first steps toward a comprehensive solution are already well underway, including a European code of conduct for space debris mitigation.
According to Dr Ruediger Jehn, a space debris specialist working at ESA's Space Operations Centre (ESOC) in Darmstadt, there are several relatively simple measures that will help reduce the amount of debris in space. Some are already being implemented by spacecraft operators at little or no cost.
"These steps," he explains, "are based on common sense and include measures that should be acceptable to any spacecraft operator."
The basic concept is simple: do not make the existing problem worse; reduce or prevent the creation of any new debris; and, in particular, strive to protect the commercially valuable low Earth and geostationary orbits.
The amount of debris created during normal operations can be reduced by not discarding, ejecting or detaching anything that does not have to be discarded, ejected or detached. This includes payload covers, Yo-Yo despinners and instrument covers such as those used to protect the highly sensitive optical windows of sensors during launch. Lastly, minimise break-ups, a major source of small but deadly debris.
But while technology will likely provide many solutions and many nations are now serious about following a code of behaviour, Dr Jehn and others in ESA's space debris community argue that, ultimately, what is needed is a code of conduct negotiated at the UN level to push everyone to adhere to standards.
In the meantime, how can the average person become involved?
"Call your space agency," says Dr Jehn, "tell them: 'My kids want to travel in space in 30 years and I don't want you guys spoiling it'. Pressure from the public could help. Once space is polluted it's too late and I wouldn't dare go up there."
Two large fragments of a Delta second stage which re-entered the Earth’s atmosphere on 22 January 1997 were recovered in Georgetown, Texas. The large object seen here is the main propellant tank made of stainless steel with a mass of more than 250 kg which landed only 45 metres from a farmer’s home.
I don't want to discourage further reading, so I'll leave the title of my latest paper until the end of the posting.
It isn't easy to precisely define SCIgen methodology, but the website opines that "SCIgen is a program that generates random Computer Science research papers, including graphs, figures, and citations. It uses a hand-written context-free grammar to form all elements of the papers. One useful purpose for such a program is to auto-generate submissions to "fake" conferences; that is, conferences with no quality standards, which exist only to make money."
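The "hand-written context-free grammar" approach SCIgen describes can be sketched in a few lines: pick a start symbol and recursively expand nonterminals by choosing a random production until only terminal text remains. The tiny grammar below is my own invention for illustration; SCIgen's real rule set is vastly larger.

```python
import random

# A toy context-free grammar: nonterminal -> list of alternative productions.
GRAMMAR = {
    "TITLE":    [["Towards the", "PROPERTY", "of", "TOPIC"],
                 ["A Methodology for the", "PROPERTY", "of", "TOPIC"]],
    "PROPERTY": [["Understanding"], ["Refinement"], ["Emulation"]],
    "TOPIC":    [["Object-Oriented Languages"], ["Lambda Calculus"], ["DHTs"]],
}

def expand(symbol, rng):
    """Recursively expand a symbol; anything not in GRAMMAR is a terminal."""
    if symbol not in GRAMMAR:
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return " ".join(expand(s, rng) for s in production)

# Seed the generator so each run of this sketch is repeatable.
print(expand("TITLE", random.Random(0)))
```

Scale the same trick up to rules for abstracts, sections, figures and citations and you have a paper generator.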
I attach below a brief example of my SCIgen output for which I can claim no credit whatsoever.
Just note that it reads rather better than much of the student coursework that one sees flying about...
5.1 Hardware and Software Configuration
A well-tuned network setup holds the key to an useful evaluation. We executed a software simulation on CERN’s desktop machines to disprove topologically classical information’s inability to effect the work of British gifted hacker Charles Darwin. Primarily, we removed some NV-RAM from our system. Similarly, we tripled the USB key space of our mobile telephones. We added some flash-memory to DARPA’s system. Along these same lines, we removed more ROM from MIT’s reliable overlay network. In the end, we removed some ROM from our decommissioned Apple IIs. This step flies in the face of conventional wisdom, but is instrumental to our results.
And the title of my paper?
Uzema: A Methodology for the Understanding of Object-Oriented Languages
Great stuff... I thoroughly recommend you try SCIgen for yourself.
Oh - and if you want to know why anyone would want to submit "fake" papers to so-called "fake" conferences then check out the details of the 9th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI) taking place July 10-13, 2005 in Orlando, Florida, USA:
Through WMSCI conferences, we are trying to relate the analytic thinking required in focused conference sessions, to the synthetic thinking, required for analogies generation, which calls for multi-focus domain and divergent thinking. We are trying to promote a synergic relation between analytically and synthetically oriented minds, as it is found between left and right brain hemispheres, by means of the corpus callosum. Then, WMSCI 2005 might be perceived as a research corpus callosum, trying to bridge analytically with synthetically oriented efforts, convergent with divergent thinkers and focused specialists with non-focused or multi-focused generalists.
And thanks to Martin G at Ohpurleese for the steer. Read his site. It will make you smile.
I know there are lots of small players out there, but "uploading for sale" really hasn't happened yet - but someday soon there will be sites that allow everyone to upload books, journals, music, video - the full gamut of multimedia - and sell it using micropayments.
The obvious players are Apple (iTunes + Garageband...), Amazon (who've just bought an "OnDemand" printing capability), eBay (because they handle everything and digital downloads are just another item) and Google (because, hey, they've got access to everything...). Yahoo and Microsoft might be in there too - but they keep playing "catch-up" and I'm really not convinced they've yet understood the bottom-up nature of the future.
Google has unleashed a beta version of its video-hosting service. Users can upload videos of any size and Google will host them for free. Amazing as that is, it isn't the most interesting feature. It also will allow you to charge whatever you want for users to download the videos.
The implications of this are utterly staggering. Any person with a video can now sell that video for any amount they want at no overhead cost. It potentially creates an opportunity for video producers to make a living from their work. The files will probably span the breadth of garage-band music videos, indie movies, the inevitable porn, and maybe even news.
Another angle to consider is the effect this will have on news. If someone captures an incredible event with a camcorder, how many would be inclined to give it to a local news channel for free when they have a free micropayment system to sell it to a worldwide audience?
New DNA studies suggest that all humans descended from a single African ancestor who lived some 60,000 years ago. To uncover the paths that lead from him to every living human, the National Geographic Society today launched the Genographic Project at its Washington, D.C., headquarters.
The project is a five-year endeavor undertaken as a partnership between IBM and National Geographic. It will combine population genetics and molecular biology to trace the migration of humans from the time we first left Africa, 50,000 to 60,000 years ago, to the places where we live today.
Ten research centers around the world will receive funding from the Waitt Family Foundation to collect and analyze blood samples from indigenous populations (such as aboriginal groups), many in remote areas. The Genographic Project hopes to collect more than a hundred thousand DNA samples to create the largest gene bank in the world.
Scientists have developed a tiny microscope - the width of a human hair - which they say could "revolutionise" the examination of biological samples.
Cardiff University researchers, led by Professor Paul Smith, said the optical biochip could help doctors test for diseases and develop new drugs. The team is looking to integrate the biochip into medical technology, such as diagnostic equipment.
The biochip, developed with a GBP 2.2m grant, works by emitting tiny lasers which analyse a cell. Biological samples can be placed on the biochip - just visible to the human eye - which then relays what it finds via an electrical signal. Future generations may even be able to use these as the basis for a hand-held system.
In theory, the biochip could detect diseases such as HIV, malaria and some cancers, or aid drug development by analysing how a cell reacts to a substance.