We welcome Joyce to CommerceNet Labs; in the coming months, she’ll be working with us on projects such as zClassifieds and declassifieds, and other fun things to do with ad networks…
Dan Gillmor wrote in the 10/31/2004 SJ Mercury News:
Google will have all kinds of company in this expanding world of advertising. That will include, I would expect, many of the more traditional media companies that will see a chance to expand their advertising base beyond the equivalent of the blockbuster (expensive) model that now prevails.
The competitors will also include big companies that have already shown an appreciation of Net-based economics. Microsoft, Yahoo, eBay and at least a few others will certainly be among them.
Google will also find competitors, small ones, out at the edges. And some of those will be new entrants that are figuring out ways to create targeted advertising without massively centralized infrastructures. The principles of peer-to-peer file-sharing will come to the ad marketplace, too.
Google is unquestionably positioning itself in a smart way. The critical mass it’s creating may even prove unbeatable, or turn into a new kind of monopoly that sucks up an astonishing portion of all advertising dollars into its corporate coffers. (That would be a dangerous dominance if it happened.)
Today, eBay, online classified-ad sites and traditional media are the marketplace of choice for the single-item seller. Ultimately, Google and others could even go after that market.
How many dollars (and euros, yen, pesos, renminbi, etc.) will there turn out to be in the low-end advertising market? It’s a big, big number.
Google may not own it. But it’s going to get a share, a large one. I wouldn’t touch the company’s stock at today’s prices, but there’s plenty of room for growth in its primary revenue base. Nothing grows to the moon, as the saying goes, but there’s a fair amount of sky left.
A month ago we gave the name declassifieds to the concept of decentralizing ads. Now we give the name zClassifieds to an internal CommerceNet project to work on “targeted advertising without massively centralized infrastructures”. Will post more as we learn more.
After our decentralized filesharing post, we discovered this item from the ACM News Service: “Is P2P Dying or Just Hiding?”
High-order bits excerpted from the paper itself:
- In our traces, P2P traffic volume has not dropped since 2003. Our datasets are inconsistent with claims of significant P2P traffic decline.
- We present a methodology for identifying P2P traffic originating from several different P2P protocols. Our heuristics exploit common conventions of P2P protocols, such as the packet format.
- We illustrate that over the last few years, P2P applications evolved to use arbitrary ports for communication.
- We claim that accurate measurements are bound to remain difficult since P2P users promptly switch to new more sophisticated protocols, e.g., BitTorrent.
More bits:
CAIDA monitors capture 44 bytes of each packet (see section III), which leaves 4 bytes of TCP packets to be examined (TCP headers are typically 40 bytes for packets that have no options). While our payload heuristics would be capable of effectively identifying all P2P packets if the whole payload was available, this 4-byte payload restriction limits the number of heuristics that can undoubtedly pinpoint P2P flows. For example, BitTorrent string "GET /torrents/" requires 15 bytes of payload for complete matching. Our 4-byte view of "GET " could potentially indicate a non-P2P web HTTP request.
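To make the 4-byte constraint concrete, here is a rough sketch (ours, not the paper’s code) of this kind of payload-prefix heuristic; the prefix table is illustrative rather than the authors’ full heuristic set.

```python
# Rough sketch of the payload-prefix heuristic described above: with only the
# first 4 bytes of TCP payload available, classify a flow by matching known
# protocol conventions.  The prefix table is illustrative, not the paper's
# full heuristic set.

P2P_PREFIXES = {
    b"\x13Bit": "BitTorrent",   # handshake: 0x13 followed by "BitTorrent protocol"
    b"GNUT": "Gnutella",        # "GNUTELLA CONNECT/..."
    b"GIV ": "Gnutella",        # upload announcement
}

AMBIGUOUS_PREFIXES = {
    # "GET " could be an ordinary HTTP request *or* a BitTorrent request such
    # as "GET /torrents/..."; 4 bytes cannot tell them apart.
    b"GET ": "HTTP or P2P-over-HTTP (undecidable with 4 bytes)",
}

def classify(payload_4_bytes: bytes) -> str:
    """Best-effort classification of a flow from a 4-byte payload snippet."""
    if payload_4_bytes in P2P_PREFIXES:
        return P2P_PREFIXES[payload_4_bytes]
    if payload_4_bytes in AMBIGUOUS_PREFIXES:
        return AMBIGUOUS_PREFIXES[payload_4_bytes]
    return "unknown (fall back to port- or behavior-based heuristics)"

if __name__ == "__main__":
    for snippet in (b"\x13Bit", b"GET ", b"\x00\x00\x00\x01"):
        print(snippet, "->", classify(snippet))
```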
The ACM News Service summary…
“Is P2P Dying or Just Hiding?”
CAIDA.org (10/04); Karagiannis, Thomas; Broido, Andre; Brownlee, Nevil

UC Riverside’s Thomas Karagiannis, the Cooperative Association for Internet Data Analysis’ (CAIDA) Andre Broido, et al. dispute popular media reports that peer-to-peer (P2P) file-sharing has declined precipitously in the last year, and contend that the reverse is actually the case. The authors attempted to measure P2P traffic at the link level more accurately by gauging traffic of all known popular P2P protocols, reverse engineering the protocols, and labeling distinctive payload strings. The results support the conclusion that 2004 P2P traffic is at least comparable to 2003 levels, while rigid adherence to conventional P2P traffic measurement techniques leads to miscalculations. The percentage of P2P traffic was found to have increased by about 5 percent relative to traffic volume. Furthermore, comparisons between older and current P2P clients revealed that the use of arbitrary port numbers was elective in older clients, while current clients randomize the port number upon installation without the need for user action. Meanwhile, P2P population studies found that the ranks of IPs grew by about 60,000 in the last year, and the number of ASes participating in P2P flows expanded by roughly 70 percent. These findings outline several trends, including evolving tension between P2P users and the entertainment sector; increasing demand for home broadband links; plans to directly induce P2P applications into profitable traffic configurations; and a significant transformation in supply and demand in edge and access networks, provided that P2P traffic maintains its growth and legal entanglements are eliminated.
The full article (pdf) contains many more bits for the interested reader…
Science News: Best guess: economists explore betting markets as prediction tools
The research that led to future-predicting markets stems from the 1960s and 1970s, when Vernon Smith and Charles Plott, now of George Mason University and the California Institute of Technology in Pasadena, respectively, began using laboratory experiments to study different market designs. In the early 1980s, Plott and Shyam Sunder, now of Yale University, tested how well markets aggregate information by designing a set of virtual markets in which they carefully controlled what information each trader had.
In one experiment, Plott and Sunder permitted about a dozen study participants to trade a security, telling them only that it was worth one of three possible amounts–say, $1, $3, or $8–depending on which number was picked by chance. Plott and Sunder then gave two of the participants inside information by telling them which amount had been selected. Traders couldn’t communicate with each other; they could only buy and sell on the market.
“The question was, Would the market as a whole learn what the informed people knew?” Plott says. “It turned out that it would happen lightning fast and very accurately. Everyone would watch the movements of the market price, and within seconds, everyone was acting as if they were insiders.”
In another experiment, Plott and Sunder gave the inside traders less-complete information. For instance, if the outcome of the random pick were $3, they would tell some traders that it was not $1, and others that it was not $8. In these cases, the market sometimes failed to figure out the true value of the security.
However, if Plott and Sunder created separate securities for each of the three possible outcomes of the random pick instead of using one security worth three possible amounts, the market in which some traders had incomplete tips succeeded in aggregating the information.
The studies established that, at least in these simple cases, markets indeed can pull together strands of information and that different setups affect how well they do so.
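As a toy illustration of the aggregation step (our sketch, not Plott and Sunder’s design), here is what pooling the insiders’ partial tips amounts to: each tip rules out one outcome, and the intersection pins down the true value — which is exactly the information that separate all-or-nothing securities let prices express.

```python
# Toy illustration (not Plott and Sunder's actual design): if each insider's
# tip rules out one outcome, pooling the tips -- which is what prices do in a
# well-functioning market -- narrows the possibilities to the true value.

OUTCOMES = {1, 3, 8}          # possible security values, in dollars

# Each insider knows only what the outcome is *not* (the true draw is $3).
insider_tips = [
    OUTCOMES - {1},           # "it's not $1"
    OUTCOMES - {8},           # "it's not $8"
]

pooled = set.intersection(*insider_tips)
print("Outcomes consistent with all tips:", pooled)   # {3}

# With one security worth $1, $3, or $8, each insider alone can only bound the
# price.  With a separate all-or-nothing security per outcome, an insider who
# knows "not $1" can simply sell the $1 security toward zero, so the three
# prices reveal the pooled information directly.
```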
This type of experiment gave researchers a “wind tunnel” in which to test different market designs, says John Ledyard, a Caltech economist who chairs the board of Net Exchange. “With experiments, we’re starting to zero in on what really works,” he says.
From InformationWeek’s “Future Of Transactions: Peer-To-Peer Payoff,” October 18, 2004:
Some people, including myself, believe the next step is for some of those bits to have value. That is to say, consider a string of bits to be like a virtual cow or shell. In order to distinguish these bits (like telling the difference between a beautiful seashell and a piece of coal), they would need an agreed identity. To avoid forgery, they would need a unique and secure ID. And to stop multiple spending of the same bits, there would need to be a clearing process or a means to reveal the identity of anybody who tries to double-spend. All of these requirements are easily achieved in both traceable and anonymous systems of E-cash. In these cases, the money does move. The bits are money. The more you have, the richer you are. This is the future, though maybe only in part.
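Negroponte’s three requirements — an agreed identity, a forgery-resistant ID, and a clearing step against double-spending — can be sketched in a few lines. The following is a toy, not any real e-cash scheme (anonymous e-cash needs blind signatures, among other things); the issuer key and clearinghouse here are hypothetical.

```python
# Minimal sketch of the three requirements above -- an agreed identity, a
# forgery-resistant ID, and a clearing step against double-spending.  This is
# a toy, not a real e-cash scheme; the issuer key and clearinghouse are
# hypothetical.

import hmac, hashlib, uuid

ISSUER_KEY = b"demo-issuer-secret"        # assumption: a single trusted issuer

def mint_token() -> dict:
    """Issue a token: a unique ID plus the issuer's signature over it."""
    token_id = uuid.uuid4().hex
    sig = hmac.new(ISSUER_KEY, token_id.encode(), hashlib.sha256).hexdigest()
    return {"id": token_id, "sig": sig}

class ClearingHouse:
    """Accepts each validly signed token exactly once."""
    def __init__(self):
        self.spent = set()

    def redeem(self, token: dict) -> bool:
        expected = hmac.new(ISSUER_KEY, token["id"].encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, token["sig"]):
            return False                   # forged: signature doesn't verify
        if token["id"] in self.spent:
            return False                   # double spend: already cleared
        self.spent.add(token["id"])
        return True

if __name__ == "__main__":
    bank, coin = ClearingHouse(), mint_token()
    print(bank.redeem(coin))   # True  -- first spend clears
    print(bank.redeem(coin))   # False -- second spend is rejected
```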
A parallel and more intriguing form of trade in the future will be barter. Swapping is a very attractive form of exchange because each party uses a devalued currency, in some cases one that would otherwise be wasted. Many of us are too embarrassed to run yard sales or too lazy to suffer the inconvenience and indignity of eBay. But imagine if you weren’t. The unused things in your basement can be converted into something you need or want. Likewise, the person with whom you’re swapping is giving something of value to you which is less so to them. With minimal computation, three-way, four-way, and n-way swaps can emerge, thereby removing the need for any common currency.
Swapping is extended easily to baby-sitting for a ride to New York, a mansion for a two-hundred-foot yacht, or leftover food for a good laugh. In some cases, people will swap for monetary or nonmonetary currencies. Without question, we’ll see new forms of market-making and auctions. But the most stunning change will be peer-to-peer, and peer-to-peer-to-peer- … transaction of goods and services. If you fish and want your teeth cleaned, you need to find a dentist who needs fish, which is so unlikely that money works much better. But if a chauffeur wanted fish and the dentist wanted a driver, the loop is closed. While this is nearly impossible to do in the physical world, it’s trivial in cyberspace. Add the fact that some goods and services themselves can be in digital form, and it gets easier and more likely. An interesting side benefit will be the value of one’s reputation for delivering on your promises — thus, identities will have real value and not be something to hide.
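The “minimal computation” behind n-way swaps is essentially cycle-finding in a directed wants graph. A rough sketch of ours, using the fish/dentist/chauffeur loop from the excerpt:

```python
# Rough sketch of the n-way swap idea: model each participant as offering one
# thing and wanting another, then look for a cycle in the resulting directed
# graph.  The participants below are just the excerpt's example.

participants = {
    "fisherman": {"offers": "fish",           "wants": "teeth cleaning"},
    "dentist":   {"offers": "teeth cleaning", "wants": "driving"},
    "chauffeur": {"offers": "driving",        "wants": "fish"},
}

def find_swap_loop(people: dict) -> list:
    """Return a list of names forming a closed swap loop, if one exists."""
    offers = {p["offers"]: name for name, p in people.items()}
    for start in people:
        chain, current = [start], start
        while True:
            provider = offers.get(people[current]["wants"])
            if provider is None or (provider in chain and provider != start):
                break                     # dead end, or a loop that skips `start`
            if provider == start:
                return chain              # closed loop: everyone's want is met
            chain.append(provider)
            current = provider
    return []

print(find_swap_loop(participants))       # ['fisherman', 'dentist', 'chauffeur']
```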
The point can be generalized beyond money. Peer-to-peer is a much deeper concept than we understand today. We’re limited by assumptions rooted in and derived from the physical world. Information technology over the next 25 years will change those limits through force of new habits. Let me cite just one: I think nothing of moving millions of bits from one laptop to another (inches away) by using the Internet and transferring those bits through a server 10,000 miles away. Imagine telling that to somebody just 25 years ago.
Nicholas Negroponte is the founding chairman of MIT’s Media Laboratory and the author of the seminal work on the digital revolution, “Being Digital” (Knopf, 1995).
We’ve spent a little time working on the Decentralization page in the CommerceNet Labs wiki. Here’s a snapshot of what we have so far:
Decentralization in Commerce means the freedom to do commerce the way you want, rather than the way your software wants.
We believe that to build software that works the way society works, software design must reflect the principles of decentralization.
An agency is an organization with a single trust boundary. One way to think about decentralization is that it allows multiple agencies to have different values for a variable (a minimal sketch follows the list below).

See also:
- Wikipedia – Decentralization
- The Now Economy – Decentralization
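Here is the minimal sketch mentioned above of what “different values for a variable” means in practice; the agencies and the variable are made up for illustration.

```python
# Tiny illustration of the definition above: the "same" variable can hold a
# different value inside each agency (trust boundary).  The agencies and the
# variable are made up for the example.

agencies = {
    "supplier.example":  {"list_price": 100.00},   # what the supplier believes
    "retailer.example":  {"list_price": 129.99},   # what the retailer believes
    "auditor.example":   {"list_price": None},     # hasn't been told yet
}

def value_of(variable: str, agency: str):
    """Look a variable up *within* one agency's trust boundary -- there is
    no single global answer to ask for."""
    return agencies[agency].get(variable)

for agency in agencies:
    print(agency, "->", value_of("list_price", agency))
```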
Update. Allan Schiffman notes that Webster’s has a fine definition of decentralization as well: “the dispersion or distribution of functions and powers”. I’ll add that to the wiki as well…
I want SupplyFX, Webify, and Bonsai Development to be in CommerceNet’s Neighborhood. I also want Rob Rodin, Allan Schiffman, Kevin Hughes, and Marty Tenenbaum in CommerceNet’s Neighborhood.
And by linking to those pages like I just did, I just did. Such is zSearch.
We love Google’s new Desktop Search. We’ve been arguing about something like this for a year or more. The idea of searching everything you’ve seen — not just your hard drive, but everything hyperlinked to it (such as your surfing history) — is so intriguing we’ve built something similar for ourselves.
We’ve modified Nutch to search not just CommerceNet’s website, weblog, and wiki, but also everything we link to. Go ahead and try our index of CommerceNet’s Neighborhood. If you query for Nutch you won’t just see pages from our sites, but Nutch’s home page and even an application of it at CreativeCommons…
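Our actual changes were to Nutch (which is Java), but the seed-expansion idea is simple enough to sketch here; the seed URL below is a placeholder, and a real crawl would of course go through Nutch’s own fetcher.

```python
# Sketch of the "index everything we link to" idea: start from our own pages,
# collect every outbound link, and hand the expanded URL list to the indexer.
# The seed URL is a placeholder; this only shows the seed expansion.

import re
import urllib.request
from urllib.parse import urljoin

SEEDS = [
    "https://www.commerce.net/",      # placeholder seed URL
]

HREF = re.compile(r'href="([^"#]+)"', re.IGNORECASE)

def outlinks(url: str) -> set:
    """Fetch a page and return the absolute URLs it links to."""
    try:
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    except OSError:
        return set()
    return {urljoin(url, href) for href in HREF.findall(html)
            if urljoin(url, href).startswith("http")}

def neighborhood(seeds: list) -> set:
    """One level of expansion: our pages plus everything they link to."""
    urls = set(seeds)
    for seed in seeds:
        urls |= outlinks(seed)
    return urls

if __name__ == "__main__":
    for url in sorted(neighborhood(SEEDS)):
        print(url)    # feed this list to the crawler/indexer as its seed set
```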
We’re also exploring a bunch of other ideas that GDS and other desktop indexing projects like StuffIveSeen haven’t tackled yet. Foremost is ranking — just like the AltaVista engine that Google itself dethroned, GDS doesn’t have anything like a PageRank for the gigabytes of information on your disk. Like many researchers, we suspect that a user’s social network is the key to discerning which hits are likely to be most useful. After all, if the Web is drowning in infoglut on any given query term, a user should be such an expert on the terms of his or her art that there ought to be even more hits to rank on localhost. One cure may be collaborative filtering with your friends…
Secondary aspects of the problem include tackling the fact that many of us have multiple computers and identities on the Internet, so we’d need networks of personal search engines. Or that a local-proxy-server approach might be better at capturing the “dynamics” of our interaction (how often we re-read the same email over IMAP, say).
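As a hedged sketch of that local-proxy idea: suppose the proxy simply logged every URL (or IMAP message) it fetched; revisit counts then become a ranking signal. The log format here is an assumption, not what any real proxy emits.

```python
# Hedged sketch of the "local proxy captures dynamics" idea: count how often
# each resource is re-fetched and use the count as a ranking boost.  The log
# format (one URL per line) is an assumption.

from collections import Counter

def revisit_counts(log_lines) -> Counter:
    """How many times has each resource been fetched through the proxy?"""
    return Counter(line.strip() for line in log_lines if line.strip())

def rerank(hits: list, counts: Counter) -> list:
    """Promote results the user keeps coming back to."""
    return sorted(hits, key=lambda url: counts[url], reverse=True)

if __name__ == "__main__":
    log = [
        "imap://mail/INBOX/1042",     # re-read three times
        "imap://mail/INBOX/1042",
        "imap://mail/INBOX/1042",
        "http://wiki.example/Decentralization",
    ]
    counts = revisit_counts(log)
    print(rerank(["http://wiki.example/Decentralization",
                  "imap://mail/INBOX/1042"], counts))
```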
But rather than rattling off a longer list of half-baked hypotheses, I’d like to cite GDS for at least one idea that never occurred to us: integrating it seamlessly with the public site. Sure, we thought AdWords-like ads were the key to a better revenue model for the Fisher category of PersonalWeb products.
No, what’s cool is that Google’s ordinary results pages from the public website automatically include hits from your hard drive. How’d they do that?! Read on…
CommerceNet Labs Wiki : FluffyBunnyBurrowsIntoWinSock
…we found that Google Desktop Server actually hooks into Windows’ TCP/IP stack to directly modify incoming traffic from Google’s websites to splice its local results in. Once you install GDS, there’s a bit of Google’s code running inside every Windows application that talks to the Internet. It’s done using a long-established hook in WinSock2, its Layered Transport Service Provider Interface (SPI)…
The Winsock LSP is mostly used by spyware and censorware; it’s a surprise to see a positive use for it. Spyware detectors like HijackThis consequently detect it.
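To be clear, GDS does its splicing inside a native WinSock2 Layered Service Provider, not in anything like the following; but as a conceptual stand-in, here is what the splicing step itself amounts to. The results marker and the local-hits block are hypothetical, not Google’s real markup.

```python
# Conceptual stand-in only: GDS does this inside a WinSock2 Layered Service
# Provider (native code hooked into the SPI), not in Python.  This sketch just
# shows the splicing step -- injecting local hits into a results page at a
# marker.  The marker string and hit format are hypothetical.

LOCAL_HITS_HTML = (
    '<div class="local-hits">'
    "3 results stored on your computer for this query"
    "</div>"
)

RESULTS_MARKER = '<div id="results">'     # hypothetical: not Google's real markup

def splice_local_results(page_html: str) -> str:
    """Insert the local-hits block just before the web results if the marker
    is found; otherwise return the page unchanged."""
    if RESULTS_MARKER not in page_html:
        return page_html
    return page_html.replace(RESULTS_MARKER, LOCAL_HITS_HTML + RESULTS_MARKER, 1)

if __name__ == "__main__":
    page = "<html><body>" + RESULTS_MARKER + "web results...</div></body></html>"
    print(splice_local_results(page))
```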
[An aside: why is Rifkin’s GLAT posting more relevant on the query “rifkin fisher” than Rifkin’s actual Fisher posting? I think it’s Battelle’s fault, for increasing the GLAT’s PageRank! :-]
From Biz Stone’s excellent article, The Wisdom of Blogs:
Bloggers are a wise crowd.
- Diversity of opinion – That’s a no-brainer. Bloggers publish hundreds of thousands of posts daily, each one charged with its author’s unique opinion.
- Independence of members – Except for your friends saying “You’ve got to blog about that!” bloggers are not controlled by anyone else.
- Decentralization – There is no central authority in the blogosphere; publish your blog anywhere you want with any tool you want.
- A method for aggregating opinions – Blog feeds make aggregation a snap and there is no shortage of services that take advantage of that fact.
The article goes on to talk about how MIT Media Lab project Blogdex (one of the longest-operating and most-visited opinion aggregators) is like a hive mind of the blogosphere, collectively creating a modern Oracle with no single opinion about anything. Excellent.
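The aggregation step Blogdex performs is easy to sketch: pull posts from many feeds and count which URLs they link to. The posts below are inline stand-ins for real feed entries.

```python
# Sketch of the Blogdex-style aggregation step: given a batch of posts pulled
# from blog feeds, count which URLs are being linked most right now.  The
# posts are inline stand-ins for real feed entries.

import re
from collections import Counter

posts = [
    '<p>Great piece: <a href="http://example.org/wisdom">wisdom of crowds</a></p>',
    '<p>Also read <a href="http://example.org/wisdom">this</a> and '
    '<a href="http://example.net/p2p">that</a>.</p>',
    '<p><a href="http://example.net/p2p">p2p traffic keeps growing</a></p>',
    '<p><a href="http://example.org/wisdom">linked again</a></p>',
]

HREF = re.compile(r'href="([^"#]+)"')

def most_linked(entries, top=5):
    """Rank URLs by how many posts link to them -- the aggregate 'opinion'."""
    counts = Counter(url for entry in entries for url in HREF.findall(entry))
    return counts.most_common(top)

print(most_linked(posts))
# [('http://example.org/wisdom', 3), ('http://example.net/p2p', 2)]
```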
CacheLogic Research paints an interesting picture of decentralized filesharing.
The most astonishing item is that global Internet traffic analysis in June 2004 revealed that in the United States peer-to-peer represents roughly two-thirds of traffic volumes, and in Asia peer-to-peer represents more than four-fifths of traffic volumes. By comparison, HTTP is less than a tenth of the traffic in Asia and less than a sixth of the traffic in the United States. CacheLogic calls peer-to-peer the killer application for broadband with a global reach and a global user base.
Perusing the architectures and protocols section of CacheLogic’s site we find a table comparing the characteristics of web traffic (HTTP) with those of common peer-to-peer protocols. They point out that first generation p2p systems were centralized like Napster; second generation p2p systems were decentralized like Gnutella; and now:
The third generation architecture is a hybrid of the first two, combining the efficiency and resilience of a centralized network with the stealth characteristics of a distributed/decentralised network. This hybrid architecture deploys a hierarchical structure by establishing a backbone network of SuperNodes (or UltraPeers) that take on the characteristics of a central index server. When a client logs on to the network, it makes a direct connection to a single SuperNode which gathers and stores information about peers and content available for sharing.
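A minimal data-structure sketch of that hybrid model (ours, simplified for illustration): peers register their shared files with one SuperNode, which answers searches from its local index and forwards them along the backbone.

```python
# Minimal sketch of the hybrid (third-generation) model described above:
# ordinary peers register their shared files with a single SuperNode, which
# acts like a small central index; searches consult the SuperNode's index and
# can be forwarded to other SuperNodes.  Simplified for illustration.

class SuperNode:
    def __init__(self, name: str):
        self.name = name
        self.index = {}                    # filename -> set of peer names
        self.neighbors = []                # other SuperNodes on the backbone

    def register(self, peer: str, files: list):
        """A client logs on and reports what it has available for sharing."""
        for f in files:
            self.index.setdefault(f, set()).add(peer)

    def search(self, filename: str, ttl: int = 2) -> set:
        """Answer from the local index, then ask neighboring SuperNodes."""
        hits = set(self.index.get(filename, set()))
        if ttl > 0:
            for sn in self.neighbors:
                hits |= sn.search(filename, ttl - 1)
        return hits

if __name__ == "__main__":
    a, b = SuperNode("sn-a"), SuperNode("sn-b")
    a.neighbors.append(b)
    a.register("peer-1", ["song.ogg"])
    b.register("peer-2", ["movie.avi"])
    print(a.search("movie.avi"))           # {'peer-2'} found via the backbone
```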
Recent developments in peer-to-peer include dynamic port selection and bidirectional streaming of download traffic in the most popular peer-to-peer applications of 2004, BitTorrent (more useful thanks to many available BitTorrent clients and DV Guide) and eDonkey (and eMule). By traffic volume, BitTorrent is the most popular peer-to-peer application:
BitTorrent’s dominance is likely to be attributed to two factors: the rise in popularity of downloading television programmes, movies and software; and the size of these files – an MP3 may be 3-5MB, while BitTorrent often sees files in excess of 500MB being shared across the Peer-to-Peer network.
The high usage of eDonkey in Europe can be attributed to the fact that the eDonkey interface is available in a number of different languages – French, German, Spanish, etc.
So even though the hype machine has stopped pumping p2p, the quieter revolution of the last few years has shown that peer-to-peer traffic has steadily grown to a majority of Internet traffic worldwide.