In an annual review of the most pressing issues for health executives and policy makers, PwC identified nine top issues for 2009:

  1. The economic downturn will hit healthcare
  2. The underinsured will surpass the uninsured as healthcare’s biggest headache
  3. Big pharma turns to M&A to build the drug pipeline
  4. From vaccines to regulation, prevention is on the rise
  5. Genetic testing reaches a price point for the masses
  6. The Internet and social networking are a powerful health extender: “Technology will empower patients in new ways during 2009. The increased information and growing patient-to-patient interaction over social networking platforms and websites such as patientslikeme.com and americanwell.com are changing how healthcare is navigated and experienced by consumers, especially as electronic health records become more common.”
  7. Hospitals must perform to get paid
  8. Payers and employers to give incentives for wellness programs
  9. ICD-10 will require a major resource investment

Read full Healthcare IT News article

By Earnest Cavalli

WIRED Blog Network

A new immersive web platform called Vivaty Scenes lets users create tiny virtual worlds and decorate them with content from around the Internet.

After adding Vivaty Scenes, which entered public beta Tuesday, to a Facebook or AOL Instant Messenger account, users can set up a customizable “room” where they can host chat sessions or small virtual gatherings within a web browser.

Read full article

We are offering a couple of internship positions at CommerceNet this summer.

CommerceNet is an entrepreneurial research institute dedicated to fulfilling the promise of the Internet. We are currently seeking Software Engineer interns to implement a data-visualization Web application for public health information. The work involves JavaScript and Python, covering both data access and graphics. CommerceNet may also accept proposals for internships to work on well-specified projects of the intern’s own design.

What you’ll do

  • Develop open source libraries or widgets for graphing and data visualization
  • Build a public-service, community-oriented Web site
  • Be part of a small team or work nearly independently
  • Develop with minimal guidance, using rapid iteration and feedback loops, with leeway in your choice of tools
  • Borrow, create, or collaborate on visual design and visual elements

Required Skills:

  • Web Applications development, including CSS and JavaScript
  • Python or demonstrated ability to pick up languages
  • MySQL or similar data management experience
  • Strong ability to extrapolate from raw ideas to realistic implementations
  • Demonstrated initiative pulling a project forward
  • Some experience using graphics libraries
  • Familiarity with Cleveland or Tufte principles would be a bonus

Email cn-hr@box5837.temp.domains with questions or cover letter and resume.

A goal of many new Web “2.0” ventures is to build a large, or at least persistent, community. Success is difficult to measure, but one benchmark is breaking into the top 100,000 sites by traffic as measured by Alexa. It might be a good sign if the site designers email a few people asking them to take a look at the site, and after only two days over 3,000 people sign up to beta test. You could do worse than to have a list of 60,000 people desperate to join your site before it leaves beta: so desperate that the site admins put up a “waiting list checker” page just so an impatient person can see how many people are in line for accounts ahead of him or her. Read more

A longer, though not necessarily more accurate, job description can be found here.


Healthcare 3.0 Workshop Program

Evening Arrival – October 15, 2007
  • 8pm – Arrival and check-in, Monterey, California
  • 9-10:30pm – Networking Cocktail and Evening Speaker: TBD

Workshop Day 1 – October 16, 2007
  • 8-9am – Registration and Continental Breakfast
  • 9-9:50am – Keynote
    • Speaker: Dr. Marty Tenenbaum
  • 10-11:50am – Panel 1: Collective Intelligence for EBM
    • Speaker: Leslie Michaelson
    • Panelists: 3-5 TBD
  • 12-1pm – Lunch
    • Featured Speaker: TBD Celebrity
  • 1-2:45pm – Panel 2: Empowering Patients via the Net
    • Speaker: TBD
    • Panelists: Ryan Phelan of DNA Direct, Adam Bosworth of Google, Paula Kim + ?
  • 3-4:45pm – Panel 3: Internet for Citizen Science
    • Speaker: James Heywood
    • Panelists: Zak Kohane + ?
  • 5-6:30pm – Web-Med Mixer and Paper & Poster Presentations
  • 7:30-9:00pm – Dinner
    • Featured Speaker: TBD Celebrity

Workshop Day 2 – October 17, 2007
  • 8-9am – Continental Breakfast
  • 9-9:50am – Keynote
    • Speaker: Andy Grove or TBD Celebrity
  • 10-11:50am – Panel 4: Kaizen Trials
    • Speaker: TBD
    • Panelists: TBD
  • 12-3pm – Lunch and Event

I’ve been seeing the word “compliance” tossed around a lot for HTTP and other standards lately, with much ambiguity. Let’s say you read RFC2068 and implemented a client very carefully. Does it make your client implementation “uncompliant” if a new standard updates or obsoletes RFC2068 and adds requirements, as RFC2616 did?

My answer is “that’s not even a meaningful question”. Compliance can be a very loose concept.

  • If your software claims compliance with HTTP, there can be a lot of variation in what that actually means, because different versions of HTTP have significant differences.
  • If your software claims HTTP/1.1 compliance, we have a somewhat better idea what that means. A client advertising HTTP/1.1 support in its requests can be assumed to understand the “Cache-Control” response header from the server, because all the specs that ever defined HTTP/1.1 (RFC2068 and RFC2616) define that header. However, we can’t tell if such a client supports the “s-maxage” directive on the Cache-Control header (the maximum age allowed for a shared cache entry) because that was only defined in RFC2616.
  • If your software claims RFC2068 compliance, we don’t know whether it understands “s-maxage”, but we can assume that it supports “max-age”.
  • If your software claims RFC2616 compliance, we can assume that it understands “s-maxage” as well as “max-age”. But support for RFC2616 isn’t advertised over the wire to servers, so we can’t tell such clients apart from clients that only implement RFC2068.

With this knowledge, you can ask whether the new caching features in RFC2616 made existing clients non-compliant with RFC2068. Of course not. RFC2068 didn’t change; there’s a reason the IETF doesn’t replace its standards in place but assigns new RFC numbers. Do the new caching features make the client non-compliant with RFC2616? Well, it never claimed to be compliant with a spec that was probably published after the client was written.
The important question to ask is whether a new feature or requirement is backwards-compatible (and if it’s not, whether the feature is important enough to break backwards compatibility). Let’s consider the Cache-Control header a little further: a response with “Cache-Control: no-store” can be sent to any client that advertised HTTP/1.1 support, because that directive works the same way in both specs. If the response has “Cache-Control: s-maxage=1600”, we can’t be sure that all HTTP/1.1 clients support it, but that might be OK: only shared caches can possibly do the wrong thing if they don’t implement RFC2616 yet, and the server can limit how stale a pre-2616 shared cache’s entries get by having a backup plan, e.g. “Cache-Control: s-maxage=1600, max-age=36000”.
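To make that backup plan concrete, here is a minimal sketch in Python (the parsing and function names are my own illustration, not any HTTP library’s API) of how a cache might choose a freshness lifetime, with RFC2616-aware shared caches honoring “s-maxage” and older implementations falling back to “max-age”:

```python
# Minimal sketch (illustrative, not a real HTTP library): choosing a
# freshness lifetime from Cache-Control directives.

def parse_cache_control(value):
    """Parse a directive string like 's-maxage=1600, max-age=36000'."""
    directives = {}
    for part in value.split(","):
        name, _, arg = part.strip().partition("=")
        directives[name.lower()] = arg
    return directives

def freshness_lifetime(value, shared_cache):
    d = parse_cache_control(value)
    if "no-store" in d:
        return 0                       # must not be cached at all
    if shared_cache and "s-maxage" in d:
        return int(d["s-maxage"])      # only RFC2616 implementations know this
    if "max-age" in d:
        return int(d["max-age"])       # understood since RFC2068
    return None                        # fall back to Expires or heuristics

# An RFC2616-aware shared cache honors the stricter s-maxage...
print(freshness_lifetime("s-maxage=1600, max-age=36000", shared_cache=True))  # 1600
# ...while a pre-2616 shared cache ignores it but is still bounded by max-age.
print(freshness_lifetime("max-age=36000", shared_cache=True))                 # 36000
```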

This new feature was a reasonable choice in the standardization of RFC2616. If the writers of RFC2616 had been prevented from making any requirements that weren’t already met in deployed clients, they would not have been able to add features like “maximum age limits for shared cache entries”. The limitation would have unduly restricted their ability to improve RFC2616. Instead, the authors considered whether each feature was worth the new code and other design/test effort, and the backwards-compatibility considerations, and whether there were reasonable work-arounds or fall-backs.

It’s a very engineering-minded approach, but that’s what we do at the IETF. We don’t do scientific theories of protocol compliance that must hold for all instances of protocol X. We do engineering.

Prediction market and auction software, by name, creator, code availability, version, paper, and description:

  • Zocalo (Chris Hibbert). Open source; latest release 2006/7/14. Paper: CommerceNet Tech Report: The Zocalo White Paper. A Java toolkit for prediction markets; includes a general-purpose prediction market configuration as well as a configuration intended for use in laboratory economics experiments.
  • Free Market (Jesse Gillespie). Open source; link is broken. “A PHP-based freeware virtual market package using MySQL. When completed, FreeMarket will provide an open-source way for researchers and educators to easily incorporate IEM-like electronic virtual prediction markets into their programs.”
  • jMarkets (CalTech). Open source; version 1.5 released 2005/11/1. “A web-based platform for running large-scale market experiments.” CDA (continuous double auction) with order book.
  • jAuctions (CalTech). “Available soon!” since at least mid-2005. A web-based platform for running auctions.
  • multistage (CalTech). Open source; last release 2004/9/16. “A modular package, designed to deal with a broad class of multi-stage games, where the stages may be determined by players’ play and/or random events.”
  • JASA (Liverpool University). Open source; latest release 2005/2/18. Paper: Phelps et al., Co-Evolutionary Mechanism Design: A Preliminary Report. “A high-performance auction simulator that allows researchers in agent-based computational economics to run trading simulations.”
  • Idea Futures (Ken Kittlitz). Open source (limited license); latest release 2005/8/25. Paper: Experiences with the Foresight Exchange. The source code for the longest-running open play-money prediction market site on the web.
  • usifex (Peter McCluskey). Open source; 2000/2/9. No paper. An implementation of prediction markets that makes general conditional betting possible.
  • MUMS (HP Labs). Closed source; not available. Paper: Computer Games and Economics Experiments, Yut Chen and Ren Wu. Focused on games and turn-taking; no foundation for switchable auctions/markets.
  • Gambit (Texas A&M). Open source; latest release 2006/1/20. Focused on general games.
  • AuctionBot (Wellman, UMich). Closed source; retired. Paper: a mention. Supported auctions between automated agents and people; demand functions.
  • Walras (Wellman, UMich). Closed source; not available. Paper: Market Oriented Programming. Call market; competitive equilibria; demand functions; market simulation.
  • Resource Oriented Multi-commodity Algorithm (Fredrik Ygge). Open source; 1998/2/27. Paper: Publications. An algorithm, not a toolkit.

Search

The main user task is locating existing microformatted data on the public Web. This will eventually require ranking multiple results to find the most relevant ones. It may also require a more specific query language for working with particular fields (“Advanced Search”). A secondary task is reusing this information once found.

Query Interface

A one-line input form would be ideal. Query terms alone won’t specify the range of microformat types to search, and not knowing the type also complicates handling of compound microformats: should a search return hCards within hCalendar events as ‘naked’ hCards? (A rough parsing sketch follows the list below.)

  • text Terms: should be case-folded and matched anywhere, in any text node.
    • Çelik, Argent, web-2.0, “Web 2.0”, smith +sunnyvale, yankees -NY -“New York”
  • h* Directives: should choose a schema-appropriate range of shortcuts for common elements of popular microformats
    • Tantek hCard — where name-of-spec should force type matching
    • region:CA, postal-code:94040 — literal matching of class names from the specs
    • Tantek zip:9*, Yankees in:CA, party by:Rohit — meta-matching across colloquialisms for “all location-related stuff” or “organizer or participant”
  • xq Directives: should be applied to some? XML representation of all the stored microformats
    • //region=”CA”, //given-name=”Khare*” (?). Based on automatic application of miniML transformation rules, with dictionaries for all the common µf terms (XMDP)
  • CSS Selectors?: would it be designer-friendly to use CSS3 selectors?
    • (.vcard .fn):foo == fn:foo. Basically, roll-your-own directive by using css to specify what will match.
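As a rough sketch of how such a one-line query might be tokenized (names like META_DIRECTIVES and parse_query are illustrative, not part of any microformats spec), a parser could split a query into case-folded text terms and field directives, expanding meta-directives like “in:” into several location-related class names:

```python
import re

# Minimal sketch (illustrative): split a one-line query into plain text terms
# and field directives like "region:CA" or "zip:9*". Meta-directives such as
# "in:" expand to several location-related microformat class names.
META_DIRECTIVES = {
    "in": ["region", "locality", "country-name"],  # "all location-related stuff"
    "by": ["organizer", "attendee"],               # "organizer or participant"
}

TOKEN = re.compile(r'"[^"]*"|\S+')

def parse_query(q):
    terms, directives = [], []
    for token in TOKEN.findall(q):
        name, sep, value = token.partition(":")
        if sep and not token.startswith('"'):
            for cls in META_DIRECTIVES.get(name, [name]):
                directives.append((cls, value.casefold()))
        else:
            terms.append(token.strip('"').casefold())  # text terms are case-folded
    return terms, directives

print(parse_query('Tantek zip:9* in:CA "Web 2.0"'))
```

Negated terms (e.g. -NY) and wildcard expansion are left out; the point is only that text terms and the various directive styles can all normalize to one internal form.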

Result Interface

Start with a standardized card UI element.

Submission

Client-side: Miffy

Server-side: UFP (?)

Submission could easily be delayed: make it work with interactively submitted content first.

Crawling

How to schedule regular refreshes? How to avoid duplicating all of evdb & upcoming, say?

Deployment Issues

If we invest in the Microsearch scenario, it’s worth asking what it will take to fund it all the way through to deployment.

Scalability

Is dbxml truly sufficient in the short term? What about concurrency?

Privacy

how to “unsubmit”?

is downloading images still a risk in this context? Probably not.

Security

… scrubbing & sterilization …

Spam

no, PageRank-of-the-source-page is no protection :(

user accounts with strong passwords? (blacklists…)

should we burn a unique-key into every IPL, so we can at least distinguish contributors?

Recap: Goals

Searcher’s Goals:

  1. Find people, events, reviews — the sorts of things microformats have been invented for
  2. Re-use the search results easily (e.g. in .vcf or other formats)
  3. Explore the world of microformats

Author’s goals:

  1. Test out their markup
  2. Attract readers for their content

Our goals:

  1. Promote microformats by “showing off” how much is out there already
  2. Give microformatted data chunks their own addresses for further automation/remixing

Microformats.org community goals:

  1. Gather all the “test cases” together
  2. Cross-check specs with actual practice

Following up on our recent release of a long-delayed tech report on social ranking of email search, another news article recently highlighted a similar approach to a different aspect of the problem: socially sorting incoming email.

In either case, there’s a crying need for smarter email clients (at least for us few doddering old souls who live and die by email; apparently, few creatures under 18 have ever heard of the medium, between their gum-chewing, IMing, and illegal downloading…). As Microsoft Research sociologist Marc Smith points out in the news report on the SNARF system, there’s already an immense reservoir of personal behavioral information on the average PC hard drive. Taking advantage of this information to surface implicit relationships between correspondents, whether through graph analysis (as our KudoRank proposal does), interaction timelines (apparently a key part of SNARF and its USENET predecessor, NetScan), or the vocabulary and relationships in use (Andrew McCallum’s Author-Role-Topic model), is a sign of a broader trend: putting the vast computing resources available today to work on the treacherous problem of getting computers to read email for us :)


SNARF begins indexing e-mail messages on initial launch. Once it’s finished indexing, it shows a window with three panes (roughly sketched in code after this list):

  • Top pane: people who have sent recent unread e-mail addressed or cc’d to the mailbox owner.
  • Middle pane: people who have sent recent unread e-mail addressed to anyone.
  • Bottom pane: all people mentioned in any e-mail the mailbox owner has received in the past week.
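As a rough illustration of that grouping (the Message fields below are my own simplification, not SNARF’s actual data model), here is how the three panes might be computed from a week of messages:

```python
# Minimal sketch (hypothetical field names, not SNARF's real model): grouping
# senders into SNARF-style panes by addressing and read status.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Message:
    sender: str
    to: list            # direct recipients
    cc: list            # copied recipients
    mentioned: list     # everyone appearing in any header
    date: datetime
    unread: bool

def snarf_panes(messages, owner, now=None, window=timedelta(days=7)):
    now = now or datetime.now()
    recent = [m for m in messages if now - m.date <= window]
    top = {m.sender for m in recent if m.unread and owner in m.to + m.cc}
    middle = {m.sender for m in recent if m.unread}
    bottom = {person for m in recent for person in m.mentioned}
    return top, middle, bottom
```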

SNARFing your way through e-mail
By Ina Fried
Story last modified Fri Dec 02 04:00:00 PST 2005

With the world’s in-boxes overflowing with unread messages, researchers at Microsoft are offering up a tool they hope will help people sort through the morass.

The software maker this week released a free utility that aims to sort e-mail in a new way: It can organize messages not just by how recent they are, but also by whether the recipient knows the sender well.

The program, known as SNARF, bases its approach on the fact that people tend to interact more with messages from those they care about.

“You don’t respond to everybody, and not everybody responds to you,” said Marc Smith, one of the Microsoft researchers who developed SNARF, or Social Network And Relationship Finder.

Though SNARF is a research project for now, Microsoft said that similar features could soon make their way into its e-mail products.

Smith boils it down this way. His computer, for all its power, serves up his e-mail without distinguishing junk mail from messages sent by close friends. His dog, on the other hand, learns who his friends are and stops barking at them.

“If my dog can tell who strangers are, apart from friends…my e-mail reader should be able to do the same,” he said.

The task is increasingly important as people become overloaded with e-mail. Though many like to be alerted to new messages, the barrage of notifications is now so frequent for many workers that it is nearly impossible to get any creative work done without being interrupted.

“The machines got us into this problem,” Smith said. “They are going to have to get us out of it.”

Smith calls today’s method of sorting e-mail the “ADD sort order,” in which the newest messages are constantly presented first, regardless of who sent them. There has to be a better way, he said.

Figuring out who your friends are may not seem like a task well-suited to computers, but Smith said it’s simply a matter of making sure that the computer is adding up the right things.

“The beautiful thing about computers is that they are really, at their core, accounting machines. They love to count things. Social relationships are countable,” Smith said.

In SNARF’s case, the software looks at how often people correspond, what content appears in the bodies of their messages, and how often they reply to one another’s correspondence, among other things.
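As a rough illustration of that kind of counting (the weighting below is my own invention, not SNARF’s actual scoring), one could tally interactions per correspondent and weight the owner’s replies more heavily:

```python
# Minimal sketch (illustrative weighting, not SNARF's): count interactions
# per correspondent, treating the owner's replies as a stronger signal.
from collections import Counter

def correspondent_scores(interactions, owner):
    """interactions: iterable of (sender, recipient, is_reply) tuples."""
    scores = Counter()
    for sender, recipient, is_reply in interactions:
        if recipient == owner and sender != owner:
            scores[sender] += 1        # they wrote to the owner
        if sender == owner and is_reply:
            scores[recipient] += 2     # the owner replied: a stronger signal
    return scores

scores = correspondent_scores(
    [("alice", "me", False), ("me", "alice", True), ("spammer", "me", False)],
    owner="me",
)
print(scores.most_common())  # [('alice', 3), ('spammer', 1)]
```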

The concept is not new. The idea of “social sorting” has been explored by Microsoft and others for years. Researchers at Hewlett-Packard, for example, looked at the patterns of who e-mailed whom within HP Labs. Doing so, the researchers found, turned out to be a more effective means of determining working groups than looking at an organizational chart.

Microsoft has also used social sorting to help users wade through Internet forums, in a research effort known as NetScan.

Smith points out that our PCs already know tons about us, in many cases storing years’ worth of messages and replies. “This is more than the diarists of the 17th and 18th centuries,” he said.

SNARF can also sort messages based on whether they were sent directly to you, whether you were copied on the message or whether you were part of a distribution list.

While such an approach can help sort through the sea of messages, it’s not flawless. Smith noted that not everyone who is important to him returns his e-mails.

“My mother, I’m sorry to say, just never replies to my e-mail,” he said, quickly noting that it’s no reflection on the quality of his relationship with her.

Smith said there is a strong chance the social sorting techniques will find their way into Microsoft products. There have been feelers from the teams responsible for Outlook, Exchange, Hotmail and Outlook Express, he said.

“We’re having lots of meetings with people,” Smith said.

For now, the research team has put its software out there as a download for people to experiment with. Officially, Microsoft says SNARF will definitely work with Outlook 2003 and Windows XP Service Pack 2, though Smith said it may work with other software. SNARF also requires the .Net Framework; the installer will add it if a computer does not already have the operating system add-on.

Smith is also working on expanding the research project in several ways. For example, the current version cannot be customized so that a user can say that a certain friend is important, even though they only exchange e-mail once a year.

Allowing users to “tag” e-mails in various ways is among the features that the company is looking at. “We are exploring a range of ideas around that,” he said. “It’s a very important direction,” he added, noting that the next version of Outlook also includes new tagging capabilities.

Moving onto cell phones would be another good move for SNARF, he said. “If you are not at your computer to do triage, having 150 e-mails can be daunting,” he said. “It would be nice to have the seven e-mails from colleagues in a separate folder.”