Visitors Items
November 30, 2006

T3 12/7: Rick Wesson on detecting botnets, click-fraud and ID theft

Events, Talks, Visitors By: ams

Rick Wesson, of Support Intelligence, will speak about the tools he is building to detect global abuse patterns using realtime blacklists, spamtraps and honeypots.

Understanding what your network is doing to the rest of the community is difficult; we discuss how to use our tools to understand how your network is abusing other networks, and we show graphs and statistics of trends, globally and within the USA, in identity theft and click-fraud.
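Wesson’s tools themselves aren’t public, but one of the building blocks named above, the realtime blacklist, is easy to illustrate. DNS-based blacklists (DNSBLs) are queried over ordinary DNS: reverse the octets of an IPv4 address, append the blacklist zone, and look the name up. A minimal sketch in Python follows; the Spamhaus zone is just one well-known example, and a system like Support Intelligence’s would aggregate many such feeds:

    import socket

    def dnsbl_listed(ip: str, zone: str = "sbl-xbl.spamhaus.org") -> bool:
        """Check an IPv4 address against a DNS-based blacklist (DNSBL).

        DNSBLs answer over plain DNS: reverse the address's octets,
        append the blacklist zone, and do an A-record lookup. Any
        answer (conventionally 127.0.0.x) means "listed"; an NXDOMAIN
        failure means "not listed".
        """
        name = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(name)   # resolved: the address is listed
            return True
        except socket.gaierror:          # NXDOMAIN: not listed
            return False

    # 127.0.0.2 is the conventional "always listed" DNSBL test entry
    print(dnsbl_listed("127.0.0.2"))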

Rick has worked in the IETF and ICANN on DNS, WHOIS, and registry and registrar protocols. He served for four years as the CTO of the Registrars Constituency in the GNSO/ICANN framework and for two years on the ICANN Security and Stability Committee.

His talk is scheduled for Thursday 12/7 at 4PM.

October 26, 2006

T3 10/26: Alex Piner on Semantically Interlinked Online Communities (SIOC)

Talks, Visitors By: ams

Alex Piner, an evangelist and activist for a practical Semantic Web and organizer of the Santa Monica SemWeb Meetup group, will be visiting Palo Alto to meet with Marty Tenenbaum and Rohit Khare and to present a new initiative co-sponsored by DERI: Semantically Interlinked Online Communities (SIOC):

SIOC provides methods for interconnecting discussion methods such as blogs, forums and mailing lists to each other. It consists of the SIOC ontology, an open-standard machine-readable format for expressing the information contained both explicitly and implicitly in internet discussion methods; of SIOC metadata producers for a number of popular blogging platforms and content management systems; and of storage and browsing/searching systems for leveraging this SIOC data.
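The SIOC ontology is RDF-based, so SIOC data is ultimately a set of triples. As a rough sketch of the kind of statements it encodes, here is a blog post described with Python’s rdflib; the forum and post URIs are invented for illustration, while sioc:Forum, sioc:Post, sioc:has_container and sioc:content are real terms from the SIOC namespace:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    SIOC = Namespace("http://rdfs.org/sioc/ns#")

    g = Graph()
    g.bind("sioc", SIOC)

    # Hypothetical identifiers for a blog and one of its entries
    forum = URIRef("http://example.org/blog")
    post = URIRef("http://example.org/blog/2006/10/26/sioc-talk")

    g.add((forum, RDF.type, SIOC.Forum))      # the blog is a sioc:Forum
    g.add((post, RDF.type, SIOC.Post))        # the entry is a sioc:Post
    g.add((post, SIOC.has_container, forum))  # the post lives in the blog
    g.add((post, SIOC.content,
           Literal("SIOC interlinks blogs, forums and mailing lists.")))

    print(g.serialize(format="turtle"))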

They’ve also got some nice one-page summary PDFs, particularly for users and developers. It’s an innovative, community-focused effort by all accounts, and it shares some of the ideals of microformats, as discussed in a blog post by John Breslin:

So, with this in mind and in terms of SIOC, I hope that we can use Microformats to help create closer interlinks between the objects that make up online communities – I’m talking mainly about posts, forums / blogs, communities and user profiles. I’m going to start with three or four things I want SIOC to do for Microformats and vice versa, and then we can go from there. These correspond to some of the links shown in my “connecting discussion clouds” picture.

Alex will be visiting Thursday afternoon, with a talk scheduled for the usual time of 4PM.

September 7, 2006

T3 9/7: SystemOne’s collaborative KM

Talks, Visitors By: ams

On the recommendation of both Thomas Vander Wal (information architect and past T3 speaker) and Jerry Bowles (publisher of Enterprise Web 2.0 and fan of KnowNow’s), we were able to invite Bruno Haid of SystemOne to come by CommerceNet this week during the team’s trek from Austria to Austrian-held California.

A screencast of their technology has set the blog world buzzing about their new approach to collaborative knowledge management:

“System One has all the web 2.0 buzzwords under the hood, but they focus on a simple to use tool that pulls together the best of the new components, but only where it makes sense to create a simple tool that addresses complex problems.” — Thomas Vander Wal

“Think of how much more productive your organization would be if everyone worked at the same level of your star performers. Imagine an industrial-strength enterprise app that is so simple to use that it requires no training or special knowledge to learn and so smart that it makes all users instantly more productive? Imagine the knowledge office equivalent of the supermarket revolution that turned every checker into a whiz.” — Jerry Bowles

“The concept of a search engine that can search across the many and varied systems inside a company firewall is a very appealing one. System One does this, but also extends it to a true read/write app – enabling people to take notes and share them with their workmates. This is a very promising piece of software…” — Richard MacManus on ZDNet

“For a start there’s seamless integration of enterprise info and authoring with real-time analysis of what you write. Although there are some familiar technologies involved as well (Wiki/blogging, syndication etc), the tech is presented in a way that from a user’s point of view, it gets out of the way and just works. ” — Danny Ayers, XML/RDF researcher

August 24, 2006

T3 8/24: Alex Russell of Dojo on Comet event notification

Meetings, Visitors By: ams

It’s short notice, but we’re glad to welcome back one of CommerceNet’s Open Source partners, Alex Russell of Dojo Toolkit fame. We’ll be discussing our experience with streaming DHTML event notification over HTTP into a browser using JSON-like object encoding, and recent proposals within Dojo for a new Comet-style event notification service that also uses JSON to carry its notification payloads.
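Comet is less a wire format than a technique: the browser keeps an HTTP request open and the server withholds its response until there is an event to push. As a minimal sketch of the client side of such a long-poll loop (written in Python rather than browser DHTML, with an invented endpoint and payload shape):

    import json
    import urllib.request

    # Hypothetical endpoint: the server holds each GET open until an
    # event is ready, then answers with a JSON-encoded notification.
    ENDPOINT = "http://example.org/comet/events"

    def poll_forever() -> None:
        """Minimal Comet-style long-poll loop.

        Each request blocks at the server until there is something to
        deliver; the client handles the JSON payload and reconnects
        immediately, so notifications arrive with low latency even
        though the server never initiates a connection.
        """
        last_id = 0
        while True:
            url = f"{ENDPOINT}?since={last_id}"
            with urllib.request.urlopen(url, timeout=300) as resp:
                event = json.load(resp)
            last_id = event["id"]              # assumed payload fields
            print("notification:", event["payload"])

    if __name__ == "__main__":
        poll_forever()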

August 17, 2006

T3 8/17: Christopher Allen on SynchroEdit

Talks, Visitors By: ams

Christopher Allen is a futurist who has been working in the area of social software for over 15 years. He founded Consensus Development in 1988 as a groupware engineering firm; Consensus later went on to develop the SSL standard with Netscape Communications, a security standard which is now at the heart of all secure commerce on the World Wide Web. He later founded Skotos Tech in 1999, an online game channel centered on creating online communities. More recently, Christopher consults for social software companies such as SocialText, Opinity, and various other startups, and speaks on the topic of social software at various conferences. Since 2003 he has been sharing his experience by blogging about social software and online trust. Some of his most popular articles have been Tracing the Evolution of Social Software, Four Kinds of Privacy, Intimacy Gradient and Other Lessons from Architecture, Progressive Trust, On Being an Angel, a series of articles on the Dunbar Number, and a recent series of articles on types of Collective Choice.

August 10, 2006

T3 8/10 Bonus: Jeremy Ruston on TiddlyWiki

Talks, Visitors By: ams

In a stroke of STIRRing luck, we ran into the founder/creator/incantor of TiddlyWiki, one of the lightest-weight and most uniquely in-browser wiki packages (a “reusable non-linear personal web notebook”) out there, and invited him to join us on very short notice between talks at (insert other Valleywag-able Web search companies here).

Welcome to TiddlyWiki, a free MicroContent WikiWikiWeb created by JeremyRuston and a busy Community of independent developers. It’s written in HTML, CSS and JavaScript to run on any modern browser without needing any ServerSide logic. It allows anyone to create personal SelfContained hypertext documents that can be posted to a WebServer, sent by email or kept on a USB thumb drive to make a WikiOnAStick. It also makes a great GuerillaWiki.

August 10, 2006

T3 8/10: Duncan McCall on PublicEarth

Talks, Visitors By: ams

[Thursday the 10th’s talk will be given by Duncan McCall on a concept he’s rallying effort around; stay tuned for the 17th, which will likely be about SynchroEdit and other two-way-web technology…]

No one will deny that the ‘geospatial’ revolution is well underway. With an explosion in GPS-based personal navigation devices, the rapid GPS-enabling of the ubiquitous cellphone platform, and the mainstream adoption of internet mapping services such as Google Local and Microsoft Virtual Earth, the stage is truly set for us to embark on the ‘next generation’ of location-based services.

So it’s not hard to envisage a very near future in which we truly have an annotated planet, where users can interrogate a digital device on the fly to view rich layers of location-specific, categorized content that literally brings the world around them alive with specific, organized, concise and up-to-date descriptions of the points of interest in their physical environment.

Find out exactly what that scenic viewpoint off the highway ahead of you has in store, learn the history of a building as you stand in front of it, locate the description of a great local hike, find a pickup game of soccer in a local park on the fly, or search for a nearby hotel with exactly the right amenities…

Few will deny that this reality will happen; indeed, many of the pieces are already in place: the mapping and satellite imagery infrastructure, the delivery mechanisms (personal navigation devices, cellphones and ‘smart’ mobile devices), a hugely enthusiastic user community, and of course the internet to tie many of the pieces together.

But there is one critical component missing if we truly want to experience this ‘next generation’ of location-based services.

That is the content.

Currently no single entity is focused on creating a data source of meaningful next-generation location-based content in an organized, categorized, relevant and usable format. Much of that content already exists in fragmented forms, but until now there has been no one unifying entity to capture, edit and create all of this critical location-based content.

It is the goal of PublicEarth to become the number one data source for this next generation of location-specific data and content.

Importantly, the goal of PublicEarth is not to be another narrowly focused, technology-tour-de-force Web 2.0 app looking solely to monetize eyeballs; rather, it will build out a worldwide directory of organized, structured, relevant, meaningful location-based content that can deliver (and derive) huge value from a demographically wide, international user base, with a multitude of very real and meaningful commercial opportunities.

PublicEarth will create a database of relevant, concise, constantly updated content, created and managed by users, for users: content attached through geographical coordinates to relevant points of interest in the physical world, describing in organized detail the attributes of those points of interest, accessible through a simple, intuitive web mapping interface, with the goal of making much of the content available to mobile devices.

This open and ever-evolving database, ‘describing the world around us’ in organized, structured detail, will become the de facto standard for the next generation of mapping and mobile geographically specific content, with a very large participating user community operating in an open-source, collaborative wiki model, tagging a huge range of searchable, organized content to anything and everything in the physical world.

Access to this database will be licensed to mobile carriers and content providers looking to sell access to location-specific rich content on the new generation of location-aware mobile devices (smartphones, GPS units, OnStar, and other platforms wanting to provide an additional layer of depth and richness of POI data), with additional revenue driven by the huge opportunity for geo-specific paid content and advertising.
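PublicEarth has not published a schema, but the model described above, free-form categorized content pinned to coordinates, maps naturally onto a point-of-interest record like the following sketch; every field name here is invented for illustration:

    import json

    # Hypothetical point-of-interest record: user-authored content keyed
    # to a coordinate, plus category tags that make it searchable. All
    # field names are invented; PublicEarth has not published a schema.
    poi = {
        "name": "Scenic viewpoint off Highway 280",
        "location": {"lat": 37.4419, "lon": -122.1430},
        "categories": ["viewpoint", "hiking"],
        "description": "Short pullout with a wide view over the valley.",
        "last_updated": "2006-08-10",
        "contributors": ["duncan"],
    }

    print(json.dumps(poi, indent=2))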

August 3, 2006

T3 8/3: Server-side Microformats parsing and MFML

Talks, Visitors By: ams

This summer, Kiran Hegde and Michael Gysbers have been working with CommerceNet as software engineering practicum students from Carnegie Mellon University’s West Coast campus. They have been working with microsear.ch to extend its capabilities to parse microformatted HTML pages crawled from the Web by a server, in addition to our existing JavaScript-based Miffy technology for analyzing and editing microformatted pages within a browser.

At this talk, they will present the results of their port to server-side Java and, more importantly, report on the software engineering process that led them there: experimenting with re-hosting the JavaScript version with Rhino, creating a test environment, tracking moving requirements, and attempting to formalize what it means to parse the entire range of microformats. One of the innovations within CommerceNet’s microformats research they built upon was MFML, an internal microformats markup language that makes it easier for programmers to work with microformatted data as if it were XML (and use XQuery, etc.).
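Miffy and MFML are internal to CommerceNet, so what follows is only a rough sketch of what server-side microformat parsing involves, shown on the hCard microformat using Python and BeautifulSoup; the XML element names stand in for MFML, whose actual vocabulary has not been published:

    from xml.etree import ElementTree as ET
    from bs4 import BeautifulSoup

    HTML = """
    <div class="vcard">
      <span class="fn">Kiran Hegde</span>, <span class="org">CommerceNet</span>
    </div>
    """

    def hcards_to_xml(html: str) -> str:
        """Extract hCard microformats and re-emit them as plain XML.

        Microformats ride on HTML class attributes: class="vcard" marks
        a contact, with "fn" (formatted name) and "org" sub-properties.
        The <cards>/<card> output elements are invented stand-ins for
        MFML, so downstream code could query them with XQuery and such.
        """
        soup = BeautifulSoup(html, "html.parser")
        root = ET.Element("cards")
        for vcard in soup.find_all(class_="vcard"):
            card = ET.SubElement(root, "card")
            for prop in ("fn", "org"):
                node = vcard.find(class_=prop)
                if node is not None:
                    ET.SubElement(card, prop).text = node.get_text(strip=True)
        return ET.tostring(root, encoding="unicode")

    print(hcards_to_xml(HTML))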

As virtual interns, they have been working by phone from Redmond, WA and Denver, CO, respectively; this talk will be delivered via LiveMeeting at our offices at 169 University Ave. and is open to all.

Title: Miffy – A Microformat Parser

Team: Team Miffy (Kiran Hegde and Michael Gysbers)

Sponsor: CommerceNet

Brief Abstract: Team Miffy created a configurable and testable server-side microformat parser for CommerceNet. The parser accepts well-formed XHTML web pages as input and outputs MFML (MicroFormat Markup Language) and formatted XHTML.

Time: 4:00 PM PDT, Thursday, August 3rd

LiveMeeting: https://www.livemeeting.com/cc/cmuwest/join?id=NBP3FK&role=present&pw=HT3P27

Teleconference: (605) 772-3001, access code 210714#

July 13, 2006

T3 7/13: Distributed Archiving for Long-Term Storage

Talks, Visitors By: ams

In mid-July it’s our pleasure to host the much-honored researcher (and former chair of the HTTP Working Group) Larry Masinter to talk about an area he’s been interested in for the past few years: long-term reliance on electronic documents. From an abstract of a paper he recently co-authored with Michael Welch on the topic:

This paper analyzes the requirements and describes a system designed for retaining records and ensuring their legibility, interpretability, availability, and provable authenticity over long periods of time. In general, information preservation is accomplished not by any one single technique, but by avoiding all of the many possible events that might cause loss. The focus of the system is on preservation in the 10 to 100 year time span—a long enough period such that many difficult problems are known and can be addressed, but not unimaginable in terms of the longevity of computer systems and technology.

The general approach focuses on eliminating single points of failure – single elements whose failure would cause information loss – combined with active detection and repair in the event of failure. Techniques employed include secret sharing, aggressive “preemptive” format conversion, metadata acquisition, active monitoring, and using standard Internet storage services in a novel way.
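The abstract names secret sharing without specifying a scheme. The simplest variant is n-of-n XOR splitting, in which every share is required and any smaller subset reveals nothing; here is a minimal sketch (a real archive would more likely use a threshold scheme such as Shamir’s, so that some shares may be lost):

    import os
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split(secret: bytes, n: int) -> list[bytes]:
        """n-of-n XOR secret sharing: n-1 random one-time pads plus a
        final share chosen so that all n shares XOR back to the secret.
        Fewer than n shares are indistinguishable from random noise, so
        no single storage provider can read or leak the record."""
        pads = [os.urandom(len(secret)) for _ in range(n - 1)]
        return pads + [reduce(xor, pads, secret)]

    def combine(shares: list[bytes]) -> bytes:
        """Recover the secret by XOR-ing all shares together."""
        return reduce(xor, shares)

    shares = split(b"the archived record", 3)
    assert combine(shares) == b"the archived record"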


[Larry Masinter is] a Principal Scientist in the Office of Technology at Adobe, where I’ve been since late 2000. These days, I’m looking at product interoperability within Adobe. In the past, I’ve worked on forms technology and the problems of long-term document archives and document validity; a paper describes some work in that area.

In 2000, I had a brief tenure at AT&T Labs, where I learned a lot about the telecommunications industry, standards, and corporate politics. Before that, I was at Xerox PARC: in the ’90s, I worked mainly on document management, Web and Internet standards, and Internet-based document services; in the ’70s and ’80s, I worked on the Interlisp system (from microcode to programmer tools to the graphics environment) and the Common Lisp standard. In the early 70’s, I worked on the DENDRAL project at Stanford.

July 6, 2006

T3 7/6: Innovations in Programming 3D Virtual Worlds

Talks, Visitors By: ams

We’re going to hear about the latest technology being unveiled by Media Machines from VRML and X3D pioneer Tony Parisi:

FLUX™ is the premier platform for developing high performance real-time 3D applications for the web. Flux is the next logical step in the evolution of real-time 3D software: a content platform that brings the power of the 3D rendering pipeline to the web, enabling the rapid development of real-time communications applications.

Flux provides content creators with the capability to create real-time animations, virtual worlds, physical simulations, and advanced user interfaces that can be deployed on desktops and mobile devices, in web pages or as components of larger applications.

Media Machines offers Flux in several editions. The Flux product family includes:

  • Flux Player – Flux Player provides a fully-featured, high performance implementation of the X3D, VRML and MPEG-4 standards packaged in a lightweight, just-in-time installed web browser plugin. Flux Player is available for free for personal use. Visit our download page to try out the latest version of Flux Player.

  • Flux Studio – Coming on June 26th – Flux Studio is an affordable authoring and publishing tool that allows content creators to unlock the full potential of Flux as a commercial web deployment platform. In the next release, Media Machines will be combining Flux Player and Flux Studio into a complete solution that includes advanced content creation features such as binary compression and encryption.

  • KML to X3D Translator (KML2X3D) – KML2X3D is a simple tool that translates KML, the Google Earth markup language, into X3D so that authors can take advantage of X3D’s rich feature set to create interactive ads, presentations and “mashups” for embedding within Google Earth. KML2X3D is free to use and the source code is also available under the GNU Lesser GPL (LGPL) license. (A toy sketch of such a translation follows below.)
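KML and X3D are both XML, so the heart of such a translator is a tree-to-tree rewrite. Here is a toy sketch in Python, using the KML 2.1 namespace of that era and emitting a placeholder box at each placemark; a production translator would also handle lines, polygons, styles, and the geodetic-to-Cartesian projection omitted here:

    from xml.etree import ElementTree as ET

    KML_NS = "{http://earth.google.com/kml/2.1}"  # KML namespace circa 2006

    KML = """<kml xmlns="http://earth.google.com/kml/2.1">
      <Placemark>
        <name>CommerceNet</name>
        <Point><coordinates>-122.1613,37.4447,0</coordinates></Point>
      </Placemark>
    </kml>"""

    def kml_points_to_x3d(kml: str) -> str:
        """Translate KML Placemark points into X3D Transform nodes.

        For each placemark, read its lon/lat/alt coordinates and emit
        an X3D Transform positioned there, holding a placeholder Shape.
        """
        scene = ET.Element("Scene")
        for pm in ET.fromstring(kml).iter(f"{KML_NS}Placemark"):
            coords = pm.find(f"{KML_NS}Point/{KML_NS}coordinates").text
            lon, lat, alt = coords.strip().split(",")
            transform = ET.SubElement(scene, "Transform",
                                      translation=f"{lon} {lat} {alt}")
            ET.SubElement(ET.SubElement(transform, "Shape"), "Box")
        return ET.tostring(scene, encoding="unicode")

    print(kml_points_to_x3d(KML))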