EC’05-SSA (Sponsored Search Auctions) Workshop Notes
These are Rohit Khare’s notes on the SSA Workshop; links to each of the papers can be found on the SSA Workshop Agenda Page.
Pennock: History
In the beginning: exact match, rank by bid price, pay per click, human editors – worked, if ad hoc.
Today & tomorrow:
- AI match
- Rank by expected revenue
- 2nd price (proxy bid)
- Pay-per-click, -impression, -conversion
- budget optimization
- contextual
- automating editorial filtering
- “exploration vs. exploitation”
See also Mehta, Saberi, Vazirani & Vazirani: AdWords and generalized on-line matching.
'''This workshop nearly doubles the academic literature on the topic.'''
Interesting how interested MSR is: 4 papers at this workshop alone!
Y!RL has a data sharing policy in place…
Kursad’s Welcome. He has two papers in this area so far: Internet ad pricing models, “A theory of bidding in search phrase auctions: Can bidding wars be collusive?”
Introductions (20ppl)
- Tuomas Sandholm
- first automation of combinatorial bidding for computational resources (1991). CMU. Founder/CTO of CombineNet (with Parkes). Also interested in multi-agent learning. New to this sub-field.
- Michael Ostrovsky
- Harvard Econ graduate, game theory, micro. About to start at Stanford in July.
- Jim Jansen
- Penn State IST – information-seeking perspective
- Siva Viswanathan
- UMd Business school: intersection of game theory and economics. Online intermediaries.
- Jan Pedersen
- Chief scientist at Yahoo!Search.
- David Radicheck
- grad student Berkeley, Y!RL summer intern. Applied mechanism design.
- Samuil Angelov
- TU Eindhoven, automatic-contracting, looking to test it in this field.
- Vandana Ramachandran
- 2nd year UMd PhD with Siva. Asymmetry, how market makers are changing that.
- Animesh Animesh
- thesis student at UMd as well, looking to learn more about this area.
- ? Chen {presumably Jianqing Chen, of the UT Austin divisible-goods paper below}
- UT Austin, grad student, mechanism design in online auctions
- Chris Meek
- MSR, new to this area, it’s exciting to learn more. Machine-learning person by background.
- Brendan Kitts
- iProspect Inc, developed a bidding system over the years, currently managing $3M at a time.
- Joshua Goodman
- MSR, worked on (email) spam, started in machine learning, moved to economic mechanisms. Now interested in all the bad things you can do: clickfraud is a great challenge area. He's started a new group (2 people so far) interested in spam, and is running the anti-spam conference this year (not the MIT one)
- Qi Lu; Y! VP Engg of search marketplace
- Madhu Therani
- U Az. IS dept, got into this area 6 mos ago, contacted by an SEO looking for advice.
- Badrul Sarwar
- research scientist in personalization at Shopping.com (now eBay). {Formerly of Outride, Sehda.com, part of their new-ish research unit}.
…two other students joined in later.
Goodman: Pay-percent-of-impressions.
09:00a - 09:30a Pay-Per-Percentage of Impressions: An Advertising Method that is Highly Robust to Fraud Joshua Goodman, Microsoft
Pie analogy: he found a fake plastic pie in his cafeteria – why does this fraud go on? The attacker is trying to reduce the price of pies… so the cafeteria deploys sophisticated fake-pie detection algorithms, the attackers add scent to the plastic ones, and so on, and so on.
Fix? Offer to buy 10% of all the pies for $1. So I don’t care that they deliver some fake pies, I got the one slice I wanted.
Cute detached-head-shot of Reyes, GOOG CFO: “the biggest threat to the Internet economy… something has to be done about this really, really quickly, because potentially, it threatens our business model.” As early as 1999, hackers on IRC colluded to click each others’ ads.
- broad match does not work / prefix match does
- Importance of being truly random
- Suggest it’s a great system to use in combination with other methods (e.g. for the most fraud-susceptible terms)
Click Fraud, type 1: click on a competitor’s ads to tap them out. You pay less if you join the auction later, and it reduces their ROI. Pay-per-conversion is one mooted way to tackle this, but then you get into advertiser fraud, since advertisers would under-report their conversions. Another example: he buys ads on terms for other email conferences, so those organizers might click him to get him off of “their” term. Type 2: syndication click fraud, clicking on ads running on your own site. {Side discussion ensues about how much money GOOG makes from syndication by content-sensing} {Kursad: doesn’t type-1 clicking increase your competitor’s CTR, raising their competitive placement and perhaps canceling out? No, not enough.}
Impression Fraud: A little harder to do, since you don’t want impressions on your ''own'' ads (one version: impression-fraud everyone, click-fraud ''yourself'', then even have the chutzpah to demand a refund!). Goal is to lower competitors’ CTR until they fall off.
Solution: pay for share – {people adding crap to the mix just falls out of the math} – except for a new kind of fraud: misinformation fraud. This attack runs up lots of fakes to get a novice buyer to pay a lot for a “highly-trafficked” term – in this model, advertisers must have enough info to make their ''own decisions'' about what %age is probably not fraudulent when bidding. The best predictor will be their own previous experience.
-> haven’t we just shifted the detection problem to the advertiser?? ''Yes''! Shifting the burden back to them is a good thing, gives them more options, but most of all delinks the incentives from the all-powerful ad network.
Other advantages: it seems easier to determine the aggregate %age fake than which clicks are fake. It’s important, though, that you get a genuinely random subsample for your 10%. Don’t settle for 1/10th of the day, or 1/10th of the geography, etc – you need a nongame-able subset. {Reminds me of cut-and-choose rules in cryptography}
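{A toy simulation of the “falls out of the math” claim – all numbers invented. If each impression, real or fake, is independently assigned to an advertiser with probability equal to their purchased share, the real leads they receive depend only on that share and the true volume, not on anything an attacker injects:}

```python
import random

random.seed(1)
real, fake, share = 100_000, 50_000, 0.10  # invented volumes; 10% purchased

# Each impression (real or fake) independently goes to this advertiser
# with probability equal to their purchased share.
got_real = sum(random.random() < share for _ in range(real))
got_fake = sum(random.random() < share for _ in range(fake))

print(got_real / real)  # ~0.10 of the *real* traffic, whatever `fake` is
print(got_fake)         # fakes arrive too, but they hit every buyer alike
```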
Broad match: you must not oversell. The markets for “digital” and “camera” overlap. Historical data can’t disambiguate, b/c that retrospective also has fraud in it.
Fix: use prefix matching. Property that must be maintained? “The probability that I get any real lead is independent of the actions of any other advertisers.”
Ultimately, we’d like an auction algorithm that unifies PP%, prefix PP%, PPC, and exact-PPC. First, normalize all ads to prefix-PP%; create pseudo-markets by aggregating prefixes (and running hierarchical sub-auctions).
Concluding thought: may seem wacky, but this is what we do in ‘old media’ – you buy %ages of time, and large ad buyers may be happy with this again.
Q: how do you account for positioning-within-pages? See paper: there’s a point scheme for making those distinct markets. Q: how continuous? A: paper argues for 1 week windows. Q (Jan): if the issue is to randomize impressions to get a share of the true impressions, can we just apply this to budget constraints… {actually, I didn’t capture this point accurately}
Meek: Stochastic and contingent-payment auctions
09:30a - 10:00a Stochastic and Contingent-Payment Auctions Christopher Meek, Microsoft Research David Maxwell Chickering, Microsoft Research David Bruce Wilson, Microsoft Research
Definition: auction for items where payments are contingent on some future event. Today: allocation is by-bid/by-expected-revenue, with “Vickrey-like” pricing {?}
Problem 1: Vindictive bidding: a spoiler raises the price to just below the max (i.e. incentive-incompatible if your actual goal is increasing a competitor’s expenditures…)
Problem 2: Uncertainty – low CTR estimates are self-reinforcing, so such hypotheses need to be re-tested (and that “costs” the auctioneer)
His proposal: allocate and price items stochastically. Properties: not as efficient, but can limit vindictive bidding, increase # participants, learn probabilities, etc.
Allocation rule is a function from the bid-vector to a distribution over allocations (in general). In the one-item case, it simplifies to the “marginal allocation” function: a cumulative distribution function giving the probability that the i-th bidder wins with a bid at price p-sub-i. Suppose bids were $8/$1/$1 – proportional (“poll”) allocation gives the high bidder an 80% share, while exponentiating the shares tunes it towards winner-take-all; this exponent Beta lets us tune between the two extremes.
“Condex {conditional expectation} pricing rule” – price charged the winner is less than the expected value of the ‘second-price’ bidder.
Go back to the 8/1/1 case: with a very high Beta, anyone who bids over $1 would win, so the winner should pay nearly $1 – what the $8 player would have paid if they “bid” a density function rather than the $8 “point”. He shows one example curve where the $8 player ‘spread their bids’ and ended up paying, on average, $2.
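{A minimal sketch of an exponentiated allocation rule of this flavor, assuming the win probability is proportional to bid^Beta – the paper’s exact functional form may differ:}

```python
import random

def allocation_probabilities(bids, beta=1.0):
    # Win probability proportional to bid**beta: beta=1 is the
    # proportional "poll" allocation; beta -> infinity approaches
    # winner-take-all.
    weights = [b ** beta for b in bids]
    total = sum(weights)
    return [w / total for w in weights]

def draw_winner(bids, beta=1.0):
    # Sample one winner from the allocation distribution.
    probs = allocation_probabilities(bids, beta)
    return random.choices(range(len(bids)), weights=probs)[0]

bids = [8, 1, 1]
print(allocation_probabilities(bids, beta=1))  # [0.8, 0.1, 0.1] - the 80% case
print(allocation_probabilities(bids, beta=4))  # ~[0.9995, ...] - near winner-take-all
```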
Next step is transforming this into a “workable” scheme, one that uses a random number assigned to each bidder (a ‘coin’). Each bidder’s factor is used to discount their effective bid – but others’ factors also discount what the winner actually pays (!)
For Problem 1 (vindictive bidding), the spoiler bears some risk of winning – and they’d lose real money. Problem 2 is in the setting of a repeated game: the auctioneer faces a classic exploration-vs-exploitation tradeoff, and recent literature looks at this very old problem from a ‘reinforcement-learning’ perspective. E.g. interval estimation, where we “fuzz upward” CTRs to the 95% level of the PDF. Or the 1933-era Thompson strategy of drawing randomly from the two PDFs.
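{Minimal sketches of the two exploration strategies mentioned, using Beta posteriors over CTRs; the choice of a Beta(clicks+1, misses+1) posterior and the use of scipy are my illustrative assumptions:}

```python
from scipy.stats import beta

def interval_estimate_ctr(clicks, impressions, level=0.95):
    # "Fuzz upward": score an ad by the upper 95% quantile of the
    # posterior over its CTR, so uncertain (rarely shown) ads keep
    # getting re-tested.
    return beta.ppf(level, clicks + 1, impressions - clicks + 1)

def thompson_sample_ctr(clicks, impressions):
    # Thompson (1933): draw a CTR at random from each ad's posterior
    # and rank by the draws; uncertain ads occasionally win the slot.
    return beta.rvs(clicks + 1, impressions - clicks + 1)

print(interval_estimate_ctr(50, 1000))  # tight posterior: bound just above the 5% raw CTR
print(interval_estimate_ctr(1, 20))     # wide posterior: bound several times the 5% raw CTR
```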
Q (Kitts): observation that Goog and others seem to rotate ads in and out randomly, in effect, for some secret reasons. A: they may appear to be very precise in their language, but the fudging is around CTR. A (Kitts): we found this out because we raise our clients’ budgets to $250K/day, and then our agents dynamically monitor pricing/usage to override Goog budget control (and thus our clients’ ads don’t get rotated out of #1).
Sandholm (with Parkes): Optimize-and-Dispatch
10:00a - 10:30a Optimize-and-Dispatch Architecture for Expressive Ad Auctions David C. Parkes, CombineNet, Inc. and Harvard University Tuomas Sandholm, CombineNet, Inc. and Carnegie Mellon University
Bit more of a framework, not really a particular auction design. Expressiveness has its advantages in other markets, but the challenge is combinatorial explosion in the ad domain. Historically with OR-matching; Sandholm added XOR in 98/99 and OR-of-XORs, then Nisan added XOR-of-ORs in 2000, and OR* (a general graph, more compact; Nisan and Fujishima). Other new ones: recursive, side constraints (linear expressions). A lot of experience was gained in the B2B procurement world. CombineNet provides/provided: package bids, conditional discounts, capacity constraints, multi-attribute bidding, ‘detailed cost structures.’
Why? Improving economic (allocation) efficiency in the supply chain (Pareto optimality). Without it, having to bid incrementally for each atomic item means that risk exposure reduces prices. {More disclosure == market designs that do better, in the same way that ASTs let compilers do a better job at runtime?}. Also reduces the need to pre-package bundles. Collaborative bidding – sharing backhaul transport, etc. CombineNet has cleared $11B, saving $1.5B in new value (not merely zero-sum moving of savings along the chain).
{he offers a useful first-stab at mathematically formalizing the Google SSA model – should check the literature to see if there’s better ones in the literature yet}
What’s unique to SSA? 1) performance: the auction has to run in real-time, which seems to rule out optimization; 2) on-line: it must be solved without waiting for future knowledge to reduce uncertainty. The framework he’s proposing to address this is a near-line optimizer that runs “often” and whose output parameterizes a fast dispatcher.
Expressiveness encompasses local & global (namely, constraints that apply to a particular auction, or must be calculated over a collection). Examples of global: volume SLAs, interdependencies (per-slot, or not-alongside-competitors), bonuses (threshold & incremental payouts); and price schedules (price-adjustment) – note that this subcategory of global constraint can be implemented/enforced by the dispatcher using local info & its past dispatch log.
Examples of local: grammar (core+optional good words, phrases, {thesaurus}, hits (by URLs), even query expansion (how targeted would a refinement be if I added my own terms?)); user profiling (geoIP, history, etc); price-adjustments (bonus solely to trump competitor C, or reduce their rank, etc). {Refers to some old notable paper on selling nukes that yields unbounded revenue – wonder which one he’s talking about?}
OK, so what’s the information flow between the optimizer and dispatcher? They seem to operate at different time-horizons (i.e. opt=month, disp=day). See paper for details.
Q: Transparency: how much of these expressive bids should be public? A: We’ve built 190+ markets, and it’s varied quite a bit, from r/t feedback to never to occasional, etc – how to shape/improve your bids. Expect to see the quoting function become important – the system needs to offer some feedback to new bidders in the form of a quote with expected yield.
Q: How well could these strategies be imposed unilaterally? What if I informed my bidding agent, and then it went into a single-event market and dealt with it dynamically? A: No, the optimal market outcome requires access to all of the competitive bids… the point is that a clearing engine must assess global information.
{Q: how to audit this? Q: how to build actual experiments with this? – maybe bring this back up at the panel?}
Edelman, Ostrovsky: Bidder Strategy
11:00a - 11:30a Strategic Bidder Behavior in Sponsored Search Auctions Benjamin Edelman, Harvard University Michael Ostrovsky, Harvard University
Ostrovsky spoke (he will be teaching Economics at Stanford this year). Fundamental question: do bidders modify their behavior based on ''other bidders’ actions''?
{Nobody from Google here, btw.} Google’s official propaganda – that its Vickrey-style auction is strategy-proof – is frankly wrong: the single-object Vickrey result doesn’t carry over to AdWords because there are multiple objects (3 ad slots per page, say). Quick example: if there are only two bidders and three slots, you should shave your price down and take the second spot for less than the first.
Key data set: Overture data from 2002-03, when (like eBay) they offered two strategies to advertisers: fixed first-price bids, or proxy bidding (effectively second-price). They can detect auto-bidding in the transaction logs by looking for very fast cycles between pairs of bidders (note that the log timestamps were only 15-minute accurate).
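{A minimal sketch of what that cycle detection might look like on a bid log – the paper’s actual criteria are surely more careful; this just counts fast A-B-A alternations:}

```python
from collections import Counter

def detect_bidding_wars(log, max_gap=15):
    # log: list of (minute, bidder) bid-change events for one keyword,
    # sorted by time. Count A-B-A alternations completed within
    # max_gap minutes - the "very fast cycles" that betray dueling
    # auto-bidders.
    wars = Counter()
    for (t1, a), (t2, b), (t3, c) in zip(log, log[1:], log[2:]):
        if a == c != b and t3 - t1 <= max_gap:
            wars[frozenset((a, b))] += 1
    return wars

log = [(0, "A"), (5, "B"), (9, "A"), (14, "B")]
print(detect_bidding_wars(log))  # Counter({frozenset({'A', 'B'}): 2})
```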
Comment by Sandholm: do you see preference curves that ‘price’ the tradeoff in ranking?
Second data set: a week of watching Y!/Goog for top rankings for ads on some well-known keywords every 7 minutes – and once again, saw cycling between warring bidders who wanted top rankings. ''The median streak length was 14 minutes!''
Punchline? How much money is left on the table? Compare the proxy-bid data against a counterfactual dataset in which a true second-price auction was run, using each bidder’s highest-of-the-week bid as their putative private value. Data shows gaps of 60% (in the paper) to 22% (newer data set, median) to 48% (top decile of competitive words).
Theoretical wrapup: the Vickrey-Clarke-Groves auction charges each winner the externality imposed on the other bidders – which reduces to a second-price auction for a single good, but not in a multiple-good scenario. Instead, you need to account separately for the value of an ad placed at a rank and the tradeoff costs of dropping a rank, etc.
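{A sketch of the standard VCG payment for a position auction, assuming slots have known, descending click rates – each winner pays the value everyone below them loses by being pushed down one slot:}

```python
def vcg_payments(bids, ctrs):
    # bids: descending per-click values; ctrs: descending slot click
    # rates. Winner of slot i pays the externality of occupying it:
    # every lower bidder j would move up one slot if i vanished,
    # gaining bids[j] * (ctrs[j-1] - ctrs[j]) in expected value.
    n, k = len(bids), len(ctrs)
    x = list(ctrs) + [0.0]  # bumped past the last slot: zero clicks
    payments = []
    for i in range(min(n, k)):
        p = sum(bids[j] * (x[j - 1] - x[j]) for j in range(i + 1, min(n, k + 1)))
        payments.append(p)
    return payments

# Two bidders, three slots (the 'shave your bid' example): the top
# bidder's entire VCG payment is just the harm done to bidder 2.
print(vcg_payments([5.0, 3.0], [0.30, 0.20, 0.10]))  # [0.3, 0.0]
```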
So rerunning the counterfactual process against VCG: it does poorly in thin markets (3.9 competitors), but adds 8% on average in liquid markets (10+ bidders), with a minimum of 1%.
Conclusions: there is strategy (it’s not incentive-compatible today); there’s real money at stake; and a correctly implemented VCG would increase revenue for competitive keywords while making life easier for bidders (though, admittedly, if most revenue comes from thin markets, it might not).
Ramachandran (UMd): Empirical data on bidding strategy
11:30a - 12:00p Online Advertisers' Bidding Strategies for Search, Experience, and Credence Goods: An Empirical Investigation Animesh Animesh, University of Maryland Vandana Ramachandran, University of Maryland Siva Viswanathan, University of Maryland
Trying to compare behavior across the spectrum of Search goods (e.g. CDs – no uncertainty about value), Experience goods (e.g. flowers – you at least learn quality upon delivery), and Credence goods (e.g. used cars – even after using one, you’re uncertain of its value). Hypotheses: for Search goods, high-quality firms should advertise more, while for Experience/Credence goods, advertising is itself a signal, making it even possible that a low-quality firm should advertise. Empirical findings are inconclusive.
Approach: regression of observed ad rank against Alexa data about site traffic. Findings: in Search goods, advertisers are stable and place a high value on top rankings. Credence goods have the most rank-volatility.
Isobar/iProspect: Everything you ever wanted to know about clickfraud but were afraid to ask…
12:00p - 12:30p A Formal Analysis of Search Auctions Including Predictions on Click Fraud and Bidding Tactics Brendan Kitts, Isobar Communications Corporation Parameshvyas Laxminarayan, Isobar Communications Corporation Benjamin LeBlanc, Isobar Communications Corporation Ryan Meech, Isobar Communications Corporation
Media would have you believe that 10-20% of clicks are fraudulent. Overall, across their client base, 12.5% of clicks are being rebated (presumed fraud). They model 16.9% — so it roughly lines up. How many clicks with no conversion in a row can there be? Using the binomial distribution at a confidence of 0.01, up to 0.8% of public IPs fail. Their models do line up with the rebates, too – but the outliers are quite intriguing.
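{A minimal sketch of the zero-conversion outlier test as I read it – the 0.01 confidence is from the talk; the conversion-rate input and the exact test form are my assumptions:}

```python
def suspicious_ip(clicks, conversions, conv_rate, alpha=0.01):
    # Binomial tail at k=0: the chance of a run of `clicks` clicks with
    # zero conversions is (1 - conv_rate)**clicks. Flag the IP when
    # that run is too improbable at confidence alpha.
    if conversions > 0:
        return False
    return (1 - conv_rate) ** clicks < alpha

# At a 2% conversion rate, roughly 228 conversion-free clicks in a row
# trip the 1% test:
print(suspicious_ip(clicks=250, conversions=0, conv_rate=0.02))  # True
print(suspicious_ip(clicks=50, conversions=0, conv_rate=0.02))   # False
```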
Grandiloquently-titled “Ryan’s Theorem” is that ''if'' clickfraud affects all advertisers {in a market} equally, nobody cares. Well, no advertiser or engine – but content-publishing partners are playing a zero-sum game with fraudsters and, in extremis, the adjusted price could fall below the minimum floor.
There will still be a need for third parties to arbitrage budgets amongst search engines.
Aggarwal, Hartline: Knapsack Auctions
02:00p - 02:30p Knapsack Auctions Gagan Aggarwal, Stanford University Jason D. Hartline, Microsoft
Basic problem: sell a single ad slot on a web page over the course of a day. C pageviews; advertiser i requires c-sub-i impressions per click (1/CTR); and each advertiser will pay v-sub-i for a single clickthrough.
Batched auctions are easier to make “temporally strategyproof” but admittedly not practical yet. On the good side, this maps directly to the knapsack problem: how many advertisers to admit into the mix to maximize revenue?
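{The packing problem as stated maps to 0/1 knapsack: with C pageviews, advertiser i consumes c-sub-i impressions and yields v-sub-i if admitted. A textbook dynamic-programming sketch of the allocation question only – not the paper’s auction mechanism, which must also set incentive-compatible prices:}

```python
def best_advertiser_subset(capacity, demands, values):
    # 0/1 knapsack: pick advertisers maximizing total value subject to
    # total impression demand <= capacity. Returns (value, indices).
    n = len(demands)
    dp = [0] * (capacity + 1)  # dp[w]: best value within w impressions
    take = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for w in range(capacity, demands[i] - 1, -1):
            if dp[w - demands[i]] + values[i] > dp[w]:
                dp[w] = dp[w - demands[i]] + values[i]
                take[i][w] = True
    chosen, w = [], capacity  # trace back the chosen set
    for i in reversed(range(n)):
        if take[i][w]:
            chosen.append(i)
            w -= demands[i]
    return dp[capacity], sorted(chosen)

print(best_advertiser_subset(100, [60, 50, 50], [10, 6, 6]))
# -> (12, [1, 2]): two smaller advertisers beat one big one
```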
CMU: SSA design via Machine learning
02:30p - 03:00p Sponsored Search Auction Design via Machine Learning Maria-Florina Balcan, Carnegie Mellon University Avrim Blum, Carnegie Mellon University Jason D. Hartline, Microsoft Yishay Mansour, Tel-Aviv University
Aiming to reduce revenue-maximization problems to well-known algorithms; the reduction is based on random sampling.
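{A minimal sketch of the random-sampling idea as I understood it: split the bidders in half at random, learn the best fixed price on each half, and offer it to the other half. The paper’s actual reduction and guarantees are more general:}

```python
import random

def best_fixed_price(sample):
    # Optimal posted price against a known set of bids (an optimal
    # price is always one of the bids themselves).
    return max(sample, key=lambda p: p * sum(1 for b in sample if b >= p), default=0)

def random_sampling_revenue(bids, rng=random):
    # Randomly partition bidders; price each half using the *other*
    # half's optimum, so no bidder's own bid sets their price.
    a, b = [], []
    for bid in bids:
        (a if rng.random() < 0.5 else b).append(bid)
    price_a, price_b = best_fixed_price(b), best_fixed_price(a)
    return (sum(price_a for x in a if x >= price_a) +
            sum(price_b for x in b if x >= price_b))

random.seed(7)
print(random_sampling_revenue([10, 9, 8, 2, 2, 1, 1, 1]))
```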
Typical analysis of independent auctions does not account for competition between advertisers bidding on similar keywords. How to optimize when the semantic similarity information is private?
Trying to create clusters of terms to form interlinked markets and thus offer a fixed price in each market.
UT Austin: Divisible goods
03:00p - 03:30p Designing Share Structure in Auctions of Divisible Goods Jianqing Chen, University of Texas at Austin De Liu, University of Kentucky Andrew B. Whinston, University of Texas at Austin
Bid for shares, not just positions.
Penn/FIU: Searcher perceptions
04:00p - 04:30p Examining Searcher Perceptions of and Interactions with Sponsored Results Bernard J. Jansen, Pennsylvania State University Marc Resnick, Florida International University
Clever hack: controlled experiments where they swapped the ads and the top organic hits to see what difference it made (testing the value of editorial integrity – what a nifty J-school koan!). 82% of users go first to the organic links, 6% to ads (according to self-reported “talk-out-loud” recordings). Only 27% of users never looked at the ads at all. Of those who viewed ads, 56% could dismiss an ad by title alone, but 67% also read the summary to decide whether it was relevant.