We scan new podcasts and send you the top 5 insights daily.
Uber Eats' use of personalized pricing came to light only because a New York state law requires companies to disclose it. Without such specific, localized regulation, controversial algorithmic practices can remain hidden from the public and from regulators in other jurisdictions.
Digital platforms can algorithmically change rules, prices, and recommendations on a per-user, per-session basis, a practice called "twiddling." This leverages surveillance data to maximize extraction, such as raising prices on payday or offering lower wages to workers with high credit card debt: a kind of individualized discrimination that was previously too labor-intensive for businesses to implement.
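To make "twiddling" concrete, here is a minimal, purely illustrative sketch of per-user price adjustment. The signal names (`is_payday`, `debt_ratio`) and the multipliers are invented for illustration, not taken from any real platform's algorithm:

```python
# Toy model of per-user, per-session "twiddling".
# All user signals and multipliers below are hypothetical.

def twiddled_price(base_price: float, user: dict) -> float:
    """Adjust a base price for one user/session using surveillance signals."""
    price = base_price
    if user.get("is_payday"):            # more cash on hand -> raise the price
        price *= 1.10
    if user.get("debt_ratio", 0) > 0.8:  # high debt -> assume weaker bargaining power
        price *= 1.05
    return round(price, 2)

# Two users see different prices for the same item in the same session:
print(twiddled_price(10.00, {"is_payday": True}))   # payday shopper pays more
print(twiddled_price(10.00, {}))                    # baseline shopper pays list price
```

The point of the sketch is how cheap this is at scale: a few conditionals applied per request replace what would once have required a human clerk sizing up each customer.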
By aligning its RAISE Act with California's SB 53, New York is helping create a powerful, bi-coastal regulatory consensus. This convergence counters the industry's argument against a "chaotic patchwork" of state laws and establishes a baseline for AI transparency that other states may adopt, effectively setting a national standard in the absence of federal action.
Contrary to their current stance, major AI labs will pivot to support national-level regulation. The motivation is strategic: a single, predictable federal framework is preferable to navigating an increasingly complex and contradictory patchwork of state-by-state AI laws, which stifles innovation and increases compliance costs.
While seemingly promoting local control, a fragmented state-level approach to AI regulation creates significant compliance friction. This environment disproportionately harms early-stage companies, as only large incumbents can afford to navigate 50 different legal frameworks, stifling innovation.
Don't just ask customers about their business—independently verify it. When launching Uber Eats, the team couldn't get clear answers on restaurant economics. So they ordered food, weighed the ingredients, and built their own model, giving them the "ground truth" needed to confidently propose their pricing structure.
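The "weigh the ingredients" approach amounts to building a small unit-economics model from direct measurement. A toy version of such a dish-cost model might look like the following; all ingredient names, weights, and per-kilogram prices are invented for illustration:

```python
# Toy "ground truth" dish-cost model of the kind the Uber Eats team could
# have built. Every figure here is hypothetical.

INGREDIENT_COST_PER_KG = {"chicken": 6.00, "rice": 2.00, "sauce": 4.00}

def dish_cost(weights_kg: dict) -> float:
    """Estimate a restaurant's ingredient cost from measured portion weights."""
    return round(sum(INGREDIENT_COST_PER_KG[i] * w for i, w in weights_kg.items()), 2)

# Order the dish, weigh each component, then infer the margin at the menu price:
cost = dish_cost({"chicken": 0.15, "rice": 0.20, "sauce": 0.05})
margin = 9.00 - cost  # hypothetical $9.00 menu price
```

Even a rough model like this turns a negotiation from "trust us" into "here is what your dish costs to make," which is the ground truth the anecdote describes.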
Contrary to the common view, algorithms charging different prices based on a consumer's wealth can be beneficial for market efficiency. The real harm occurs when algorithms exploit a lack of information or behavioral biases, not simply when they adjust prices based on a person's ability to pay.
Instacart's AI-driven personalized pricing created a PR crisis because it directly conflicts with the grocery industry's core value proposition of low, consistent prices. This was especially damaging during a period of high inflation, making the company appear exploitative in a price-sensitive market.
Current regulatory focus on privacy misses the core issue of algorithmic harm. A more effective future approach is to establish a "right to algorithmic transparency," compelling companies like Amazon to publicly disclose how their recommendation and pricing algorithms operate.
Uber's algorithm offers drivers different wages based on their perceived desperation. When a driver accepts a low fare, it sets a new, lower ceiling for their future earnings, creating a downward wage spiral.
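The downward spiral described above can be sketched as a simple ratchet: each accepted fare becomes the new ceiling, and the next offer probes slightly below it. This is an illustrative toy, not Uber's actual algorithm; the 5% probe step is an invented parameter:

```python
# Toy "wage ratchet": accepting a low fare lowers the ceiling on future offers.
# Purely illustrative; not any real platform's pricing logic.

def next_offer(ceiling: float, accepted_fare: float, step: float = 0.95) -> float:
    """If a driver accepts a fare at or below the ceiling, the platform learns
    it can offer less; the next offer probes slightly lower still."""
    if accepted_fare < ceiling:
        ceiling = accepted_fare  # the accepted low fare becomes the new ceiling
    return round(ceiling * step, 2)

# A driver who accepts every offer sees their wage spiral downward:
offer = 20.00
for _ in range(3):
    offer = next_offer(offer, offer)
```

The mechanism has no floor: as long as acceptance is treated as a signal of willingness, each acceptance justifies a lower next offer.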
Companies like Uber Eats use personalized data to set prices, a practice dubbed "AI spy pricing." This fosters consumer paranoia and erodes trust, which, if scaled across the economy, could discourage spending and negatively impact GDP.