Politician Alex Boris argues that expecting humans to spot increasingly sophisticated deepfakes is a losing battle. The real solution is a universal metadata standard (like C2PA) that cryptographically proves if content is real or AI-generated, making unverified content inherently suspect, much like an unsecure HTTP website today.
The bill regulates not just models trained with massive compute, but also smaller models trained on the output of larger ones ('knowledge distillation'). This is a key technique Chinese firms use to bypass US export controls on advanced chips, bringing them under the regulatory umbrella.
Rather than viewing the massive energy demand of AI as just a problem, it's an opportunity. Politician Alex Boris argues governments should require the private capital building data centers to also pay for necessary upgrades to the aging electrical grid, instead of passing those costs on to public ratepayers.
Amid cynicism fueled by daily scams and digital annoyances, small quality-of-life policies have an outsized impact on public trust. A NY politician points to his popular 'click-to-cancel' bill as an example of how government can demonstrate tangible value by addressing everyday frustrations and making life simpler.
Ex-Palantir lead Alex Boris clarifies the company's 'unsexy' function. Its key is building an 'ontology'—a high-level view defining what each data piece means. This allowed the DOJ to treat a single loan as a trackable object, spotting fraud by seeing it reappear across different mortgage-backed securities.
Alex Boris passed a bill requiring mopeds to be registered at the point of sale. When data showed total registrations subsequently declined, he publicly admitted the policy missed the real issue: owners weren't re-registering after a year. This demonstrates a rare political willingness to use data to admit failure and iterate on policy.
When addressing AI's 'black box' problem, lawmaker Alex Boris suggests regulators should bypass the philosophical debate over a model's 'intent.' The focus should be on its observable impact. By setting up tests in controlled environments—like telling an AI it will be shut down—you can discover and mitigate dangerous emergent behaviors before release.
When lobbying against New York's RAISE Act for AI safety, the industry's own estimate of the compliance burden was surprisingly low. They calculated that a tech giant like Google or Meta would only need to hire one additional full-time employee, undermining the argument that such regulation would be prohibitively expensive.
