Wednesday, November 22, 2023
Even the techies who invented the building blocks of artificial intelligence are concerned… and don’t completely understand exactly how it works, how it has “self-evolved.” Oh, it works… and very efficiently. When it makes mistakes, they are both scary and amusing. Yet even without creating aggregate capacity that provides the functional equivalent of the human brain (and we are not there… yet), it can out-“think” our brains with about 10% of the equivalent human processing power. Because this technology does not have to control bodily functions and runs on nothing but electricity, its de facto cranial capacity lets an AI program be fed massive data and begin to organize that data, improve its own program, and achieve results “on its own.” Add the open-source structure, and it can learn and teach itself new tricks at alarming speeds, making “computer rookie mistakes” but self-correcting all the time.
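To make that “fed massive data and self-correcting” idea concrete, here is a minimal sketch in plain Python – a toy example added purely for illustration, not the code of any actual AI system. It shows the feedback loop at the bottom of machine learning: the program guesses, measures how wrong the guess was against the data, and nudges its own numbers to be a little less wrong, over and over.

import random

# Toy data: the hidden rule is y = 3x + 1, which the program must discover.
data = [(x, 3 * x + 1) for x in range(-10, 11)]

w, b = random.uniform(-1, 1), random.uniform(-1, 1)  # random starting guesses
learning_rate = 0.001

for step in range(5000):
    x, y_true = random.choice(data)  # "feed" it one example
    y_pred = w * x + b               # the program's current guess
    error = y_pred - y_true          # how wrong was it?
    w -= learning_rate * error * x   # self-correct the weight...
    b -= learning_rate * error       # ...and the bias, a tiny step each time

print(f"learned: y = {w:.2f}x + {b:.2f}  (hidden rule was y = 3x + 1)")

Large language models run exactly this kind of error-driven self-correction, just with billions of adjustable numbers and trillions of words instead of two numbers and a toy equation.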
Visions of a Terminator takeover, the annihilation of millions of educated jobs rendering human beings slow, redundant, way too limited… plus the erosion of the ability of existing political systems to cope with the change… all used to be the stuff of science fiction movies, but there’s now truth in them thar hills. Do large language models (LLMs) trample copyrights as the computer is literally “fed” everything, everywhere, all at once? Like every book, script, musical composition, film, TV program, etc.? As of now, the US Copyright Office won’t grant copyrights to works created by AI, and where there’s a mix of human and machine authorship, the AI contribution is excluded.
AI champions, and there are a lot of them, simply believe that “for the greater good and the advancement of technology,” access to copyrighted material should be more akin to withdrawing a book from a public library than exploiting the underlying copyright itself. Author and Los Angeles Times columnist Michael Hiltzik (November 16th) scoffs at the idea: “Followers of high finance are familiar with the old principle of privatizing profits and socializing losses — that is, treating the former as the rightful property of investors and shareholders while sticking the public with the latter. It’s the principle that gave us taxpayer bailouts of big banks and the auto companies during the last recession.
“Investors in artificial intelligence are taking the idea one step further in fighting against lawsuits filed by artists and writers asserting that the AI development process involves copyright infringement on a massive scale… The AI folks say that nothing they’re doing infringes copyright. But they also argue that their technology itself is so important to the future of the human race that no obstacles as trivial as copyright law should stand in their way… ‘The only way AI can fulfill its tremendous potential is if the individuals and businesses currently working to develop these technologies are free to do so lawfully and nimbly.’ — Andreessen Horowitz
“They say that if they’re forced to pay fees to copyright holders simply for using their creative works to ‘train’ AI chatbots and other such programs, most AI firms might be forced out of business… Frank Landymore of Futurism.com had perhaps the most irreducibly succinct reaction to this assertion: ‘Boohoo.’ … Lest anyone think that AI investors are only in it for the money, Andreessen Horowitz caps off its comment by arguing that ‘U.S. leadership in AI is not only a matter of economic competitiveness—it is also a national security issue. ... We cannot afford to be outpaced in areas like cybersecurity, intelligence operation, and modern warfare, all of which are being transformed by this frontier technology.’” So that justifies “whatever”? Oh, and China is ahead of the US in this research. Hmm!
Ordinary creative individuals also have their own concerns. Hollywood writers were afraid of being replaced by AI (second-rate now, but getting better), and actors were afraid of digitized avatars becoming so good that their images and personas were no longer theirs to control. The settled collective bargaining agreements reflect those fears. Politicians are watching what seem to be their own images and voices saying things they never said. What a mess! First Amendment?
Ainsley Harris (FastCompany.com, November 19th) describes how a battle over transparency and corporate responsibility in the AI world created a “just like when Steve Jobs was fired from Apple” scenario in Silicon Valley: “It’s not just OpenAI… The long-simmering fault lines within OpenAI over questions of safety with regard to the deployment of large language models like GPT, the engine behind OpenAI’s ChatGPT and DALL-E services, came to a head on Friday [11/17] when the organization’s nonprofit board of directors voted to fire then-CEO Sam Altman. In a brief blog post, the board said that Altman had not been ‘consistently candid in his communications.’…
“But OpenAI is not the only place in Silicon Valley where skirmishes over AI safety have exploded into all-out war. On Twitter, there are two camps: the safety-first technocrats, led by venture firms like General Catalyst in partnership with the White House; and the self-described ‘techno-optimists,’ led by libertarian-leaning firms like Andreessen Horowitz. [see above quotes]
“The technocrats are making safety commitments and forming committees and establishing nonprofits. They recognize AI’s power and they believe that the best way to harness it is through cross-disciplinary collaboration… Hemant Taneja, CEO and managing director of General Catalyst, announced on Tuesday that he had led more than 35 venture capital firms and 15 companies to sign a set of ‘Responsible AI’ commitments authored by Responsible Innovation Labs, a nonprofit he cofounded. The group also published a 15-page Responsible AI Protocol, which Taneja described on X as a ‘practical how-to playbook.’” While rumors abounded that Altman and his team would return, they were instead announced as joining Microsoft to lead a new advanced AI research division there. The Wall Street Journal (November 21st) put it this way: “OpenAI’s future is unclear after the majority of its employees threatened to quit if the board didn’t resign itself and reinstate Sam Altman at the helm. Meanwhile, Emmett Shear’s sudden appointment as the company's interim CEO puts him at the center of high-stakes drama.” Drama on steroids that most folks do not understand. Spoiler alert: he’s baackkkk! And with a reconfigured board of directors!
The firing and rehiring of Sam Altman was far more important than the fate of Sam “the AI genius” Altman alone might suggest. Kevin Roose, writing in the November 21st issue of the NY Times feed, The Morning, explains this corporate melee: “If they had been the plot of a science fiction movie, or an episode of ‘Succession,’ the events at OpenAI last weekend [11-17 through 19] would have seemed a little over-the-top… A secret board coup! Fears of killer A.I.! A star C.E.O., betrayed by his chief scientist! A middle-of-the-night staff revolt that threatens to change the balance of global tech power!...
“The coup was led by Ilya Sutskever, OpenAI’s chief scientist, who had butted heads with Altman. Sutskever wants the company to prioritize safety and was worried that Altman was more focused on growth… Sutskever is among a faction of A.I. experts who are fearful that A.I. may soon surpass human abilities and become a threat to our survival. Several of OpenAI’s board members have ties to effective altruism, a philosophical movement that has made preventing these threats a top priority. Altman has concerns about A.I. risks, too. But he has also expressed optimism that A.I. will be good for society, and a desire to make progress more quickly. That may have put him at odds with the safety-minded board members, whose job is to see that powerful A.I. is developed responsibly.” But as noted above, Altman is back, and his “failed” ouster has led to a new board with Altman on top. But what exactly does that mean in a world where guardrails are being ripped away by nation states, unscrupulous politicians and avaricious investors? Stay tuned!
I’m Peter Dekom, and if you add the complexity of rising quantum computing – processors that calculate with qubits rather than binary bits, hundreds of times faster than our fastest supercomputers on certain problems (yes, very complicated) – to accelerating artificial intelligence, exactly what is the future for humanity?
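For readers wondering what “qubits rather than binary bits” actually means, here is a crude sketch – a toy state-vector simulation in plain Python, added for illustration only; it is not real quantum hardware or any vendor’s quantum SDK. The point it shows: n ordinary bits store exactly one of 2^n values at a time, while an n-qubit register is described by amplitudes spread across all 2^n values at once, which is where quantum computing’s (highly problem-specific) speed advantage comes from.

import math

def uniform_superposition(n_qubits: int) -> list[float]:
    # Amplitudes after putting each of n qubits into an equal superposition
    # (what applying a Hadamard gate to every qubit would produce).
    size = 2 ** n_qubits              # the state space doubles with each qubit
    amplitude = 1 / math.sqrt(size)   # equal weight on every possible value
    return [amplitude] * size

for n in (1, 3, 10):
    state = uniform_superposition(n)
    print(f"{n:>2} qubits -> tracking {len(state):>4} amplitudes; "
          f"any single measured outcome has probability {state[0] ** 2:.5f}")

None of this turns a laptop into a quantum computer; it only shows why simulating qubits classically blows up exponentially, and thus why genuinely quantum hardware alongside AI is such a potent combination.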