As ChatGPT recently marked its first anniversary, gifts for the massive language model that stunned the world have been arriving. US President Joe Biden has issued a sweeping “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”. In addition, UK Prime Minister Rishi Sunak hosted a gathering on the scintillating topic of the extinction of humanity, which concluded with a 28-nation accord (with the EU counted as one nation) pledging global collaboration to develop AI responsibly.
Experts noted that reliable research has been predicting catastrophic climate change for more than 50 years. With the climate now physically crashing down around us and large swathes of civilisation becoming uninhabitable due to heat, the world order has done very little to stop the gigatons of carbon dioxide from fossil fuels gushing into the atmosphere. In the US, a climate denier was recently installed second in line to the presidency. Amid all of this, the question remains: will the state of AI regulation fare any better?
There are a few reasons to believe so. In contrast to the climate crisis, where a multi-trillion-dollar industry launched an aggressive campaign to deny the risks and obstruct essential steps to reduce carbon emissions, the major purveyors of AI appear to be crying out for regulation. Although they undoubtedly have their own interests in mind, at least they acknowledge that regulation is necessary. Furthermore, governments are carefully considering the risks of AI at a relatively early stage of the technology’s development, unlike with the climate. The international accord and the Biden order are both serious attempts to control AI before it controls humans.
Joe Biden’s new executive order on AI unleashes, for better or worse, a wave of bureaucracy. The order calls for the establishment of new task forces, boards, committees, and working groups, and adds AI oversight to the duties of current public servants and political appointees, something experts have consistently called for.
Among the plan’s problems: it lacks a solid legal foundation for all the rules and requirements that may arise from it. Courts frequently reject executive decrees, and Congress is considering enacting its own AI law. Furthermore, many of Biden’s recommendations rely on the industries being regulated to police themselves, and the effort was heavily influenced by those powerful industries.
It’s impossible to criticise Biden’s order for being too narrow. Almost every AI hot button is addressed, even if only with a promise to address it later (this is how it handles the thorny problem of copyright and generative AI). All things considered, it is an astounding commitment to mobilise the government bureaucracy against every concerning aspect of a new class of technology, including ones most of us had never considered. In paragraph after subparagraph, the White House orders intricate multi-agency studies, each requiring extensive industry engagement and expert consultation. Biden’s directive demands that bureaucrats generate intricate reports with the same casualness as someone ordering food on DoorDash.
To coordinate the use of AI, the director of the Office of Management and Budget will convene an interagency council, the Department of Homeland Security will establish an Artificial Intelligence Safety and Security Board, and the Department of Health and Human Services will set up an AI Task Force. Ensuring that the groups that include outsiders are not plagued by conflicts of interest is going to be a beast. The order also stipulates that those serving on these government boards and committees must wait before accepting positions in the AI powers’ Washington, DC offices. The mother of all AI working groups will be the White House AI Council, which consists of more than thirty powerful bureaucrats ranging from the director of the Gender Policy Council to the chairman of the Joint Chiefs of Staff.
After reading the order, one might assume that the executive branch will be consumed by AI homework for the next year. Everyone from the Attorney General to the Secretary of Agriculture will need to have Sam Altman on speed dial before the year is out. Experts tallied the due dates for every report and assignment that Biden handed out: two have a 30-day deadline, five have 45 days, six have 60 days, eleven have 90 days, seven have 120 days, three have 150 days, 38 have 180 days, eleven have 270 days, and eleven more have 365 days.
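The tally above can be sketched in a few lines. The per-deadline counts are taken from the text; the totals are derived from them, not stated in the order itself.

```python
# Counts of Biden EO reports/assignments per deadline, as tallied above.
# Keys are deadlines in days; values are how many tasks carry that deadline.
deadline_counts = {
    30: 2, 45: 5, 60: 6, 90: 11, 120: 7,
    150: 3, 180: 38, 270: 11, 365: 11,
}

total_tasks = sum(deadline_counts.values())
# How many of those tasks fall due within the first six months
due_within_180 = sum(n for days, n in deadline_counts.items() if days <= 180)

print(total_tasks)     # 94
print(due_within_180)  # 72
```

In other words, of the 94 dated tasks, 72 land within six months of the order's signing, which gives a sense of why the "AI homework" framing fits.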
There are also numerous other reports and tasks with no deadline at all. A typical example: the chairman of the Council of Economic Advisers must draft and submit a report on the “labour-market effects of AI” that is essentially thesis-level work. Experts find certain requests ambiguous, such as ‘encouraging’ the Federal Trade Commission and the Federal Communications Commission to consider a long list of actions. Yet each one will demand laborious execution: meetings, drafts, interviews, consultations with academics and business executives, and last-minute revisions.
It’s unclear whether those currently employed by the government are qualified for these positions. AI talent is desperately scarce, and competition for it in Silicon Valley is fierce. Carrying out some of the highly technical tasks requested, such as screening “frontier” LLMs even more potent than today’s mind-blowing chatbots, will require A-level programmers skilled in red-teaming, removing bias from datasets, and, as the order states, the workings of machines “physically located in a single datacenter, transitively connected by data centre networking over 100 Gbit/s, and having a theoretical maximum computing capacity of 10²⁰ integer or floating-point operations per second for training AI”.
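The quoted thresholds amount to a simple two-part test. A minimal sketch, with the machine counts and per-machine figures invented purely for illustration (only the two threshold constants come from the order's quoted text):

```python
# Thresholds quoted from the executive order's text above.
OPS_THRESHOLD = 1e20           # theoretical max operations per second
NETWORK_THRESHOLD_GBITS = 100  # datacenter networking, Gbit/s

def crosses_threshold(num_machines: int,
                      ops_per_machine: float,
                      network_gbits: float) -> bool:
    """True if a single-datacenter cluster meets both quoted criteria."""
    total_ops = num_machines * ops_per_machine
    return network_gbits > NETWORK_THRESHOLD_GBITS and total_ops >= OPS_THRESHOLD

# Hypothetical cluster: 50,000 accelerators at 2e15 ops/sec each,
# linked by a 400 Gbit/s fabric -> 1e20 ops/sec in total.
print(crosses_threshold(50_000, 2e15, 400))  # True
print(crosses_threshold(500, 2e15, 400))     # False: only 1e18 ops/sec
```

The point of the sketch is scale: crossing 10²⁰ operations per second takes tens of thousands of today's accelerators, which is why only "frontier" training clusters are in scope.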
The order requires that the official website, AI.gov, include pages devoted to hiring; visitors are urged to “join the national AI talent surge” on the homepage. But against offers of high six-figure incomes from Google or OpenAI, it might be hard to entice young graduates with AI training. One great suggestion in the EO is to alter immigration rules to eliminate the obstacles that currently stand in the way of AI talent wanting to work in the US. However, experts think that opponents of any exemptions that increase immigration (virtually all Republicans, they believe) might take offence at this. Perhaps it will face legal challenges, as past immigration decrees from the administration have.
Jennifer Pahlka, a co-founder of the US Digital Service, has suggested that the government need only update its antiquated hiring procedures to meet the unexpected demand for AI specialists.
She writes, “AI is going to hit us like a Mack truck. We need a functioning civil service, and we need it now.”
Her proposed makeover is unlikely to be completed in time to fulfil all of those deadlines—60, 90, or even 270 days away.
The Biden executive order is a hefty, detailed to-do list; by comparison, Rishi Sunak’s Bletchley Declaration reads like a statement of good intentions. The accomplishment lies in getting all those nations to sign a single statement before leaving, rather than in dictating any specific course of action. International coordination as a whole is still in its infancy, though many of the signatories, most notably the EU and China, are far along in their own efforts to regulate AI. The proclamation acknowledges both AI’s potential and its risks while warning those developing it to do so responsibly. Google, Microsoft, and the rest will tell you that they already are.
Furthermore, the declaration’s emphasis on the urgency of the issue appears to be at odds with its absence of details. It claims that AI models “have the potential for serious, even catastrophic harm,” ostensibly alluding to the annihilation of humans. However, issues of bias, transparency, privacy, and data are also acknowledged, and the signatories concur that tackling these issues is both necessary and urgent.
Experts note that the only deadline in the document is a promise to meet again in 2024. By then, the Biden administration will be waist-deep in reports, interagency committees, and recruiting efforts. Meanwhile, nothing in either document seems likely to impede AI from getting more powerful and useful, or potentially more dangerous.
The ugly truth behind AI
There is no denying that Google, Facebook, Instagram, Twitter, Amazon, and similar companies provide the world with faster, more affordable, real-time communication and services. Yet many of these companies, Facebook included, are also dealing with legal trouble, some of it in their home markets. The Australian government has sued Facebook, claiming that the platform allowed scammers to target users with fictitious celebrity endorsements. The Australian Competition and Consumer Commission also sued Facebook in 2020 for using the Onavo VPN app to spy on users for profit between February 1, 2016, and October 1, 2017. It is interesting to note that Meta, Facebook’s parent company, ranked among the top 10 listed AI companies for 2022; the company developed AI-powered face-recognition software for use across all of its social media channels.
In the first-ever joint study between Johns Hopkins University and the Georgia Institute of Technology, specially programmed robots were used to scan people’s faces and flag possible criminals. The robots were given billions of images and captions to sift through, along with widely used AI algorithms. They disproportionately flagged people of colour as potential criminals. The research presents empirical proof of racial bias in the algorithms that give these machines their intelligence, enough to suggest that such machines are also capable of being “racist” and “sexist.”
In 2016, an AI-driven vehicle Uber was testing ran through six red lights. In 2021, a French AI-powered chatbot intended to reduce reliance on medical professionals recommended suicide to a fictitious patient during testing. Or take “Autocorrect,” a basic feature on our phones: it rewrites what users type, whether or not the correction matches their intent or their command of the language. This innate tendency to second-guess humans is one of AI’s main issues.
Another instance of AI going awry occurred at the Moscow Chess Championship, where a seven-year-old boy tried to outwit an AI-enabled robot by playing two moves in a row, and the robot broke his finger. The move was clearly against the rules, and it is possible the robot was programmed to react to violations; either way, the AI ended up punishing the little boy by crushing his finger.
In spite of these setbacks, the industry, valued at $62.35 billion in 2020, is predicted to expand by more than 40% annually over the following six years. Its accompanying technologies, including machine learning, natural language processing, and object and voice recognition, are expected to resolve the great majority of business problems with astonishing efficiency and accuracy. The extent of AI deployment is indicated by the 2020 McKinsey Global Survey on artificial intelligence, which found that 50% of companies use AI in at least one business function. One can only anticipate this trend to continue.
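For a sense of what those two figures imply together, a rough compound-growth sketch: starting from the $62.35 billion 2020 valuation and assuming a flat 40% annual rate (the growth figure is a forecast cited above, and the flat-rate assumption is mine, not the source's).

```python
# Compound-growth sketch from the figures quoted above.
base_valuation = 62.35  # billions USD, 2020 valuation
growth_rate = 0.40      # ">40% over the following six years", taken as flat
years = 6

# Standard compound growth: value * (1 + r) ** n
projected = base_valuation * (1 + growth_rate) ** years
print(round(projected, 1))  # 469.5 (billions USD)
```

That is, even the conservative edge of the forecast implies a market of several hundred billion dollars by the mid-2020s, which helps explain why setbacks like those above have done little to slow deployment.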
The perils of AI
Human progress has been greatly outpaced by technological advancement, and the world is only now making an effort to catch up, mostly through aggressive means and, to a lesser extent, through cooperation. The rationale is straightforward: to maintain an advantage in a competition lacking any predetermined destination or threshold for success. Since the end of World War II, the mad race to accumulate nuclear weapons has already made the world dangerous. Roughly 90% of the world’s nuclear arsenal is in the hands of the two Cold War heavyweights, the US and Russia; the remaining seven nuclear-armed nations hold the rest. Geopolitical tensions have escalated significantly, as evidenced by the standoff between China and Taiwan and the war between Russia and Ukraine.
Moreover, distinct power blocs have formed, and animosity between China and India, both nuclear powers, is growing as a result of their ascent on the international stage. Russia has even alluded to using nuclear weapons in the event of a NATO attack.
Also, since the 1960s, AI and nuclear capabilities have been developed in tandem. Both the US and Russia anticipated using AI to quickly and advantageously detect, neutralise, and eliminate an offensive nuclear threat. Yet both sides argued that AI technology should be limited to sub-systems and that human intervention must remain, given the severe and irreversible consequences should launch systems be compromised or fail. On this point, at least, common sense prevailed.