The Department of Defence and the Office of National Intelligence (ONI) have awarded a $600,000 grant for research into the possibility of fusing human brain cells with artificial intelligence.
DishBrain is the product of a collaboration between researchers at Monash University and Cortical Labs, the group responsible for developing brain cells that can play the classic video game Pong.
Associate professor Adeel Razi, who works at the university’s Turner Institute for Brain & Mental Health, explained that their research “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms.”
Hundreds of thousands of live, lab-grown brain cells are taught to perform a variety of tasks, such as playing Pong. The cells receive feedback through a multi-electrode array, whose electrical signals indicate when the “paddle” is making contact with the “ball.”
A synthetic biological intelligence “previously confined to the realm of science fiction” could be within reach, the researchers claimed in an article published in the journal Neuron.
According to Razi, the group was awarded the grant from the Office of National Intelligence and the Department of Defence National Security Science and Technology Centre because a new kind of artificial intelligence that could “learn throughout its lifetime” was required.
He believes that with such intelligence, machine learning might be improved for technologies such as driverless automobiles, unmanned aerial vehicles, and robots that deliver packages.
“The results of such research would give Australia a significant strategic advantage because they would have significant implications across multiple fields, including but not limited to planning, robotics, advanced automation, brain-machine interfaces, and drug discovery,” Razi said.
The human brain excels at lifelong learning, which is necessary to gain new skills, adapt to change, and apply existing knowledge to new tasks. Artificial intelligence, by contrast, suffers from what academics call “catastrophic forgetting”: when an AI moves on to a new task, it loses the knowledge it had gathered for previous ones.
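Catastrophic forgetting can be shown with a toy example. The sketch below (purely illustrative, and in no way the researchers' actual setup) fits a one-parameter model to one task with gradient descent, then continues training it on a second, conflicting task; its fit to the first task collapses.

```python
# Toy illustration of "catastrophic forgetting": sequential training on
# two conflicting tasks erases what was learned first.

def train(w, data, lr=0.1, steps=100):
    # Plain gradient descent on mean squared error for the model y = w * x.
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def loss(w, data):
    # Mean squared error of y = w * x on the given (x, y) pairs.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]   # task A: y = 2x
task_b = [(x, -2.0 * x) for x in (1.0, 2.0, 3.0)]  # task B: y = -2x

w = train(0.0, task_a)
loss_a_before = loss(w, task_a)  # near zero: task A has been learned

w = train(w, task_b)             # keep training, but only on task B
loss_a_after = loss(w, task_a)   # large: task A has been "forgotten"

print(f"task A error before: {loss_a_before:.6f}, after: {loss_a_after:.2f}")
```

A real neural network forgets for the same underlying reason: gradient updates driven only by the new task overwrite the parameter values that encoded the old one, unless training deliberately revisits or protects earlier knowledge.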
The DishBrain research seeks to understand the molecular principles behind continuous learning.
“We will be using this [national intelligence and security discovery research] grant to develop better artificial intelligence machines that replicate the learning capacity of these biological neural networks,” Razi explained.
“This will assist us in scaling up the hardware and methods capacity to the point where they become a viable replacement for in-silico computing [through the use of simulations].”
The announcement comes as leaders in the field of artificial intelligence have recently called on the government to acknowledge “the potential for catastrophic or existential risks from AI.”
The organization Australians For AI Safety produced an open letter, signed by industry leaders and academics, and sent it to the minister for industry, science and technology, Ed Husic.
The government will conduct a review of artificial intelligence; Husic has said that “what we want is modern laws for modern technology.”
The letter requests that he “acknowledge that catastrophic and existential consequences are possible,” that he collaborate with the international community to manage the risks, that he promote research into AI safety, and that he “urgently train the AI safety auditors that industry will soon need.”
According to the group’s spokesman, Greg Sadler, Australia is “falling behind” when it comes to paying attention to the risks posed by artificial intelligence.
“What’s alarming is that even deliberate and methodical bodies like the United Nations have recognised the potential for catastrophic or existential risks from AI, but the Australian government won’t,” he said.
When the review was first announced, Husic described using artificial intelligence (AI) in a safe and responsible manner as “a balancing act the whole world is grappling with.”
“However, as I have been emphasizing for a considerable amount of time now, there is a critical need for appropriate safeguards in order to guarantee the safe and responsible utilization of AI.”