Google AI teaches itself Bangla
He said advanced artificial intelligence was likely to cause job losses among so-called "knowledge workers," such as writers, accountants, architects and software engineers
Google's very own AI chatbot "Google Bard" has taught itself Bangla – a language that the chatbot is not supposed to have knowledge of.
Sundar Pichai, CEO of Google's parent company Alphabet, has made multiple appearances on interviews and podcasts in recent months, offering in-depth details into the company's future AI goals.
Pichai and a few other Google executives appeared in a video feature on CBS' "60 Minutes" last week. Pichai discusses AI and its impact on society extensively in the interview.
The programme discusses how Google Bard began teaching itself skills that its developers did not expect the chatbot to possess.
"For example, one Google AI programme adapted, on its own, after being prompted in the language of Bangladesh, which it was not trained to know," Pichai said.
The language under discussion here is Bengali, which, in addition to being spoken in Bangladesh, is also widely spoken in the Indian states of West Bengal and, to a lesser extent, Tripura and Assam.
In the video, James Manyika, SVP at Google, goes on to state that after being fed just a few prompts in Bangla, the model learned to translate the entire language.
Former Google researcher Margaret Mitchell, on the other hand, took to Twitter to challenge the claim with evidence. She pointed out that Google's PaLM, the AI model that inspired Bard, had in fact been trained on Bangla. A cursory glance at PaLM's datasheet reveals that Bangla is among the languages on which it was trained.
The Google CEO said advanced artificial intelligence was likely to cause job losses among so-called "knowledge workers," such as writers, accountants, architects and software engineers. He said that there have been times when Google's AI programmes developed "emergent properties" – learned unanticipated skills for which they were not trained – something his company's engineers could not fully explain.
"There is an aspect of this which we call, all of us in the field call it as a 'black box'. You know, you don't fully understand. And you can't quite tell why it said this, or why it got [it] wrong," Pichai said.

"AI will impact everything. For example, you could be a radiologist – if you think about five to ten years from now – you are going to have an AI collaborator with you. You come in the morning, let's say you have a hundred things to go through, it may say, 'these are the most serious cases you need to look at first.'"
Asked whether society is prepared for the technology, Pichai said: "There are two ways I think about it. On one hand, I feel, no, because you know, the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology's evolving — there seems to be a mismatch."
"On the other hand, compared to any other technology, I have seen more people worried about it earlier in its life cycle – so I feel optimistic. The number of people, you know, who have started worrying about the implications, and hence the conversations are starting in a serious way as well."
Debate over AI safety has intensified in recent days due to the runaway success of Microsoft-backed OpenAI's ChatGPT. The chatbot has gained a massive following thanks to its humanlike responses to various prompts, even as it fuels concerns about potential job losses and the spread of misinformation.
Elon Musk recently warned that AI carries the potential for "civilisation destruction," even as he launches his own AI startup to compete directly with ChatGPT. Last month, Musk joined more than 1,000 experts in calling for a six-month pause in AI development until proper guardrails are in place.