
Google engineer ‘pays price’ for claiming that its Artificial Intelligence is sentient

The issue was raised by the Google engineer months ago as he kept disputing the company’s managers, maintaining that the company’s Language Model for Dialogue Applications (LaMDA) had consciousness and even a soul.

New Delhi: Google, the global search engine giant, has sidelined one of its employees for raising an alarm over its Artificial Intelligence technology and claiming that it is sentient, thus setting the stage for criticism of the company’s most advanced technology.

Blake Lemoine, a seasoned engineer at Google’s Responsible AI organisation, recently told mediapersons that he was sent on paid leave after he flagged glaring gaps in the state-of-the-art technology. Lemoine said that he has submitted documents to a US senator’s office, claiming that evidence was available to show that Google and its technology engaged in religious discrimination.

Addressing the issue, Google spokesperson Brian Gabriel said in a statement, “Our team has reviewed and addressed Blake’s concerns in accordance with our AI principles, and nowhere did we find that the evidence supports his claims.”


According to reports, this is not something that hit the company just now. The engineer raised the issue months ago, repeatedly disputing the company’s managers and maintaining that the Language Model for Dialogue Applications (LaMDA) had consciousness and even a soul. Google, on the other hand, countered his claims, saying that several researchers and engineers have reviewed LaMDA but no one has reached the conclusion that Mr Lemoine did. Most AI experts have concurred with Google’s view that Artificial Intelligence is still far from achieving human-like sentience.

AI researchers, however, have not ruled out the emergence of a close sync between technology and human behaviour. They believe a synergy could soon develop in which the two understand each other.


In the last few years, these IT giants have taken on the whopping task of building neural networks. Known as ‘large language models’, the technology is used to summarize articles, answer questions and even write long blogs. However, these models come with their own shortcomings and associated risks, besides being highly flawed. Sometimes they produce good content, sometimes extremely bad. Overall, they have been found to learn patterns and reproduce them fluently, but they indeed cannot think and act like humans.