Mimicry & Mind control? Big Tech slams ethics brakes on AI
Last year, Google's cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to. It turned down the client's idea, deeming it ethically dicey.

News Story Summary:

Google deemed the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender.

Since early last year, Google has also blocked new AI features analyzing emotions, fearing cultural insensitivity, while Microsoft restricted software mimicking voices and IBM rejected a client request for an advanced facial-recognition system.

All these technologies were curbed by panels of executives or other leaders, according to interviews with AI ethics chiefs at the three U.S. technology giants.

Reported here for the first time, their vetoes and the deliberations that led to them reflect a nascent industry-wide drive to balance the pursuit of lucrative AI systems with a greater consideration of social responsibility.

"There are opportunities and harms, and our job is to maximize opportunities and minimize harms," said Tracy Pizzo Frey, who sits on two ethics committees at Google Cloud as its managing director for Responsible AI.

Judgments can be difficult.

Microsoft, for instance, had to balance the benefit of using its voice mimicry tech to restore impaired people's speech against risks such as enabling political deepfakes, said Natasha Crampton, the company's chief responsible AI officer.

Rights activists say decisions with potentially broad consequences for society should not be made internally alone. They argue ethics committees cannot be truly independent and their public transparency is limited by competitive pressures.

Jascha Galaski, advocacy officer at Civil Liberties Union for Europe, views external oversight as the way forward, and U.S. and European authorities are indeed drawing rules for the fledgling area.

If companies' AI ethics committees "really become transparent and independent – and this is all very utopist – then this could be even better than any other solution, but I don't think it's realistic," Galaski said.

The companies said they would welcome clear regulation on the use of AI, and that this was essential both for customer and public confidence, akin to car safety rules. They said it was also in their financial interests to act responsibly.

They are keen, though, for any rules to be flexible enough to keep up with innovation and the new dilemmas it creates.

Among complex considerations to come, IBM told Reuters its AI Ethics Board has begun discussing how to police an emerging frontier: implants and wearables that wire computers to brains.

Such neurotechnologies could help impaired people control movement but raise concerns such as the prospect of hackers manipulating thoughts, said IBM Chief Privacy Officer Christina Montgomery.

AI can see your sorrow:

Tech companies acknowledge that just five years ago they were launching AI services such as chatbots and photo-tagging with few ethical safeguards, and tackling misuse or biased results with subsequent updates.

But as political and public scrutiny of AI failings grew, Microsoft in 2017 and Google and IBM in 2018 established ethics committees to review new services from the start.

Google said it was presented with its money-lending quandary last September when a financial services company figured AI could assess people's creditworthiness better than other methods.

The project appeared well-suited for Google Cloud, whose expertise in developing AI tools that help in areas such as detecting abnormal transactions has attracted clients like Deutsche Bank (DBKGn.DE), HSBC (HSBA.L) and BNY Mellon (BK.N).

Google's unit anticipated AI-based credit scoring could become a market worth billions of dollars a year and wanted a foothold.

However, its ethics committee of about 20 managers, social scientists and engineers who review potential deals unanimously voted against the project at an October meeting, Pizzo Frey said.

The AI system would need to learn from past data and patterns, the committee concluded, and thus risked repeating discriminatory practices from around the world against people of color and other marginalized groups.

What's more, the committee, internally known as "Lemonaid," enacted a policy to skip all financial services deals related to creditworthiness until such concerns could be resolved.

Lemonaid had rejected three similar proposals over the prior year, including from a credit card company and a business lender, and Pizzo Frey and her counterpart in sales had been eager for a broader ruling on the issue.

Story By | Paresh Dave & Jeffrey Dastin - Reuters


