
Trust is Earned, Not Given. Has AI Proven Itself?

by ccadm


Undoubtedly, the rise of AI has been phenomenal. Estimates put the AI market on track for nearly 28% annual growth between 2025 and 2030, from US$243.72 billion in 2025 to US$826.73 billion in 2030. 
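As a quick sanity check (illustrative only, in Python), those two endpoints do indeed imply roughly 28% compound annual growth:

```python
# Back-of-the-envelope check: do the stated 2025 and 2030 market sizes
# imply the reported ~28% compound annual growth rate (CAGR)?
start, end, years = 243.72, 826.73, 5      # US$ billions, 2025 -> 2030
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")         # ~27.7%, i.e. nearly 28% per year
```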

The AI market has many components, including computer vision, AI robotics, machine learning, autonomous and sensor technology, and natural language processing. Analysts identify machine learning as the most significant of these. However, while all components are expected to grow, their rates of growth will vary. 

Living in a digital age, we have grown keener to adopt digital technologies. A significant volume of accessible literature – written and audiovisual – has made people aware of how AI could be beneficial. 

Customers prefer efficiency, while providers look for convenience and ease of operations. Both of these could be achieved through the deployment of AI. 

Yet sustainable growth in the long run is not merely a function of operational ease and efficiency. It also depends on a mix of perceptual factors. 

How people perceive a technology determines whether they keep using it once they have adopted it. And in the long run, that perception depends significantly on whether they can trust it. 

As AI prepares for exponential growth, we must ask: has AI proven itself? Because trust is something to be earned. It is not handed over seamlessly from one entity to another. 

Are We Trusting AI Too Much?

A new study from the University of Surrey has certainly emerged as a disruptor in the way we perceive AI and currently use it in all walks of life, from banking and healthcare to crime detection. 

As reported, the study calls for an immediate shift in how AI models are designed and evaluated, emphasizing the need for transparency and trustworthiness in these powerful algorithms. It is a strong call indeed. 

What are the reasons that prompt such a call to action? Let us delve deeper.

The article, titled ‘Real-World Efficacy of Explainable Artificial Intelligence using the SAGE Framework and Scenario-Based Design,’ was published1 in Applied Artificial Intelligence. 

The paper demonstrated a design and evaluation approach for delivering the real-world efficacy of an explainable artificial intelligence (XAI) model, and claimed the approach was the first of its kind. It rested on three complementary yet distinct frameworks that made it user-centric, context-sensitive, and able to offer post-hoc explanations for fraud detection. 

It drew its inspiration from the principles of scenario-based design and brought together two independent real-world sources to set up a realistic card fraud prediction scenario. 

Subsequently, it deployed the SAGE framework, an acronym for its four key components:

  • Settings
  • Audience
  • Goals
  • Ethics

The framework helped identify key context-sensitive criteria for model selection and revealed gaps in current XAI model design for further development. A functionally grounded evaluation method was also put in place to assess effectiveness. 
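To make the idea more tangible, here is a minimal, purely illustrative sketch of how SAGE-style criteria might be encoded and checked against a candidate XAI model’s profile. The field names, example values, and pass/fail rule below are assumptions for illustration only; they are not taken from the paper.

```python
# Hypothetical sketch: encoding SAGE-style (Settings, Audience, Goals, Ethics)
# criteria and checking whether a candidate XAI model profile satisfies them.
from dataclasses import dataclass, field

@dataclass
class SageCriteria:
    settings: dict = field(default_factory=dict)   # deployment context, e.g. real-time scoring
    audience: dict = field(default_factory=dict)   # who consumes the explanation
    goals: dict = field(default_factory=dict)      # what the explanation must achieve
    ethics: dict = field(default_factory=dict)     # accountability / fairness constraints

def meets_criteria(model_profile: dict, criteria: SageCriteria) -> bool:
    """True only if the candidate satisfies every stated requirement."""
    required = {**criteria.settings, **criteria.audience,
                **criteria.goals, **criteria.ethics}
    return all(model_profile.get(k) == v for k, v in required.items())

criteria = SageCriteria(
    settings={"latency": "real-time"},
    audience={"explanation_consumer": "fraud analyst"},
    goals={"explanation_type": "post-hoc, per-transaction"},
    ethics={"audit_trail": True},
)
candidate = {"latency": "real-time", "explanation_consumer": "fraud analyst",
             "explanation_type": "post-hoc, per-transaction", "audit_trail": True}
print(meets_criteria(candidate, criteria))  # True: no gaps for this candidate
```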

The outcome was explanations that represented real-world requirements more accurately than those of established models. 

AI Should Explain Its Decisions Better to Earn Trust

While these descriptions might sound overly technical, at its core the research shed light on a crucial area: it advocated for enhanced user trust in AI. It focused on areas where AI needs to offer adequate explanations for its decisions. Only then can it become trustworthy, as users can understand the system and clear up any confusion or sense of vulnerability they might feel in adopting it. 

AI’s inability, or inherent deficiency, in explaining its decisions is potentially harmful, since these systems are now used in critical areas such as healthcare and banking. 

As the researchers present it, instances where AI systems have failed to adequately explain their decisions are alarming. The paper points to an inherent imbalance in fraud datasets: fraudulent transactions can cause damage on the scale of billions of dollars, yet they constitute only 0.01% of all transactions. 

While it is a good thing that most transactions are transparent and genuine, this imbalance makes it harder for AI models to learn fraud patterns. The silver lining is that AI algorithms can still identify fraudulent transactions with great precision. 

However, what harms trust-building is that they cannot explain why a transaction was flagged as fraudulent. 
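To illustrate both points – the extreme class imbalance and the value of attaching a post-hoc explanation to each alert – here is a minimal sketch on synthetic data. It is not the model or explanation method evaluated in the study: the feature names are invented, and a class-weighted logistic regression stands in for whatever detector an institution might actually run.

```python
# Sketch: rare-class fraud detection plus a simple per-transaction explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 100_000
X = rng.normal(size=(n, 4))                              # synthetic transaction features
y = (2.0 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(size=n) > 6.0).astype(int)  # rare fraud label

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" re-weights the rare fraud class so the model still learns it
clf = LogisticRegression(class_weight="balanced").fit(X_train, y_train)

feature_names = ["amount_z", "merchant_risk", "hour_of_day", "geo_distance"]  # invented names
flagged = X_test[clf.predict(X_test) == 1]
if len(flagged):
    contributions = clf.coef_[0] * flagged[0]             # per-feature contribution to the score
    top = np.argsort(-np.abs(contributions))[:2]
    print("Flagged as fraud mainly because of:",
          [(feature_names[i], float(round(contributions[i], 2))) for i in top])
```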

Dr Wolfgang Garn, co-author of the study and Senior Lecturer in Analytics at the University of Surrey, makes a crucial observation in this regard. He says:

“We must not forget that behind every algorithm’s solution, there are real people whose lives are affected by the determined decisions. We aim to create AI systems that are not only intelligent but also provide explanations to people – the users of technology – that they can trust and understand.” 

He essentially points us toward the larger scheme of things: technology does not thrive in isolation. It thrives by making a meaningful impact on human lives. To earn human trust, AI should explain its actions the way humans do. 

We have already discussed the framework Dr. Garn and his co-authors proposed for making AI more humane. Beyond the proposed workflow, he sees AI’s lack of contextual awareness as a barrier to offering meaningful explanations. 

As a remedy, Garn’s paper “advocates for an evolution in AI development that prioritizes user-centric design principles.” 

Garn knows that user-centricity requires specialist input, which is why he says:

“It (the research paper) calls for AI developers to engage with industry specialists and end-users actively, fostering a collaborative environment where insights from various stakeholders can shape the future of AI. The path to a safer and more reliable AI landscape begins with a commitment to understanding the technology we create and the impact it has on our lives. The stakes are too high for us to ignore the call for change.” 

How to Ensure Trustworthy AI?

A key industry participant, SAP’s Vice President & Head of Digital Modalities, who leads AI innovation adoption to increase sales and business development productivity, gave crucial pointers on how to ensure safe, secure, and trustworthy AI. He believes that establishing AI’s trustworthiness requires ensuring that its decision-making process is ethical, equitable, and in harmony with human values.

He also advocated for demystifying AI by developing systems that are understandable and whose rationale can be easily explained. Among other things, he stressed the importance of exceptional data governance, the need to fortify AI with robust security, and the necessity to formulate a comprehensive, multidisciplinary approach to ensure AI systems positively impact society, fostering innovation while guarding against potential harm.

According to Dr. Amit Kalele, a solutions architect for TCS Incubation’s AI Performance and Trust Management (AIPM) program, and Ravindran Subbiah, an Entrepreneur-in-Residence (EIR) with the Operations Framework Incubation Program at TCS, the five pillars of a trustworthy AI are explainability, bias and fairness, reproducibility, sustainability, and transparency. 

They believe it is crucial to keep developing tools and processes that improve the explainability of machine learning systems and their outcomes; help teams understand, document, and monitor or mitigate bias in development and production; and ensure fairness. 

Another tech giant aligned along similar lines is IBM. According to IBM, building trust in AI will require a significant effort to instill in it a sense of morality, to operate in full transparency, and to provide education about the opportunities it will create for businesses and consumers.

IBM recommends a solid AI lifecycle management strategy, in which organizations have a line of sight into each step of the AI process and can rely on verifiable touchpoints that continue to reflect the organization’s overall goals. This ensures greater transparency and a better understanding of outcomes, supporting accurate, trustworthy AI decisions. 

Rob Katz, Vice President of Product, Responsible AI and Tech, Salesforce, has also laid out a roadmap of five ways to build trustworthy AI agents. 

He believes that building trust in AI is a journey that requires careful design, rigorous testing, and ongoing innovation. Salesforce claims that its focus on intentional design, system-level controls, and trust patterns is paving the way for a future where humans and AI can work together seamlessly and effectively.

In the coming segment, we will discuss Agentforce by Salesforce, which is claimed to be built on trustworthy AI. 

1. Salesforce

Agentforce by Salesforce is a proactive, autonomous AI application that provides specialized, always-on support to employees or customers. It can be equipped with any necessary business knowledge to execute tasks according to its specific role. Agentforce can help build a variety of agents across verticals, including service, sales, marketing, commerce, and more. 

A Service Agent, for instance, replaces traditional chatbots with AI that can handle a wide range of service issues without preprogrammed scenarios, improving customer service efficiency. 

A Buyer Agent enhances the B2B buying experience, helping buyers find products, make purchases, and track orders via chat or within sales portals. 

With Agentforce, organizational teams can quickly create their own customized agents for any department using a new library of pre-built skills. These skills span CRM, Slack, Tableau, and partner use cases, as well as a company’s own custom skills. Agentforce can also take action in any system or workflow by connecting to existing APIs or through MuleSoft’s pre-built connectors to over 40 systems.

Salesforce claims that Agentforce 2.0 is more trusted than ever. Its reasoning engine, Atlas, is now smarter, with enhanced reasoning and data-retrieval techniques. This enables Agentforce to think deeply when presented with complex, multi-step questions, reasoning across data sources that have been enriched with additional customer-specific metadata. Through this, Agentforce can take the best action and deliver accurate, well-researched responses with inline citations.

Salesforce believes that trust is at the core of Agentforce’s success. Indeed, the potential of artificial intelligence (AI) agents can only be realized if they are trusted to act on someone’s behalf. 

To satisfy this essential criterion, Salesforce ensures that AI is designed in a way that allows humans to partner safely and easily with AI. 

Salesforce claims its approach is built on intentional design and system-level controls that emphasize and prioritize transparency, accountability, and safeguards.

Salesforce claims to be the world’s number one AI customer relationship management (CRM) platform, and more than 150,000 companies use its cloud-based software. In fiscal year 2024 (ended January 31, 2024), Salesforce reported $34.9 billion in total revenue, representing 11% growth year-over-year.

2. IBM

Another company that considers trustworthiness pivotal to building effective AI solutions is IBM. IBM claims that its research wing is working on a range of approaches to ensure that future AI systems are fair, robust, explainable, accountable, and aligned with the values of the society they are designed for. It also claims to be working to ensure that future AI applications are as fair as they are efficient across their entire lifecycle.

In 2018, IBM introduced its Principles for Trust and Transparency. It was among the first major companies to create an AI Ethics Board to govern the internal processes, tools, guidelines, education, and risk assessments for the company’s AI development and usage. 

IBM also co-founded the AI Alliance in December 2023, a group that now comprises more than 100 companies, academic institutions, government agencies, and research labs around the world, including Meta, Sony, NASA, Harvard, and the Cleveland Clinic. The Alliance works to accelerate open-source innovation and improve trust in AI so that it benefits society as a whole. 

According to Darío Gil, Senior Vice President and Director of IBM Research,  

“Artificial intelligence is a horizontal technology with implications for every sector, country, and value system.” 

He believes it must be developed through the collaboration of many diverse institutions. Mutual trust is therefore crucial. 

IBM Research also claims to have figured out why in-context learning improves a foundation model’s predictions, demystifying machine learning and adding transparency to the technique. The research was carried out together with a team of scientists at Rensselaer Polytechnic Institute (RPI). 

The team at IBM believes that part of building trustworthy AI involves looking into the underlying mechanisms of these complicated systems, such as LLMs and generative AI, and understanding them bit by bit, component by component, to know how they work and when they will succeed or fail. 

“We’re adding transparency to this dark magic. People use it a lot but don’t understand how it works.”

– Senior researcher Pin-Yu Chen, Trusted AI group, IBM Research


On January 29, 2025, IBM announced its fourth-quarter 2024 earnings results. For the fourth quarter, IBM reported revenue of $17.6 billion, up 1 percent year-over-year (2 percent at constant currency). For the full year 2024, the company registered revenue of $62.8 billion, up 1 percent (3 percent at constant currency). 

A Trustworthy Future of AI

The exponential growth AI is witnessing requires a robust ethical ecosystem to surround and safeguard it. For trust and transparency to be built-in qualities, AI must become as responsible as possible. 

It would have to be useful for everyone. Its benefits should reach all levels. A trustworthy AI should work on the principle that data and insights belong to their creators. 

Fair and transparent data policies would help enhance trust. To earn trust in the long run, AI providers should also be open about who trained their systems and what sort of data was fed to them.

Ease of use and intuitive user interface design must not come at the cost of transparency. A good design can be seamless yet still explain effectively what is going on inside. If an AI system is trustworthy, fair, balanced, and adequately calibrated, it will successfully help users make fair choices as well. 

AI has to be accurate and accountable at the same time. It must understand the context well. Developers must not use the technological complexity of AI models as an excuse to keep their solutions inaccessible. 

Trust cannot be earned by boasting about technological sophistication. Good technologies are those that are easy to use, comprehensible, sensitive to user contexts and needs, replicable, explainable, and worthy of trust! 



Study Reference:

1. Mill, E., Garn, W., & Turner, C. (2024). Real-World Efficacy of Explainable Artificial Intelligence using the SAGE Framework and Scenario-Based Design. Applied Artificial Intelligence, 38(1). https://doi.org/10.1080/08839514.2024.2430867


