Generative AI is revolutionising the way we live and work. Across industries, leaders and users alike are experimenting with these technologies and tapping into their power.
Nearly 7 in 10 workers say generative AI will help them better serve customers.
But like any new technology, generative AI is not without risks. Unlike consumers using AI assistants such as Apple’s Siri and Amazon Alexa, enterprise customers require higher levels of trust and security, especially in regulated industries.
When working with the world’s leading businesses, it’s critical to explore this technology intentionally and responsibly, keeping ethics top of mind and addressing customers’ concerns around data ethics, privacy, and control of their data.
It’s encouraging to see governments begin to take definitive action to ensure trustworthy AI. Businesses are eager for guardrails and guidance, and are looking to the government to create policies and standards that will help ensure trustworthy and transparent AI.
Helping users understand when AI is being used and what it is recommending, especially for high-risk or consequential decisions, is critical: end users need access to information about how AI-driven decisions are made.
Creating risk-based frameworks, pushing for commitments to ethical AI design and development, and convening multi-stakeholder groups are just a few key areas where policymakers must help lead the way.
It’s not just about asking more of AI. We need to ask more of each other — our governments, businesses, and civil society — to harness the power of AI in safe, responsible ways.
We don’t have all the answers, but we understand that leading with trust and transparency is the best path forward.
There are numerous ways that business and government can deepen trust in AI, providing us with the technical know-how and muscle memory to handle new risks as they emerge.
Protect people’s privacy
The AI revolution is a data revolution, and we need comprehensive privacy legislation to protect people’s data.
At Salesforce, we believe companies should not use any datasets that fail to respect privacy and consent.
By separating customer data from the Large Language Model (LLM), organisations can be confident that their data is protected from third-party access without customer and user consent. When the LLM does access that data, it should be kept safe through methods such as secure data retrieval, dynamic grounding, data masking, toxicity detection, and zero retention.
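To make one of these methods concrete, the data-masking step can be sketched as replacing sensitive values with placeholder tokens before a prompt leaves the organisation’s boundary, then restoring them in the model’s response. This is a minimal, hypothetical illustration (the pattern names, token format, and functions are assumptions for this sketch, not any vendor’s actual implementation):

```python
import re

# Hypothetical PII patterns; a production system would use a far more
# robust detector (named-entity recognition, vetted regexes, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with numbered placeholder tokens.

    Returns the masked text plus a mapping from token to original value,
    which stays inside the organisation's trust boundary.
    """
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Restore original values in the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

masked, mapping = mask("Email jane@example.com or call +44 20 7946 0958.")
# The LLM only ever sees the masked prompt; the mapping never leaves
# the organisation, and the response is unmasked locally.
```

The key design point is that the token-to-value mapping is held only by the deploying organisation, so the model provider never receives the raw values, consistent with the separation described above.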
When collecting data to train and evaluate models, it’s important to respect data provenance and ensure that companies have consent to use that data.
For governments, protecting their citizens while encouraging inclusive innovation means creating and giving access to privacy-preserving datasets that are specific to their countries and cultures.
Policy should address AI systems, not just models
A lot of attention is being paid to models, but to address high-risk use cases we must take a holistic view of data, models, and apps. Every entity in the AI value chain must play a role in responsible AI development and use.
A one-size-fits-all approach to regulation may hinder innovation, disrupt healthy competition, and delay the adoption of the technology that consumers and businesses around the world are already using to boost productivity.
Regulation should differentiate the context, control, and uses of the technology and assign guardrails accordingly. Generative AI developers, for instance, should be accountable for how the models are trained and the data they are trained on. At the same time, those deploying the technology and deciding how the tool is being used should establish rules governing that interaction.
When it comes to model sizes, bigger is not always better. Smaller models offer high quality responses and can be better for the planet. Governments should incentivise carbon footprint transparency and help scientists advance carbon efficiency for AI.
Appropriate guardrails will unlock innovation
Trust in AI is as important as functionality. Enterprises increasingly require on-demand availability, solid uptime, and reliable security. When companies offer a service, customers expect it to be available whenever they need it; that reliability is what powers trust.
Organisations need AI tools that are available, fault-tolerant, secure, and sustainable – this is ultimately how they build trust both within their organisation and with their customers.
The post How Business and Government Can Put Trust at the Centre of Our AI future appeared first on Tech | Business | Economy.
By 2amw
