
AI Ethics, Risks and Safety Conference: 8 Key Takeaways


The increasing adoption of AI across all types of organisations is set to bring profound changes to sectors and markets globally and to society.

Last week, business professionals and innovative thinkers from across the nation came together in Bristol for the first AI Ethics, Risks, and Safety Conference.

This first-of-its-kind event in the South West saw experts share best practices and key insights on upcoming Artificial Intelligence (AI) regulations and standards, highlighting case studies and resources for businesses currently grappling with the challenges of implementing these technologies.

While the rewards for those who succeed are great, the ethics and risks associated with deployment remain considerable barriers to overcome.

It was a very successful event, offering valuable insights for all. For those of you unable to make it on the day, we’ve collated the key takeaways in this article.

AI regulation: Where are we now?

  1. Despite the AI regulatory framework being introduced last year via the UK government’s AI Regulation White Paper, AI regulation is not new. Think GDPR, Medical Device Regulation, competition laws, and employment laws.
  2. The UK has stated that it is not planning to legislate on AI use at present; the current focus is on principles and standards. That said, experts believe this is likely to change. The UK’s lighter-touch regulation, compared with the EU for example, might look like an initial benefit, but it isn’t: with fewer rules to follow, it can be harder for organisations to set clear boundaries at the start of implementation, and choices made now may later conflict with new or updated regulations and legislation, which will undoubtedly arrive as frameworks evolve. Companies need to act now to educate themselves and ensure their use of AI is ethical and safe.

Tools for trustworthy AI: Implementing the UK’s proposed regulatory principles

  1. AI systems should function in a robust, secure, and safe way throughout the AI lifecycle, with risks continually identified, assessed, and managed. Systems should be appropriately transparent and should not undermine the legal rights of individuals or organisations.
  2. The UK Government’s Department for Science, Innovation, and Technology is establishing central activities, initially within the government, to drive coherence and identify emerging issues at a system level. These activities include providing testbeds and sandbox initiatives, raising awareness through education, and promoting interoperability with international frameworks.

AI regulation or AI innovation – is that really the choice?

  1. Innovation is always regulated, even when there are no tech-specific rules, as technology doesn’t emerge into a regulatory vacuum. Innovation is considered a good thing, but it always carries both risks and benefits, and these are not welcomed by all. The question to consider is whether AI requires overarching regulation or bespoke rules set for specific AI use cases.

Implementing Technology Ethics from theory to practice

  1. A Google search for ‘AI ethics’ returns just under 7 million results – it’s a common question, yet with so much information out there, some of it conflicting, it can be hard to know what the right answer is. It’s important to remember that although much of the conversation around AI is theoretical, the implications of AI are not, so ethics needs to be taken seriously.
  2. Business owners need to make space for ethics in their company; it can’t just be something on the back burner without an owner. People need to be accountable for ensuring ethical practices are carried out. Give ethics a home!

Creation of UK AI standards can support ethical and responsible deployment while offering governance assurance

  1. The UK AI Standards Hub has been created to help advance responsible AI innovation and deployment while unlocking the potential of standards as a governance tool. International standards are used across many areas of life as governance tools and innovation enablers. Standards for AI can help break down the challenges of data bias and concerns around data quality that are holding many organisations back from realising AI’s potential.

The Conference also saw the launch of the Ethical Technology Network, a pioneering initiative providing businesses with the advice, training, and resources needed to implement a successful AI Ethics and Governance strategy.
