Microsoft recently laid out its vision for the future of AI governance in a new report titled “Governing AI: A Blueprint for the Future.” In the foreword, Brad Smith, Microsoft’s Vice Chair and President, outlines the company’s approach to ethical AI and calls for robust and effective policies for AI regulation.
Five Key Areas for Government Policies
In the first part of the report, Smith enumerates five areas where governments should consider formulating policies, laws, and regulations around AI. He emphasizes that the development and usage of AI technologies like OpenAI’s GPT-4 foundation model have raised pivotal questions about the control and application of these powerful technologies.
Notably, these AI models have advanced to the point where they can make decisions once reserved for humans, raising profound questions about how to harness this technology to solve real problems while keeping its immense power in check.
Microsoft’s Commitment to Ethical AI
The report’s second part focuses on Microsoft’s commitment to ethical AI. The company has been operationalizing and building a culture of responsible AI since 2016, when Microsoft CEO Satya Nadella highlighted the importance of focusing on the values instilled in the people and institutions creating this technology.
Since then, Microsoft has defined, published, and implemented ethical principles to guide its work, and has continually refined the engineering and governance systems that put these principles into practice. The company now has nearly 350 people dedicated to responsible AI, implementing best practices for building safe, secure, and transparent AI systems designed to benefit society.
The Potential of AI to Improve Lives
Microsoft’s dedication to ethical AI governance has paved the way for new opportunities to harness AI technology for the betterment of society. Smith points out how AI has helped save individuals’ eyesight, furthered research for cancer cures, generated new insights about proteins, and provided early warnings to protect people from hazardous weather.
AI is also proving its worth in combating cyber threats, safeguarding human rights, and enhancing productivity through the power of foundation models like GPT-4. Indeed, AI has the potential to rival transformative inventions such as the steam engine, electricity, and the internet, with a unique ability to advance human learning and thought.
AI Needs Guardrails
Despite AI’s promising prospects, Smith warns that guardrails are needed to prevent potential misuse. Drawing a parallel with social media, which was initially hailed for connecting people but was later weaponized against democracy, Smith emphasizes the necessity of foresight in anticipating such problems.
The company stresses that the responsibility for ensuring proper control of AI should not fall solely on technology companies but should be shared broadly. Smith reiterates Microsoft’s commitment to safe and responsible AI development and deployment, underpinned by the foundational principle of accountability.
Ensuring Accountability and Human Control
To guarantee accountability, Microsoft calls for effective human oversight of AI. Smith states that the people who design and operate these AI systems must remain accountable to everyone else. Keeping AI under human control, he argues, should be a priority for technology companies and governments alike.
Smith’s message is clear: just as no individual, government, or company is above the law, neither should any product or technology be. He advocates for a world where AI technology is regulated by a framework that ensures accountability, human control, and ultimately benefits all of humanity.
By forging a path towards ethical AI and advocating for effective regulation, Microsoft demonstrates its vision for a future where AI is not only powerful and beneficial, but also governed responsibly and ethically.