The first section of the executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence includes the evocative sentence “AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built.” This single sentence nicely frames the solution space for responsible AI. By considering each of these areas of concern in turn, we get a clear way forward for the development of AI technology.
For the People Who Build It
- Entrepreneurs and technologists will need to design solutions that make AI more secure, accurate, unbiased, and explainable, and that interact thoughtfully with human users. These solutions will require ingenuity from engineers and entrepreneurs, as well as resources, in large part from the investors who back them.
- The quest for these solutions presents a significant opportunity for those building in AI. The EO can be read as a map of gaps in the market that startups can fill. For example, it articulates a need for testing and evaluation of AI, which could spawn an entirely new market for products that meet that need. It also underscores the need for data provenance, an exciting area where our portfolio company Fluree is building.
For the People Who Use It
- Users need to appreciate what AI is and what it is not. AI can augment human intelligence, but it should not replace it. It is important to keep top of mind that users are interacting with technology, not an oracle. Humans should remain critical of responses generated by AI and interrogate the confidence behind them.
- Effective AI should be easy to use right and very difficult to use wrong. “Foolproofing” the technology will make it more trustworthy and therefore more valuable to customers; a minimal sketch of what that can look like follows below.
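As a concrete illustration of foolproofing, the sketch below wraps a classifier so that it surfaces its confidence and abstains rather than answering when that confidence falls below a floor. The model, the threshold, and the routing note are illustrative assumptions, not a prescribed design.

```python
# Hypothetical guardrail: surface model confidence and abstain when it is low,
# so the human user is prompted to apply judgment instead of trusting an "oracle".
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

CONFIDENCE_FLOOR = 0.80  # illustrative threshold, not a recommendation


def answer_or_abstain(sample):
    """Return a prediction plus its confidence, or abstain if confidence is low."""
    probs = model.predict_proba([sample])[0]
    confidence = float(probs.max())
    if confidence < CONFIDENCE_FLOOR:
        return {"answer": None, "confidence": confidence,
                "note": "low confidence; route to a human"}
    return {"answer": int(probs.argmax()), "confidence": confidence}


print(answer_or_abstain(X_test[0]))
```

The point of the design is that the default path keeps a human in the loop whenever the technology is least certain.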
For the Data It Is Built On
- We want AI to produce results that are accurate and free of bias. Algorithms do not have discriminatory intent, but they can produce discriminatory outcomes if they are trained on biased data. Minimizing bias in AI is therefore a data practice issue: entrepreneurs building in this space need to ensure that input data is bias-free, or that its inherent bias is understood and accounted for (the first sketch after this list illustrates one simple check). Additionally, data sources must be understood, trusted, and accessed with permission.
- We also need assurance that the data used to train our models is not tampered with or manipulated at any point in the AI life cycle; the second sketch below shows one minimal way to check for this.
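To make the data-practice point concrete, here is a minimal sketch of one simple pre-training check: comparing positive-outcome rates across groups in a training set. The column names, data, and tolerance are hypothetical; a real audit would use domain-appropriate fairness metrics.

```python
# Illustrative bias audit: compare positive-outcome rates across groups in
# training data. Column names and values are hypothetical.
import pandas as pd

train = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "A"],
    "outcome": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Positive-outcome rate for each group.
rates = train.groupby("group")["outcome"].mean()
print(rates)

# Demographic-parity gap: how far apart the groups' positive rates are.
gap = rates.max() - rates.min()
print(f"outcome-rate gap across groups: {gap:.2f}")
if gap > 0.2:  # illustrative tolerance, not a standard
    print("Large gap: investigate the data before training on it.")
```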
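And to illustrate one minimal form of tamper assurance, assuming the training data lives in files, the sketch below records a cryptographic fingerprint of each file when the dataset is approved and re-checks those fingerprints before every training run. The manifest format and paths are assumptions for illustration only.

```python
# Sketch of tamper detection for training data: record SHA-256 fingerprints at
# approval time, then verify them before each training run. Paths are illustrative.
import hashlib
import json
from pathlib import Path


def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def write_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record a fingerprint for every file in the dataset."""
    manifest = {str(p): fingerprint(p)
                for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))


def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the files whose contents no longer match the recorded fingerprints."""
    manifest = json.loads(manifest_path.read_text())
    return [path for path, digest in manifest.items()
            if not Path(path).exists() or fingerprint(Path(path)) != digest]


# Usage (hypothetical paths):
# write_manifest(Path("training_data"), Path("data_manifest.json"))  # at approval time
# tampered = verify_manifest(Path("data_manifest.json"))             # before each training run
```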
As the EO gains acceptance among purchasers, we expect a vibrant market to evolve around technical solutions that address these concerns. SineWave is uniquely positioned to evaluate how alignment with the key principles articulated in the EO makes AI more valuable, both as an investment and as a product for customers, and to identify the technical innovations necessary to make those principles a reality.