What Does The AI Bill Of Rights Mean For Marketers?

All of us have a role to play to ensure that innovation is rooted in inclusion, integrity, and our common humanity, says the Blueprint for an AI Bill of Rights, released by the White House to guide automated systems so that they protect the rights of the American public.

Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public, reads the opening line of the Blueprint.

According to the Pew Research Center, a majority of Americans believe their online and offline activities are regularly tracked and monitored by companies and the government. It is such a common condition of modern life that roughly six in ten U.S. adults say they do not think it is possible to go through daily life without having data collected about them by companies or the government. Some 81% of the public say the potential risks they face from data collection by companies outweigh the benefits, and 66% say the same about government data collection.

The Blueprint acts as a guideline for enterprises and aims to safeguard against algorithms used in hiring and credit decisions that have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. It also points to unchecked social media data collection that has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity, often without their knowledge or consent.

The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. It positions the Blueprint as a handbook for anyone seeking to incorporate these protections into policy and practice, with detailed steps toward actualizing the principles in the technological design process.

According to a McKinsey study, consumers are most comfortable sharing data with providers in healthcare and financial services, though no industry reached a trust rating of 50 percent for data protection. Consumer data is clearly transforming business, and companies are responsible for managing the data they collect. The Blueprint arrives at a good moment: leading companies are learning that data protection and privacy can create a business advantage.

Here are the broad issues the Blueprint hopes to address, which marketers should be cognizant of:

Safe and Effective Systems

Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.
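
For marketers working with data science teams, this principle implies repeatable checks rather than one-off audits. Below is a minimal Python sketch of what an ongoing-monitoring gate might look like; the metric names and threshold values are illustrative assumptions, not anything the Blueprint prescribes.

```python
# Minimal sketch of an ongoing-monitoring check for a deployed system.
# The metric names and threshold values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MonitoringThresholds:
    max_error_rate: float = 0.05   # assumed acceptable live error rate
    max_drift_score: float = 0.20  # assumed limit on input-population drift

def check_system_health(error_rate: float, drift_score: float,
                        thresholds: MonitoringThresholds) -> list[str]:
    """Return a list of alerts; an empty list means the check passes."""
    alerts = []
    if error_rate > thresholds.max_error_rate:
        alerts.append(f"Error rate {error_rate:.1%} exceeds "
                      f"limit {thresholds.max_error_rate:.1%}")
    if drift_score > thresholds.max_drift_score:
        alerts.append(f"Input drift {drift_score:.2f} exceeds "
                      f"limit {thresholds.max_drift_score:.2f}")
    return alerts

# Example: a model whose live error rate has crept past its limit.
print(check_system_health(0.08, 0.10, MonitoringThresholds()))
```

The point is less these specific metrics than the habit: alerts like these are what feed the independent evaluation and public reporting the principle calls for.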

Algorithmic Discrimination Protections

Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. 

Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.
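
One common way teams operationalize disparity testing is the "four-fifths rule" heuristic from US employment-selection guidance, which flags any group whose selection rate falls below 80 percent of the highest group's rate. The sketch below applies it to hypothetical funnel numbers; the group labels, the counts, and the choice of this particular heuristic are assumptions for illustration.

```python
# Minimal sketch of disparity testing via the four-fifths (80%) rule.
# Group names and counts are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def disparity_flags(outcomes: dict[str, tuple[int, int]],
                    threshold: float = 0.8) -> dict[str, float]:
    """Return groups whose rate ratio vs. the best-off group is below threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()
            if rate / best < threshold}

# Hypothetical hiring-funnel outcomes by group.
funnel = {"group_a": (90, 300), "group_b": (45, 300)}
print(disparity_flags(funnel))  # {'group_b': 0.5} -> flag for mitigation review
```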

Data Privacy

Consumers or users should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek users’ permission and respect their decisions regarding collection, use, access, transfer, and deletion of their data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy-by-design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. This refers to dark patterns: deceptive UX choices injected with the aim of generating more sales and increasing lead conversion. The term was coined by UK-based designer Harry Brignull in 2010, as the eCommerce industry began to pick up pace.
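
In practice, "protection by default" and data minimization can be enforced in code: optional tracking starts off, and collection helpers drop anything not strictly necessary for the stated purpose. The sketch below is one possible shape for this; the purposes and field names are hypothetical.

```python
# Minimal sketch of privacy by default and data minimization.
# Purposes and field lists are illustrative assumptions.
from dataclasses import dataclass

# Fields assumed strictly necessary per context; everything else is dropped.
NECESSARY_FIELDS = {
    "order_fulfillment": {"name", "shipping_address", "email"},
    "newsletter": {"email"},
}

@dataclass
class PrivacySettings:
    # All optional tracking defaults to off; the user must opt in.
    analytics: bool = False
    ad_personalization: bool = False
    cross_site_tracking: bool = False

def collect(purpose: str, submitted: dict) -> dict:
    """Retain only the fields strictly necessary for this purpose."""
    allowed = NECESSARY_FIELDS.get(purpose, set())
    return {k: v for k, v in submitted.items() if k in allowed}

form_data = {"email": "a@example.com", "birthday": "1990-01-01"}
print(collect("newsletter", form_data))  # {'email': 'a@example.com'}
```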

Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give users agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and restrictions should apply to data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and to data pertaining to youth.
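
A consent request that is brief, understandable, and specific to its context implies storing consent per purpose rather than as one blanket grant. Here is a minimal sketch of such a record; the field names and request wording are illustrative assumptions.

```python
# Minimal sketch of purpose-scoped consent records.
# Field names and request wording are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    user_id: str
    purpose: str      # one specific use, stated in plain language
    granted: bool
    timestamp: datetime

def record_consent(user_id: str, purpose: str, granted: bool) -> ConsentRecord:
    return ConsentRecord(user_id, purpose, granted, datetime.now(timezone.utc))

# Each use gets its own brief, specific request; nothing is bundled.
newsletter = record_consent("u123", "Send you our monthly newsletter", True)
ad_targeting = record_consent("u123", "Use your purchases to personalize ads", False)
print(newsletter.granted, ad_targeting.granted)  # True False
```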

Notice and Explanation

Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation, including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up to date, and people impacted by the system should be notified of significant use case or key functionality changes.
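
Teams sometimes capture this documentation as structured data so the notice stays consistent wherever it is surfaced. The sketch below is one hypothetical shape for such a record; the fields and example wording are assumptions, not a format the Blueprint specifies.

```python
# Minimal sketch of a plain-language system notice record.
# Fields and example text are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SystemNotice:
    system_name: str
    what_it_does: str          # overall functioning, in plain language
    role_of_automation: str    # what the automation decides or suggests
    responsible_party: str     # who is accountable for the system
    last_updated: str

    def render(self) -> str:
        return (f"{self.system_name}: {self.what_it_does} "
                f"{self.role_of_automation} "
                f"Questions? Contact {self.responsible_party}. "
                f"(Last updated {self.last_updated}.)")

notice = SystemNotice(
    system_name="Offer Recommender",
    what_it_does="This page suggests products you might like.",
    role_of_automation="Suggestions are generated automatically from your past purchases.",
    responsible_party="privacy@example.com",
    last_updated="2022-11-01",
)
print(notice.render())
```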

Human Alternatives, Consideration, and Fallback

Consumers should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law. Consumers should have access to timely human consideration and remedy through a fallback and escalation process if an automated system fails, produces an error, or if they would like to appeal or contest its impacts.
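
In code, the fallback and escalation path often reduces to a routing rule: send a decision to a person whenever the user opts out or contests the outcome, or when the system is unsure. The confidence threshold and function shape below are illustrative assumptions.

```python
# Minimal sketch of a human-fallback routing rule.
# The confidence floor is an assumed, illustrative threshold.
CONFIDENCE_FLOOR = 0.75  # below this, automation defers to a person

def route_decision(confidence: float, user_opted_out: bool,
                   user_contests: bool) -> str:
    if user_opted_out or user_contests:
        return "human_review"   # timely human consideration on request
    if confidence < CONFIDENCE_FLOOR:
        return "human_review"   # fallback for uncertain outputs
    return "automated"

print(route_decision(0.92, False, False))  # automated
print(route_decision(0.92, False, True))   # human_review (appeal path)
print(route_decision(0.60, False, False))  # human_review (low confidence)
```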

Conclusion

According to a recent Deloitte study, consumers are keenly aware of the risks surrounding data security and privacy, and many consumer product executives are likely overestimating the extent to which they are meeting consumer expectations related to data privacy and security. It is time for a privacy-forward stance that prioritizes consumer trust over merely meeting regulations.