I have been an AI developer, integrator, and tutor for over 15 years, including several years spent implementing AI with various federal agencies and working with commercial providers of AI services. Until entering the gaming space, particularly the brick-and-mortar segment, I had never experienced the magnitude of vendor AI misrepresentation that is present and growing there. This is not true of all gaming vendors, and it seems concentrated in the analytical services sector. That is troublesome: at a minimum it is unethical, and worse, it may bring harm to unwary resort casino operators and their guests.
Given that AI is an ambiguous term with many possible definitions, the reader may want a quick "refresher" by referring to my previous article.
What are AI misrepresentations?
AI is misrepresented in several ways, and the person or company making the misrepresentation can do so intentionally or unintentionally.
One AI misrepresentation is the deliberate mislabeling and marketing of basic math and statistics as AI. For example, when I asked one executive why basic math was being pawned off as AI, the executive replied, "nobody knows this stuff, and it is great for marketing."
Another occurs when vendors with little to no experience in developing and evaluating AI algorithms deploy these capabilities without understanding, or articulating to a potential customer, their capabilities or risks. When I explained to an executive that a particular system was not, in his words, "continuously learning, and is thinking," I received the same basic response: customers are AI-illiterate, and it is a marketing strategy.
Steve Hunt, Chief Expert at SAP, cites another marketing phenomenon called "AI washing," where existing functionality is presented as a new AI algorithm or service without any change to the product.
Why is this occurring and getting worse?
Market share
The burden of securing AI should fall on the companies (e.g., OpenAI, Google, Meta, et al.) that develop and make AI frameworks publicly available. These companies should be building in privacy and security by design, but those concerns were, and continue to be, cast aside to deploy AI quickly and gain market share. This transfers the security and privacy burden to users and organizations; in particular, to the gaming vendors supplying and deploying these systems without understanding the security and privacy risks.
The Federal Trade Commission (FTC) has warned AI companies that they must understand and reasonably foresee risks, and fix them, before making AI generally available to the public. If something goes wrong, or personally identifiable information (PII) or company financials leak, the blame cannot be transferred to a third-party developer of the technology.
Gaming Media
Gaming magazines and podcasts perpetuate the problem by not fact-checking articles before publishing; these media outlets have become the marketing arm for AI charlatans. The gaming media needs to "up their game" and not become the National Enquirer of the gaming industry. Operators (and vendors) interested in truly learning about AI technology should instead attend an AI conference, where experienced and professional AI experts give talks and demonstrations and educate attendees.
Lack of AI-trained staff
Most resort casino operators lack AI-knowledgeable staff with the background to perform AI due diligence and ask vendors the probing questions that would help identify risks associated with AI misrepresentations.
What are the impacts?
One potential impact of AI misrepresentation is that resort casino operators may pay a premium price for analytical services that either do not exist or introduce unnecessary business risks. Though unfortunate, this impact pales in comparison to the risks associated with security and privacy.
If there is a business sector that understands risk management, it is banks and financial institutions. A recent Forbes article noted that Citigroup, Bank of America, Deutsche Bank, Goldman Sachs, JPMorgan Chase, Wells Fargo, and Amazon have restricted employees' use of large language models (LLMs) such as ChatGPT. Their primary concern is the sharing of sensitive information, including company and customer financial and proprietary information. These financial institutions, with the resources to hire top AI talent, have sounded the alarm. All the while, some resort casino operators are deploying, or considering deploying, this technology, unaware of the security and privacy risks.
What can operators do?
An operator is unlikely to prevent vendors from misrepresenting basic math as AI algorithms, or to stop the gaming media from including unverified claims in their AI-based stories. Here are a few actions operators can take to reduce their risk:
1️⃣ Provide the opportunity for select staff to receive AI training, whether through formal academic programs, online classes, or AI conferences. An alternative is to hire qualified individuals with a solid background in developing and deploying AI solutions, or to retain a qualified independent consultant. These individuals would be tasked with performing due diligence on vendors purporting to supply AI-based solutions, in addition to developing and deploying in-house AI solutions.
2️⃣ If a vendor has integrated large language models (e.g., ChatGPT) into their analytics offering, demand transparency into the full LLM supply chain. Who trained the model, on what data, and when? How and by whom was the training conducted? What has been outsourced? And pay close attention to the terms surrounding the data that will be loaded into the model.
3️⃣ Ask the vendor to describe the math upon which the AI model is built, and to demonstrate the accuracy of the models using your property's operational data. Request that the vendor make available their model's verification and accuracy data. If the vendor claims it is company proprietary, or a "black box" that cannot be explained, be wary, be very wary.
4️⃣ Until media companies hire staff to fact-check their articles, avoid relying on AI articles in gaming magazines or podcasts. As noted above, attend one or more of the numerous AI conferences held worldwide throughout the year.
Conclusion
At the end of the day, resort casino operators will potentially base financial and personnel decisions on information generated by AI algorithms. Operators have a responsibility to understand AI technology so that they can have confidence in relying upon its output. As the phrase "Caveat Emptor" reminds us, the buyer alone is responsible for checking the quality of goods before a purchase is made.
That's it! We hope that you learned something. If you have any questions, please feel free to reach out to me at stuart.kerr@vizexplorer.com.
Stuart Kerr – Senior VP of Development & AI