
AI chatbot backed by New York City gives users unlawful advice.

In October 2023, New York City Mayor Eric Adams announced a partnership with Microsoft to launch an AI-powered chatbot to help entrepreneurs understand government regulations. The project soon went astray, however, delivering illegal advice on sensitive issues such as housing and consumer rights. The bot suggested that business owners could disregard laws prohibiting discrimination based on source of income, lock out tenants, set rents without restriction, refuse cash payments, take a cut of employees’ tips, and skip notifying staff of schedule changes.

Rosalind Black, the Citywide Housing Director at Legal Services NYC, first highlighted these issues. According to Black, the bot wrongly told landlords they could refuse tenants with Section 8 vouchers and lock tenants out, neither of which is legal. The bot’s unlawful guidance did not stop there: it also asserted that restaurants could go cash-free, directly contravening a 2020 city law that requires businesses to accept cash so as not to discriminate against customers without bank accounts.

Additionally, the bot falsely told employers they could take a cut of employees’ tips and misstated the notice they must give staff about scheduling changes. Black was blunt: if the bot cannot provide accurate and responsible advice, it should be taken down. Andrew Rigie, the Executive Director of the NYC Hospitality Alliance, echoed Black’s concerns, warning that anyone who followed the bot’s advice risked significant legal consequences.

Leslie Brown of the NYC Office of Technology and Innovation responded that the bot was still under development. Deploying a “work-in-progress” tool to dispense advice with such high stakes, however, was widely seen as a risky move.

Elsewhere, in February, Air Canada lost a legal dispute over false information its AI chatbot gave about the airline’s bereavement fare policy; a tribunal ordered the company to honor the bot’s incorrect advice. Similarly, New York lawyer Steven A. Schwartz found himself in hot water after inadvertently citing fictitious legal cases generated by ChatGPT.

This string of incidents underscores the risks and legal liabilities of entrusting AI chatbots with legal advice, however convenient and harmless they may seem.
