
Tech firm walks back ‘AI employee’ records following backlash.

Lattice, an HR software company founded by Jack Altman, recently made an ambitious move: creating official employee records for its ‘digital workers.’ After significant backlash, however, the company backtracked on the decision just three days later.

CEO Sarah Franklin initially announced on LinkedIn that Lattice was pioneering the responsible employment of AI digital workers. She emphasized that creating digital employee records would ensure transparency and accountability for these AI entities. Franklin explained that the digital workers would undergo proper onboarding, be assigned standard goals and performance metrics, be granted the necessary system access, and even report to a responsible manager.

However, this move to anthropomorphize AI drew sharp criticism. Many took particular issue with the implication that human workers are optimizable resources comparable to machines. Some even raised questions about potential unionization efforts and whether these ‘digital employees’ would have the right to vote. AI industry experts were among the most vocal critics of the unprecedented move in the online community.

Facing this pushback, Lattice concluded that society might not be prepared for the concept of ‘digital employees,’ and the company announced the cancellation of the project a mere three days later.

This situation underscores a broader debate about anthropomorphizing AI models and robots, which can appear to display surprising emotional intelligence and even hints of self-awareness. Google, for instance, dismissed an engineer in 2022 who claimed that its AI model was sentient, while other models, such as Claude 3 Opus, have appeared to show self-awareness in tests. Despite this, the question of whether AI tools should be considered ‘employees’ with worker rights remains unanswered.

A recent study offers an interesting insight into this question. It found that two-thirds of the 300 US citizens surveyed believed that AI tools like ChatGPT could be sentient and have subjective experiences, and that people who interacted with these tools more frequently were more likely to attribute consciousness to them. This suggests that as AI ‘colleagues’ become more integrated into working life, attitudes toward workers’ rights for AI could shift in the future.
