Apple CEO Tim Cook recently discussed the company’s interest in incorporating AI into its products during an earnings call. Although Cook did not reveal any specific product roadmap, he hinted at Apple’s focus on building unbiased AI systems.
Cook emphasized that AI is “huge” and stated that the company would “continue weaving it into our products on a very thoughtful basis.” Apple’s careful approach could explain its absence in the generative AI space. However, internal research shows Apple is examining related models.
A paper set to be published at the Interaction Design and Children conference in June presents a system to combat bias in machine learning dataset development. The study suggests having multiple users contribute equally to an AI system’s dataset, integrating human feedback at the early stages of model development. The result is a “hands-on, collaborative approach to introducing strategies for creating balanced datasets.”
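The paper’s exact mechanism isn’t detailed here, but the core idea of equal per-user contribution can be sketched as a simple downsampling step. This is an illustrative sketch only; `balance_by_contributor` is a hypothetical helper, not the paper’s implementation:

```python
from collections import defaultdict
import random

def balance_by_contributor(examples, seed=0):
    """Downsample so every contributor is equally represented.

    `examples` is a list of (contributor_id, item) pairs. Returns a
    list with the same number of items from each contributor, capped
    at the smallest contributor's count.
    """
    by_user = defaultdict(list)
    for user, item in examples:
        by_user[user].append(item)

    # Cap each contributor at the size of the smallest contribution.
    cap = min(len(items) for items in by_user.values())
    rng = random.Random(seed)

    balanced = []
    for user, items in by_user.items():
        balanced.extend((user, it) for it in rng.sample(items, cap))
    return balanced

# Example: one contributor dominates the raw data.
raw = (
    [("alice", f"a{i}") for i in range(10)]
    + [("bob", f"b{i}") for i in range(2)]
    + [("carol", f"c{i}") for i in range(5)]
)

balanced = balance_by_contributor(raw)
# Each contributor now supplies exactly 2 examples.
```

In practice a system like the one described would also collect human feedback on each example before balancing, but the sampling step above captures the “equal contribution” principle the study highlights.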
While scaling these techniques to large language models like ChatGPT and Google Bard may be challenging, the research offers an alternative approach to mitigating bias in AI development. Unbiased AI systems could significantly impact fintech, cryptocurrency trading, and blockchain, making high-level trading knowledge accessible to more people.
Creating unbiased large language models (LLMs) could also address government safety and ethical concerns surrounding generative AI. This is particularly important for Apple, as any generative AI products it develops or supports would benefit from the iPhone’s integrated AI chipset and 1.5 billion user footprint. As Apple delves further into AI, its commitment to unbiased systems will shape the future of AI across various sectors.