
US Penalizes Chinese Firm Misusing AI in Recruitment

In a stark reminder against the unlawful use of AI in business operations, a significant settlement has been reached in the US, the first to involve AI-driven recruitment tools. The Equal Employment Opportunity Commission (EEOC) resolved a dispute with a Chinese online education platform, underscoring the growing importance of ethical AI practices in hiring.

The focal point of the settlement is iTutorGroup, an entity that came under scrutiny in 2020 for allegedly using AI tools to discriminate during recruitment. The platform, which hires online educators across a range of subjects, was accused of using its AI-powered processes to screen out older job applicants.

The EEOC, in its 2022 lawsuit, stated, “Three interconnected enterprises offering English-language tutoring services under the ‘iTutorGroup’ brand in China violated federal law by programming their online recruitment software to automatically dismiss older candidates based on their age.”

Having launched an initiative in 2021 to ensure that artificial intelligence software used by US employers adheres to anti-discrimination legislation, the EEOC underscored its commitment to scrutinizing and addressing AI misuse. According to a report by the Economic Times, the EEOC made clear that it would focus its enforcement efforts on companies found to be misusing AI capabilities.

The culmination of this effort resulted in a settlement agreement, with iTutorGroup agreeing to pay $365,000 to over 200 ‘senior’ job applicants whose applications were purportedly disregarded due to their age. The settlement, documented in a joint submission to the New York federal court and reported by Reuters, encompasses remedies such as back pay and liquidated damages.

Central to the allegations against iTutorGroup was its AI software’s systematic exclusion of female candidates aged above 55 and male candidates above 60, contravening the provisions of the Age Discrimination in Employment Act (ADEA). This case exemplifies the significance of fair and just application of AI in HR processes.

Interestingly, a parallel lawsuit has targeted another company, Workday, accusing it of developing AI-powered software that helps employers screen out applicants based on characteristics such as race, age, and disability. The suit was brought by Derek Mobley, a Black man over 40 who lives with anxiety and depression, who alleged that Workday’s software worked against him as he applied for positions at organizations using its recruitment screening tool.

The case highlights the imperative for automated AI systems that assist HR departments to be fair and accountable. Notable players like Accenture and Lloyds Banking Group have already incorporated techniques such as virtual reality games into their hiring processes. With the rise of AI in recruitment, a report by Aptitude Research found that 55% of companies are increasing their investment in recruitment automation, underscoring the need for a thoughtful, ethical, and legal approach to AI in the employment sphere.

Nvidia Unveils GH200 Grace Hopper: Next-Gen Superchip for Complex AI Workloads

In a recent press release, Nvidia, the world’s foremost supplier of chips for artificial intelligence (AI) applications, introduced its next generation of superchips, designed to tackle the most intricate generative AI workloads. The new platform, named GH200 Grace Hopper, is the first chip to feature HBM3e memory.

Combining Power: The Birth of GH200 Grace Hopper

Nvidia’s GH200 Grace Hopper superchip merges two distinct platforms: the Hopper platform, which houses the graphics processing unit (GPU), and the Grace platform, which houses the CPU. Both are named in honor of computer programming pioneer Grace Hopper, and their combination into a single superchip pays homage to her legacy.

From Graphics to AI: The Evolution of GPUs

Historically, GPUs have been synonymous with high-end graphic processing in computers and gaming consoles. However, their immense computational capabilities have found new applications in fields like cryptocurrency mining and AI model training.

Powering AI through Collaborative Computing

Notably, Microsoft’s Azure and OpenAI have harnessed Nvidia’s chips to build substantial computing systems. By employing Nvidia’s A100 chips and creating infrastructures to distribute the load of large datasets, Microsoft facilitated the training of GPT models, exemplified by the popular ChatGPT.

Nvidia’s Pursuit of AI Dominance

Nvidia, the driving force behind chip production, now seeks to independently construct large-scale data processing systems. The introduction of the Nvidia MGX platform empowers businesses to internally train and deploy AI models, underscoring Nvidia’s commitment to AI advancement.

The GH200 Grace Hopper: A Leap Forward in Superchip Technology

Nvidia’s achievement in crafting the GH200 superchip can be attributed to its proprietary NVLink technology, which facilitates chip-to-chip (C2C) interconnections. This innovation grants the GPU unfettered access to the CPU’s memory, resulting in a robust configuration that offers a substantial 1.2 TB of high-speed memory.

Unveiling HBM3e Memory

The GH200 Grace Hopper is distinguished by the world’s first HBM3e memory, which is 50% faster than its predecessor, HBM3. A single server configuration with 144 Arm Neoverse cores can deliver eight petaflops of AI performance. With a combined bandwidth of 10 TB/sec, the GH200 platform can process AI models 3.5 times larger, and run them 3 times faster, than previous Nvidia platforms.

Nvidia’s Unrivaled Market Position

Having briefly entered the $1 trillion valuation echelon earlier in the year, Nvidia commands over 90% of the market share in chip supply for AI and related applications. The demand for GPUs extends beyond training AI models to their operational execution, and this demand is poised to escalate as AI integration becomes commonplace. Evidently, not only chip manufacturers such as AMD, but also tech giants like Google and Amazon, are actively developing their offerings in this burgeoning sector.

Charting a Technological Course: GH200’s Arrival

The unveiling of the GH200 Grace Hopper superchip solidifies Nvidia’s status as a premier technology provider. Expected to be available in Q2 2024, these chips promise to reshape the landscape of AI processing and further cement Nvidia’s dominance in the industry.

Spotify Expands its AI-powered DJ Feature Globally

After successfully debuting its AI-powered DJ feature in North America six months ago, Spotify is now rolling out this innovative tool to numerous international markets.

Accessible via the “music” feed section within the Spotify mobile app, the DJ function personalizes users’ listening experiences by curating a selection of music. This selection is accompanied by spoken-word commentary, brought to life by a synthetic voice. The commentary encompasses playful conversations and contextual insights, referencing specific songs and artists that the user has previously enjoyed.

In essence, it’s akin to having a personalized radio DJ who customizes their show for each individual listener.

Spotify initially introduced DJ to audiences in the United States and Canada in February. Subsequently, the company expanded its availability to the United Kingdom and Ireland three months later. Although DJ will continue to be in beta testing, it is now accessible to premium subscribers across approximately 50 markets worldwide. These markets include countries such as Sweden, Australia, New Zealand, Ghana, Nigeria, Pakistan, Singapore, and South Africa.

However, it’s important to note that a large portion of the European Union will not yet have access to this feature. Furthermore, in the newly added markets, DJ will only be offered in the English language.

Great Wall Motor and Baidu Team Up for AI-Integrated Cars

Great Wall Motor (GWM), the Chinese automaker, is introducing Baidu’s AI system, similar to ChatGPT, into its mass-market cars to enable seamless conversation between drivers and vehicles, according to a report by the South China Morning Post (SCMP). This collaboration between GWM and Baidu aims to make cars more intelligent and user-friendly.

Baidu’s AI model, known as Ernie Bot, is positioned as a Chinese competitor to OpenAI’s ChatGPT. GWM stated that they have been testing innovative features in their mass-produced vehicles, and these features will gradually be incorporated into commercial use on a wider scale.

Baidu has been heavily investing in AI, with a particular focus on the development of its language model, Ernie. The company announced a substantial investment of $140 million (1 billion yuan) to support Chinese startups working on generative AI.

In their pursuit of enhancing the in-car experience, Baidu recently revealed that their Ernie 3.5 beta has shown significant progress, outperforming both ChatGPT (3.5) and GPT-4 in various Chinese language skills.

GWM and Baidu have been working together using the latest iteration of the Ernie model to research applications of this advanced language model in intelligent in-car interactions. They have already identified many novel features that can be implemented in their upcoming vehicle models.

During the Shanghai Auto Show in April 2023, Baidu Apollo, the autonomous driving solutions platform of the Chinese search giant, showcased various intelligent driving technologies based on the ERNIE model. These applications included journey planning, in-car entertainment, knowledge Q&A, and AI sketching.

The demand for intelligent solutions in the automotive industry is growing rapidly, as consumers and manufacturers alike seek more intuitive interfaces, expanded functionalities, and smoother driving experiences. Other Chinese manufacturers, such as Lynk and Smart, have also expressed their intentions to incorporate Ernie Bot technology into their vehicles.

However, GWM has not disclosed which specific car models will first include the built-in Ernie Bot technology or provided a timeline for its release. Additionally, Baidu is actively exploring opportunities to integrate Ernie Bot into other businesses, including its cloud services, to compete with Western rivals like OpenAI, Google, Microsoft, and Apple in the AI market.

Meta to Introduce AI-Powered Chatbots ‘Personas’ in September

The Financial Times (FT) reported on 2 August 2023 that the tech giant Meta is gearing up to launch a series of AI-backed chatbots. These chatbots, set to debut next month, aim to boost engagement across Meta’s user base of nearly four billion by offering human-like conversations.

The company has been developing persona-based chatbot prototypes, each exhibiting distinct personalities, including an AI impersonating Abraham Lincoln and another providing travel advice with a laid-back surfer style. The primary purpose of these chatbots will be to offer a novel search function, provide recommendations, and entertain users with interactive experiences.

With several tech giants like Microsoft, Google, and Elon Musk’s ventures entering the AI space, Meta’s move into the AI industry is driven by the goal of attracting and retaining users amidst fierce competition from other social media platforms like TikTok and Twitter.

While the AI-backed chatbots present exciting possibilities, experts have raised concerns regarding user data privacy, manipulation, and potential data exploitation. Ravit Dotan, an AI ethics adviser and researcher, pointed out that interacting with chatbots exposes more user data to companies, leading to concerns about privacy and potential manipulation of users’ preferences.

Meta’s CEO, Mark Zuckerberg, envisions these AI agents serving as assistants, coaches, and facilitators for user interactions with businesses and creators. The company also plans to develop AI-powered productivity assistants for internal use and an avatar chatbot in the metaverse in the future.

To address ethical concerns, Meta intends to employ technology to screen users’ questions and run automated checks on chatbot outputs, ensuring appropriate speech and preventing hate speech or rule-breaking vocabulary.
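Meta has not published how these checks work. Purely as an illustration, an output check of this kind can be sketched as a screening pass over the chatbot's reply before it reaches the user; the function name and blocklist below are hypothetical, and real systems would use trained classifiers rather than keyword lists:

```python
# Illustrative sketch only, not Meta's actual moderation pipeline.
# BLOCKED_TERMS and screen_reply are hypothetical names for this example.

BLOCKED_TERMS = {"slur_example", "rule_breaking_term"}  # placeholder vocabulary

def screen_reply(reply: str) -> str:
    """Return the reply unchanged if it passes, otherwise a refusal message."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that."
    return reply
```

In practice the same gate would run twice, once on the user's question and once on the model's answer, which matches the two-sided screening the report describes.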

Meta is expected to provide further details about its AI product roadmap during the Connect developer event in September. As the tech industry rapidly advances in the AI race, Meta’s AI-powered chatbots will undoubtedly play a significant role in shaping the future of user interactions and content delivery.

Graft: Empowering Companies with a User-Friendly AI Platform

In a bid to democratize AI development and make it accessible to companies of all sizes, Graft launched its AI development platform in beta last year. Today, the startup celebrates a significant milestone with a $10 million seed investment and an exciting step towards opening the platform to a larger audience of companies.

The brainchild of Graft’s co-founder and CEO, Adam Oliner, the idea for the company emerged while he was overseeing AI at Slack. Oliner recognizes the tremendous potential AI, particularly ChatGPT, holds for businesses. However, he emphasizes the distinction between experimenting with ChatGPT and creating a robust, production-ready AI application.

“The shiny AI toys available do not cater to production needs, and non-experts often find existing platforms cumbersome. Graft aims to bridge this gap by providing a modern, production-grade AI platform accessible to everyone,” explained Oliner to TechCrunch.

Indeed, while large language models like ChatGPT have simplified some aspects of AI development, the journey to creating a fully operational application remains challenging. The complexity of these models, their lack of transparency, and the emergence of new concerns regarding compliance, privacy, and AI ethics have added layers of intricacy.

User-Friendly Apps: Simplifying AI Adoption

In response to these challenges, Graft has introduced a series of user-friendly apps to ease customers’ entry into AI without starting from scratch. These apps are templatized use cases that users can instantiate into fully functional production applications with their own data.

For instance, some of the current offerings include visual search and identifying customer champions. Graft strives to simplify the onboarding process—users can create a Graft account, choose from pre-defined templates, and seamlessly integrate their data. The company handles the infrastructure, making the application deployment process a breeze.

As Graft secures additional funding, it gears up to open its platform to more companies in a controlled manner. The vision is to make AI development an inclusive process, empowering businesses of all sizes to harness the potential of artificial intelligence effectively. With Graft’s dedication to creating a user-friendly, production-grade AI platform, the future of AI adoption in the business world looks promising.

OpenAI Introduces ‘Custom Instructions’ Feature for ChatGPT Users

Tired of repeating instructions to ChatGPT every time you interact with it? OpenAI has come to the rescue with the launch of their new ‘Custom Instructions’ feature on Thursday. While currently in beta and exclusive to Plus users, this handy tool allows users to save instruction prompts in the chatbot’s memory, streamlining the conversation process.

OpenAI acknowledged user feedback about the inconvenience of starting each ChatGPT conversation from scratch. With ‘Custom Instructions,’ this friction is greatly reduced, saving users from repetitive prompts.

How ‘Custom Instructions’ Works: Tailoring Responses to User Needs

As described in OpenAI’s blog post, users will be prompted to answer two key questions to utilize ‘Custom Instructions.’ The first question is: “What would you like ChatGPT to know about you to provide better responses?” Users can provide information relevant to their specific needs. For instance, a chef using ChatGPT for recipes might respond with: “I am a chef at a Manhattan restaurant.” The second question is: “How would you like ChatGPT to respond?” Users can then specify their desired responses. For example, the chef may type in: “When I ask for recipes, give me a variation of 3-4 recipes from the best cooks in the world, with portions designed for a single serving.”

Currently, the character limit for user responses is set at 1500. The introduction of ‘Custom Instructions’ has been met with a positive response on social media.
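In effect, the two saved answers behave like a standing preamble prepended to every conversation. The sketch below, a rough emulation using the public chat-message format rather than OpenAI's internal implementation (the helper name and prompt wording are assumptions), also enforces the 1500-character limit mentioned above:

```python
# Hypothetical sketch: emulating 'Custom Instructions' as a standing
# system message. build_messages and the prompt template are illustrative,
# not OpenAI's actual implementation.

CHAR_LIMIT = 1500  # per-answer limit cited for the beta

def build_messages(about_me: str, response_style: str, user_prompt: str) -> list:
    """Prepend the two saved answers to a conversation as a system message."""
    for answer in (about_me, response_style):
        if len(answer) > CHAR_LIMIT:
            raise ValueError(f"custom instruction exceeds {CHAR_LIMIT} characters")
    system_prompt = (
        "The user provided the following information about themselves:\n"
        f"{about_me}\n\n"
        "The user would like you to respond like this:\n"
        f"{response_style}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    about_me="I am a chef at a Manhattan restaurant.",
    response_style="Give me 3-4 recipe variations with single-serving portions.",
    user_prompt="Suggest a dinner special.",
)
```

The resulting message list could then be passed to a chat-completion call, which is why the saved answers apply to every new conversation without being retyped.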

However, OpenAI is aware of potential drawbacks during the beta phase. ChatGPT may not always interpret instructions perfectly, sometimes overlooking or misapplying them. In terms of safety, OpenAI addresses concerns by implementing their Moderation API to prevent the storage of instructions that violate their Usage Policies. Additionally, the model can refuse or ignore instructions that lead to policy-violating responses.

Joanne Jang, a strategic product manager at OpenAI, confirmed that the chatbot refused to respond when prompted with harmful queries, demonstrating the safety measures in place.

One limitation of the feature is that users like the chef in the example will need to switch off the custom instructions tool or provide new instructions if they wish to use ChatGPT for other purposes beyond recipes. This might be time-consuming but could potentially be addressed in future updates.

While ‘Custom Instructions’ is currently available in 22 countries, it is not accessible in the UK and EU. Nevertheless, OpenAI plans to expand the feature to all users worldwide in the coming weeks. The new tool promises to enhance the ChatGPT experience, making interactions more efficient and tailored to individual preferences.

Google’s Genesis: An AI Tool for News Writing Raises Concerns Among Journalists

Google has engaged in discussions with news organizations including The New York Times, The Washington Post, and News Corp, owner of The Wall Street Journal. The purpose of these meetings is to introduce an AI tool named Genesis, designed to produce and write news stories. According to an exclusive report by The New York Times, Google pitches the tool as a way to boost journalists’ productivity.

Despite Google’s assurance that the AI tool is meant to assist journalists rather than replace them, it has sparked concerns among industry professionals. Some executives present during the pitch expressed discomfort with the AI’s lack of understanding of the effort required to produce accurate news stories.

While AI can support journalists with research tasks, it may struggle to provide credible and original reporting value. The risk of misinformation spreading is a significant concern, as large language models can sometimes produce incorrect information with confidence.

As the media landscape shifts toward AI-generated content and crowdsourced news, the importance of investigative journalism and fact-checking becomes even more critical.

Several media organizations, including Insider, NPR, and The Times, have already started exploring ways to integrate AI tools into their newsrooms. Google’s spokeswoman, Jenn Crider, clarified that the AI tool is intended to handle menial tasks, such as generating headline options, rather than replacing human journalists.

While some argue against outright rejection of Genesis, citing past instances where technology has transformed aspects of journalism, others highlight potential pitfalls. For instance, Joshua Benton, an American journalist and founder of Nieman Journalism Lab, conducted an experiment using ChatGPT, an AI language model. The result revealed that AI-generated reports can be problematic, veering into purple prose, racially insensitive content, and other ethical lapses.

In conclusion, the introduction of AI tools like Genesis may have the potential to aid journalists in their work, but the concerns surrounding misinformation and ethical issues warrant cautious consideration. The evolution of technology in journalism has shown both promise and challenges, making it essential for the industry to strike a balance between embracing innovation and upholding journalistic integrity.

Elon Musk Launches xAI: A New Startup Blurring the Lines of His AI Stance

The world’s wealthiest individual, Elon Musk, has recently unveiled his latest startup venture called xAI. Musk has assembled a team of seasoned professionals in the field of artificial intelligence (AI) to establish this new enterprise. However, Musk’s announcement on Twitter provides only limited details about the objectives of the startup, leaving its purpose somewhat ambiguous for now.

This is not Musk’s first involvement in the realm of AI. Back in 2015, he became an investor in OpenAI, a non-profit research lab dedicated to AI exploration. However, Musk gradually distanced himself from the organization, which eventually split into two entities: one focused on profit-driven endeavors, and the other remaining committed to its non-profit nature.

The profit-driven arm of OpenAI gained significant recognition last year with the launch of ChatGPT, a conversational chatbot that has found applications in generating content, ideas, and code for users.

Musk’s Stance on AI Deployment

Musk has been vocal about his concerns regarding OpenAI’s trajectory and AI in general. In a recent interview with the BBC, he revealed his longstanding worries about AI safety, spanning over a decade.

During a CNBC interview, Musk raised questions about the transformation of OpenAI from a non-profit, open-source organization to a for-profit entity that began withholding information about its technology. He also expressed caution about OpenAI’s close ties to Microsoft and the potential impact of the latter’s investments on the former’s future growth.

In March, Musk joined other notable figures in signing an open letter calling for a six-month pause on “giant AI experiments.” However, Interesting Engineering reported a month later that Musk had established his own AI startup called xAI, which is now taking shape.

What is xAI?

Currently, limited information is available about the specific plans of xAI. The company has a website that lists several individuals with previous experience at prominent companies like DeepMind, OpenAI, Google Research, Microsoft Research, and Tesla, among others, who have now joined the xAI team.

Notable names include Igor Babuschkin, Manuel Kroiss, Yuhuai (Tony) Wu, and Christian Szegedy, who have made significant contributions to advancements in the AI field, such as AlphaCode, Minerva, GPT-3.5, and GPT-4.

The announcement confirms that Elon Musk will lead the team, with Dan Hendrycks, the current Director at the Center for AI Safety, serving as the team’s advisor.

xAI has clarified that it operates independently from X Corp, dispelling any notion that it is an AI project directly tied to X (formerly Twitter). However, this statement does not rule out the possibility of future collaboration with Musk’s other ventures like Tesla or X.

More details are expected to emerge as xAI prepares to host a Twitter Spaces chat on Friday, July 14.

Nevertheless, the fundamental question remains: what unique approach does Musk intend to take with a diverse group of individuals who are actively involved in developing the very AI he strongly opposes? Additionally, if his intention is to open-source AI, how does xAI plan to generate revenue?

The Twitter Space event on Friday promises to be an intriguing platform for further insights and discussions.

Google’s Bard Chatbot Expands to 40 Languages and EU After Privacy Delay

Google’s ChatGPT competitor, Bard, is now available to a wider audience, including the European Union (EU) and users in over 40 languages. The launch was initially delayed due to concerns over data privacy. Bard comes with several new features, although some are currently only available in English.

Google introduced Bard as a response to the growing success of ChatGPT, developed by OpenAI, a company supported by Google’s rival, Microsoft. While Bard was initially accessible for early access in the United States and the United Kingdom in English, it expanded globally in May to 180 countries with support for Japanese and Korean. However, the EU launch was postponed after the Irish Data Protection Commission (DPC) raised privacy concerns. Google has now addressed these concerns and launched Bard in the EU.

New Features and Improved Performance Accompany Bard’s Wider Rollout

According to Bard’s product lead, Jack Krawczyk, and VP of engineering, Amarnag Subramanya, Google actively engaged with experts, policymakers, and privacy regulators during the expansion process. This launch is considered Google’s largest expansion to date, offering support for Arabic, Spanish, Chinese, German, and Hindi. Additionally, Bard is now available in Brazil.

Alongside the expansion, Bard introduces new features focused on enhancing its responses and productivity. Users can now adjust the tone and style of Bard’s responses with options like “simple,” “long,” “short,” “professional,” or “casual.” The text-to-speech AI feature allows Bard to vocalize its responses in over 40 languages, accessible through a sound icon next to the prompt. For productivity, Bard can export Python code to Replit, a browser-based integrated development environment. Users can also include images in prompts for analysis, pin, rename, and resume recent conversations, and easily share Bard’s responses through links.

Google faced challenges with Bard initially, as it struggled to match the quality of responses from ChatGPT and even provided factually incorrect answers with fabricated citations. This led to criticism from Google employees and a drop in the company’s stock. However, Google claims that Bard has improved, particularly in areas like math and programming. It has gained extensions from Google’s apps and services, as well as third-party partners like Adobe. Bard can now explain code, structure data in tables, and include images in its responses.

However, recent reports from Bloomberg highlighted that the contractors who train Bard are often overworked and underpaid, receiving minimal training and rushed to complete complex audits. This follows an earlier report by Insider, which revealed insufficient time given to verify Bard’s most accurate answers. It appears that these issues have not been addressed adequately.