
Unleash the Power of AI with the Latest Update for Nvidia ChatRTX

Exciting news for AI enthusiasts! Nvidia ChatRTX introduces its latest update, now available for download. This update, showcased at GTC 2024 in March, expands the capabilities of this cutting-edge tech demo and introduces support for additional LLM models for RTX-enabled AI applications.

What’s New in the Update?

  • Expanded LLM Support: ChatRTX now boasts a larger roster of supported LLMs, including Gemma, Google’s latest LLM, and ChatGLM3, an open, bilingual LLM supporting both English and Chinese. This expansion offers users greater flexibility and choice.
  • Photo Support: With the introduction of photo support, users can seamlessly interact with their own photo data without the hassle of complex metadata labeling. Thanks to OpenAI’s Contrastive Language-Image Pre-training (CLIP), searching and interacting with personal photo collections has never been easier.
  • Verbal Speech Recognition: Say hello to Whisper, an AI automatic speech recognition system integrated into ChatRTX. Now, users can converse with their own data, as Whisper enables ChatRTX to understand verbal speech, enhancing the user experience.
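The CLIP-based photo search described above boils down to embedding images and text into a shared vector space and ranking photos by similarity to the query. The sketch below illustrates that idea; note that the encoder is a stand-in using random vectors, not the actual CLIP model ChatRTX ships with, and the filenames are hypothetical.

```python
import numpy as np

# Stand-in for a CLIP encoder: a real model (e.g. OpenAI's CLIP) maps both
# images and text into the same embedding space. Fixed random vectors keep
# this sketch self-contained.
rng = np.random.default_rng(0)
EMBED_DIM = 512

photo_embeddings = {
    "beach_sunset.jpg": rng.normal(size=EMBED_DIM),
    "birthday_cake.jpg": rng.normal(size=EMBED_DIM),
    "mountain_hike.jpg": rng.normal(size=EMBED_DIM),
}

def embed_text(query: str) -> np.ndarray:
    # Toy text encoder: return a vector near the photo whose name shares a
    # word with the query, simulating CLIP's shared image-text space.
    for name, vec in photo_embeddings.items():
        if any(word in name for word in query.lower().split()):
            return vec + rng.normal(scale=0.1, size=EMBED_DIM)
    return rng.normal(size=EMBED_DIM)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_photos(query: str) -> str:
    # Rank every photo by cosine similarity to the query embedding.
    q = embed_text(query)
    return max(photo_embeddings, key=lambda name: cosine(q, photo_embeddings[name]))

print(search_photos("sunset on the beach"))  # → beach_sunset.jpg
```

In a real pipeline the same two steps apply: pre-compute an embedding per photo once, then embed each query and take the nearest neighbors, so no manual metadata labeling is needed.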

Why Choose ChatRTX?

ChatRTX empowers users to harness the full potential of AI on their RTX-powered PCs. Leveraging the accelerated performance of TensorRT-LLM software and NVIDIA RTX, ChatRTX processes data locally on your PC, ensuring data security. Plus, it’s available on GitHub as a free reference project, allowing developers to explore and expand AI applications using RAG technology for diverse use cases.
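RAG, mentioned above, pairs an LLM with a retrieval step over local data: relevant passages are fetched first and prepended to the prompt so the model answers from the user's own documents. Here is a minimal sketch of that retrieve-then-prompt step; the word-overlap retriever and the example documents are illustrative stand-ins for the vector search and TensorRT-LLM inference used in the real project.

```python
# Minimal sketch of the retrieval step in RAG (retrieval-augmented generation).
# The toy word-overlap scorer stands in for a proper embedding-based search.

documents = [
    "The RTX 4090 has 24 GB of GDDR6X memory.",
    "TensorRT-LLM accelerates LLM inference on NVIDIA GPUs.",
    "ChatRTX runs locally, keeping user data on the PC.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Score each document by how many query words it shares, keep the top k.
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved context is prepended so the LLM answers from local data
    # rather than from its training set alone.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How much memory does the RTX 4090 have?", documents))
```

The assembled prompt would then be passed to the locally running LLM; only the retrieval and prompt assembly are shown here.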

Explore Further

For more details, check out the AI Decoded blog, where you’ll find additional information on the latest ChatRTX update. Additionally, don’t miss the new update for the RTX Remix beta, featuring DLSS 3.5 with Ray Reconstruction.

Don’t wait any longer—experience the future of AI with Nvidia ChatRTX today!

Nvidia Unveils ‘Chat with RTX’, the Next Game-Changer in AI Technology

Nvidia is once again making waves in the tech world with its latest innovation: ‘Chat with RTX.’ Fresh off the success of their RTX 2000 Ada GPU launch, Nvidia is now venturing into the realm of AI-centric applications, and the early buzz surrounding ‘Chat with RTX’ is hard to ignore, especially among users with Nvidia’s RTX 30 or 40 series graphics cards.

Yesterday, Nvidia had heads turning with the introduction of the RTX 2000 Ada GPU. Today, they’re back in the spotlight with ‘Chat with RTX,’ an application designed to harness the power of newer Nvidia graphics cards, specifically the RTX 30 or 40 series.

If you’re on board the tech train, get ready for an immersive AI experience that puts your computer in control of handling complex AI tasks effortlessly.

This groundbreaking application transforms your computer into a powerhouse, seamlessly managing the heavy lifting of AI-related functions. It is custom-built for tasks ranging from analyzing YouTube videos to deciphering dense documents.

The best part? You only need an Nvidia RTX 30 or 40-series GPU to embark on this AI adventure, making it an irresistible proposition for those already equipped with Nvidia’s latest graphics technology.

Time-Saving Capabilities with ‘Chat with RTX’

The allure of ‘Chat with RTX’ lies in its potential to save time, particularly for individuals dealing with vast amounts of information. Imagine swiftly extracting the essence of a video or pinpointing crucial details within a stack of documents.

It aims to be your go-to AI assistant for such scenarios, joining the ranks of other prominent chatbots like Google’s Gemini or OpenAI’s ChatGPT, but with the distinctive Nvidia touch.

When functioning optimally, ‘Chat with RTX’ adeptly guides you through critical sections of your content. Its true prowess shines when tackling documents, effortlessly navigating PDFs and other files and extracting vital details almost instantaneously.

For anyone familiar with the overwhelming task of sifting through extensive reading material for work or school, ‘Chat with RTX’ could be a game-changer.

Yet, like any innovation, ‘Chat with RTX’ is a work in progress. Setting it up requires patience, and it can be resource-intensive. Some wrinkles still need smoothing out; for instance, it struggles to retain memory of previous inquiries, so each question must be asked afresh.

Nevertheless, given Nvidia’s pivotal role in the ongoing AI revolution, these quirks are likely to be addressed swiftly as ‘Chat with RTX’ evolves.

Looking Ahead: The Future of AI Interaction

As we eagerly await the refinement of ‘Chat with RTX,’ the application provides a glimpse into the future of AI interactions. Nvidia, renowned for its trailblazing efforts in the AI field, appears poised to push the boundaries further and shape the future of AI assistance.

While ‘Chat with RTX’ may have some rough edges at present, it represents a promising stride forward in AI integration. Keep an eye out as Nvidia continues to lead the charge in driving innovation. Stay tuned for updates on ‘Chat with RTX’ and the exciting possibilities it holds.

Otter is Introducing a Meeting-Oriented AI Chatbot

Today, automatic transcription service Otter announced the launch of its new AI-powered chatbot. Designed to facilitate seamless collaboration among meeting participants, the Otter AI Chatbot enables users to ask questions during and after meetings, helping them catch up and interact effectively with their teammates.

The Otter AI Chatbot offers various functionalities, such as providing meeting updates for latecomers with questions like “I’m late to the meeting! What did I miss?” It can also generate follow-up emails containing action points once the meeting concludes. This chatbot leverages contextual understanding to offer relevant answers based on the meeting discussions.
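The “What did I miss?” capability can be thought of as two steps: filter the transcript to what happened before the user joined, then summarize that excerpt. A toy sketch of the first step follows, with a hypothetical transcript format and no LLM summarization stage; Otter’s actual implementation is not public.

```python
# Hypothetical "what did I miss?" helper over a meeting transcript.
# Assumes the transcript is a list of (minute, speaker, text) tuples; a real
# service would hand the missed excerpt to an LLM to produce a summary.

transcript = [
    (0, "Ana", "Welcome everyone, let's review the Q3 roadmap."),
    (5, "Ben", "The mobile release slips by two weeks."),
    (12, "Ana", "Action item: Ben updates the timeline by Friday."),
]

def catch_up(transcript, joined_at_minute):
    """Return the lines a latecomer missed before joining."""
    return [
        f"{speaker}: {text}"
        for minute, speaker, text in transcript
        if minute < joined_at_minute
    ]

for line in catch_up(transcript, joined_at_minute=10):
    print(line)
```

Follow-up emails with action points could reuse the same idea in reverse: filter for lines flagged as action items after the meeting ends.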

In March, Zoom introduced a similar feature to provide meeting summaries for users who join late.

Unlike chatbots like ChatGPT that focus on one-on-one conversations, Otter’s bot can cater to multiple individuals. Teammates can tag each other for clarification or assign action items, enhancing collaboration. Previously, Otter facilitated this through comments on the transcription.

Although Otter mentions transcribing over 1 million spoken words per minute, it does not specify if the Otter AI Chatbot was trained on that data.

The company plans to roll out the Otter AI Chat feature to all users in the coming days and assures that AI chat data will not be shared with third parties.

In February, Otter launched the OtterPilot bot, which automatically emailed meeting summaries to participants. Additionally, the bot included images of important slides within the meeting notes and transcription. With the introduction of Otter AI Chat, the company aims to provide an added layer of intelligence to its existing AI-powered note-taking feature, enabling users to ask more complex questions.

The incorporation of AI-generated meeting notes and summaries in various formats has become a common feature in meeting-related tools, thanks to advancements in Large Language Models (LLMs). Meeting companies are continually striving to enhance their offerings, leveraging chatbots that utilize meeting data to provide valuable insights.

Meet Pi, A New AI Chatbot for Personal Assistance and Emotional Support

“Hey there, great to meet you. I’m Pi, your personal AI.”

This is the message you’re met with when you open Inflection AI’s chatbot, the latest addition to a stream of chatbots introduced by big tech companies over the last couple of months.

Called Pi, it’s a ChatGPT-like competitor designed to be a kind and supportive companion assistant.

“My goal is to be useful, friendly, and fun. Ask me for advice, for answers, or let’s talk about whatever’s on your mind.”

Pi stands for ‘Personal intelligence’

Early user feedback suggests that it’s slower than ChatGPT. When given a prompt, the green cursor blinks for a couple of seconds before it produces an answer. Unlike other chatbots, though, it gives the impression of talking to a chatty friend.

Launched in 2022, Inflection AI is an artificial intelligence startup founded by LinkedIn co-founder Reid Hoffman and Google DeepMind co-founder Mustafa Suleyman.

The startup developed the chatbot technology in-house, keeping in mind human-like conversations with high emotional intelligence. Pi comes at a time when there’s much hype around generative AI, which is driving a surge of investor and consumer interest.

“Pi is a new kind of AI, one that isn’t just smart but also has good EQ. We think of Pi as a digital companion on hand whenever you want to learn something new, when you need a sounding board to talk through a tricky moment in your day, or just pass the time with a curious and kind counterpart,” Suleyman told Business Wire.

“We have a lot to learn and a long way to go, but we are excited to bring this first version of Pi to people around the world.”

The chatbot is useful for everyday interactions, but it cannot generate code or essays. When asked, ‘How can I enhance my badminton skills?’, Pi generated a handful of recommendations.

However, the chatbot was unable to say who won the 2022 FIFA World Cup, explaining that it was last updated in November 2022 and hence lacked that information. It clearly still has some way to go in terms of development.

Chinese Brokerage Firm Launches AI Chatbot for Stock Trading

Tiger Brokers, a stock brokerage firm based in Beijing, China, has launched an artificial intelligence (AI) powered chatbot “to address some of the pain points” of the site’s customers, South China Morning Post reported. The firm began working on the project in January this year and is the first brokerage firm to offer such a service to its customers.

The popularity of OpenAI’s ChatGPT chatbot has opened up the application of AI to a plethora of services, with financial institutions taking a strong interest in its usage. Interesting Engineering reported last month that financial software and media company Bloomberg had launched its own AI model for the financial markets, while JPMorgan Chase put its AI to work identifying trends from the Federal Reserve’s statements over the past 25 years and predicting its next moves.

Tiger Brokers has taken AI a step further to analyze real-time market trends and make suggestions to users about trading decisions.

The power of AI to beat the markets

Anybody who has delved into the trading space knows how much data one must go through to make informed decisions about their money, unless they wish to gamble it away. The amount of information to be analyzed grows exponentially with the number of stocks in one’s portfolio and the factors that can impact those businesses.

With such large amounts of information to process, it makes sense to deploy AI to do the hard work of identifying market trends and making recommendations. Tiger Brokers spent the better part of three months determining whether it wanted to provide its own AI to its platform’s two million customers.

In January this year, the company began training its AI model using premium content that it has access to. The chatbot is now capable of analyzing current affairs and macroeconomic trends, which is expected to help users save time on market research and get the latest information.

Called TigerGPT, the chatbot is currently available only to a small set of users. Notably, the financial company used GPT-3, the precursor to the model that powers ChatGPT, to train its own AI. OpenAI has since launched GPT-4.

However, AI models are prone to “hallucinations”, a term for inaccurate and sometimes even made-up responses given by the chatbot. Investment advice based on such information could be detrimental to user interests. The company is currently working with regulators to ensure that its model complies with current rules for such technology.

Financial markets have previously deployed robo-advisers and algorithms for investments, so the introduction of AI should not be a dramatic change, experts told SCMP.

AI Chatbots Will Teach Kids How to Read and Write: Bill Gates

Microsoft co-founder Bill Gates has once again spoken fondly about how artificial intelligence (AI) will change the world for the better and is confident that chatbots in the future will be able to teach kids how to read and even hone their writing skills. Gates was speaking at the ASU+GSV Summit in San Diego last week, CNBC reported.

AI chatbot ChatGPT has taken the world by storm and Gates’ company Microsoft is busy integrating the AI model into its existing products. Others like Elon Musk are not very happy about the pace at which AI is being introduced to society and have even called for a moratorium on the release of new products.

Gates, however, is not perturbed but is impressed by the chatbot’s ability to read and write. Interesting Engineering has previously reported how ChatGPT can write poems and essays and can even pass standardized tests. But Gates expects the ability of the chatbots to improve even further in the coming 18 months.

A chatbot is my tutor

In a keynote address at the Summit, Gates showered praise on the “fluency” of chatbots to read and write and went on to add that this ability would soon allow them to help teach children and improve their writing and reading.

Gates believes that these improvements, which no technology has offered before, will stun us at first, as the chatbot assumes the role of a reading research assistant and even provides feedback on writing.

ChatGPT has been impressive for its ability to hold human-like conversations and give human-like responses. This is because the model has learned to recognize and reproduce patterns of human language from its training data, rather than relying on explicitly programmed rules.

Over the next 18 months, Gates expects AI chatbots to get even better at language, perhaps becoming a teacher’s aide before finally serving as a language tutor in about two years’ time.

Gates added that AI would make private tutoring available to a large swath of students who were previously unable to afford it. Even though services like ChatGPT come with a subscription plan, Gates thinks AI-led private tutoring will still be cheaper than hiring a human instructor.

Apart from language, Gates expects AI to get better at math in the near future, even though it currently tends to struggle with even basic calculations. Engineers at Microsoft are working to give AI more reasoning ability to handle such tasks.

Earlier this month, Interesting Engineering reported similar views from Sal Khan, the founder of Khan Academy, which is also using GPT-4 to develop virtual tutors to aid learning.

Be Careful What You Tell Your Chatbot Helper

Alluring and useful they may be, but these AI interfaces’ potential as gateways for fraud and intrusive data gathering is huge, and it is only set to grow.

Concerns about the growing abilities of chatbots trained on large language models, such as OpenAI’s GPT-4, Google’s Bard and Microsoft’s Bing Chat, are making headlines. Experts warn of their ability to spread misinformation on a monumental scale, as well as the existential risk their development may pose to humanity. As if this isn’t worrying enough, a third area of concern has opened up – illustrated by Italy’s recent ban of ChatGPT on privacy grounds.

The Italian data regulator has voiced concerns over the model used by ChatGPT owner OpenAI and announced it would investigate whether the firm had broken strict European data protection laws.

Chatbots can be useful for work and personal tasks, but they collect vast amounts of data. AI also poses multiple security risks, including the ability to help criminals perform more convincing and effective cyber-attacks.

Are Chatbots a larger privacy concern than search engines?

Most people are aware of the privacy risks posed by search engines such as Google, but experts think chatbots could be even more data-hungry. Their conversational nature can catch people off guard and encourage them to give away more information than they would have entered into a search engine. “The human-like style can be disarming to users,” warns Ali Vaziri, a legal director in the data and privacy team at law firm Lewis Silkin.


Chatbots typically collect text, voice and device information as well as data that can reveal your location, such as your IP address. Like search engines, chatbots gather data such as social media activity, which can be linked to your email address and phone number, says Dr Lucian Tipi, associate dean at Birmingham City University. “As data processing gets better, so does the need for more information and anything from the web becomes fair game.”

While the firms behind the chatbots say your data is required to help improve services, it can also be used for targeted advertising. Each time you ask an AI chatbot for help, micro-calculations feed the algorithm to profile individuals, says Jake Moore, global cybersecurity adviser at the software firm ESET. “These identifiers are analysed and could be used to target us with adverts.”

This is already starting to happen. Microsoft has announced that it is exploring the idea of bringing ads to Bing Chat. It also recently emerged that Microsoft staff can read users’ chatbot conversations and the US company has updated its privacy policy to reflect this.

ChatGPT’s privacy policy “does not appear to open the door for commercial exploitation of personal data”, says Ron Moscona, a partner at the law firm Dorsey & Whitney. The policy “promises to protect people’s data” and not to share it with third parties, he says.

However, while Google also pledges not to share information with third parties, the tech firm’s wider privacy policy allows it to use data for serving targeted advertising to users.

How can you use chatbots privately and securely?

It’s difficult to use chatbots privately and securely, but there are ways to limit the amount of data they collect. It’s a good idea, for instance, to use a VPN such as ExpressVPN or NordVPN to mask your IP address.

At this stage, the technology is too new and unrefined to be sure it is private and secure, says Will Richmond-Coggan, a data, privacy and AI specialist at the law firm Freeths. He says “considerable care” should be taken before sharing any data – especially if the information is sensitive or business-related.

The nature of a chatbot means that it will always reveal information about the user, regardless of how the service is used, says Moscona. “Even if you use a chatbot through an anonymous account or a VPN, the content you provide over time could reveal enough information to be identified or tracked down.”

But the tech firms championing their chatbot products say you can use them safely. Microsoft says its Bing Chat is “thoughtful about how it uses your data” to provide a good experience and “retain the policies and protections from traditional search in Bing”.

Microsoft protects privacy through technology such as encryption and only stores and retains information for as long as is necessary. Microsoft also offers control over your search data via the Microsoft privacy dashboard.

ChatGPT creator OpenAI says it has trained the model to refuse inappropriate requests. “We use our moderation tools to warn or block certain types of unsafe and sensitive content,” a spokesperson adds.

What about using chatbots to help with work tasks?

Chatbots can be useful at work, but experts advise proceeding with caution to avoid sharing too much and falling foul of regulations such as the EU’s General Data Protection Regulation (GDPR). It is with this in mind that companies including JP Morgan and Amazon have banned or restricted staff use of ChatGPT.

The risk is so big that the developers themselves advise against their use. “We are not able to delete specific prompts from your history,” ChatGPT’s FAQs state. “Please don’t share any sensitive information in your conversations.”


Using free chatbot tools for business purposes “may be unwise”, says Moscona. “The free version of ChatGPT does not give clear and unambiguous guarantees as to how it will protect the security of chats, or the confidentiality of the input and output generated by the chatbot. Although the terms of use acknowledge the user’s ownership and the privacy policy promises to protect personal information, they are vague about information security.”

Microsoft says Bing can help with work tasks but “we would not recommend feeding company confidential information into any consumer service”.

If you have to use one, experts advise caution. “Follow your company’s security policies, and never share sensitive or confidential information,” says Nik Nicholas, CEO of data consultancy firm Covelent.

Microsoft offers a product called Copilot for business use, which inherits the more stringent security, compliance and privacy policies of its enterprise product Microsoft 365.

How can I spot malware, emails or other malicious content generated by bad actors or AI?

As chatbots become embedded in the internet and social media, the chances of becoming a victim of malware or malicious emails will increase. The UK’s National Cyber Security Centre (NCSC) has warned about the risks of AI chatbots, saying the technology that powers them could be used in cyber-attacks.

Experts say ChatGPT and its competitors have the potential to enable bad actors to construct more sophisticated phishing email operations. For instance, generating emails in various languages will be simple – so telltale signs of fraudulent messages such as bad grammar and spelling will be less obvious.

With this in mind, experts advise more vigilance than ever over clicking on links or downloading attachments from unknown sources. As usual, Nicholas advises, use security software and keep it updated to protect against malware.

The language may be impeccable, but chatbot content can often contain factual errors or out-of-date information – and this could be a sign of a non-human sender. It can also have a bland, formulaic writing style – but this may aid rather than hinder the bad actor bot when it comes to passing as official communication.

AI-enabled services are emerging rapidly, and as they develop, the risks will grow. Experts say the likes of ChatGPT can be used to help cybercriminals write malware, and there are concerns about sensitive information entered into chat-enabled services being leaked on the internet. Other forms of generative AI, able to produce content such as voice, text or images, could let criminals create more realistic deepfakes, for example mimicking a bank employee asking for a password.

Ironically, it’s humans who are better at spotting these types of AI-enabled threats. “The best guard against malware and bad actor AI is your own vigilance,” says Richmond-Coggan.