Nvidia Launches Chat With RTX: A New Personalized AI Chatbot

Key Takeaways:

– Nvidia introduces Chat With RTX, a personalized AI chatbot usable on Windows PCs equipped with an Nvidia RTX graphics card.
– Chat With RTX uses a combination of retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software, and RTX acceleration.
– Nvidia’s AI chatbot can converse using local files as datasets, enabling contextually relevant answers.
– Despite some operational issues, the chatbot benefits from local processing, which keeps sensitive data on the user's machine.

Nvidia Enters the AI Chatbot Realm

Leading GPU manufacturer Nvidia has ventured into the AI chatbot domain with the introduction of Chat With RTX. Launched on Tuesday, this free personalized AI chatbot functions like ChatGPT and runs locally on PCs equipped with an Nvidia RTX graphics card.

Chat With RTX Highlights

Unlike the cloud-based ChatGPT, Chat With RTX combines retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software, and RTX acceleration, all running on the user's own hardware. It pairs open large language models (LLMs) such as Mistral or Llama with the user's local files as a dataset, letting conversations draw on that content for quick, contextually relevant answers.
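To give a feel for the RAG pattern at work here, the sketch below is a deliberately minimal toy version: it retrieves the most relevant local text snippets for a query and prepends them to the prompt a local LLM would receive. The bag-of-words "embedding" and the placeholder model step are illustrative assumptions, not Nvidia's actual pipeline.

```python
# Minimal, illustrative RAG loop (not Nvidia's actual pipeline).
# Retrieval here uses a toy bag-of-words cosine similarity; Chat With RTX
# uses TensorRT-LLM and a real embedding model under the hood.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a word-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank local documents by similarity to the query, keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    # A local LLM (e.g. Mistral or Llama) would receive this prompt.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Chat With RTX needs an RTX 30 or 40 Series GPU with 8GB of VRAM.",
    "The installer is roughly 35GB because it bundles model weights.",
    "Retrieval-augmented generation grounds answers in local files.",
]
print(build_prompt("How much VRAM does Chat With RTX need?", docs))
```

The point of the pattern is that the model never needs to have been trained on your files; it simply answers from whatever relevant snippets the retrieval step surfaces at query time.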

Chat With RTX requires an NVIDIA GeForce RTX 30 or 40 Series GPU with at least 8GB of VRAM. It is compatible with several file types, including .pdf, .txt, .docx, and .xml, and can be pointed at a folder of documents, which it scans to answer queries swiftly. When connected to the internet, it can even incorporate information from the transcripts of YouTube videos and playlists.
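As a rough illustration of that folder-browsing step (the function names and parsing choices here are assumptions, not Chat With RTX internals), a loader might walk a directory and collect text from the supported extensions before handing it to the retrieval index:

```python
# Hypothetical folder scanner for the supported file types.
# Plain .txt files are read directly; .pdf, .docx, and .xml would need
# format-specific parsers (e.g. pypdf, python-docx, the stdlib xml module).
from pathlib import Path

SUPPORTED = {".pdf", ".txt", ".docx", ".xml"}

def collect_documents(folder: str) -> dict[str, str]:
    texts = {}
    for path in Path(folder).rglob("*"):
        if path.suffix.lower() not in SUPPORTED:
            continue
        if path.suffix.lower() == ".txt":
            texts[str(path)] = path.read_text(encoding="utf-8", errors="ignore")
        else:
            # Placeholder: a real loader would extract the text with a
            # format-specific parser before indexing it for retrieval.
            texts[str(path)] = f"[unparsed {path.suffix} file]"
    return texts

if __name__ == "__main__":
    # "./my_notes" is an example path, not a Chat With RTX default.
    for name, text in collect_documents("./my_notes").items():
        print(name, len(text), "chars")
```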

What Sets Chat With RTX Apart?

Nvidia’s new chatbot provides a distinctive user experience. Like its counterpart ChatGPT, it can discuss various themes and analyze or summarize data. The bundled Mistral 7B model declines to discuss sensitive topics such as sex and violence, but users have the option to swap in an unrestricted AI model for wider topic discussions.

Despite Its Strengths, Challenges Persist

Downloading and running Chat With RTX presented a few challenges during testing. The installer weighs in at around 35 gigabytes, largely because it bundles the Mistral and Llama model weight files, and it fetches additional required files during installation, a process that crashed intermittently.

To run Chat With RTX, one needs Python and a web browser window: the application runs as a local server accessed through the browser, built on a web of dependencies including Python, CUDA, and TensorRT. This complex setup has earned it a reputation as a layered dependency mess. Nevertheless, the chatbot’s potential is undeniable, especially considering it comes directly from Nvidia.
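As a hedged sketch of what verifying such an environment might look like (the checks and thresholds below are assumptions for illustration, not Nvidia's installer logic), one could probe for the Python version, the nvidia-smi CLI, and available VRAM before launching:

```python
# Rough pre-flight check for a local LLM setup (illustrative only;
# not how Nvidia's installer actually validates the environment).
import shutil
import subprocess
import sys

def check_environment(min_vram_mib: int = 8192) -> None:
    print(f"Python {sys.version.split()[0]}")

    # nvidia-smi ships with the NVIDIA driver; its absence suggests
    # no usable GPU driver is installed.
    if shutil.which("nvidia-smi") is None:
        sys.exit("nvidia-smi not found: is the NVIDIA driver installed?")

    # Query total VRAM per GPU, in MiB.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    vram = [int(line) for line in out.splitlines() if line.strip()]
    if not vram or max(vram) < min_vram_mib:
        sys.exit(f"Need at least {min_vram_mib} MiB of VRAM; found {vram}")
    print(f"GPU VRAM (MiB): {vram} -- OK")

if __name__ == "__main__":
    check_environment()
```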

Focusing on Benefits: Privacy and Local Processing

Despite its nascent stage and a few technical hiccups, Chat With RTX packs a punch with its local processing capability. This adds a robust layer of privacy: the chatbot works without transmitting sensitive data to cloud-based services. Although it falls short of the capabilities of GPT-4 Turbo or Google Gemini Pro/Ultra, it shows promising potential for further development in the same domain.

Nvidia’s Chat With RTX can be downloaded for free from the Nvidia website, extending the reach of AI technology to GPU owners. As Nvidia continues to explore the realm of AI chatbots, further enhancements and advancements are anticipated. This marks the beginning of an exciting new era in the integration of AI in local processing and communication, setting the stage for future development.
