Nvidia’s new program, Chat with RTX, is an AI tool that runs large language models (LLMs) entirely on your own computer. It requires Windows 11, an Nvidia RTX 30- or 40-series GPU, and 16GB of RAM. The interface lets you choose between open-source models (Llama or Mistral) and send prompts to generate text, and it can also draw on your own documents and YouTube videos as source material for its responses. Although Nvidia labels it a “demo app,” it is already useful for a range of tasks, even if it still has some bugs. All chat and model data is stored locally, which improves privacy and security. On the other hand, relying on mainstream gaming and professional GPUs likely means modest performance suited to personal use only, which is another reason it carries the “demo app” tag.
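To make the idea of “an LLM chatting with your documents, locally” concrete, here is a minimal sketch using the Hugging Face Transformers library. It is not Nvidia’s implementation, and the model ID (`mistralai/Mistral-7B-Instruct-v0.2`) and file name (`notes.txt`) are assumptions for illustration; Chat with RTX handles document indexing and retrieval for you behind its own interface.

```python
# Hypothetical sketch: run a local open-source LLM and feed it a local document as context.
# Assumes the transformers, accelerate, and torch packages are installed and a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed model; swap in any local-friendly LLM
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit consumer GPU memory
    device_map="auto",          # place layers on the available GPU(s)
)

# Crude stand-in for the "use my documents" feature: paste the file straight into the prompt.
with open("notes.txt") as f:  # hypothetical local document
    context = f.read()

prompt = (
    "Answer the question using only the document below.\n\n"
    f"Document:\n{context}\n\n"
    "Question: What are the key points?"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Everything here stays on your machine, which is the same privacy argument the paragraph above makes for Chat with RTX; the trade-off, also the same, is that generation speed is limited by your GPU.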
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…