Integrating an LLM [A.I.] into an operating system. Is this a good thing?

Artificial Intelligence integration into future versions of operating systems raises questions about its potential impact on user experience and personal privacy.

On one hand, incorporating advanced machine learning techniques into operating systems could enhance functionality and make devices smarter.

For instance, the inclusion of natural language processing capabilities could simplify commands and streamline interactions with digital assistants. In addition, intelligent algorithms could optimize performance and power consumption, resulting in improved battery life and faster response times.

On the other hand, there are legitimate concerns regarding the collection and storage of user data, which becomes increasingly valuable to tech companies seeking new revenue streams through advertising and targeted marketing campaigns. With little transparency around data usage, users risk having their private information harvested without explicit consent.

To address these concerns, manufacturers must establish clear guidelines outlining how collected data will be handled, allowing customers to make informed choices about their level of engagement with AI-powered features.

Overall, the introduction of AI elements in future iterations of operating systems presents both benefits and challenges that require careful consideration before implementation.

How do the members of Techlore feel? I’m interested in reading about people’s thoughts and potential concerns.

Just like any other software, AI can be as private or as non-private as you make it.

If someone manages to integrate AI into an OS the proper way (to me, "the proper way" means something like KITT from Knight Rider), and the AI is private in the sense that every inference either runs locally or on a server that simply takes the input, runs the model, and returns the output to you without collecting it, storing it, or doing anything else with it, I would probably use it.

For what it's worth, Microsoft is integrating (aka half-baking) the OpenAI models into literally everything they have, including Windows 11. However, "Windows Copilot" (that's the name) is little more than a native-looking window for Bing Chat, with the option to enable dark mode for you.

You can just go to Hugging Face and download trained and tuned models for free, and run them locally on your own system.

If you have a GPU with 16 GB of VRAM, you can run 7B or 13B models without any issues; I'm getting pretty much instant replies when using 13B models. 30B and 33B models will work with RAM offloading, but you are going to need 64 GB of system memory, and performance is noticeably worse: replies often take 10-20 seconds.
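Those sizes line up with a simple back-of-the-envelope estimate: quantized weights take roughly (parameters × bits per weight / 8) bytes. A minimal sketch of that arithmetic — the 4-bit quantization and the fixed 2 GB overhead for KV cache and runtime buffers are assumptions, and real memory use varies with context length and runtime:

```python
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate size of the model weights in gigabytes.

    1e9 parameters at bits_per_weight each = params * bits / 8 bytes,
    which works out to (params_billion * bits_per_weight / 8) GB.
    """
    return params_billion * bits_per_weight / 8


def fits_in_vram(params_billion: float, bits_per_weight: int,
                 vram_gb: float, overhead_gb: float = 2.0) -> bool:
    """Very rough check: weights plus a fixed overhead vs. available VRAM.

    overhead_gb is a guessed allowance for KV cache and buffers, not a
    measured figure.
    """
    return weight_gb(params_billion, bits_per_weight) + overhead_gb <= vram_gb


# 13B model quantized to 4 bits: ~6.5 GB of weights, fits on a 16 GB card.
print(fits_in_vram(13, 4, vram_gb=16))   # True
# 33B model at 4 bits: ~16.5 GB of weights alone, so on a 16 GB card part
# of the model has to be offloaded to system RAM.
print(fits_in_vram(33, 4, vram_gb=16))   # False
```

The same estimate explains why offloaded 30B/33B models feel slow: the layers sitting in system RAM are bound by much lower memory bandwidth than VRAM.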