Published in AI

AI PCs will take over the world

12 January 2024


But CES shows most of them are rubbish for now

CES 2024 is packed full of AI PCs, designed to speed up AI tasks on your own device rather than relying on cloud servers, but most of them aren't much chop yet.

HP's new Omen Transcend 14 showed off how the NPU can stream video while the GPU runs Cyberpunk 2077. While this was pretty cool, the demo was aimed at creators rather than the great unwashed.

Acer's Swift laptops were more practical. They integrate Temporal Noise Reduction and what Acer calls PurifiedView and PurifiedVoice 2.0 for AI-filtered video and audio, backed by a three-mic array, with more AI features promised later this year.

MSI's attempt at local AI tackles cleaning up Zoom and Teams calls. A Core Ultra laptop demo showed Windows Studio Effects using the NPU to automatically blur the background of a video call. Next to it, a laptop set up with Nvidia's AI-powered Broadcast software was doing the same. The Core Ultra laptop used much less power than the Nvidia notebook, since it didn't need to fire up a separate GPU to process the background blur, shifting the task to the low-power NPU instead.

Just as practically, MSI's new AI engine detects what you're doing on your laptop and changes the battery profile, fan curves, and display settings as needed for the task. Play a game and everything gets cranked; start typing Word docs and everything slows down.

MSI also had an AI Artist app, running on the popular Stable Diffusion local generative AI art framework, which lets you generate images from text prompts, produce suitable text prompts from images you feed it, and create new images from images you select.

Windows Copilot and other generative art services can already do this, of course, but AI Artist does the task on your own device and is more versatile than simply typing words into a box to see what pictures it can create.

Lenovo's vision for NPU-driven AI seemed the most appealing. Called "AI Now," this text input-based suite of features looks genuinely helpful. You can use it to generate images and have it automatically set them as your wallpaper.

More helpfully, typing prompts like "My PC config" instantly brings up hardware information about your PC, removing the need to dive into complicated Windows sub-menus.

Asking for "eye care mode" enables the system's low-light filter. Asking it to optimise battery life adjusts the power profile depending on your usage, similar to MSI's AI engine.

More useful was Lenovo's Knowledge Base feature. You can train AI Now to sift through documents and files stored in a local "Knowledge Base" folder, and quickly generate reports, summaries, and synopses based on only the files within, never touching the cloud.

If you stash all your work files in it, you could, for instance, ask for a recap of all progress on a given project over the past month, and it will quickly generate that using the information stored in your documents, spreadsheets, et cetera. Now this seems truly useful, mimicking the cloud-based Office Copilot features that Microsoft charges businesses an arm and a leg for.

AI Now is in the experimental stage, and when it launches later this year, it will come to China first. What's more, the demo I saw wasn't actually running on the NPU yet - instead, Lenovo was using traditional CPU power for the tasks.

What CES shows us is that NPUs are only just starting to appear in computers, and the software that uses them ranges from gimmicky to "way too early" - it'll take time for the rise of the so-called "AI PC" to develop in any practical sense.

Nvidia was doing a lot more: features like DLSS, RTX Video Super Resolution, and Nvidia Broadcast are natural, practical real-world AI applications that users love and use every day.

The green giant was mainly showing off cloud-based AI tools, including its ACE character engine, which lets game NPCs hold full-blown generative chats about anything in a variety of languages.

A lineup of creator-focused Nvidia Studio laptops was on hand showing just how powerful GeForce's dedicated ray tracing and AI tensor cores can be at speeding up creation tasks, such as real-time image rendering or removing items from photos. But again, while that's amazing for creators, it's of little practical benefit to everyday consumers.

One new feature, a supplement to the existing RTX Video Super Resolution tool that uses GeForce's AI tensor cores to upscale and improve low-resolution videos, focuses on using AI to convert standard dynamic range video into high dynamic range (HDR). RTX Video HDR looked good in demos: the overly dark shadows in a Game of Thrones scene caused by video compression were cleared up and brightened, delivering a stunning increase in image quality.

It was a similar story in an underground scene from another demo, where the back of a subway station was dark beyond recognition, but RTX Video HDR let you pick out a tunnel, rubbish bins, and other details previously lost to the gloom. It looks great and should be arriving in a GeForce driver later this month.
