The new machines will have improved resolution and other features, but it seems that Volish researchers have also managed to make the Kinect identify the difference between an open hand and a fist. According to i-Programmer, the latest work on the Kinect uses the same sort of machine-learning approach as its existing skeleton tracking.
It uses a large number of images of people's hands and supervised training to distinguish between open and closed hands. This takes a lot of computation because it uses forests of decision trees, which is the same general method used to implement the skeleton tracking.
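Vole has not published its code, but the general recipe described above, supervised training of a decision forest on labelled hand images, can be sketched with scikit-learn. The feature vectors below are synthetic stand-ins (real systems would extract features from depth images); everything here is an illustrative assumption, not Kinect's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: each "hand image" becomes a 64-value
# feature vector. We synthesise two separable clusters so the example
# runs anywhere without a real depth camera.
rng = np.random.default_rng(0)
open_hands = rng.normal(loc=0.0, scale=1.0, size=(200, 64))
fists = rng.normal(loc=3.0, scale=1.0, size=(200, 64))

X = np.vstack([open_hands, fists])
y = np.array([0] * 200 + [1] * 200)  # 0 = open hand, 1 = fist

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A forest of decision trees, trained on the labelled examples.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(f"open-vs-fist accuracy: {accuracy:.2f}")
```

On real depth data the hard part is the feature extraction and the sheer volume of labelled images, which is why the training is expensive; the forest itself is cheap to evaluate per frame.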
It will make the Kinect incredibly useful. It will mean that the user interface can distinguish a "pick up" or "grip" gesture. Not only could you move your hands within an image, you could close both hands to grip points in the image and move them apart to zoom. It is not clear when Vole will install the software in any of its products yet. When it does, it will be able to identify that you are shaking your fist at the screen while screaming “curse you Steve Ballmer, we hates you forever”, which would be cathartic.
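The grip-then-spread zoom described above is easy to picture in code. This is a toy sketch, not Vole's implementation: it assumes the classifier's open/fist verdict is available per tracked hand, and computes the zoom factor as the ratio of hand separations.

```python
import math
from dataclasses import dataclass

@dataclass
class Hand:
    x: float
    y: float
    is_fist: bool  # output of the open/closed classifier

def separation(hands):
    """Distance between the two tracked hands."""
    a, b = hands
    return math.hypot(a.x - b.x, a.y - b.y)

def zoom_factor(before, after):
    """Zoom by how much the gripped hands have spread apart."""
    return separation(after) / separation(before)

# Both hands closed: the grip engages. Doubling the separation
# doubles the zoom.
start = (Hand(1.0, 0.0, True), Hand(2.0, 0.0, True))
end = (Hand(0.5, 0.0, True), Hand(2.5, 0.0, True))
if all(h.is_fist for h in start):
    print(zoom_factor(start, end))  # → 2.0
```

A real gesture engine would also smooth the positions over several frames and require the grip to be held, so that a momentary misclassification does not fire the zoom.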