Newly shipped laptops come with two integrated cameras. This enables two innovations:
3D chat: By combining the two pictures delivered by the two cameras, the computer can generate a three-dimensional image of your face.
Eye tracking: With two cameras it is possible to calculate how far the face is from the screen. From that, the computer can calculate which point on the screen the eyes are looking at.
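The distance calculation from two cameras can be sketched with the standard stereo-triangulation formula: depth = focal length × baseline / disparity. This is a minimal illustration assuming a simple pinhole model with two identical, horizontally aligned cameras; all the numbers are made-up examples, not values from any real laptop.

```python
# Sketch of stereo depth estimation, assuming a pinhole-camera model
# with two identical, horizontally aligned cameras. Focal length,
# baseline, and disparity values are illustrative assumptions.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to a point from its disparity between the two images.

    focal_px:     focal length in pixels
    baseline_m:   distance between the two cameras in metres
    disparity_px: horizontal shift of the same point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("point must be visible in both cameras")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 6 cm camera baseline, 16 px disparity
print(stereo_depth(800, 0.06, 16))  # 3.0 (metres)
```

The key point is that no per-user calibration is needed: the baseline and focal length are fixed properties of the hardware, so the two pictures alone determine the distance.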
Eye tracking opens up several possibilities:
A control device for disabled people, or enhanced control for non-disabled users
Monitoring the user's attention level
Arguments: Problems could arise if the user's face is not parallel to the screen (e.g. if the face is at an angle to it) or if there are two faces in view of the cameras. With two cameras there is no calibration problem: the two pictures alone should be enough to calculate the distance.
Questions: Would it be possible to do eye tracking with only one camera? In that case, the eye tracker would need to be calibrated before use, because a single camera cannot calculate the distance between the user and the screen by itself, right? As far as I can see, it needs to know the distance between the user's eyes to calculate the distance.
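The single-camera idea above can be sketched the same way: if a one-off calibration supplies the real distance between the user's eyes, the pinhole model gives the face distance from how far apart the eyes appear in the image. All numbers here are illustrative assumptions (an interpupillary distance of 6.3 cm is a commonly cited adult average, but it varies per person, which is exactly why the calibration step would be needed).

```python
# Sketch of the single-camera fallback: distance to the face from the
# apparent eye separation, given a calibrated real-world eye distance.
# Focal length and pixel measurements are illustrative assumptions.

def depth_from_eye_distance(focal_px: float, ipd_m: float, ipd_px: float) -> float:
    """Distance to the face from how far apart the eyes appear.

    focal_px: focal length in pixels
    ipd_m:    calibrated distance between the user's eyes in metres
    ipd_px:   measured distance between the eyes in the image, in pixels
    """
    if ipd_px <= 0:
        raise ValueError("both eyes must be visible in the image")
    return focal_px * ipd_m / ipd_px

# Example: 800 px focal length, 6.3 cm between the eyes, 84 px apart in the image
print(depth_from_eye_distance(800, 0.063, 84))  # 0.6 (metres)
```

This confirms the intuition in the question: the single-camera version works, but only after the per-user eye distance has been measured, whereas the two-camera version needs no such calibration.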