Sony Ericsson announced the main feature of its new K980: copying and pasting text from books and journals using the camera. It has never been easier to comment on a passage of a book.
Comments on books can be made available online for others.
Whenever you find a remarkable passage in a book, just hold the camera of your mobile phone over it, mark the desired text, and save it for later commenting or make a note right away.
If you photograph the book cover, the title of the book is automatically added to the comment entry as metadata.
Arguments: Students all over the world would be happy to have an easy way to comment on books digitally, in order to summarize them and to facilitate research. The main reason why this has not been implemented yet is probably the processing power needed for optical character recognition (OCR). Another cause may be the limited means current mobile phones offer for marking text (no touch screen, low screen resolution) and for entering comments (keypad only).
Another very interesting component of this scenario is sharing the comments via the Internet. Books could become more and more annotated and commented on, like blog entries.
Can you think of any other causes that hinder the implementation of such an application?
Shoot & Translate - a J2ME application with OCR that translates text. From the information given, however, it is not clear whether the OCR part runs on the phone or on a server.
knfb Mobile Reader - an application that reads photographed documents aloud, especially useful for blind people. Only available for Symbian 3rd Edition phones.
ABBYY Mobile OCR SDK - The one-year-old video on that site shows a mobile OCR application in action, although it looks a lot like a pure demo: the OCR is done on an already prepared image of a business card.
Newly shipped laptops have two integrated cameras. This enables two new innovations:
3D-Chat: By combining the two pictures delivered by the two cameras, the computer can generate a three-dimensional picture of your face.
Tracking of eye movements: With two cameras it is possible to calculate how far away the face is from the screen. From that, the computer can calculate which point on the screen the eyes are looking at.
Eye-tracking delivers several new possibilities:
A control device for disabled people, or enhanced control for non-disabled users
Monitoring the user's attention level
Arguments: Problems could arise if the user's face is not parallel to the screen (e.g. if the face is at an angle to it) or if two faces are in view of the cameras. With two cameras, there is no calibration problem: the two pictures should be enough to calculate the distance.
Questions: Would it be possible to do eye tracking with only one camera? The tracking would then need to be calibrated before use, because a single camera cannot determine the distance between the user and the screen by itself, right? As far as I can see, it needs to know the distance between the eyes to calculate the distance.
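The distance calculations discussed above can be sketched with the standard pinhole-camera relations: with two cameras, depth follows from the disparity between the images (Z = f · B / d); with one camera, it follows from a known real-world size, e.g. the distance between the eyes (about 63 mm on average). The focal length, baseline, and pixel positions below are made-up illustration values, not measurements from a real laptop.

```python
def stereo_distance(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth from two cameras: Z = f * B / d, where d is the disparity
    (horizontal pixel offset of the same feature between the two images).

    focal_px   -- focal length of the cameras, in pixels
    baseline_m -- distance between the two cameras, in metres
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("feature must appear further left in the left image")
    return focal_px * baseline_m / disparity


def single_camera_distance(focal_px, eye_gap_m, eye_gap_px):
    """Depth from one camera, given the known real distance between the eyes.

    Uses the pinhole relation Z = f * real_size / size_in_pixels, which is
    exactly why a single camera needs the eye distance as prior knowledge.
    """
    return focal_px * eye_gap_m / eye_gap_px


# Illustrative values only (assumed 800 px focal length):
print(stereo_distance(800, 0.10, 420, 400))     # 4.0 -> face 4 m away
print(single_camera_distance(800, 0.063, 100))  # ~0.5 m away
```

This also answers the question above: one camera suffices as soon as the interocular distance is assumed or calibrated once, at the cost of error for people whose eyes are closer together or further apart than the assumed value.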
GTA V generates more revenue from in-game advertising than from sales of the game itself. The advertising is not obtrusive; it is seamlessly integrated into the game environment. On the main squares, for example, the ads look like their real-world counterparts, and pedestrians drink Coca-Cola or eat at Burger King.