Capturing, Encoding and Rendering Gestures using the Kinect.

Many countries lack the networking infrastructure needed to support video chatting applications such as Skype. Video chatting generally requires the transmission of large amounts of data, and slow connections can bottleneck the service, causing the image to lag, freeze, or fall out of sync. These problems make video chatting impractical for the hearing impaired, or for anyone trying to follow gestures during a conversation.

The application to be developed will allow users to converse asynchronously using the Kinect by capturing gestures, encoding them, and rendering the encoded gestures with an avatar. If successful, the application could be used widely by the deaf community in the future.
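The capture-encode-render pipeline above hinges on the idea that skeleton data is far cheaper to transmit than video. A minimal sketch of the encoding step, assuming the Kinect supplies each frame as per-joint 3D positions (the joint names and the packing scheme here are illustrative, not the project's actual format):

```python
import struct

# Hypothetical subset of the Kinect skeleton; the real sensor tracks many more joints.
JOINTS = ["head", "shoulder_left", "elbow_left", "wrist_left", "hand_left"]

def encode_frame(frame):
    """Pack one skeleton frame as little-endian float32 triples: (x, y, z) per joint."""
    data = b""
    for name in JOINTS:
        x, y, z = frame[name]
        data += struct.pack("<fff", x, y, z)
    return data

def decode_frame(data):
    """Reverse of encode_frame: bytes back to a joint-name -> (x, y, z) mapping."""
    frame = {}
    for i, name in enumerate(JOINTS):
        frame[name] = struct.unpack_from("<fff", data, i * 12)
    return frame

# One frame costs only 12 bytes per joint, orders of magnitude less than a
# video frame, which is what makes asynchronous transfer over slow links viable.
frame = {name: (0.5 * i, 0.25 * i, 1.0 * i) for i, name in enumerate(JOINTS)}
blob = encode_frame(frame)
assert decode_frame(blob) == frame
```

The avatar on the receiving side would replay these decoded frames, so no raw imagery ever crosses the network.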

Downloads

  • Term 1 Presentation
  • Term 2 Presentation
  • Term 3 Presentation
  • Term 4 Presentation
  • Documentation