EmoVIS

Prerequisites

IBM Watson Services

  1. Create two services on IBM Watson: Speech to Text & Tone Analyzer.

  2. Create config.js in the server directory and put the credentials in it as follows (a usage sketch of loading these credentials follows the snippet).

const config = {
    'speech_to_text': [{
        'credentials': {
            'url': '',
            'iam_apikey': ''
        }
    }],
    'tone_analyzer': [{
        'credentials': {
            'url': '',
            'iam_apikey': ''
        }
    }]
};

module.exports = config;
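
The server consumes these credentials when it calls the Watson services. Below is a minimal sketch assuming the ibm-watson Node SDK; the actual wiring in this repository may differ.

// watson-sketch.js — illustrative only; assumes `npm install ibm-watson`.
const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
const ToneAnalyzerV3 = require('ibm-watson/tone-analyzer/v3');
const { IamAuthenticator } = require('ibm-watson/auth');
const config = require('./config');

// Pull the credentials out of the config structure shown above.
const sttCreds = config.speech_to_text[0].credentials;
const toneCreds = config.tone_analyzer[0].credentials;

const speechToText = new SpeechToTextV1({
    authenticator: new IamAuthenticator({ apikey: sttCreds.iam_apikey }),
    serviceUrl: sttCreds.url
});

const toneAnalyzer = new ToneAnalyzerV3({
    version: '2017-09-21',
    authenticator: new IamAuthenticator({ apikey: toneCreds.iam_apikey }),
    serviceUrl: toneCreds.url
});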
  3. Generate your own trusted localhost certificate and put it in server/src/keys/ to enable HTTPS on localhost (a sketch of serving HTTPS with these files follows the command):
openssl req -x509 -out localhost.crt -keyout localhost.key \
  -newkey rsa:2048 -nodes -sha256 \
  -subj '/CN=localhost' -extensions EXT -config <( \
   printf "[dn]\nCN=localhost\n[req]\ndistinguished_name = dn\n[EXT]\nsubjectAltName=DNS:localhost\nkeyUsage=digitalSignature\nextendedKeyUsage=serverAuth")
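
To load the generated key and certificate, here is a minimal Node sketch. The file names match the openssl command above; the port and request handler are assumptions, not the repository's actual server code.

// https-sketch.js — illustrative only; port 3000 and this handler are assumptions.
const https = require('https');
const fs = require('fs');
const path = require('path');

// Read the self-signed certificate generated with the openssl command above.
const options = {
    key: fs.readFileSync(path.join(__dirname, 'src/keys/localhost.key')),
    cert: fs.readFileSync(path.join(__dirname, 'src/keys/localhost.crt'))
};

// Serve a trivial response over HTTPS on localhost.
https.createServer(options, (req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('EmoVIS server is running over HTTPS\n');
}).listen(3000, () => {
    console.log('Listening on https://localhost:3000');
});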

Usage

Client

cd client
npm install
npm start

Server

cd server
npm install
npm start

TODO

- redesign the radial chart
- redesign the text cloud
- redesign the flow chart
- implement the control panel

Acknowledgement

I did not have enough time or computing resources to train my own models for facial emotion prediction and speech emotion recognition, so I rely on existing services.

The emotion model and classifier, as well as the landmark tracker, are from auduno/clmtrackr. The speech-to-text and tone analysis services are from IBM Watson.

Contributing

Feel free to implement anything from the roadmap, submit pull requests, create issues, discuss ideas or spread the word.

License

MIT © Yuan Chen