Node.js client library to use the Watson APIs.
- You need an IBM Cloud account.
- Node >= 20: This SDK is tested with Node versions 20 and up. It may work on earlier versions, but this is not officially supported.
npm install ibm-watson
import AssistantV2 from 'ibm-watson/assistant/v2';
import { IamAuthenticator } from 'ibm-watson/auth';
const assistantClient = new AssistantV2({
authenticator: new IamAuthenticator({ apikey: '{apikey}' }),
version: '{version}',
});
// ...
The examples folder has basic and advanced examples. The examples within each service assume that you already have service credentials.
Starting with v5.0.0, the SDK should work in the browser, out of the box, with most bundlers.
See the examples/ folder for Browserify and Webpack client-side SDK examples (with server-side generation of auth tokens).
Note: not all services currently support CORS, and therefore not all services can be used client-side. Of those that do, most require an auth token to be generated server-side via the Authorization Service.
Watson services are migrating to token-based Identity and Access Management (IAM) authentication.
- With some service instances, you authenticate to the API by using IAM.
- In other instances, you authenticate by providing the username and password for the service instance.
- If you are using a Watson service on ICP, you will need to authenticate in a specific way.
- If you are using a Watson service on AWS, you will need to authenticate using MCSP.
Authentication is accomplished using dedicated Authenticators for each authentication scheme. Import authenticators from ibm-watson/auth
or rely on externally-configured credentials which will be read from a credentials file or environment variables.
To learn more about the Authenticators and how to use them with your services, see the detailed documentation.
To find out which authentication to use, view the service credentials. You find the service credentials for authentication the same way for all Watson services:
- Go to the IBM Cloud Dashboard page.
- Either click an existing Watson service instance in your resource list or click Create resource > AI and create a service instance.
- Click on the Manage item in the left nav bar of your service instance.
On this page, you should be able to see your credentials for accessing your service instance.
In your code, you can use these values in the service constructor or with a method call after instantiating your service.
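For example, here is a minimal sketch (with placeholder credentials) that passes the values to the constructor and then adjusts the service URL afterward:
const AssistantV2 = require('ibm-watson/assistant/v2');
const { IamAuthenticator } = require('ibm-watson/auth');

// use the apikey and url values shown on the Manage page of your service instance
const assistant = new AssistantV2({
  version: '2024-08-25',
  authenticator: new IamAuthenticator({ apikey: '{apikey}' }),
  serviceUrl: '{url}',
});

// or set (or reset) the URL after instantiation
assistant.setServiceUrl('{url}');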
There are two ways to supply the credentials you found above to the SDK for authentication:
- Allow the credentials to be automatically read from the environment
- Instantiate an authenticator with explicit credentials and use it to create your service
With a credentials file, you just need to put the file in the right place and the SDK will do the work of parsing it and authenticating. You can get this file by clicking the Download button for the credentials in the Manage tab of your service instance.
The file downloaded will be called ibm-credentials.env. This is the name the SDK will search for and must be preserved unless you want to configure the file path (more on that later). The SDK will look for your ibm-credentials.env file in the following places (in order):
- Directory provided by the environment variable IBM_CREDENTIALS_FILE
- Your system's home directory
- Your current working directory (the directory Node is executed from)
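For reference, the downloaded file is a plain text file of KEY=VALUE pairs. For an Assistant instance using IAM authentication it might look like the sketch below; the exact variable names and values come from the file you download, so treat this as illustrative only:
# illustrative ibm-credentials.env contents
ASSISTANT_AUTH_TYPE=iam
ASSISTANT_APIKEY=<apikey>
ASSISTANT_URL=https://api.us-south.assistant.watson.cloud.ibm.com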
As long as you set that up correctly, you don't have to worry about setting any authentication options in your code. So, for example, if you created and downloaded the credential file for your Assistant instance, you just need to do the following:
const AssistantV2 = require('ibm-watson/assistant/v2');
const assistant = new AssistantV2({ version: '2024-08-25' });
And that's it!
If you're using more than one service at a time in your code and get two different ibm-credentials.env files, just put the contents together in one ibm-credentials.env file and the SDK will handle assigning credentials to their appropriate services.
Special Note: Due to legacy issues in Assistant V1 and V2, the following parameter, serviceName, must be added when creating the service object:
const AssistantV2 = require('ibm-watson/assistant/v2');
const assistant = new AssistantV2({
version: '2024-08-25',
serviceName: 'assistant',
})
It is worth noting that if you are planning to rely on VCAP_SERVICES for authentication, the serviceName parameter MUST be removed; otherwise VCAP_SERVICES will not be able to authenticate you. See Cloud Authentication Prioritization for more details.
If you would like to configure the location/name of your credential file, you can set an environment variable called IBM_CREDENTIALS_FILE. This will take precedence over the locations specified above. Here's how you can do that:
export IBM_CREDENTIALS_FILE="<path>"
where <path> is something like /home/user/Downloads/<file_name>.env. If you just provide a path to a directory, the SDK will look for a file called ibm-credentials.env in that directory.
The SDK also supports setting credentials manually in your code, using an Authenticator.
Some services use token-based Identity and Access Management (IAM) authentication. IAM authentication uses a service API key to get an access token that is passed with the call. Access tokens are valid for approximately one hour and must be regenerated.
To use IAM authentication, you must use an IamAuthenticator or a BearerTokenAuthenticator.
- Use the IamAuthenticator to have the SDK manage the lifecycle of the access token. The SDK requests an access token, ensures that the access token is valid, and refreshes it if necessary.
- Use the BearerTokenAuthenticator if you want to manage the lifecycle yourself. For details, see Authenticating with IAM tokens.
If you want to switch your authenticator, you must override the authenticator property directly.
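For reference, a minimal sketch of constructing each authenticator (placeholder values; option names assumed from the ibm-watson/auth exports):
const { IamAuthenticator, BearerTokenAuthenticator } = require('ibm-watson/auth');

// Option 1: give the SDK an API key and let it request and refresh access tokens
const iamAuthenticator = new IamAuthenticator({ apikey: '{apikey}' });

// Option 2: supply an access token you manage yourself; refreshing it is up to you
const bearerAuthenticator = new BearerTokenAuthenticator({ bearerToken: '{access_token}' });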
To use the SDK in a Cloud Pak, use the CloudPakForDataAuthenticator. This will require a username, password, and URL.
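A minimal sketch, assuming placeholder values and that the options are named after the required username, password, and URL:
const { CloudPakForDataAuthenticator } = require('ibm-watson/auth');

// username, password, and url are required; url points at your CP4D token service (placeholder values)
const cp4dAuthenticator = new CloudPakForDataAuthenticator({
  username: '{username}',
  password: '{password}',
  url: '{cpd_token_service_url}',
});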
To use the SDK through a third party cloud provider (such as AWS), use the MCSPAuthenticator. This will require the base endpoint URL for the MCSP token service (e.g. https://iam.platform.saas.ibm.com) and an apikey.
import AssistantV2 from 'ibm-watson/assistant/v2';
import { McspAuthenticator } from 'ibm-watson/auth';
// In the constructor, letting the SDK manage the token
const authenticator = new McspAuthenticator({
url: 'token_service_endpoint',
apikey: 'apikey',
});
const assistant = new AssistantV2({
version: '2024-08-25',
authenticator,
});
assistant.setServiceUrl('<url_as_per_region>');
When uploading your application to IBM Cloud there is a certain priority Watson services will use when looking for proper credentials. The order is as follows:
- Programmatic (i.e. IamAuthenticator)
- Credentials File
- VCAP_SERVICES (an environment variable used by IBM Cloud, details found here)
You can set or reset the base URL after constructing the client instance using the setServiceUrl method:
const AssistantV2 = require('ibm-watson/assistant/v2');
const assistant = new AssistantV2({
/* authenticator, version, etc... */
});
assistant.setServiceUrl('<new url>');
All SDK methods are asynchronous, as they make network requests to Watson services. To handle the data returned from these requests, the SDK supports Promises.
const AssistantV2 = require('ibm-watson/assistant/v2');
const assistant = new AssistantV2({
/* authenticator, version, serviceUrl, etc... */
});
// using Promises
assistant.listAssistants()
.then(body => {
console.log(JSON.stringify(body, null, 2));
})
.catch(err => {
console.log(err);
});
// using Promises provides the ability to use async / await
async function callAssistant() { // note that callAssistant also returns a Promise
const body = await assistant.listAssistants();
console.log(JSON.stringify(body, null, 2));
}
Custom headers can be passed with any request. Each method has an optional headers parameter, which can be used to pass in custom headers; these can override the default headers that the SDK sets.
For example, this is how you can pass custom headers to the Watson Assistant service. In this example, the 'custom' value for 'Accept-Language' overrides the default 'Accept-Language' header, while 'Custom-Header' does not override any default header and is simply sent along with the request.
const assistant = new watson.AssistantV2({
/* authenticator, version, serviceUrl, etc... */
});
assistant.message({
assistantId: '<assistant id>',
sessionId: '<session id>',
input: { text: 'Hello' },
headers: {
'Custom-Header': 'custom',
'Accept-Language': 'custom'
}
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log('error: ', err);
});
The SDK now returns the full HTTP response by default for each method.
Here is an example of how to access the response headers for Watson Assistant:
const assistant = new AssistantV2({
/* authenticator, version, serviceUrl, etc... */
});
assistant.message(params).then(
response => {
console.log(response.headers);
},
err => {
console.log(err);
/*
`err` is an Error object. It will always have a `message` field
and depending on the type of error, it may also have the following fields:
- body
- headers
- name
- code
*/
}
);
Every SDK call returns a response with a transaction ID in the X-Global-Transaction-Id
header. Together with the service instance region, this ID helps support teams troubleshoot issues from relevant logs.
const assistant = new AssistantV2({
/* authenticator, version, serviceUrl, etc... */
});
assistant.message(params).then(
response => {
console.log(response.headers['X-Global-Transaction-Id']);
},
err => {
console.log(err);
}
);
For Speech to Text WebSocket requests, the transaction ID can be retrieved from the recognize stream:
const speechToText = new SpeechToTextV1({
/* authenticator, version, serviceUrl, etc... */
});
const recognizeStream = speechToText.recognizeUsingWebSocket(params);
// getTransactionId returns a Promise that resolves to the ID
recognizeStream.getTransactionId().then(
globalTransactionId => console.log(globalTransactionId),
err => console.log(err),
);
However, the transaction ID isn't available when the API doesn't return a response for some reason. In that case, you can set your own transaction ID in the request. For example, replace <my-unique-transaction-id>
in the following example with a unique transaction ID.
const assistant = new AssistantV2({
/* authenticator, version, serviceUrl, etc... */
});
assistant.message({
assistantId: '<assistant id>',
sessionId: '<session id>',
input: { text: 'Hello' },
headers: {
'X-Global-Transaction-Id': '<my-unique-transaction-id>'
}
}).then(
response => {
console.log(response.headers['X-Global-Transaction-Id']);
},
err => {
console.log(err);
}
);
By default, all requests are logged. This can be disabled by setting the X-Watson-Learning-Opt-Out header when creating the service instance:
const myInstance = new watson.WhateverServiceV1({
/* authenticator, version, serviceUrl, etc... */
headers: {
"X-Watson-Learning-Opt-Out": true
}
});
The SDK provides the user with full control over the HTTPS Agent used to make requests. This is available for both the service client and the authenticators that make network requests (e.g. IamAuthenticator). Outlined below are a couple of different scenarios where this capability is needed. Note that this functionality is for Node environments only - these configurations will have no effect in the browser.
To use the SDK (which makes HTTPS requests) behind an HTTP proxy, a special tunneling agent must be used. Use the package tunnel for this. Configure this agent with your proxy information, and pass it in as the HTTPS agent in the service constructor. Additionally, you must set proxy to false in the service constructor. If using an Authenticator that makes network requests (IAM or CP4D), you must set these fields in the Authenticator constructor as well.
See this example configuration:
const tunnel = require('tunnel');
const AssistantV2 = require('ibm-watson/assistant/v2');
const { IamAuthenticator } = require('ibm-watson/auth');
const httpsAgent = tunnel.httpsOverHttp({
proxy: {
host: 'some.host.org',
port: 1234,
},
});
const assistant = new AssistantV2({
authenticator: new IamAuthenticator({
apikey: 'fakekey-1234',
httpsAgent, // not necessary if using Basic or BearerToken authentication
proxy: false,
}),
version: '2024-08-25',
httpsAgent,
proxy: false,
});
To send custom certificates as a security measure in your request, use the cert, key, and/or ca properties of the HTTPS Agent. See this documentation for more information about the options. Note that the entire contents of the file must be provided - not just the file name.
const fs = require('fs');
const https = require('https');
const AssistantV2 = require('ibm-watson/assistant/v2');
const { IamAuthenticator } = require('ibm-watson/auth');
const certFile = fs.readFileSync('./my-cert.pem');
const keyFile = fs.readFileSync('./my-key.pem');
const assistant = new AssistantV2({
authenticator: new IamAuthenticator({
apikey: 'fakekey-1234',
httpsAgent: new https.Agent({
key: keyFile,
cert: certFile,
})
}),
version: '2024-08-25',
httpsAgent: new https.Agent({
key: keyFile,
cert: certFile,
}),
});
The HTTP client can be configured to disable SSL verification. Note that this has serious security implications - only do this if you really mean to!
To do this, set disableSslVerification to true in the service constructor and/or authenticator constructor, like below:
const assistant = new AssistantV2({
serviceUrl: '<service_url>',
version: '<version-date>',
authenticator: new IamAuthenticator({ apikey: '<apikey>', disableSslVerification: true }), // this will disable SSL verification for requests to the token endpoint
disableSslVerification: true, // this will disable SSL verification for any request made with this client instance
});
To see all possible HTTPS agent configuration options, go to this link for the quickest and most readable format. For even more detailed information, you can go to the Node documentation here.
You can find links to the documentation at https://cloud.ibm.com/developer/watson/documentation. Find the service that you're interested in, click API reference, and then select the Node tab.
If you have issues with the APIs or have a question about the Watson services, see Stack Overflow.
Use the Assistant v2 service to determine the intent of a message.
Note: You must first create an assistant via IBM Cloud. See the documentation for details.
const AssistantV2 = require('ibm-watson/assistant/v2');
const { IamAuthenticator } = require('ibm-watson/auth');
const assistant = new AssistantV2({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.assistant.watson.cloud.ibm.com',
version: '2018-09-19'
});
assistant.message(
{
input: { text: "What's the weather?" },
assistantId: '<assistant id>',
sessionId: '<session id>',
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
Use the Assistant v1 service to determine the intent of a message.
Note: You must first create a workspace via IBM Cloud. See the documentation for details.
const AssistantV1 = require('ibm-watson/assistant/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const assistant = new AssistantV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.assistant.watson.cloud.ibm.com',
version: '2024-08-25'
});
assistant.message(
{
input: { text: "What's the weather?" },
workspaceId: '<workspace id>'
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
Use the Discovery Service to search and analyze structured and unstructured data.
const DiscoveryV2 = require('ibm-watson/discovery/v2');
const { IamAuthenticator } = require('ibm-watson/auth');
const discovery = new DiscoveryV2({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.discovery.watson.cloud.ibm.com',
version: '2019-11-22'
});
discovery.query(
{
projectId: '<project_id>',
collectionId: '<collection_id>',
query: 'my_query'
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
Natural Language Understanding is a collection of natural language processing APIs that help you understand sentiment, keywords, entities, high-level concepts and more.
const fs = require('fs');
const NaturalLanguageUnderstandingV1 = require('ibm-watson/natural-language-understanding/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const nlu = new NaturalLanguageUnderstandingV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
version: '2018-04-05',
serviceUrl: 'https://api.us-south.natural-language-understanding.watson.cloud.ibm.com'
});
// read the HTML to analyze from a local file (placeholder path)
const file_data = fs.readFileSync('./example.html');
nlu.analyze(
{
html: file_data, // Buffer or String
features: {
concepts: {},
keywords: {}
}
})
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log('error: ', err);
});
Use the Speech to Text service to recognize the text from a .wav file.
const fs = require('fs');
const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const speechToText = new SpeechToTextV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.speech-to-text.watson.cloud.ibm.com'
});
const params = {
// From file
audio: fs.createReadStream('./resources/speech.wav'),
contentType: 'audio/l16; rate=44100'
};
speechToText.recognize(params)
.then(response => {
console.log(JSON.stringify(response.result, null, 2));
})
.catch(err => {
console.log(err);
});
// or streaming
fs.createReadStream('./resources/speech.wav')
.pipe(speechToText.recognizeUsingWebSocket({ contentType: 'audio/l16; rate=44100' }))
.pipe(fs.createWriteStream('./transcription.txt'));
Use the Text to Speech service to synthesize text into an audio file.
const fs = require('fs');
const TextToSpeechV1 = require('ibm-watson/text-to-speech/v1');
const { IamAuthenticator } = require('ibm-watson/auth');
const textToSpeech = new TextToSpeechV1({
authenticator: new IamAuthenticator({ apikey: '<apikey>' }),
serviceUrl: 'https://api.us-south.text-to-speech.watson.cloud.ibm.com'
});
const params = {
text: 'Hello from IBM Watson',
voice: 'en-US_AllisonVoice', // Optional voice
accept: 'audio/wav'
};
// Synthesize speech, correct the wav header, then save to disk
// (wav header requires a file length, but this is unknown until after the header is already generated and sent)
// note that `repairWavHeaderStream` will read the whole stream into memory in order to process it.
// the method returns a Promise that resolves with the repaired buffer
textToSpeech
.synthesize(params)
.then(response => {
const audio = response.result;
return textToSpeech.repairWavHeaderStream(audio);
})
.then(repairedFile => {
fs.writeFileSync('audio.wav', repairedFile);
console.log('audio.wav written with a corrected wav header');
})
.catch(err => {
console.log(err);
});
// or, using WebSockets
const synthStream = textToSpeech.synthesizeUsingWebSocket(params);
synthStream.pipe(fs.createWriteStream('./audio.ogg'));
// see more information in examples/text_to_speech_websocket.js
The SDK always expects an authenticator to be passed in. To make an unauthenticated request, use the NoAuthAuthenticator.
const watson = require('ibm-watson');
const { NoAuthAuthenticator } = require('ibm-watson/auth');
const assistant = new watson.AssistantV2({
authenticator: new NoAuthAuthenticator(),
version: '2024-08-25',
});
This library relies on the axios npm module to call the Watson services. To debug your apps, add 'axios' to the NODE_DEBUG environment variable:
$ NODE_DEBUG='axios' node app.js
where app.js is your Node.js file.
Running all the tests:
$ npm test
Running a specific test:
$ jest '<path to test>'
Find more open source projects on the IBM Github Page.
See CONTRIBUTING.
We love to highlight cool open-source projects that use this SDK! If you'd like to get your project added to the list, feel free to make an issue linking us to it.
This library is licensed under Apache 2.0. Full license text is available in COPYING.