Wave AI is our native ChatGPT integration that allows you to ask questions and receive answers directly from your terminal. This feature streamlines your workflow by providing AI assistance without the need to switch between multiple tools or applications.

Wave AI also supports the use of third-party LLM providers, allowing you to tailor your AI experience to your specific needs and concerns, whether they’re related to privacy, ethics, or access to the latest technologies. See the Third-Party LLM Support section for a full list of supported LLM providers.

Using Wave AI

There are currently two ways to use Wave AI: interactively and via the /chat command.

Interactive

The first is to click the “Wave AI” button in the command box or press the shortcut ctrl + space. This opens an interactive chat session where you can have a continuous conversation with the AI assistant.

In the interactive mode, you can ask follow-up questions, provide additional context, and engage in a more dynamic dialogue with Wave AI. This is particularly useful when you need to iterate on a problem or explore a topic in more depth.

Chat command

Alternatively, you can use the /chat command followed by your question to get a quick answer from the terminal.

Usage:
/chat How do I resolve a merge conflict?

This method is ideal for one-off queries or when you need a straightforward answer without the need for a back-and-forth conversation.

Note: Invoking Wave AI via the /chat command will result in an ad-hoc query, as each command in Wave gets its own command box. If you anticipate needing to ask follow-up questions or provide more context, it’s recommended to use the interactive mode.

Customization

By default, Wave will proxy your requests through our cloud servers to OpenAI. However, you can customize your experience by modifying the following settings in the UI or with the /client:set command (see the example after this list):

  • aiapitoken: Set your own OpenAI API key if you prefer not to use Wave’s default configuration. Note that your API token will never be sent to Wave’s cloud servers. When you provide your own API key, all future requests will be sent directly to OpenAI or any other endpoint you specify using aibaseurl.
  • aibaseurl: If you want to use another third-party LLM provider that is compatible with the OpenAI API, change the base URL to that provider’s endpoint. Set aiapitoken alongside the base URL if the service requires a key.
  • aimaxchoices: This option determines the number of different response variations the AI model will generate for each query. Increasing this value will provide more diverse responses, while decreasing it will make the model’s output more focused and consistent.
  • aimaxtokens: This setting allows you to control the maximum number of tokens (words or word pieces) that the AI model will generate in a single response.
  • aimodel: By default, Wave uses OpenAI’s gpt-3.5-turbo model; however, you can choose a different model if you wish. You will also need to set aiapitoken if you choose another OpenAI model. When configuring a third-party service, change this setting to the model name that service expects.
  • aitimeout: Specify the maximum time (in milliseconds) to wait for a response from the AI service before timing out. The default is 10 seconds (10000 milliseconds). This setting is particularly useful when configuring and troubleshooting LLM providers, as response times can vary significantly depending on the hardware constraints of the system running the model.
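
For example, to route Wave AI requests to a self-hosted, OpenAI-compatible endpoint, you could combine these settings as shown below. The URL, token, model name, and timeout value are placeholders for illustration, and the sketch assumes /client:set accepts key=value pairs:

/client:set aibaseurl=http://localhost:8080/v1
/client:set aiapitoken=<your-api-key>
/client:set aimodel=<model-name>
/client:set aitimeout=30000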

Note: To prevent abuse, telemetry must be enabled to use the cloud servers. There is currently a rate limit of 200 requests per day.

Third-Party LLM Support (BYOLLM)

Wave AI supports various third-party Large Language Model providers, allowing you to bring your own LLM (BYOLLM) and choose the model that best suits your needs and preferences. This section provides a comprehensive list of providers that are compatible with Wave AI, enabling you to make an informed decision based on your specific requirements and concerns.

To get started with a specific integration, simply click on the provider to access the setup instructions and configuration details for that particular LLM.

Supported LLM Providers:

Note: Currently, Wave’s third-party LLM integration supports language models that are compatible with OpenAI’s Chat Completions API. We are actively working on expanding our integrations to support a wider range of LLM providers and API formats in the near future.
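
A quick way to check whether a provider qualifies is to probe its endpoint directly with curl. The sketch below assumes the provider exposes the standard /v1/chat/completions route and accepts an optional bearer token; the URL, token, and model name are placeholders:

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-key>" \
  -d '{"model": "<model-name>", "messages": [{"role": "user", "content": "Hello"}]}'

If the service returns a standard chat completion response, it should work with the aibaseurl and aimodel settings described under Customization.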

Disabling Wave AI

Wave AI functionality can be disabled by turning telemetry off, either in the UI under “Settings” or by issuing the /telemetry:off command.
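
Usage:
/telemetry:off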

Future Plans

We’re excited about the future of Wave AI and plan to expand its capabilities. In the near future, we’ll add more third-party integrations with popular cloud-based models such as Anthropic’s Claude, Google’s Gemini, and more.