LocalAI is a free, open-source alternative to OpenAI that lets users run large language models (LLMs) and generate images, audio, and more, locally or on-premises on consumer-grade hardware. It acts as a drop-in replacement REST API compatible with the OpenAI API specification, enabling local inferencing. LocalAI supports multiple model families and architectures without requiring a GPU, giving users the power and flexibility of AI while keeping full control over their data and infrastructure.
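
Because LocalAI mirrors the OpenAI API, any OpenAI-style request works against a LocalAI instance unchanged. As a quick illustration, the following curl request sends a chat completion query to a local instance, assuming the default port 8080 and a placeholder model name (substitute one of your installed models):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<your-model>", "messages": [{"role": "user", "content": "Hello!"}]}'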

Note: At the moment, Wave’s integration with LocalAI supports only its LLM features. Image generation, audio processing, speech-to-text, and other capabilities are not yet available but may be added in future updates.

To see a full list of supported LLM providers, please visit the Third-Party LLM Support section in the Wave AI features page.

Installation

Please visit LocalAI’s quickstart guide for instructions on downloading and installing LocalAI.
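
For reference, one common route at the time of writing is LocalAI's all-in-one container image, started with a single docker command like the one below. Image tags and options change over time, so treat this as illustrative and follow the quickstart guide for the current instructions:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu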

Configuration

After installing and configuring LocalAI, you can start using it in Wave by setting two parameters: aibaseurl and aimodel. These parameters can be set either through the UI or from the command line; note that the parameter names differ slightly between the two methods.

Parameters

  • AI Base URL: Set this parameter to the base URL or endpoint that Wave AI should query. For LocalAI running locally, use http://localhost:8080. For remote LocalAI instances, replace localhost with the hostname or IP address of the server where LocalAI is running. In either case, if your installation uses a port other than the default 8080, update the URL accordingly.

  • AI Model: Specify the LocalAI model you want to use. To discover available models, query LocalAI’s /v1/models endpoint, as in the sample request below.

    Example: http://localhost:8080/v1/models.
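
As a concrete sketch, querying that endpoint with curl returns an OpenAI-style model list; the values in the id fields are what the AI Model parameter expects (the response shown is illustrative):

curl http://localhost:8080/v1/models

{"object": "list", "data": [{"id": "<your-model>", "object": "model"}]}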

Configuring via the UI

To configure LocalAI from Wave’s user interface, navigate to the “Settings” menu and set the AI Base URL and AI Model parameters as described in the previous section.

Configuring via the CLI

To configure LocalAI using the command line, set the aibaseurl and aimodel parameters using the /client:set command, as shown in the example below.

/client:set aibaseurl=<your-LocalAI-base-url>
/client:set aimodel=<your-LocalAI-model-name>
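
For example, for a LocalAI instance on the default local port with a model whose id is luna-ai-llama2 (an illustrative name; use an id returned by /v1/models), the commands would be:

/client:set aibaseurl=http://localhost:8080
/client:set aimodel=luna-ai-llama2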

Usage

Once you have installed and configured LocalAI, you can start using it in Wave. There are two primary ways to interact with your newly configured LLM: Interactive Mode and the /chat command.

  • Interactive Mode: To enter Interactive Mode, click the “Wave AI” button in the command box or use the ctrl + space shortcut. This will open an interactive chat session where you can have a continuous conversation with the AI assistant powered by your LocalAI model.
  • /chat: Alternatively, use the /chat command followed by your question to get a quick answer from your LocalAI model directly in the terminal, as in the example below.
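
For example, a one-off query (the prompt is illustrative) looks like this:

/chat how do I recursively search files for a string?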

Troubleshooting

If you encounter issues while using LocalAI with Wave AI, consider the following troubleshooting steps:

  • Connection failures: If Wave AI fails to connect to LocalAI or returns an error message, verify that LocalAI is running and accessible from the system where Wave is installed. Check the LocalAI logs for any error messages or indications of why the connection might be failing.
  • Timeouts: If queries fail to complete or frequently time out, try raising the aitimeout parameter to a higher value. This gives LocalAI more time to process and respond to your requests, which is especially helpful when it runs on a system with limited hardware resources.
  • Incorrect base URL or port: Ensure that the aibaseurl parameter points to the correct URL and port number where LocalAI is running. If you have changed the default port or are running LocalAI on a remote server, update the URL accordingly.
  • Incorrect model selection: If you have multiple LocalAI models installed, make sure to set the aimodel parameter to the specific model you want to use.
  • Unexpected behavior or inconsistent results: If LocalAI behaves unexpectedly or returns inconsistent results in Wave AI, try resetting the aibaseurl and aimodel parameters to their default values and reconfiguring LocalAI from scratch. This can help rule out configuration issues that might be causing problems.
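
A quick way to check the connection and base URL items above is to request the model list from the machine where Wave is installed; if the request below fails or hangs, the problem lies with connectivity, the hostname, or the port rather than with Wave itself (adjust the URL to match your aibaseurl setting):

curl http://localhost:8080/v1/models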

If you continue to face issues after trying these troubleshooting steps, please see the Additional Resources section below for further assistance, or feel free to reach out to us on Discord.

Reset Wave AI

If at any time you wish to return to the default Wave AI experience, you can reset the aibaseurl and aimodel parameters to their default state using the following commands.

/client:set aibaseurl=
/client:set aimodel=

Note: This can also be done in the UI, as described in the Configuring via the UI section above.

Additional Resources