# Self-Hosting
You can self-host the Platform code and use it to connect to the Inference Grid directly. This is recommended for users who want to run their own private endpoint, have full control over their funds, and ensure that no one can censor or block their requests.
# Prerequisites
Every node on the Inference Grid is identified by a private/public key pair. You can generate your own key pair with the following command:
```sh
./oaica generate
```
Next, you need some way to make Lightning payments. We recommend Spark, which is fully-custodial and easy to set up, but you can use any Lightning node you want.
WARNING
As you make requests, the adapter will automatically pay for them using your attached wallet. If the wallet is empty, the adapter will not be able to pay for requests and will return an error.
# Configuration
Now, you need to define your `config.json` file. In addition to the private key and Spark mnemonic, you can optionally specify your app's information in order to get it listed on the Inference Grid leaderboard.
```json
{
  // Your connection to the Inference Grid.
  "relay": "wss://relay.inferencegrid.ai/consumer/ws",
  "private_key": "...", // Hex-encoded private key.

  // Your personal OpenAI-compatible API endpoint.
  "port": "5001",
  "secret_api_key": "...",

  // (Optional) Your branding.
  "display_name": "ArcticSQL",
  "website_url": "https://arcticsql.app",
  "logo_url": "https://arcticsql.app/favicon.png",

  // Your Lightning wallet - you can use Spark or Lightspark!
  "spark": {
    "mnemonic": "..."
  },
  "lightspark": {
    "token_client_id": "",
    "token_client_secret": "",
    "node_id": "",
    "node_password": ""
  }
}
```
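If you prefer to generate this file programmatically, the sketch below writes a minimal plain-JSON config. The field names are taken from the example above; the `"..."` placeholders are stand-ins you must replace with your own values, and it assumes the adapter accepts comment-free JSON (a subset of the commented example shown):

```python
import json

# Minimal configuration sketch - replace the "..." placeholders with your own
# private key, API key, and Spark mnemonic. Field names follow the example
# config above; optional branding and Lightspark fields are omitted.
config = {
    "relay": "wss://relay.inferencegrid.ai/consumer/ws",
    "private_key": "...",          # Hex-encoded key from `./oaica generate`.
    "port": "5001",
    "secret_api_key": "...",
    "spark": {"mnemonic": "..."},  # Spark wallet mnemonic.
}

# Write the config next to the adapter binary.
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```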
# Running
Finally, you can download the adapter release from the releases page and run it with the following command:
```sh
./oaica --config config.json
```
This will start an OpenAI-compatible API server that you can use to make requests to the Inference Grid. Note that you will need to use the `secret_api_key` you specified in your config file to authenticate requests to your endpoint.
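Once the adapter is running, any OpenAI-compatible client can talk to it. The sketch below builds a chat-completion request against the local endpoint; the port and API key come from the config above, while the `/v1/chat/completions` path and the model name are assumptions based on the standard OpenAI API shape, not confirmed details of this adapter:

```python
import json
import urllib.request

PORT = "5001"    # "port" from config.json
API_KEY = "..."  # "secret_api_key" from config.json

# Standard OpenAI-style chat completion body. The model name is a
# hypothetical placeholder; use whatever models the Grid actually serves.
body = json.dumps({
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
}).encode()

# The request is constructed but not sent here; with the adapter running,
# call urllib.request.urlopen(req) to send it and read the response.
req = urllib.request.Request(
    f"http://localhost:{PORT}/v1/chat/completions",
    data=body,
    headers={
        "Content-Type": "application/json",
        # Requests are authenticated with the secret_api_key as a bearer token.
        "Authorization": f"Bearer {API_KEY}",
    },
)
```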