Quickstart
Bytecompute AI makes it easy to run leading open-source models using only a few lines of code.
1. Register for an account
First, register for an account to get an API key. New accounts come with $1 of credit to get started.
Once you've registered, export your account's API key as an environment variable named BYTECOMPUTE_API_KEY:
Shell
export BYTECOMPUTE_API_KEY=xxxxx
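To make the key available in future sessions, you can also add this line to your shell profile (for example, ~/.bashrc or ~/.zshrc).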
2. Install your preferred library
Bytecompute provides official libraries for Python and TypeScript, or you can call our HTTP API from any language you like:
Shell
pip install bytecompute
Shell
npm install bytecompute-ai
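If you'd rather not rely on the environment variable, clients in this style typically accept the key directly as well. A minimal, hypothetical sketch for the Python client, assuming its constructor takes an api_key parameter:
Python
from bytecompute import Bytecompute

# Hypothetical: pass the key explicitly instead of relying on the
# BYTECOMPUTE_API_KEY environment variable.
client = Bytecompute(api_key="xxxxx")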
3. Run your first query against a model
Choose a model to query. In this example, we'll run a streaming chat completion against Llama 3.1 8B Instruct Turbo:
Python
from bytecompute import Bytecompute

client = Bytecompute()

stream = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages=[{"role": "user", "content": "What are the top 3 things to do in New York?"}],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
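Streaming prints tokens as they arrive. If you'd rather receive the whole reply at once, omit stream=True; a minimal sketch, assuming the non-streaming response follows the same OpenAI-style shape implied by the streaming example (the reply text on choices[0].message.content):
Python
from bytecompute import Bytecompute

client = Bytecompute()

# Non-streaming: the full completion arrives in a single response object.
response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages=[{"role": "user", "content": "What are the top 3 things to do in New York?"}],
)

# Assumed OpenAI-compatible shape: the reply text is on the first choice's message.
print(response.choices[0].message.content)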
TypeScript
import Bytecompute from "bytecompute-ai";

const bytecompute = new Bytecompute();

const stream = await bytecompute.chat.completions.create({
  model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
  messages: [
    { role: "user", content: "What are the top 3 things to do in New York?" },
  ],
  stream: true,
});

for await (const chunk of stream) {
  // Use process.stdout.write instead of console.log to avoid extra newlines.
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
CURL
curl -X POST "https://api.bytecompute.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $BYTECOMPUTE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    "messages": [
      {"role": "user", "content": "What are the top 3 things to do in New York?"}
    ]
  }'
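The HTTP API works from any language with an HTTP client. Here is the same request as a minimal Python sketch using the requests library; the last line assumes the endpoint returns an OpenAI-compatible JSON body:
Python
import os

import requests

resp = requests.post(
    "https://api.bytecompute.xyz/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['BYTECOMPUTE_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
        "messages": [
            {"role": "user", "content": "What are the top 3 things to do in New York?"}
        ],
    },
)
resp.raise_for_status()

# Assumed OpenAI-compatible response shape: the reply text is on the first choice.
print(resp.json()["choices"][0]["message"]["content"])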
Congratulations, you've just made your first query to Bytecompute AI!
Next steps
- Explore our demos for full-stack, open-source example apps.
- Check out the Bytecompute AI playground to try out different models.
- See our integrations with leading LLM frameworks.
