
JigsawStack / OmiAI: an opinionated AI SDK for TypeScript that auto-selects the best model for each prompt.

OmiAI

OmiAI is an opinionated AI SDK for TypeScript that auto-selects the best model from a suite of curated models depending on the prompt. It includes built-in o3-like reasoning, curated tools, real-time internet access, and full multi-modal support for almost all media types.

The idea behind OmiAI is to be the last framework you'll need for LLMs, so you never feel like you're missing out on the latest models.

  • ⭐ Curated suite of models selected for quality, speed, and cost
  • 🧠 Automatically picks the best model for each task
  • 🔗 Automatically chains models for complex tasks
  • 🤔 Built-in reasoning (o3-mini, DeepSeek R1)
  • 🔨 Built-in tools and tool calling (image generation, OCR, STT, etc.)
  • 🌐 Real-time internet access
  • 🔁 Model fallbacks on rate limits, with retries
  • 📁 Multi-modal LLM support (PDFs, images, files, audio, CSV, JSON)
  • 🧮 Multimodal embedding models

Powered by Vercel's AI SDK for orchestration and JigsawStack for tools and embeddings.

View all models used here.

Some benefits include:

  • No need to pick models or fiddle with configuration
  • Plug > Prompt > Play
  • Full multi-modal support regardless of the model selected
  • Always up to date with the newest models and features
  • Mix reasoning with other models to solve complex tasks

You'll need to set the API keys for all LLM providers used by the SDK. Check the .env.example file for the full list of required keys.

Create a .env file at the root of your project based on .env.example, or pass your API keys directly to createOmiAI.
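A minimal sketch of the inline-keys option (the option names here are assumptions; check .env.example for the canonical key names):

import { createOmiAI } from "omiai";

const omi = createOmiAI({
  // Assumed option shape for illustration; match it to the keys in .env.example.
  apiKeys: {
    openai: process.env.OPENAI_API_KEY,
    anthropic: process.env.ANTHROPIC_API_KEY,
  },
});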


import  "file";
        data: string; //url or base64
        mimeType?: string; //mimeType of the file
       from "omiai";

const omi = createOmiAI();

const result = await omi.generate({
  prompt: "Hello, how are you?",
});

console.log(result?.text);

Structured output with zod

import  "file";
        data: string; //url or base64
        mimeType?: string; //mimeType of the file
       from "zod";

const result = await omi.generate({
  prompt: "How many r's are there in the word 'strawberries'?",
  schema: z.object({
    answer: z.number().describe("The answer"),
  }),
});

console.log(result?.object);
Stream the response by setting stream: true:

const result = await omi.generate({
  prompt: "Tell me a story of a person who discovered the meaning of life.",
  stream: true,
});

let text = "";
for await (const chunk of result?.textStream) {
  text += chunk;
  console.clear();
  console.log(text);
}

Streaming also works with structured output; partial objects arrive on partialObjectStream:

const result = await omi.generate({
  prompt: "Tell me a story of a person who discovered the meaning of life.",
  schema: z.object({
    story: z.string().max(1000).describe("The story"),
    character_names: z
      .array(z.string())
      .describe("The names of the characters in the story"),
  }),
  stream: true,
});

for await (const chunk of result?.partialObjectStream ?? []) {
  console.log(chunk);
}

Prompts can also be passed as an array of messages:

const result = await omi.generate({
  prompt: [{ role: "user", content: "What is the meaning of life?" }],
});

console.log(result?.text);

Multi-modal prompts can mix text, images, and files in the content array:

const result = await omi.generate({
  prompt: [
    {
      role: "user",
      content: [
        {
          type: "text",
          data: "Extract the total price of the items in the image", // will tool-call the built-in OCR tool
        },
        {
          type: "image",
          data: "https://media.snopes.com/2021/08/239918331_10228097135359041_3825446756894757753_n.jpg",
          mimeType: "image/jpg",
        },
      ],
    },
  ],
  schema: z.object({
    total_price: z
      .number()
      .describe("The total price of the items in the image"),
  }),
});

console.log(result?.object);

Reasoning is automatic, so you don't need to invoke it yourself; based on the complexity of the prompt, OmiAI decides whether reasoning is needed.

If you want to force reasoning, set the reasoning parameter to true; if you want to disable it completely, set it to false. Removing the key sets it back to auto.

const result = await omi.generate({
  prompt: "How many r's are there in the text: 'strawberry'?",
  reasoning: true,
  schema: z.object({
    answer: z.number(),
  }),
});

console.log("reason: ", result?.reasoningText);
console.log("result: ", result?.object);

Get the reasoning text from result?.reasoningText.

Multi-LLM is a technique that runs your prompt across multiple LLMs and merges the results. This is useful when you want a more accurate consensus, with the models checking each other's answers.

Note: this can drive up your costs, since it runs across roughly 5-6 LLMs at once.

const result = await omi.generate({
  prompt: "What is the meaning of life?",
  multiLLM: true,
});

console.log(result?.text);

Web search is automated too, so you don't have to turn it on; OmiAI decides whether to use it based on the prompt. You can force it to run by setting the contextTool.web parameter to true, or disable it completely by setting it to false. Removing the key sets it back to auto.

const result = await omi.generate({
  prompt: "What won the US presidential election in 2025?",
  contextTool: {
    web: true,
  },
});

The embedding model is powered by JigsawStack; check out the full docs here.

const result = await omi.embedding({
  type: "text",
  text: "Hello, world!",
});

console.log(result.embeddings);

Embedding also works on files such as PDFs:

const result = await omi.embedding({
  type: "pdf",
  url: "https://example.com/file.pdf",
});

console.log(result.embeddings);
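
The other media types supported by the embedding API follow the same shape; for example, embedding an image by URL (the URL here is a placeholder):

const result = await omi.embedding({
  type: "image",
  url: "https://example.com/photo.png", // placeholder URL
});

console.log(result.embeddings);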

Image generation happens automatically when the prompt asks for an image; generated files come back on result.files:

const result = await omi.generate({
  prompt: "Generate an image of a cat",
});

const blob: Blob = result?.files?.[0]?.data;
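
To save the generated image, the Blob can be written out with standard Node APIs:

import { writeFile } from "node:fs/promises";

// Convert the Blob returned above into a Buffer and write it to disk.
const image = result?.files?.[0]?.data;
if (image) {
  await writeFile("cat.png", Buffer.from(await image.arrayBuffer()));
}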

You can pass your own custom tools using the tools parameter. These are the same tool functions from Vercel's AI SDK. Check out the full docs on tools here.

import { createOmiAI, tool } from "omiai";
import { z } from "zod";

const omi = createOmiAI();

const result = await omi.generate({
  prompt: "What is the weather in San Francisco?",
  tools: {
    weather: tool({
      description: "Get the weather in a location",
      parameters: z.object({
        location: z.string().describe("The location to get the weather for"),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
});

reasoning, contextTool, autoTool, and the actual LLM that powers your prompt are all selected automatically based on your prompt. You can turn off the automatic decision for any of these by setting the relevant field to false, or force them on by setting it to true. If a field is undefined or not given, it defaults to auto.
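
For example, forcing web search and tool-calling while disabling reasoning (the prompt text is illustrative):

const result = await omi.generate({
  prompt: "Summarize today's top tech headlines",
  reasoning: false, // disable auto-reasoning
  contextTool: { web: true }, // force web search
  autoTool: true, // force tool-calling
});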

interface GenerateParams {
  stream?: boolean;
  reasoning?: boolean; // Auto turns on depending on the prompt. Set to true to force reasoning. Set to false to disable auto-reasoning.
  multiLLM?: boolean; // Turn on if you want to run your prompt across all models then merge the results.
  system?: string;
  prompt: string | GeneratePromptObj[]; // String prompt or array which will be treated as messages.
  schema?: z.ZodSchema; // Schema to use for structured output.
  contextTool?: {
    web?: boolean; //Auto turns on depending on the prompt. Set to true to force web-search. Set to false to disable web search.
  };
  autoTool?: boolean; // Auto turns on depending on the prompt. Set to true to force tool-calling. Set to false to disable tool-calling.
  temperature?: number;
  topK?: number;
  topP?: number;
  tools?: {
    [key: string]: ReturnType<typeof tool>;
  };
}

interface GeneratePromptObj {
  role: CoreMessage["role"];
  content:
    | string
    | {
        type: "text" | "image" | "file";
        data: string; //url or base64
        mimeType?: string; //mimeType of the file
      }[];
}
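
For instance, a message that attaches a PDF as a file part (the URL is a placeholder):

const result = await omi.generate({
  prompt: [
    {
      role: "user",
      content: [
        { type: "text", data: "Summarize this document" },
        {
          type: "file",
          data: "https://example.com/report.pdf", // placeholder URL
          mimeType: "application/pdf",
        },
      ],
    },
  ],
});

console.log(result?.text);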

View the full docs here.

interface EmbeddingParams {
  type: "audio" | "image" | "pdf" | "text" | "text-other";
  text?: string;
  url?: string;
  fileContent?: string;
}
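
A sketch of embedding a local audio file via fileContent, assuming the field accepts base64-encoded data (the encoding isn't specified above):

import { readFile } from "node:fs/promises";

// Assumption: fileContent takes base64-encoded file data.
const audioBase64 = (await readFile("speech.mp3")).toString("base64");

const result = await omi.embedding({
  type: "audio",
  fileContent: audioBase64,
});

console.log(result.embeddings);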

Contributions are welcome! Please feel free to submit a PR 🙂

